The FDA’s adoption of Elsa, an AI tool intended to streamline drug approval processes, has raised significant concerns after reports revealed that the system fabricates studies, a phenomenon known as hallucination. Six FDA employees, speaking to CNN, described Elsa generating nonexistent studies and misrepresenting research findings. Although the tool helps draft summaries and meeting notes, its unreliability poses risks to public health oversight. The episode follows recent controversies over AI-generated inaccuracies in governmental health reports, underscoring the critical need for human verification in AI-assisted regulatory work.