The Food and Drug Administration's AI tool, Elsa, intended to expedite drug approval processes, has come under scrutiny after reports revealed that it fabricates studies and misrepresents research data. Six FDA employees described Elsa confidently hallucinating nonexistent studies, raising alarms about the reliability of AI outputs in regulatory contexts. Although the tool was cost-effective and deployed ahead of schedule, its inaccuracies demand rigorous human verification of every output to maintain quality and safeguard public health. These revelations underscore the ongoing challenges of integrating AI into critical pharmaceutical regulatory workflows.