Concerns have surfaced regarding the reliability of 'Elsa,' an AI tool deployed by the FDA to assist with drug approval processes. Although some FDA employees find Elsa useful for tasks such as summarizing meetings, others report that it fabricates studies and misrepresents research findings, a failure mode known as 'hallucination' in AI parlance. The problem poses potential risks to public health and underscores the necessity of rigorous human verification of AI-generated outputs. This revelation follows reports of fabricated studies marring the work of related commissions, highlighting critical challenges in adopting generative AI within regulatory frameworks.