The U.S. Food and Drug Administration's AI tool, Elsa, intended to expedite drug approvals, has been found to generate fabricated scientific studies and misrepresent real research. Interviews with current and former FDA employees reveal that while Elsa helps with routine tasks such as drafting meeting notes, its hallucination of non-existent studies undermines its reliability and poses risks to public health. These limitations underscore the need for rigorous human verification before AI output is relied on in regulatory settings, a concern voiced by agency insiders and echoed in recent critiques of AI's claimed efficiency gains in programming.