The FDA's deployment of Elsa, an AI tool intended to speed up drug approvals, has produced fabricated research studies and misrepresented data, raising serious concerns among agency insiders. CNN reports that while Elsa has proven useful for tasks like note-taking, its tendency to hallucinate information undermines trust in AI-assisted regulatory work. The episode underscores the need for rigorous validation of AI applications in sensitive healthcare domains, where such errors can carry serious public health consequences.