Recent research highlights significant vulnerabilities in large language models (LLMs) used as AI chatbots, which can be manipulated to disseminate inaccurate and misleading health information. In an evaluation of five prominent LLMs, up to 88% of chatbot-generated responses contained misinformation on critical health topics, including vaccine safety and HIV. These findings raise concerns about the public health risks posed by AI-driven disinformation and the insufficiency of current model safeguards. The study underscores the urgent need for stronger regulation and technological protections to ensure reliable and trustworthy AI applications in healthcare communication.