20 January, 2026
A Guardian investigation has highlighted how internet users seeking medical guidance are being presented with inaccurate information that directly contradicts official health advice. According to the report, Google’s AI Overview returned responses suggesting that pancreatic cancer patients should avoid high-fat foods, the opposite of what doctors advise. This information could directly harm patients.
Following the investigation, the Guardian reports that Google has reviewed and removed some of the troubling responses, including those about liver function tests. However, liver specialists responding to the update warned that slight variations in the same searches could still trigger potentially misleading summaries.
A blog published by the BMJ Journal of Medical Ethics in November 2025 echoes these concerns, warning that AI-generated health summaries appear by default in search engines and cannot be permanently disabled by users. The authors say AI models produce inaccurate responses (known as ‘hallucinations’) in up to 48% of cases, and that these systems reduce click-through rates to actual medical websites by 40 to 60%, replacing the process of consulting peer-reviewed sources. The blog reports that Google’s AI Overview recommended unproven dental practices, such as ‘oil pulling’, blending information about the practice with proven oral hygiene advice.
“These findings are troubling. When AI systems present unverified medical advice with the same authority as peer-reviewed guidance, it is hard for the public and patients to distinguish between what will help them and what could harm them,” said Dr Eva Polverino, European Respiratory Society (ERS) Director of Scientific Relations with the European Union.
Eva added, “False health information is a serious problem. We already know that vaccination rates are falling across Europe because of mis- and disinformation spreading online, even though evidence overwhelmingly shows that vaccines are safe and reduce hospitalisations and save lives from viruses like flu and COVID-19. When people are presented with AI-generated information instead of trusted health sources, they are at risk of following incorrect advice on prevention, treatments or how to manage their symptoms, which can have serious consequences.”
Where does AI get its information?
Concerns about the credibility of sources extend beyond Google. OpenAI, the company behind ChatGPT, reports that 230 million people globally ask the AI chatbot health and wellness questions every week. But an analysis of more than 230,000 prompts found that the most-cited sources in ChatGPT’s responses are Reddit and Wikipedia.
Reddit is a message-board-style social media platform where content moderation is managed by users, and Wikipedia is an online encyclopedia that uses an open-editing model. Neither approach meets the high ethical and peer-review standards needed to verify health and scientific information.
“When people are searching for health information, they need to be confident that the answers come from credible, peer-reviewed sources. Some AI tools connect to live internet searches and pull information from unverified sources like social media. Others rely on training data that may be months or years out of date,” said Dimitris Kontopidis, European Lung Foundation (ELF) Chair.
Dimitris added, “Both approaches pose risks when medical knowledge advances constantly, and accessing the most current, recommended information can be critical for patient safety. AI tools must meet the same rigorous standards we expect from medical professionals and health organisations.”
OpenAI announced a policy change in October 2025 stating that users cannot use the platform for the “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional”, yet the company recently launched ChatGPT Health, a tool that allows users to feed data from their medical records and wellness apps directly into the AI platform.
OpenAI says ChatGPT Health was designed with physicians and is intended only to help people understand patterns in their health, not to replace doctors. Arguably, this puts users in a difficult position: they are told not to rely on the platform for medical advice whilst being given a tool specifically designed to analyse their personal health data. Under OpenAI’s policy, responsibility for how health information from the platform is used rests with users, not with the company providing it.
Why does this matter, and what can you do?
As AI tools reshape how people access health information, regulators must work with medical experts and patient representatives to establish proper safeguards. Scientific integrity is non-negotiable when public health is at stake.
Health and scientific communities must speak up against inaccurate health information. The costs of staying silent are measured in preventable illness, wasted healthcare resources and lives lost.
Take action: ERS and ELF’s campaign to defend and protect scientific integrity identifies misinformation and disinformation as clear risks to public health and trust in science. Download social media messages in eight languages, email templates for contacting policymakers, and resources to spread the word in your community.