Google’s AI-generated search summaries are serving misleading health information that experts say could put people at risk, according to a Guardian investigation.
AI Overviews, which appear at the top of some Google search results, use generative AI to produce short answers drawn from multiple sources. Google has promoted them as “helpful” and “reliable”, but health organisations and specialists say some responses contain errors with potentially serious consequences.
In one example described by experts as “really dangerous”, an AI Overview advised people with pancreatic cancer to avoid high-fat foods. According to Pancreatic Cancer UK, this is the opposite of standard guidance, which stresses the need for high-calorie intake so patients can maintain weight and tolerate treatment.
Anna Jewell, director of support, research and influencing at Pancreatic Cancer UK, said advising patients to avoid high-fat foods was “completely incorrect”.
“If someone followed what the search result told them then they might not take in enough calories, struggle to put on weight, and be unable to tolerate either chemotherapy or potentially life-saving surgery,” she said.
Another search for “what is the normal range for liver blood tests” produced a dense list of figures without explaining that reference ranges vary by factors such as country, sex, ethnicity and age.
Pamela Healy, chief executive of the British Liver Trust, called the summaries “alarming”.
“What the Google AI Overviews say is ‘normal’ can vary drastically from what is actually considered normal,” she said. “It’s dangerous because it means some people with serious liver disease may think they have a normal result then not bother to attend a follow-up healthcare meeting.”
In women’s health, a search for “vaginal cancer symptoms and tests” incorrectly listed a Pap test, which screens for cervical cell changes, as a diagnostic test for vaginal cancer.
Athena Lamnisos, chief executive of the Eve Appeal cancer charity, said:
“It isn’t a test to detect cancer, and certainly isn’t a test to detect vaginal cancer – this is completely wrong information. Getting wrong information like this could potentially lead to someone not getting vaginal cancer symptoms checked because they had a clear result at a recent cervical screening.”
Lamnisos also noted that repeating the same search produced different AI summaries, drawing on varying sources.
“That means that people are getting a different answer depending on when they search, and that’s not good enough,” she said. “Some of the results we’ve seen are really worrying and can potentially put women in danger.”
Mental health organisations raised similar issues. The charity Mind said some AI Overviews for conditions such as psychosis and eating disorders contained “very dangerous advice”.
Stephen Buckley, Mind’s head of information, said some responses were “incorrect, harmful or could lead people to avoid seeking help”, and that others lacked important nuance.
“They may suggest accessing information from sites that are inappropriate … and we know that when AI summarises information, it can often reflect existing biases, stereotypes or stigmatising narratives,” he said.
The concerns follow wider scrutiny of AI-generated content. Previous research has shown that chatbots across several platforms can offer inaccurate financial guidance, and that automated news summaries may distort reporting.
Sophie Randall, director of the Patient Information Forum, which promotes evidence-based health information, said the findings showed the risks of placing AI-generated answers at the top of search results.
“Google’s AI Overviews can put inaccurate health information at the top of online searches, presenting a risk to people’s health,” she said.
Stephanie Parker, director of digital at end-of-life charity Marie Curie, said:
“People turn to the internet in moments of worry and crisis. If the information they receive is inaccurate or out of context, it can seriously harm their health.”
Google said that, in many of the examples shared, it had only seen partial screenshots, but that the summaries appeared to link to reputable sources and encourage users to seek expert advice.
The company said the “vast majority” of AI Overviews were factual and useful, and that their accuracy was comparable with other search features, such as featured snippets. It said it continuously updated the system and would act when AI Overviews misinterpreted web content or missed important context.
A Google spokesperson said:
“We invest significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information.”
