
The age of dAIgnosis: How is AI impacting access to health information?

Estimated reading time: 7 minutes

[Image: an internet search bar with digital imagery in the background, denoting AI technology]

With artificial intelligence (AI) being increasingly utilised and relied upon across multiple industries, such as healthcare, life sciences, finance and entertainment, just how trustworthy is the information it is generating, and are users interrogating it or accepting it without question? While some unsubstantiated searches may prove harmless, what is the impact and risk to our healthcare in this new era of growing reliance on AI?

“Eat one small rock per day” for essential vitamins and minerals – a notable piece of health advice offered by the Google AI Overview back in 2024.

Though the tool has come a long way since then (eating rocks is highly dangerous and not advisable for any reason), I was keen to understand how the widespread use of AI is impacting access to health information, and how the scientific community is having to adapt.

In May 2024, Google officially launched its AI Overview—an AI-generated summary that appears at the top of search results. It aims to provide a quick and accurate overview of the search topic, using short and often bullet-pointed answers, to spare users from scrolling, clicking and collating the information themselves.

The impact? Well, evidence shows that this AI summary is replacing the role of websites in providing information.

Research from Authoritas analytics revealed that a website previously ranked as the first search result could lose around 79% of its visits if displayed below the AI Overview.1 Furthermore, a month-long Pew Research study of 69,000 Google searches found that users clicked a website link beneath an AI summary only once in every 100 times.2

But why is this happening?

I spoke with Deanna, science communications professional and avid AI user, to get an insight into how people are using these tools to access information about health and medicine. Deanna says:

“I think it would depend on the situation, but I find I now use AI search instead of traditional searches when looking up health-related things, like symptoms for example. I feel it gives a more level-headed response that considers more information.”

It is clear that AI is diverting traffic away from websites, perhaps due to the instant, comprehensive answer it provides by pulling information from lots of sources. 

This then raises the questions: how does AI choose which sources to use in its answers, and are they always right?

According to Google, the overview tool was designed to “optimise for accuracy,” with extensive testing on typical user queries and a proportion of search traffic. Using a type of AI system called a large language model, the tool analyses and condenses large volumes of text from multiple sources.3

When I tested the accuracy of the Google AI Overview with simple health questions, the summaries produced were concise, accessible and accurate when compared against reputable sources.

For example, the query, ‘What is a blood test?’, prompted the response:

“A blood test is a common, quick medical procedure where a small sample of blood is drawn—usually from the arm—to be analysed in a laboratory.”

The summary then goes on to cite trusted informational resources, including the NHS, Johns Hopkins Medicine and Cancer Research UK.

The result was similar for other simple questions I searched, such as ‘How to treat an injured shoulder?’ and ‘What is genomics?’, both of which pulled from reliable resources such as the World Health Organization, the NHS and the National Institutes of Health.

However, this wasn’t always the case.

The dependability of AI summaries becomes less solid when searching for more specialised terms.

For example, ‘What is bioinformatics?’ produced a summary citing Wikipedia as its first source, as did ‘What is RNA?’ and ‘What are protease enzymes?’, making it difficult for a non-expert to verify whether a summary is sound or not.

When talking about whether she deems AI reliable, Deanna says:

“AI can be really helpful in guiding you towards reliable health information at the start of your search, pointing you towards credible research, organisations and guidance. However, it is important to be critical.”

She explains:

“AI tools are more likely to cite less reputable sources when there is not enough high-quality information available on the topic. This explains why we see it happen for queries that are highly specialised and specific.”

It also explains how, back when it was first launched, Google’s AI Overview made the viral mistake of recommending rocks as part of a healthy diet.

On that fateful day in May, prompted by the query ‘How many rocks should I eat per day?’, Google’s AI Overview used a satirical article from The Onion as an informational source, subsequently recommending ‘a serving of gravel, geodes or pebbles with each meal, or hiding rocks in foods like ice cream and peanut butter.’

A Google blog addressing the dangerous advice explained that these inaccuracies usually occur due to the tool “misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available”.4

Whilst this example might be comical, there have also been other, less amusing mistakes, such as recommending that those with pancreatic cancer avoid high-fat foods. This advice was branded by experts as not only untrue but harmful, potentially even increasing the risk of a patient dying.5

With AI summaries going nowhere any time soon, these incidents make it even more vital for scientific organisations to act as trusted, accessible information sources. Creating accurate content that gets picked up by AI tools can help ensure that online medical information stays evidence-based, clear and safe.

According to Google, there is no ‘magic tag’ that means a webpage will be featured in an AI Overview. However, there are several things organisations can do to maximise their chances.

AI tools pull from content they have assessed as accurate and reliable. Linking to other trustworthy websites in your content, for example the NHS or high-impact journals, helps to position your website as another trusted source.
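As a rough illustration of auditing your own pages for this, the share of outbound links pointing at trusted sources can be checked with a short Python script. The domain list below is a hypothetical example for the sketch, not anything Google publishes:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical set of domains treated as "trusted" for this audit
TRUSTED_DOMAINS = {"www.nhs.uk", "www.who.int", "www.nih.gov"}


class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def trusted_link_share(html: str) -> float:
    """Return the fraction of links on a page pointing at trusted domains."""
    parser = LinkCollector()
    parser.feed(html)
    if not parser.links:
        return 0.0
    trusted = sum(
        1 for link in parser.links
        if urlparse(link).netloc in TRUSTED_DOMAINS
    )
    return trusted / len(parser.links)


page = (
    '<p>See <a href="https://www.nhs.uk/conditions/">the NHS</a> '
    'and <a href="https://example.com/post">this blog</a>.</p>'
)
print(trusted_link_share(page))  # 0.5
```

Running something like this over a site's pages gives a quick, if crude, picture of how well its content links out to credible sources.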

It is also vital that content is kept up to date. This might include replacing old statistics, removing dead links and updating the ‘Last modified’ date when changes are made to a page.

Content that is broken up into structured, logical sections, each focussing on one major idea, is more likely to be favoured. Using questions as subheadings and then answering that question in the opening sentence is also a great way to be picked up.
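As a quick illustration, assuming markdown-style content with ‘##’ subheadings, a few lines of Python can list which subheadings are phrased as questions:

```python
def question_subheadings(markdown: str) -> list[tuple[str, bool]]:
    """List each '##' subheading and whether it is phrased as a question."""
    results = []
    for line in markdown.splitlines():
        if line.startswith("## "):
            heading = line[3:].strip()
            results.append((heading, heading.endswith("?")))
    return results


doc = """## What is a blood test?
A blood test is a quick, common medical procedure...
## Preparation
Fasting may be required...
"""
print(question_subheadings(doc))
# [('What is a blood test?', True), ('Preparation', False)]
```

Subheadings flagged `False` are candidates for rewording into questions that the first sentence beneath them then answers directly.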

There are plenty of online guides about optimising website content for AI; this blog from Microsoft is a good place to start.

In the life sciences space, it is hard not to feel the impact of AI, whether it’s the hit to website visits or the added pressure to provide trusted information.

But AI also presents an opportunity: not only to use it as a tool for our own research and content, but to be cited by it, gaining exposure and positioning ourselves as trusted organisations in the space.

Either way, it appears that AI summaries are here to stay, and so we must evolve with them.

Connect with Florence

References
[1] https://www.authoritas.com/blog/ai-overview-user-intent-research
[2] https://www.pewresearch.org/short-reads/2025/07/22/google-users-are-less-likely-to-click-on-links-when-an-ai-summary-appears-in-the-results/
[3] https://blog.google/products-and-platforms/products/search/ai-overviews-update-may-2024/
[4] https://blog.google/products-and-platforms/products/search/ai-overviews-update-may-2024/
[5] https://www.theguardian.com/technology/2026/jan/02/google-ai-overviews-risk-harm-misleading-health-information
