Social media algorithms risk creating deeper health divide, according to new Grayling research

New Grayling research has revealed that social media algorithms are not only perpetuating false health information among sceptical audiences, but are also preventing some users from accessing certain health information at all.

The study found that because algorithms promote health content similar to what users have previously engaged with, such content increasingly becomes the only content they see, effectively locking people out of accurate information and into a perpetual cycle of misinformation and propaganda.

This type of content is also more likely to have a ‘clickbait’, emotive edge – for example, describing new COVID variants as ‘deadly’ – further fuelling distrust. This was especially true on X, compared with Instagram, Facebook or LinkedIn, for topics such as vaccines and conspiracy theories.

Users with a historically low level of interest in health issues were significantly less likely to see health information on their feeds at all.

Grayling Ignite carried out a series of qualitative, observational studies with three groups: Health Interested & Trusting (those with an active interest in health content and high trust in ‘official’ organisations like the NHS and regulators), Health Disinterested (those actively disinterested in health content) and Health Sceptics (those with low trust in ‘official’ organisations).

Today, Grayling published a new report, Is ‘the algorithm’ making us ill?, summarising the key takeaways from the study:

  • People who are already sceptical about health information are more likely to be served up anti-vaccination and anti-medication health content than people who are more trusting of official sources, reinforcing a cycle of misinformation.
  • People who are more sceptical from the outset are also more likely to be served up content that is emotive or alarmist, increasing their feelings of apprehension and distrust. One Health Sceptic participant commented, “The covid search term was pretty scary because I got a lot of stuff about a new wave that’s ‘deadlier than ever’. I couldn’t tell you what the news sites were – there weren’t a lot of names I recognised.”
  • Regardless of the group they fell into, women (especially those under 45) felt more inundated with ‘bogus’ diet- and exercise-related content on social media platforms, whether or not they had shown interest in this type of content.
  • The Health Disinterested group saw very little to no healthcare content in the days following the group exercise. By contrast, the Health Interested & Trusting participants were served official public health content, suggesting people’s search histories put them at risk of missing out on vital health information.
  • All participants were, to some degree, aware of the potential for misinformation online but their approaches to vetting health content varied considerably. Health Interested & Trusting participants were more likely to turn to ‘official’ and known media outlets for health content, whilst Health Sceptics were more likely to use ‘gut instinct’ to judge whether a headline or article was trustworthy (e.g. based on language, grammar and how professional it looked).
  • On X, the prevalence of anti-vaccine and conspiracy theory content in search results was much higher than on other platforms. Even participants with high levels of trust were frequently served far-right content or content with conspiracy theories or anti-vaccine sentiment.

For the PR industry, these findings suggest algorithms increasingly shape the content that target audiences see. But it isn’t all negative. In the report, Grayling Global Health and Ignite Digital experts propose ten ways health communication specialists can mitigate these risks and exploit the algorithms to reach target audiences.

Ross Laird, Director, Grayling UK comments: “Our research underlines the importance of taking a strategic approach when using social media to reach interested and less-interested users. Incorporating early channel and content planning, broadening outreach and creating easily digestible content that can be widely shared across multiple channels will be key. Understanding how these channels prioritise content and formats such as video, and jumping on trending content to respond and reassure, is now all part of the reality of modern health communications.”

Many social media providers are already making considerable efforts to verify health and other content. YouTube has a dedicated channel for the creation of content by approved medical experts, while Meta has also sought to reassure users of its dedication to fact checking and removal of potentially harmful content.

Jessica Brobald, Managing Director, Grayling Brussels adds: “As the COVID-19 crisis has shown us, access to neutral and verified information is more important than ever to maintain a solid foundation for public health. Yet, as election season nears, our report highlights that there is still much work to be done to ensure information neutrality on online platforms.”