
Tackling disinformation and misinformation on social media: what’s next?

“Social media have given the right to speak to legions of fools who used to speak only at the bar, without causing any harm to the community […] Now their words carry the same weight as a Nobel Prize winner’s,” the Italian academic and writer Umberto Eco once controversially remarked. While some deemed his words elitist and yet another attack on freedom of speech on the Internet, others saw them as a realistic depiction of the risks posed by public discourse on social media.

Among other issues, Eco’s analysis is particularly apt in describing the phenomenon of disinformation and misinformation on the Internet. While misinformation is the unintentional spread of false information, disinformation is spread deliberately – the word is considered by some a loan translation of the Russian dezinformatsiya, derived from the title of a KGB black propaganda department. Neither phenomenon is new, but both have reached a whole new level of influence in the online sphere.

As Eco put it, disinformation and misinformation in the past had a much more limited reach and were harmless most of the time – unless they came from a small category of influential actors such as governments and traditional media. Now, by contrast, anyone can circulate fake news online to a potentially unlimited audience with little to no consequences. In the absence of any form of peer review, anyone’s words can be as influential as an expert’s. This is aggravated by groups of like-minded individuals that act as echo chambers, amplifying messages and polarising political discussions.

The crisis of journalism has also played a role in the spread of misinformation. Online media usually offer free content and make money by hosting paid ads. For this reason, they tend to favour polemic and controversial content that drives more traffic to those ads. Not only does this weaken the competition in journalism that would ensure high-quality reporting, but it also breeds distrust in mainstream media, which in turn pushes readers towards alternative, less reliable sources.

In this context, fake news is often far from a harmless joke. Over the past few years, disinformation has been increasingly spread on a range of topics including Covid-19, 5G connectivity, national elections, climate change, and the war in Ukraine, with severe consequences in the offline world, such as vaccine scepticism, arson attacks on telecom towers, and the Capitol Hill riot.

Yet, effectively tackling disinformation on online platforms is difficult, because moderating such content puts platforms in the position of deciding what is true and what is false – with unsettling similarities to Orwell’s Ministry of Truth. Policymakers around the world are aware of the issue and are considering solutions to regulate the moderation of fake news in a transparent manner: for instance, empowering fact-checkers to flag fake news and putting in place more effective notice-and-action mechanisms for disinformation; labelling fake news as disinformation; demonetising the dissemination of fake news; and introducing more transparency on recommender systems (e.g. social media newsfeeds) and targeted advertising, to weaken the system that favours controversial content.

While policy solutions are still at a very early stage, the EU has included rules to increase the transparency of advertising and recommender systems in its new Digital Services Act. Although the Act per se does not focus on disinformation but rather on illegal content, such transparency requirements might have an indirect effect in tackling the spread of fake news. More transparency obligations will also be contained in the ongoing initiative on political advertising, currently under consideration by the EU institutions. The UK’s Online Safety Bill proposal puts forward a different solution: services with the largest audiences and a range of high-risk features will be required to set out clear policies on harmful disinformation accessed by adults.

Finally, an ongoing initiative specifically targeted at online disinformation is the EU’s new Code of Practice on Disinformation. The code has 34 signatories, including platforms, tech companies and civil society organisations, and sets out extensive commitments by platforms and industry to fight disinformation. Among other things, the commitments include cutting financial incentives for spreading disinformation, covering new manipulative behaviours that have emerged in the light of the recent war in Ukraine, expanding fact-checking, and ensuring transparent political advertising. Signatories now have six months to implement the new rules and will have to provide a first implementation report to the Commission at the beginning of 2023.

This is only the beginning: obligations of some form to maintain policies on disinformation are likely to be adopted in several countries. Only time will tell how effective they will be – or whether additional solutions will be needed to tackle new and emerging problems. What is certain for the moment is that the pressure is on, and online platforms must be aware that they will have to take a more active role in moderating fake information and adopt clear policies on how to do so.