Elon Musk’s Twitter move re-awakens the debate on freedom of speech and content moderation online

In April 2022, businessman and Tesla CEO Elon Musk stated in a letter to Twitter’s chair that the platform was not thriving as a tool for freedom of speech and needed to be transformed as a private company. He then offered to buy the social media company for $44 billion. Although Musk declared in July that he wanted to terminate the deal, his plan and related statements sparked a wave of concern and highlighted unresolved issues around content moderation and freedom of speech online. The episode could lead to stricter rules for online platforms covering the moderation not only of illegal but also of harmful content.

After Musk’s statement, industry experts and civil rights advocates raised the alarm, noting that the executive’s past critiques of social media suggested he would favor a hands-off approach to content moderation, at odds with Twitter’s existing policies to curb hate speech, misinformation, harassment and other harmful content.

This reflects an ongoing policy debate on whether online platforms should only be obliged to remove illegal content or whether they should go further and also moderate harmful (but legal) content, such as disinformation, online harassment, cyberbullying, threats, and material encouraging self-harm. While Musk seems to believe that moderating harmful content could threaten free speech, most social media platforms tend to remove such content even when they are not legally obliged to do so.

Ethics and business interests behind moderation of harmful content
One could argue that social media companies are private entities that should be allowed to decide what kind of content they want to host – in the same way a newspaper chooses its editorial line. On social media, however, this can have extremely dangerous and far-reaching consequences – the storming of Capitol Hill and Covid-19 disinformation are just two examples. Today, just four companies – Alphabet (Google and YouTube), Meta (Facebook, Instagram and WhatsApp), ByteDance (TikTok), and Twitter – dominate the social media space.

The concentration of this industry has raised numerous concerns and led many to call for breaking up some or all of these companies. In a world where a growing number of people get their news online, these firms have extraordinary, state-like power to shape social and political views on a wide range of issues.

Musk’s concerns therefore reflect the highly problematic issue of leaving it to a few private businesses to decide what content should be displayed online. This power could be used to effectively censor political ideas or to silence vulnerable groups.

Musk’s proposed solution – halting all content moderation except for illegal content – is, however, a tricky one and risks creating a more unsafe online environment. Ironically, it would not be good for business either, as moderation of harmful content is often driven by commercial considerations. Without it, social media platforms like Twitter, Facebook, Instagram, YouTube, or TikTok would quickly be overwhelmed by spam, pornography, hate speech and violent incitement, misinformation, and conspiracy theories, driving away both users and advertisers. That is what happened, for instance, with Parler, Gettr and, more recently, Truth Social.

Some form of moderation is therefore necessary not only to avoid a dangerous online environment but also to protect mainstream platforms’ business model – including that of a hypothetical Musk-owned Twitter.

A balance between free speech and content moderation – is transparency enough?
The real question is not whether to moderate, but how to ensure that moderation actually targets harmful content and is not abused to pursue the arbitrary agenda of a handful of social media companies. While the discussion around Twitter, Musk and freedom of speech has been extremely politicized, concrete solutions are needed, and fast.

Policymakers around the world are working on initiatives to tackle illegal – and, increasingly, harmful – content online, for instance with the Digital Services Act in the EU and the Online Safety Bill in the UK. The Digital Services Act focuses on illegal content but also contains provisions that could make the moderation of harmful content more transparent, making it harder to remove content arbitrarily. Transparency requirements also apply to recommender systems such as newsfeeds, which often favour polemic and controversial content, making platforms accountable when they promote such harmful material.

The UK’s Online Safety Bill, on the other hand, also covers harmful content such as abuse, harassment, and exposure to material encouraging self-harm or eating disorders. Companies will need to make clear in their terms and conditions what is and is not acceptable on their sites, and enforce those rules.

While transparency is one of the solutions put forward so far to ensure a safer online space, others are likely to be explored depending on the specific issue at hand. Policymakers are already considering targeted measures against particular kinds of harmful content, such as the EU Code of Practice on Disinformation. The debate is therefore likely to continue and to involve different categories of harmful content – from cyberbullying to online harassment, from disinformation to self-harm – with the possibility of new rules for online platforms to comply with.

New problems call for new solutions, which we will explore in our series of articles on the moderation of harmful content online, starting with disinformation. Stay tuned!