ARC Research

Freedom of Speech and Big Tech: Liability and Regulation

Summary of Research Paper

In this paper:

  • The War on Free Speech

  • Giving Users Control of Moderation

  • A Bill of Rights for Online Freedom of Speech

The Right to Speak Freely Online

The advent of the internet gave us a double-edged sword: the greatest opportunity for freedom of speech and information ever known to humanity, but also the greatest danger of mass censorship ever seen.

Thirty years on, the world has largely experienced the former. However, in recent years, the tide has been turning as governments and tech companies have become increasingly fearful of what the internet has unleashed. In particular, regulation of speech on social media platforms such as Facebook, X (formerly Twitter), Instagram, and YouTube has increased in an effort to prevent “hate speech”, “misinformation”, and online “harm”.

Pieces of pending legislation across the West represent a fundamental shift in our approach to freedom of speech and censorship. The proposed laws explicitly provide a basis for the policing of legal content, and in some cases, the deplatforming or even prosecution of its producers. Beneath this shift is an overt decision to eradicate speech deemed to have the potential to “escalate” into something dangerous—essentially prosecuting in anticipation of a crime, rather than after its occurrence. The message is clear: potential “harm” from words alone is beginning to take precedence over free speech—neglecting the foundational importance of the latter to our humanity.

This shift has also profoundly altered the power of the state and of Big Tech companies in society. If both are able to decide which views are deemed “acceptable” and have the power to censor legal expressions of speech and opinion, their ability to shape the thought and political freedom of citizens will threaten the liberal, democratic norms we have come to take for granted.

Citizens should have the right to speak freely—within the bounds of the law—whether in person or online. Therefore, it is imperative that we find another way, and halt the advance of government and tech regulation of our speech.

 

Network Effects and User Control

There is a clear path forward which protects freedom of speech, allows users to moderate what they engage with, and limits the power of the state and of tech companies to impose their own agendas. It is, in essence, very simple: if we believe in personal freedom with responsibility, then we must return content moderation to the hands of users.

Social media platforms such as Facebook, X, Instagram, and YouTube could offer users a wide range of filters governing the content they would, or would not, like to see. Users could then evaluate for themselves which of these filters to deploy, if any. These simple steps would allow individuals to curate the content they wish to engage with, without infringing on anyone else’s right to free speech.
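
As a purely illustrative sketch of how such user-selected filters might work (the interface, filter names, and labels below are invented for this example and are not drawn from the paper), a platform could publish its filter rules as transparent predicates that each user opts into:

    // Hypothetical illustration in TypeScript: each user chooses which filters
    // apply to their own feed; choosing none leaves the feed unfiltered.
    interface Post {
      id: string;
      author: string;
      text: string;
      labels: string[]; // transparent, platform-published classifications
    }

    interface ContentFilter {
      name: string;                    // shown to the user when opting in
      description: string;             // plain-language account of what it hides
      hides: (post: Post) => boolean;  // the transparent rule itself
    }

    // A personal mute list controlled entirely by the individual user.
    const mutedAuthors = new Set<string>(["example-account"]);

    // Example filters a platform might offer.
    const availableFilters: ContentFilter[] = [
      {
        name: "Hide graphic violence",
        description: "Hides posts the platform has labelled as graphic violence.",
        hides: (post) => post.labels.includes("graphic-violence"),
      },
      {
        name: "Hide muted accounts",
        description: "Hides posts from authors on my personal mute list.",
        hides: (post) => mutedAuthors.has(post.author),
      },
    ];

    // Content disappears only from the feed of the user who opted in;
    // nothing is removed for anyone else.
    function applyUserFilters(feed: Post[], selected: ContentFilter[]): Post[] {
      return feed.filter((post) => !selected.some((f) => f.hides(post)));
    }

The point of the sketch is simply that moderation becomes a per-user preference rather than a platform-wide act of removal.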

User moderation also provides tech companies with new metrics with which to target advertisements and should increase user satisfaction with their respective platforms. Governments can also return to the subsidiary role of fostering an environment in which people flourish and can freely exchange ideas.

These proposals turn on the principle of regulating social media platforms according to their “network effects”, which are generated when a product or service delivers greater economic and social value the more people use it. Many network effects, including those realised by the internet and by social media platforms, are public goods—that is, products or services that are non-excludable, which everyone benefits from and enjoys access to. As social media platforms are indispensable for communication, the framework that regulates online discourse must take into account the way in which these private platforms deliver a public good in the form of network effects.
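
One common stylised way of expressing this property (an illustration, not a formula from the paper) is Metcalfe’s law, which takes the value of a network to grow with the number of possible connections between its users:

    V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^{2}}{2}

where n is the number of users and V(n) the aggregate value of the network. On this rough model, doubling the user base roughly quadruples the value available to the community as a whole, which is why each new user confers a benefit on every existing user.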

 

A Bill of Rights for Online Freedom of Speech

This paper provides a digital “Bill of Rights”, outlining the key principles needed to safeguard freedom of speech online:

  1. Users decide their own content moderation by choosing either “no filter” or the content moderation filters offered by platforms.

  2. All content moderation must occur through filters whose classifications are transparent to users.

  3. No secret content moderation.

  4. Companies must keep a transparent log of content moderation requests and decisions (see the sketch after this list).

  5. Users own their data and can leave the platform with it.

  6. No deplatforming for legal content.

  7. A private right of action is provided for users who believe companies are violating these legal provisions.
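
By way of illustration of principles 2, 3, and 4 (the field names below are invented for this sketch and are not prescribed by the paper), a transparent moderation log could be as simple as an append-only, publicly inspectable list of records:

    // Hypothetical shape of one entry in a transparent, append-only moderation log.
    interface ModerationLogEntry {
      entryId: string;        // stable identifier for the entry
      timestamp: string;      // ISO 8601 time of the decision
      contentId: string;      // the post or account the decision concerns
      requestedBy: string;    // e.g. "user report", "government request", "platform policy team"
      filterApplied: string;  // which published filter classification was invoked, if any
      decision: "no-action" | "labelled" | "filtered" | "removed-illegal";
      rationale: string;      // plain-language reason, published for users to inspect
    }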

 

If such a charter were embraced, the internet could once again fulfil its potential to become the democratiser of ideas, speech, and information of our generation, while giving individuals the freedom to choose the content they engage with, free from government or tech imposition.