Limiting Lies: The Need for Greater Regulation of the Tech Industry in Europe

Although Europe already has some of the world’s strictest policies governing the technology sector, the EU is considering new regulations aimed at ‘gatekeeper’ platforms, including Amazon, Facebook, Apple, Google, and Microsoft, that would force big tech companies to remove dangerous content, hate speech, and misinformation. These renewed efforts to curb the spread of hate speech and misinformation are prompted by concern over the recent growth of extremist groups, both within Europe and internationally, which are strengthened by their online communities.

Moreover, the increase in time spent online during the pandemic sparked a resurgence in efforts to combat far-right and extremist radicalisation via social media and the internet. The insurrection at the U.S. Capitol earlier this year, and the subsequent action taken by social media companies against those inciting violence, demonstrated the importance of content regulation and deplatforming in combating extremism. Content moderation by individual companies, however, does not go far enough in stemming the spread of extremist material online, which enables extremists to cultivate dangerous transnational relationships.

The new EU proposals, introduced in December of last year, would place significant pressure on tech companies, as they would apply across 27 countries and to 450 million people, and they could serve as a regulatory model for the rest of the world. The proposals include the Digital Services Act, which stipulates large fines for platforms such as Facebook and Twitter if they fail to prevent the spread of illegal content, including hate speech. Tech companies would also be held legally responsible for the content published on their platforms, forcing them to combat problematic posts such as fake news more aggressively.

In response to the recent rise in terrorist incidents, the European Commission established a new ‘Counter-Terrorism Agenda’ in December 2020. The Agenda directs EU member states to adopt a regulation, first proposed in 2018, mandating that online platforms operating in member states remove terrorist content within one hour of publication. It also gives member states the ability to sanction platforms for non-compliance and provides mechanisms for reinstating wrongly removed content. These strategies, however, present significant challenges, such as the difficulty of identifying grey-zone content.

For example, extremist groups publish vast quantities of online content, much of which does not explicitly incite violence. These actors often use humour and irony to maintain plausible deniability, even though such content may still contribute to radicalisation. Determining whether content should be categorised as extremist is therefore a challenge, particularly as removing accounts and content often fuels extremist narratives about censorship.

The impact of fake news and online extremism is not restricted to inciting terrorism: false information about the Covid-19 pandemic circulating on social media and the internet is significantly undermining efforts to fight the virus. A wide range of actors spread this fake news, from proxies of authoritarian states seeking to weaken democracies, including the EU, to extremist groups capitalising on the crisis to recruit followers. As a result, the WHO has identified an associated ‘infodemic’ that makes it difficult for people to access reliable information.

The infodemic is compounded by collective societal anxiety surrounding health, wellbeing, and the economy, which renders individuals more susceptible to misinformation. It is also worsened by the drastic rise in internet usage during the pandemic, particularly among young people, which increases exposure to fake news through social media and gaming platforms. False information about the coronavirus takes many forms, such as conspiracy theories about vaccines, online scams exploiting people’s vulnerability, and hoaxes promoting fake cures. All of these target the individual insecurity bred by the pandemic while endangering society by sabotaging governmental efforts to address the virus.

Enacting stricter regulations on tech companies is therefore especially important given the current climate and the platforms’ potential role in ending the pandemic. Preventing the spread of misinformation about Covid-19 and related issues, such as vaccines, is integral to halting the real-life spread of the virus, and it depends on aggressive content moderation.

To stem the expansion of extremism in Europe, the EU must pass these new regulations, and the Digital Services Act in particular, as soon as possible. The EU should also adopt a broader strategy to combat the full extent of fake news and online extremism: in addition to rapid content removal, extremist and grey-zone content should be marginalised by demonetising it, disabling comments, and excluding it from algorithm-based social media recommender systems. Furthermore, the EU should prioritise collaborating with the smaller fringe platforms that typically escape strict regulation, such as gaming platforms, to prevent those services from being used to disseminate extremist content. Ultimately, the EU must seize this opportunity to set a new precedent and become a leader in regulating the tech industry.

By Catherine Burke
Catherine is a working group member in the European Affairs Policy Centre. Her research interests include international law, human rights, immigration, decolonization, and refugee rights.

Image by: Today Testing

Bibliography
Bentzen, Naja, and Thomas Smith. “The evolving consequences of the coronavirus ‘infodemic.’” European Parliamentary Research Service, September 2020.

Birnbaum, Michael. “E.U. proposes sweeping new rules for online business that could force fundamental changes for digital giants.” Washington Post, December 15, 2020.

European Commission. “A Counter-Terrorism Agenda for the EU and a stronger mandate for Europol: Questions and Answers.” December 9, 2020.

Satariano, Adam. “Big Fines and Strict Rules Unveiled Against ‘Big Tech’ in Europe.” New York Times, December 15, 2020.

Wallner, Claudia. “Against the Clock: Can the EU’s New Strategy for Terrorist Content Removal Work?” RUSI, January 26, 2021.