Who Has The Power To Regulate Extremist Content On The Internet?

25.09.2019

Photo: Addressing internet regulation - Credit: Getty Images

Earlier this month, the Department for Digital, Culture, Media and Sport (more commonly known by its acronym, DCMS) in the United Kingdom released a new ‘online harms’ white paper to regulate social media, search, messaging, and file-sharing platforms. Among other things, the proposal called for the creation of an independent regulator to police harmful content online, with powers to impose fines and hold individual executives to account if problematic material is not removed within a specified time period.

Vague international definitions of terrorism and extremism have partly contributed to the struggle technology companies face in moderating online content that is seen as extremist. As a result, multiple stakeholders have had to balance the power to police content online in order to protect citizens against upholding the tolerance inherent in liberal societies that allows citizens to exercise their right to free expression. After all, decisions around policing for extremism have important implications for citizen privacy, freedom of speech, and the ways in which governments and technology companies interact to define a term such as ‘extremism’ and navigate the boundaries of their power.

In the United Kingdom, the revised Prevent Duty Guidance defines radicalization as the process by which a person comes to support terrorism and the extremist ideologies associated with terrorist groups. However, the terms ‘extremism’ and ‘terrorism’ have often been used interchangeably in government policy, and publicly available documents outlining the difference between these terms, and the relationship between them, are lacking. The UK Government’s Channel Duty Guidance, for example, defines extremism as the “vocal or active opposition to fundamental British values, including democracy, the rule of law, individual liberty and mutual respect and tolerance of different faiths and beliefs. We also include in our definition of extremism calls for deaths of members of our armed forces, whether in this country or overseas”. This definition has met its fair share of criticism for being unclear and difficult to apply, which led to the creation of a Counter Extremism Commission in early 2018 tasked with proposing a new definition of extremism for the United Kingdom.

When it comes to defining and moderating extremist content, the online space is an important case study for examining how power is distributed between governments and technology companies. Some, such as Huszti-Orban, have argued that states are effectively ‘outsourcing certain law enforcement-linked tasks to private outlets, especially in the counter-terrorism context’, and social media companies have increasingly been described as the ‘keepers of public discourse’. While extremist content can fall within the broader category of hate speech, it usually requires a distinct approach: such content dehumanizes groups seen as ‘others’, asserts their inferiority, or calls for their exclusion and/or segregation, all of which are often linked to violence. Given the security threats that can stem from extremist activity, decisions must also be made faster than for hate speech alone.

As such, technology companies have exercised and distributed their power in new ways. The first is by going beyond what is legally required by governments and forging ‘their own space of responsibility’. This is certainly the case with Facebook, which has created its own definition of terrorism, and Google, which has defined extremism on its own terms. These definitions are perhaps a reaction to the lack of governmental clarity around the terms. Moreover, because these companies operate global platforms, national laws on extremism and terrorism make the issue more contentious still, as definitions vary from state to state.

The second way in which technology companies have exercised their power is through the use of automated algorithms: 98% of videos removed from YouTube for violent extremism are flagged by machine-learning algorithms before they can be seen by users. Bulk removal by artificial intelligence presents several ethical problems, including the removal of content that documents human rights violations, as automated programs can tag footage of torture as graphic or violent content. Moreover, technology companies have so far made no effort to share information on how they understand such patterns of behavior or decide what material should remain and what should be removed. There is, therefore, a lack of accountability on the part of those who exercise power over citizens by controlling their right to expression and their consumption of information.

The third, and crucial, way in which technology companies have exercised their power to police extremist content online is by sharing that power with citizens. Through citizen coproduction, the citizen performs the role of a partner rather than a customer in the recognition and removal of problematic content. Including citizens in the process of content removal has allowed technology companies to share responsibility, but also to involve citizens in defining what ‘extremism’ means. Crawford and Gillespie (2014), for example, have examined the ‘flagging’ of offensive content as a way for users and platforms to negotiate contentious public issues. This argument rests on the assumption that power is shared between technology companies and citizens, and that the visibility of the decision-making process is in itself a way to control users of platforms.

However, one could argue that citizen coproduction can also be a way for technology companies to increase their power over citizens. Foucault, for example, often discussed the invisibility of power: a move away from keeping the subjects of disciplinary techniques visible – whether through schools with glass walls and doors, or prisoners punished in town squares – towards the idea of biopower, an increase in social control through self-discipline. When discussing the online space and extremism, one can therefore ask whether the power to police is something that institutions such as technology companies and governments possess and exercise to oppress individuals, or whether, in line with Foucault’s conception of ‘biopower’, individuals have come to regulate themselves through self-disciplinary practices that subjugate them. As such, individuals can “voluntarily control themselves by self-imposing conformity to cultural norms through self-surveillance and self-disciplinary practices”.

In this way, the constant self-disciplining of speech online, driven by vague conceptions of what constitutes ‘extremism’, has implications for expression and privacy. It also raises important questions about who holds the power to police: is it governments, technology companies, or citizens themselves? How do the three interact, and what role does the conceptualization of ‘extremism’ play in formalizing this power in practice through policy?

Author: Nikita Malik, Director of the Centre on Radicalisation and Terrorism (CRT) at the Henry Jackson Society, where she serves as a research expert in countering violent extremism, terrorism, and hate-based violence. Her work has been regularly featured in the media, and her findings and policy recommendations have been discussed in the House of Commons, the House of Lords, the European Parliament, the US State Department, the United Nations, and other key government departments across the world.

Source: Link