The use of online services to incite and radicalise vulnerable people, including children, towards hate and violence poses a major risk. It can have horrific consequences and in the severest of cases can lead to mass murder, often targeting minorities and protected groups.
On 15 March 2019, a far-right terrorist killed 51 Muslim worshippers at two mosques in Christchurch, New Zealand. The tragic attack was livestreamed for 17 minutes and was viewed over 4,000 times. The UK Government has recently taken steps to proscribe a number of far-right groups as terrorist organisations, highlighting the increasing threat from far-right terrorism. ISIS propaganda online was also instrumental in convincing Britons to leave the UK for Raqqa, Syria, as well as inspiring terrorist attacks in several countries.
Our report into the racially-motivated Buffalo attack in May 2022 in the US further emphasised the global and multiplatform nature of the risk to UK internet users. It showed that terrorist and hate groups form part of borderless networks and jump between online services, inciting others, mobilising or planning attacks.
The Buffalo attacker said he was radicalised on the imageboard 4chan, stored his diary and manifesto on Discord, and livestreamed the attack on Twitch. Even though the stream of the attack lasted only two minutes, copies of the footage were disseminated globally across multiple platforms, exposing users of online services in the UK to trauma and to an increased risk of hate, violence and terrorism being incited against them.
Evidence is key to our work in this area
Building our evidence base and scaling our teams in this important harms area has been, and will continue to be, vital for us. We have commissioned two reports from the Institute of Strategic Dialogue (ISD) to build our understanding of user experiences of online terrorism, incitement to violence and hate. We have also used formal information-gathering powers in respect of the UK's video-sharing platforms (VSPs) to put the systems and processes of services such as TikTok, BitChute, Twitch, Vimeo and Snapchat under the microscope. A recent report into VSPs' terms and conditions, including those covering terrorism and incitement to hatred and violence, concluded that many adults would struggle to understand them, and children even less so.
In November 2023, we published draft proposals setting out the steps we expect relevant online platforms to take to assess and mitigate the risk of illegal content such as hate and terrorism. We will develop and elaborate on these proposals over time. They include the following:
- Set clear and accessible terms and conditions that explain how users will be protected from illegal terrorist and hateful content.
- Assess the risk of terrorist and hateful content being disseminated, and take steps to mitigate the risks identified.
- Design content moderation systems to swiftly take down illegal terrorist, hateful and violence-inciting content. Prioritisation policies for content moderation should factor in the severity of content and its potential to go viral.
- Adequately resource and train content moderation teams to deal with hateful and terrorist content, including meeting increases in demand caused by external events such as crises and conflicts.
- When services make changes to their recommender systems, they should test those changes to assess their impact on the dissemination of illegal hateful and terrorist content.
- User reporting and complaints processes for illegal terrorist and hateful content should exist on all services and be easy to find, access and use.
- Accounts should be removed if there are reasonable grounds to infer they are run by or on behalf of a terrorist organisation proscribed by the UK Government.
- For search services, content moderation systems should result in illegal terrorist or hateful content being de-indexed or de-prioritised.
Responding to emerging issues
Following the start of the crisis in Israel and Gaza in October 2023, we reached out to several key civil society and independent research organisations, as well as international regulatory counterparts through the Global Online Safety Regulators Network. We sought to understand the scale of illegal and harmful content on online services relating to the crisis.
They shared their concerns about reductions in trust and safety teams and the knock-on effect on online services' ability to cope with the scale of harmful content being posted, in particular spikes in anti-Muslim and antisemitic hatred. Some expressed worries about whether terms and conditions were clear and accessible, especially in relation to illegal terrorist and hateful content, and whether they were being swiftly enforced. We also wrote to regulated video-sharing platforms about the increased risk of their users encountering harmful content stemming from the crisis in Israel and Gaza, and the need to protect users from such content.
Developing regulation in respect of illegal hateful and terrorist content remains an ongoing priority for us. This will be particularly challenging as emerging technologies, such as generative AI, evolve at pace, and we will strive to ensure our proposals remain effective and current. We will also be stepping up our engagement with specific regulated services to better understand, assess and improve their systems for tackling illegal hate and terrorism.
Lastly, international engagement will be a key area of work for us, particularly with other regulators and cross-industry initiatives such as the Christchurch Call, Tech Against Terrorism, the Global Internet Forum to Counter Terrorism and the EU Internet Forum. We are keen to identify opportunities for collaboration and partnership to better protect UK users from a fast-changing, global and multi-platform harms area.