
In the latest in a series of explainers on specific online harms, Ofcom sets out what online service providers operating in the UK need to do to protect people from suicide and self-harm content.
Warning: This article contains distressing content relating to suicide and self-harm
A number of deaths in the UK have been linked with online material where detailed information is shared on methods of suicide and self-harm, or where suicidal and self-harm behaviours are actively encouraged. Adults and children may actively seek out this kind of content, with tragic results. But they may also encounter it accidentally or have it recommended to them by algorithms.
Ofcom’s research suggests that four per cent of UK internet users have seen online content promoting suicide in the last month, and that children are more likely to see it than adults.
Online service providers must protect UK users
Under the UK’s Online Safety Act, in-scope service providers have a duty to make their sites and apps safer by design and protect both children and adults from illegal and harmful content, including content that encourages or assists suicide and self-harm.
On 16 December, Ofcom published its first-edition codes of practice and guidance for illegal content. That means providers of online services – large and small – have until 16 March to assess the risk of UK users encountering illegal material on their sites and apps. After that, they will have to start implementing appropriate safety measures to reduce the risk of it appearing in the first place. They will also have to remove illegal content quickly when they become aware of it, including illegal forms of suicide and self-harm content.
In our codes, we have set out specific measures that ‘user-to-user’ services can take to protect adults and children from illegal content. Some of them apply to all providers, and others to certain types of providers, or to the providers of larger or riskier services. These measures include:
- having content moderation systems and processes that enable sites to take down illegal suicide and self-harm material swiftly when they become aware of it;
- making sure content moderation functions are appropriately resourced and individuals working in moderation are trained to identify – and where relevant, take down – content including illegal suicide and self-harm material;
- allowing users to report illegal suicide and self-harm material through reporting and complaints processes that are easy to find, access and use;
- when testing their algorithms, checking whether and how design changes affect the risk of illegal content – including illegal suicide content – being recommended to users; and
- setting clear and accessible terms and conditions explaining how users will be protected from illegal content, which includes suicide and self-harm material.
In addition, providers of search services should:
- take appropriate moderation action against illegal suicide or self-harm material when they become aware of it, which could include deprioritising its overall ranking or excluding it from search results; and
- provide crisis prevention information in response to search queries regarding suicide methods or suicide generally.
Additional protection for children
Our regulation will provide an additional layer of protection for children, given the vital importance of keeping them safe online from both harmful content and behaviour.
This is already a key area of focus in our illegal harms codes. For example, sites and forums with direct messaging functionality and a risk of grooming should implement safety defaults to ensure children can only be contacted by people to whom they are already connected.
In April, we will publish our decisions on additional protections for children relating to content that is legal but harmful to them. Providers of services likely to be accessed by children will have three months to assess the risks of under-18s being exposed to certain types of content designated in the Act as harmful to them. This includes content that promotes, encourages or provides instructions for suicide or self-harm.
Providers will then have to implement safety measures to mitigate those risks. Our proposals for ‘user-to-user’ services include recommending that services:
- design and operate recommender systems so that content likely to be suicide or self-harm content is excluded from children’s feeds;
- have in place content moderation systems and processes that ensure swift action is taken when they identify suicide or self-harm content to prevent children from seeing it – where services allow this kind of content, they should implement highly effective age assurance to secure this outcome;
- make sure their content moderation functions are adequately resourced and individuals working in moderation are trained to identify and action suicide or self-harm content in line with their internal content policies; and
- signpost children who report, post, share or search for suicide or self-harm material online to appropriate support.
In addition, search services should:
- take appropriate moderation action against suicide or self-harm content when they become aware of it, which could include deprioritising its overall ranking, as well as excluding it from child users’ search results via safe search settings; and
- provide children with crisis prevention information in response to search requests relating to suicide, self-harm and eating disorders.
Some of these apply to all providers, and others to certain types of providers, or to the providers of larger or riskier services.
We are already working on additional measures for a consultation in the coming months, to supplement those in our first-edition codes on illegal harms and children’s safety. For example, we are considering proposals on the use of automated detection measures to protect children from suicide and self-harm content, and on recommender systems to prevent adults and children from encountering illegal suicide content.
Evidence is key to our work
Continuing to build our evidence base in this area is one of our top priorities, and we have a well-established research programme that provides important insights and evidence to support our work. That includes hearing from people and organisations who specialise in supporting and protecting adults and children from suicide and self-harm.
We have also engaged directly with those with lived experience of online harms, as well as coroners conducting inquests. We will continue – and increase – this type of engagement to ensure such perspectives are incorporated into our work.
Taking action
We expect all providers to assess the risks associated with their service, and to start taking steps to protect their users from harmful content as soon as the duties take effect. We have already issued formal information requests to providers of a number of services that may present particular risks of harm from illegal content, setting them a deadline of 31 March by which to submit records of their illegal harms risk assessments to us.
Where it appears that a provider is not taking steps to protect its users from harmful content and there is a risk of serious harm to users – especially children – we won’t hesitate to take enforcement action. This can include issuing significant financial penalties, requiring providers to make specific changes, and – in the most serious cases – applying to the courts to block sites in the UK.
If you are struggling to cope, please call Samaritans for free on 116 123 (UK and the Republic of Ireland) or contact other sources of support, such as those listed on the NHS help for suicidal thoughts web page. Support is available 24 hours a day, every day of the year, providing a safe place for you, whoever you are and however you are feeling.