
Time for tech firms to act: UK online safety regulation comes into force

Published: 16 December 2024
  • First codes of practice and guidance published, firing the starting gun on new duties for tech firms
  • Providers have three months to complete illegal harms risk assessments
  • Ofcom sets out more than 40 safety measures for platforms to introduce from March

People in the UK will be better protected from illegal harms online, as tech firms are now legally required to start taking action to tackle criminal activity on their platforms, and make them safer by design. 

Ofcom has today, four months ahead of the statutory deadline[1], published its first-edition codes of practice and guidance on tackling illegal harms – such as terror, hate, fraud, child sexual abuse and assisting or encouraging suicide[2] – under the UK’s Online Safety Act.

The Act places new safety duties on social media firms, search engines, messaging, gaming and dating apps, and pornography and file-sharing sites.[3] Before we can enforce these duties, we are required to produce codes of practice and industry guidance to help firms to comply, following a period of public consultation.

Bold, evidence-based regulation

We have consulted carefully and widely to inform our final decisions, listening to civil society, charities and campaigners, parents and children, the tech industry, and expert bodies and law enforcement agencies, with over 200 responses submitted to our consultation.

As an evidence-based regulator, every response has been carefully considered, alongside cutting-edge research and analysis, and we have strengthened some areas of the codes since our initial consultation. The result is a set of measures – many of which are not currently being used by the largest and riskiest platforms – that will significantly improve safety for all users, especially children.  

What regulation will deliver

Today’s illegal harms codes and guidance mark a major milestone in creating a safer life online, firing the starting gun on the first set of duties for tech companies. Every site and app in scope of the new laws has from today until 16 March 2025 to complete an assessment to understand the risks illegal content poses to children and adults on their platform.

Subject to our codes completing the Parliamentary process by this date, from 17 March 2025, sites and apps will then need to start implementing safety measures to mitigate those risks, and our codes set out measures they can take. Some of these measures apply to all sites and apps, and others to larger or riskier platforms. The most important changes we expect our codes and guidance to deliver include:

  • Senior accountability for safety. To ensure strict accountability, each provider should name a senior person accountable to their most senior governance body for compliance with their illegal content, reporting and complaints duties. 
  • Better moderation, easier reporting and built-in safety tests. Tech firms will need to make sure their moderation teams are appropriately resourced and trained and are set robust performance targets, so they can quickly remove illegal material, such as illegal suicide content, when they become aware of it. Reporting and complaints functions will be easier to find and use, with appropriate action taken in response. Relevant providers will also need to improve the testing of their algorithms to make illegal content harder to disseminate.
  • Protecting children from sexual abuse and exploitation online. While developing our codes and guidance, we heard from thousands of children and parents about their online experiences, as well as professionals who work with them. New research, published today, also highlights children’s experiences of sexualised messages online[4], as well as teenage children’s views on our proposed safety measures aimed at preventing adult predators from grooming and sexually abusing children.[5] Many young people we spoke to felt interactions with strangers, including adults or users perceived to be adults, are currently an inevitable part of being online, and they described becoming ‘desensitised’ to receiving sexualised messages.

Taking these unique insights into account, our final measures are explicitly designed to tackle pathways to online grooming. This will mean that, by default, on platforms where users connect with each other, children’s profiles and locations – as well as friends and connections – should not be visible to other users, and non-connected accounts should not be able to send them direct messages. Children should also receive information to help them make informed decisions around the risks of sharing personal information, and they should not appear in lists of people users might wish to add to their network.
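
As a purely illustrative aid, and not a measure drawn from the codes themselves, the defaults described above can be pictured as a set of child-account settings a platform might apply. Every name in the sketch below is a hypothetical example, not part of any Ofcom code or real platform API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the child-account defaults described in the text.
# The type and field names are illustrative assumptions only.

@dataclass
class ChildAccountDefaults:
    profile_visible_to_non_connections: bool = False    # profile hidden from other users by default
    location_visible_to_non_connections: bool = False    # location hidden from other users by default
    connections_list_visible: bool = False               # friends/connections list not visible
    accepts_dms_from_non_connections: bool = False       # non-connected accounts cannot send direct messages
    appears_in_connection_suggestions: bool = False      # child not suggested in "people you may know" lists
    show_info_prompts_on_sharing: bool = True            # prompts to help children weigh sharing personal information
```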

Online safety infographic

Our codes also expect high-risk providers to use automated tools called hash-matching and URL detection to detect child sexual abuse material (CSAM). These tools allow platforms to identify large volumes of illegal content more quickly, and are critical in disrupting offenders and preventing the spread of this seriously harmful content. In response to feedback, we have expanded the scope of our CSAM hash-matching measure to capture smaller file hosting and file storage services, which are at particularly high risk of being used to distribute CSAM.
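
For readers unfamiliar with the technique, the sketch below shows the basic idea of hash-matching in a few lines of Python. It is a simplified illustration only: production systems typically use perceptual hashes that survive resizing and re-encoding, together with hash lists curated by specialist bodies, neither of which is shown here, and all function and variable names are hypothetical.

```python
import hashlib

# Illustrative hash-matching sketch: an uploaded file's digest is compared
# against a list of hashes of known illegal material supplied by a vetted
# external source. The list below is a placeholder (assumption).
KNOWN_ILLEGAL_HASHES: set[str] = set()

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of the uploaded content."""
    return hashlib.sha256(data).hexdigest()

def matches_known_material(upload: bytes) -> bool:
    """True if the upload's hash appears in the known-material list."""
    return sha256_of(upload) in KNOWN_ILLEGAL_HASHES

def handle_upload(upload: bytes) -> str:
    # In practice a match would be blocked and escalated through the
    # platform's trust-and-safety and reporting processes, not just rejected.
    if matches_known_material(upload):
        return "blocked-and-reported"
    return "accepted"
```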

  • Protecting women and girls. Women and girls are disproportionately affected by online harms. Under our measures, users will be able to block and mute others who are harassing or stalking them. Sites and apps must also take down non-consensual intimate images (or “revenge porn”) when they become aware of them. Following feedback to our consultation, we have also provided specific guidance on how providers can identify and remove posts by organised criminals who are coercing women into prostitution against their will. Similarly, we have strengthened our guidance to make it easier for platforms to identify illegal intimate image abuse and cyberflashing.
  • Identifying fraud. Sites and apps are expected to establish a dedicated reporting channel for organisations with fraud expertise, allowing them to flag known scams to platforms in real time so that action can be taken. In response to feedback, we have expanded the list of trusted flaggers.
  • Removal of terrorist accounts. It is very likely that posts generated, shared, or uploaded via accounts operated on behalf of terrorist organisations proscribed by the UK government will amount to an offence. We expect sites and apps to remove users and accounts that fall into this category to combat the spread of terrorist content.

Ready to use full extent of our enforcement powers

We have already been speaking to many tech firms – including some of the largest platforms as well as smaller ones – about what they do now and what they will need to do next year.

While we will offer support to providers to help them to comply with these new duties, we are gearing up to take early enforcement action against any platforms that ultimately fall short.

We have the power to fine companies up to £18m or 10% of their qualifying worldwide revenue – whichever is greater – and in very serious cases we can apply for a court order to block a site in the UK.

Dame Melanie Dawes, Ofcom’s Chief Executive, said:

For too long, sites and apps have been unregulated, unaccountable and unwilling to prioritise people’s safety over profits. That changes from today.

The safety spotlight is now firmly on tech firms and it’s time for them to act. We’ll be watching the industry closely to ensure firms match up to the strict safety standards set for them under our first codes and guidance, with further requirements to follow swiftly in the first half of next year. 

Those that come up short can expect Ofcom to use the full extent of our enforcement powers against them.

This is just the beginning

This first set of codes and guidance, which sets up the enforceable regime, is a firm foundation on which to build. In light of the helpful responses we received to our consultation, we are already working towards an additional consultation on further codes measures in Spring 2025. This will include proposals in the following areas:

  • blocking the accounts of those found to have shared CSAM;
  • use of AI to tackle illegal harms, including CSAM;
  • use of hash-matching to prevent the sharing of non-consensual intimate imagery and terrorist content; and
  • crisis response protocols for emergency events (such as last summer’s riots).

And today’s codes and guidance are part of a much wider package of protections – 2025 will be a year of change, with more consultations and duties coming into force, including:

  • January 2025: final age assurance guidance for publishers of pornographic material, and children’s access assessments;
  • February 2025: draft guidance on protecting women and girls; and
  • April 2025: additional protections for children from harmful content, including material promoting suicide, self-harm and eating disorders, as well as cyberbullying.

Technology Notices consultation

The Act also enables Ofcom, where we decide it is necessary and proportionate, to make a provider use (or in some cases develop) a specific technology to tackle child sexual abuse or terrorism content on their sites and apps. We are consulting today on parts of the framework that will underpin this power.

Any technology we require a provider to use will need to be accredited – either by Ofcom or someone appointed by us – against minimum standards of accuracy set by Government, after advice from Ofcom. 

We are consulting on what these standards should be, to help inform our advice to Government. We are also consulting on our draft guidance about how we propose to use this power, including the factors we would consider and the procedure we will follow. The deadline for responses is 10 March 2025.

END

NOTES TO EDITORS

  1. UK Parliament set Ofcom a deadline of 18 months from the Online Safety Act becoming law, on 26 October 2023, to finalise its illegal harms and children’s safety codes of practice and guidance.
  2. The Online Safety Act lists over 130 ‘priority offences’, and tech firms must assess and mitigate the risk of these occurring on their platforms. The priority offences can be split into the following categories:
    • Terrorism
    • Harassment, stalking, threats and abuse offences
    • Coercive and controlling behaviour
    • Hate offences
    • Intimate image abuse
    • Extreme pornography
    • Child sexual exploitation and abuse
    • Sexual exploitation of adults
    • Unlawful immigration
    • Human trafficking
    • Fraud and financial offences
    • Proceeds of crime
    • Assisting or encouraging suicide
    • Drugs and psychoactive substances
    • Weapons offences (knives, firearms, and other weapons)
    • Foreign interference
    • Animal welfare
  3. Information on which types of platforms are in scope of the Act can be found here.
  4. Research was conducted by Ipsos UK between June 2023 and March 2024 and consisted of: 11 in-depth interviews with children and young adults (aged 14-24) with experience of sexualised messages online; 1 interview with parents of a child who had experienced online grooming; and 9 in-depth interviews with professionals working with children and young adults who have experienced receiving these messages online.
  5. We commissioned Praesidio Safeguarding to run deliberative workshops in schools with 77 children aged 13-17.