The sexual exploitation and abuse of children online is a persistent and growing threat with devastating consequences for those affected. Child sexual exploitation and abuse (CSEA) encompasses a range of behaviours, including the sharing of child sexual abuse material (CSAM), the online grooming of children (which can include coercing a child into sending sexual images of themselves), sexual extortion, and the arranging of in-person child sexual abuse.
In 2022, the US National Center for Missing and Exploited Children (NCMEC) received over 32 million reports of potential online child exploitation, with over 99.5% of these concerning suspected CSAM uploaded and shared across more than 200 online sites and providers. The NSPCC found that from 2017 to 2023, UK police forces recorded over 34,000 online grooming crimes against children across 150 different platforms. The risk is particularly high where offenders can abuse online features like anonymity, or create a false identity (for example, by lying about their age or gender) to manipulate a child. New risks are emerging as the way we interact online continues to evolve, including through extended reality, end-to-end encryption and generative AI.
Our work to address the CSEA risk to children
To tackle this threat, the Online Safety Act sets out safety duties so that, among other things, online services must carry out risk assessments to understand the likelihood and impact of CSEA appearing on their service. They must also take steps to mitigate the risks identified in their risk assessment, and to identify and remove illegal content where it appears. The higher the risk on a service, the more measures and safeguards it will need to put in place to keep its users safe from harm and to prevent it being used as a platform to groom and exploit children.
In our recently published Illegal Harms Consultation, we proposed codes of practice that services can adopt, which we think will make a meaningful difference in protecting children from CSEA. These include:
- Hash-matching technology, which automatically detects known CSAM images shared by users in their public content; a simplified sketch of this technique follows this list. (For the codes of practice, this wouldn’t apply to private or end-to-end encrypted content.)
- URL detection technology, which scans public posts to detect and remove illegal URLs that lead to material depicting the abuse of children (see the second sketch after this list).
- Preventing CSAM URLs from appearing in search engine results, and applying warning messages on search services when users search for content that explicitly relates to CSAM.
- Measures to tackle the online grooming of children, including safer default settings that make it harder for strangers to find and interact with children online.
- Supportive prompts and messages for child users at key moments in their online journey, such as when they turn off safer default settings or receive a message from another user for the first time, to empower them to make safe choices online.
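To make the hash-matching measure more concrete, here is a minimal, illustrative sketch of the technique. Everything in it is an assumption for the example: the hash value is a placeholder, and a plain SHA-256 exact match stands in for the perceptual hashes (such as PhotoDNA or PDQ) that production systems use so that resized or re-encoded copies of known images still match.

```python
import hashlib

# Placeholder hash set: in practice, services obtain hash lists of known
# CSAM from bodies such as NCMEC or the IWF. This value is made up.
KNOWN_CSAM_HASHES: set[str] = {
    "0" * 64,  # stand-in for a real 256-bit hash of a known image
}

def matches_known_hash(image_bytes: bytes) -> bool:
    """Return True if the upload exactly matches a known hash.

    SHA-256 keeps this sketch self-contained; real deployments use
    perceptual hashing so that altered copies of an image still match.
    """
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_CSAM_HASHES

def handle_public_upload(image_bytes: bytes) -> str:
    # Matching content is blocked and routed for human review and
    # reporting, rather than being published.
    if matches_known_hash(image_bytes):
        return "blocked_and_reported"
    return "published"
```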
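URL detection can be sketched in a similarly hedged way: scan the text of public posts for URLs, normalise them, and check them against a curated blocklist such as the IWF URL List. The blocklist entry, the regular expression and the function names below are placeholders for illustration only.

```python
import re
from urllib.parse import urlparse

# Placeholder blocklist: real services use curated lists such as the
# IWF URL List; this entry is invented for the example.
BLOCKED_URLS: set[str] = {"example.com/blocked-path"}

# Deliberately simple URL pattern; production scanners handle trailing
# punctuation, redirects and obfuscated links far more carefully.
URL_PATTERN = re.compile(r"https?://\S+")

def normalise(url: str) -> str:
    # Lowercase the host and drop trailing slashes so trivial variants
    # of the same URL still match the blocklist.
    parsed = urlparse(url)
    return f"{parsed.netloc.lower()}{parsed.path.rstrip('/')}"

def contains_blocked_url(post_text: str) -> bool:
    """Return True if any URL in a public post is on the blocklist."""
    return any(
        normalise(match) in BLOCKED_URLS
        for match in URL_PATTERN.findall(post_text)
    )
```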
Everybody is welcome to respond to the consultation on our proposals, including with additional supporting evidence for current or potential future measures. We value feedback from anyone affected by the regulation, and want to learn from the knowledge and experience of professionals, people with experience of the harm, and online services working on the complex issues in this sector. We are also working to engage children on the proposed measures, and are supporting relevant stakeholders in understanding and responding to the consultation.
We believe services should be doing their best to protect users, and in particular children, from the most severe forms of online harm. These proposed measures are a first step towards establishing a baseline level of protection, but we don’t want services to feel discouraged from going over and above it, and we know some already do. For other services, however, this might be the first time they have implemented measures to reduce harm, and we hope these proposals offer a clear set of first steps for protecting children from CSEA, consistent with our wider proposals for how illegal content is handled online.
We are always looking to develop our research and evidence base to strengthen and add to our current measures, and we plan to iterate our codes over time to improve the protections against CSEA. We are currently building our evidence base on a range of online CSEA issues, including ‘first-generation’ or ‘novel’ CSAM (images that have not previously been identified and hashed), additional interventions to disrupt the sharing of CSAM, and measures to strengthen our anti-grooming proposals. We will continue working collaboratively with stakeholders to make the biggest possible impact on the safety of children online.
Our programme of research and engagement with regulated services, especially small and medium-sized ones, helps us design new resources and tools that support services in protecting their users and complying with the new rules. You can take part, submit enquiries and sign up for email updates on our website.