As the regulator for online safety, Ofcom is currently consulting on the codes of practice which set out how online services should meet their new duties under the Online Safety Act to protect people from illegal content.
This consultation has three weeks left to run – it ends at 5pm on 23 February – and we’ve had some queries about the proposed measures for services in scope.
Services of all sizes
We estimate the new duties cover more than 100,000 services, which range in size.
And, when looking at how these services will be regulated, we consider both the size of a service and the risk of users encountering illegal content on it.
So, a one-size-fits-all approach would not be appropriate. Instead, our codes of practice take a proportionate approach and recommend different measures for different types of services.
There is a core of measures which we recommend for all services. This includes all services having an easy-to-use complaints process and taking appropriate action in response to complaints they receive, as well as having clear, accessible terms and conditions.
More stringent measures for larger and riskier services
Given our commitment to proportionality, we’re proposing that more demanding measures should apply to services that are both large and risky. The benefits of a large service adopting a measure tend to be greater because more users will be protected by it. These services are also more likely to be able to implement the most demanding measures.
There are different ways of defining ‘large’. We propose to focus on the number of users, and to define a large service as one which has more than seven million monthly UK users – roughly 10% of the UK population. This is similar to the approach adopted by the EU in the Digital Services Act for defining ‘very large online platforms’. An example of a proposed measure that only applies to services that are both large and risky is giving users the ability to block other users.
We also propose some measures for large services even if they have not identified any significant risks. For example, we recommend that the boards of all large services review the service's risk management activities in relation to illegal harms annually. Any failure in assessing those risks could affect a large number of users, so these risk management processes must be sufficient. The likely complexity of large services means it will be important to review the approach to risk management and make sure risks have been examined properly.
We have also been asked how our proposals deal with small services which pose a high risk. We know that small services can sometimes be risky. Bad actors use both large and small services to spread illegal content. For example, terrorists often use small services for more covert activities such as recruitment, planning and fundraising. Grooming occurs on small services as well as large services, and child sexual abuse material is spread on services of all sizes.
So, our draft codes propose significant expectations of services that are small but high risk. For example, there is a series of measures in our codes of practice designed to help combat grooming. These apply to any service where there is a high risk of grooming taking place, regardless of its size. Similarly, some smaller services where there is a high risk of child sexual abuse material being shared should use an automated technology called ‘hash-matching’ to detect known examples of such images and remove them.
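To illustrate the basic idea behind hash-matching: each uploaded file is reduced to a short fingerprint (a hash), which is checked against a list of fingerprints of known illegal images. The sketch below is a minimal illustration only, using an exact cryptographic hash (SHA-256) and hypothetical function names and placeholder hash values; real deployments typically use perceptual hashing, so that resized or re-encoded copies of a known image still match, and the hash list itself would be supplied by a trusted external body rather than compiled by the service.

```python
import hashlib

# Hypothetical hash list for illustration only. In practice this would be a
# perceptual-hash database provided by a trusted external organisation.
KNOWN_HASHES = {
    "placeholder_hash_value_1",
    "placeholder_hash_value_2",
}

def file_fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest acting as the file's fingerprint."""
    return hashlib.sha256(data).hexdigest()

def is_known_illegal_content(data: bytes) -> bool:
    """Check an uploaded file's fingerprint against the known-hash list."""
    return file_fingerprint(data) in KNOWN_HASHES

def handle_upload(data: bytes) -> str:
    """Illustrative moderation flow: block and report a match, else publish."""
    if is_known_illegal_content(data):
        return "removed_and_reported"
    return "published"
```

The point of the approach is that the service never needs to hold the illegal images themselves, only their fingerprints, and matching against a fingerprint list is fast enough to run on every upload.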
How is ‘risk’ determined?
The level of risk from a service will be established by the service’s individual risk assessment. All services in scope – large and small – are required by the Online Safety Act to undertake a ‘suitable and sufficient’ risk assessment to understand the risk to users.
We recognise that services might understate the risks they pose. To mitigate this, we will produce guidance explaining how services should carry out their risk assessment, and we are seeking views on this guidance as part of the consultation. This includes guidance on the characteristics that would make a service high risk for a particular harm (for example, if users who are children can easily be identified and contacted by strangers, this increases the risk of grooming). We can take enforcement action if a service’s risk assessment is not suitable and sufficient. In addition, we can require a service to take steps to mitigate any risks that we identify when we review its risk assessment, even if the service did not identify those risks itself in its own assessment.
It’s important to remember that this will be the first version of the illegal harms codes. We expect to build on it over time. This could include recommending measures to a wider range of services, regardless of their size. In this first version of the codes, we have recommended some measures only for large services where we do not yet know whether it is proportionate to extend them to smaller services. This may be because of uncertainty about whether a measure is effective enough to reduce harm materially on smaller services, or whether the costs to these businesses, or the inconvenience to their users, might be disproportionate to the harm the measure addresses. As our understanding develops, we may extend measures in this way.
This is why it’s vital that we receive all relevant evidence in response to our illegal harms consultation. This will help us to build our evidence base to make further iterations of the codes.
A reminder that the deadline for our consultation is 5pm on Friday, 23 February.