As Ofcom prepares to take on new powers to protect online users from harm, Anna-Sophie Harling, Declan Henesy and Eleanor Simmance from our online safety team discuss the power of transparency reporting.
Their full paper on this topic has been published in the Journal of Online Trust and Safety.
Online services have not faced comprehensive regulatory oversight of their trust and safety practices, leaving the public in the dark about how platforms make decisions and design their products, and about how those choices affect people. The UK’s Online Safety Bill, which is currently making its way through Parliament, will give Ofcom new regulatory tools, including mandatory transparency reporting.
Many online platforms already publish voluntary transparency reports, with companies choosing how, what and when they report. These reports provide only a partial account of what’s happening inside companies and across the platforms they operate.
Under the Online Safety Bill, Ofcom will be required to issue transparency notices to a subset of in-scope platforms. These notices may be tailored to each service, specifying the information and data a platform must publish, the methodology to be used, and the format in which the information is gathered and published. Ofcom will also be obligated to publish its own transparency report each year, based on the information in platforms’ reports.
We believe there is a benefit in rethinking the approach to transparency reporting. The publication of key information can help drive change in regulated services by exposing good and bad practice and ensuring the industry learns from both. Revelations about online platforms failing to prioritise user safety can have immediate impacts on user numbers, advertising spend, and share prices. Targeted transparency requirements will therefore be a major tool for driving behaviour change.
What should we measure?
We’re thinking carefully about what information we want platforms to publish in their transparency reports. We may require them to publish metrics around the prevalence and dissemination of illegal or harmful content, and the number of users who have encountered this content. We could ask them to explain how they enforce their policies and community guidelines, publish information about user reporting systems and user empowerment tools, or disclose details about content moderation technologies and user identity verification. Other areas of focus might be corporate governance structure and decision-making, risk assessment outcomes, and internal key performance indicators across teams.
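To make this concrete, here is a minimal sketch of what one machine-readable entry in such a report could look like. It is purely illustrative: the TransparencyMetric structure and its field names are our own assumptions for the purposes of discussion, not a proposed Ofcom schema.

    # A minimal, illustrative sketch of one machine-readable transparency
    # metric. Every field name here is an assumption made for discussion,
    # not a proposed Ofcom schema.
    from dataclasses import dataclass

    @dataclass
    class TransparencyMetric:
        name: str            # what is being counted, e.g. "hate_speech_removals"
        value: int           # the headline figure for the period
        period: str          # reporting window, e.g. "2023-Q1"
        methodology: str     # how the figure was produced (automated detection, sampling, ...)
        policy_version: str  # which version of the platform's rules was in force

    entry = TransparencyMetric(
        name="hate_speech_removals",
        value=140_000,
        period="2023-Q1",
        methodology="proactive automated detection plus user reports",
        policy_version="hate-speech-policy-v3",
    )

Fields like methodology and policy_version matter because, as the next example shows, a bare count is hard to interpret without them.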
The information that platforms currently publish provides some insight, but it has its limitations. For example, a transparency report might state that “140,000 pieces of hate speech were removed in Q1.” If this figure goes up in Q2, does that mean there was more hate speech on the platform than in Q1? Does it mean that the systems in place to identify this content became more effective? Does it mean that the platform changed its definition of hate speech in Q2, so that a greater number of pieces of content violated its rules? What was the impact of major international, national, or local events on the amount of hateful content uploaded or resurfaced by users in Q2? Even if overall levels of content violating a platform’s policies are low, there could still be a risk of harm if users with particular vulnerabilities or characteristics, such as children, are more likely than average to be exposed to it repeatedly.
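To see why a single headline number is ambiguous, consider a toy model in which removals are roughly the product of how much violating content exists and what fraction of it the platform’s systems catch. All figures below are invented purely for illustration; the point is that two very different Q2 stories produce exactly the same rise in removals.

    # Toy model: removals ~= violating items x detection rate.
    # All figures are invented for illustration only.

    def removals(violating_items: int, detection_rate: float) -> int:
        """Expected removals given how much violating content exists and
        what share of it the platform's systems actually catch."""
        return round(violating_items * detection_rate)

    # Q1 baseline: 200,000 violating items, 70% caught -> 140,000 removals.
    q1 = removals(200_000, 0.70)

    # Two different explanations for the same Q2 increase:
    more_hate_speech = removals(260_000, 0.70)  # prevalence rose, detection unchanged
    better_detection = removals(200_000, 0.91)  # prevalence flat, systems improved

    print(q1, more_hate_speech, better_detection)  # 140000 182000 182000

Disclosures about methodology and definitions, published alongside the raw counts, are what would make it possible to tell these stories apart.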
There are a lot of challenges associated with metrics, so we’ll have to work hard to get them right. We will also consider how transparency reporting can go beyond content moderation to address the different ways that services protect their users from online harms and to highlight good and bad practice, while keeping in mind the risk of arming bad actors with information on how to circumvent safety systems.
International alignment
Other regulators around the world are in the process of implementing their own transparency reporting regimes. We will have to consider how far our transparency reporting requirements should align with theirs.
The UK Online Safety Bill gives Ofcom the power to tailor transparency reporting requirements to each platform, which means that we will have the ability to go beyond more standardised reporting regimes. And because platforms often make product changes at a global level, a successful transparency regime might nudge them towards systemic changes that benefit users around the world.
Conclusion
Transparency will be a powerful and essential tool in our regulatory arsenal. As the future online safety regulator, we plan to think long and hard about the numerous challenges and trade-offs associated with mandatory transparency reporting. A carefully designed transparency regime could transform Ofcom’s ability to hold platforms accountable and fundamentally change the way the industry prioritises the safety of its users.