
Announcing the winners of our social video platform competition

Published: 5 May 2022
Last updated: 16 March 2023

We’re delighted to announce the winners of our recent competition, in which we asked young people how they felt social video platforms could be made safer for the people who use them.

Social video platforms like TikTok, Snapchat and Twitch are a huge part of our online lives – and this is particularly true for young people.

Part of Ofcom’s role is to regulate video-sharing platforms established in the UK and make sure they take steps to protect their users from certain types of harmful video content.

So, we wanted to hear the thoughts of young people on this topic. We asked people aged 16 to 18 the following question:

What changes are needed to make social video platforms a kinder, safer place for young people?

The competition was open to written, video and audio entries, and we’ve chosen two written entries and one video entry as the winners.

There were some really excellent entries and it was tough to choose the winners. I was very impressed by the quality of the work and effort the entrants put into them.

Helping people to live a safer life online is a priority for us, and it was really inspiring to see how thoughtful and creative the written and video entries were.

Sachin Jogia, Ofcom’s chief technology officer

Our winners:

Ellie's entry

Social media has proven to us over the past few years that it is an essential and growing phenomenon. It has meant the line of communication that was otherwise severed by the COVID-19 pandemic was salvaged, providing us with direct contact with family and friends and acting as an outlet during the harder times. This is a clear positive factor.

However, at the risk of sounding cliché, we must also address the dark side.

Let me use TikTok as an example. The largest portion of its users belongs to the 12 to 18 age group. This is an age where you barely even know yourself, and yet you are swarmed with constant videos promoting unhealthy lifestyles, unrealistic body images and self-harm, alongside an array of cruel and racist comments. Young people are notoriously impressionable, and that is where the danger lies.

We cannot just tell them to switch off their phones, or delete the apps that ‘everyone’ is using. Real change is needed and if it is not implemented now, we will be actively encouraging young people’s viewing of harmful and threatening video content. These young people can be naïve, impressionable and easy targets, but also deserve to use social video platforms as much as the rest of us. Everyone’s lives are now online. We cannot just pretend this doesn’t apply to children as well. Instead of hiding from the problem, we must face it.

So, I am proposing this:

We ban the creation of fake accounts. This will prevent people from hiding behind the protection of their keyboards. If they know that their information is out for the world to see, there will be a lull in this abusive content, as it's a well-known fact that the security of a hidden identity is an enthralling prospect for a coward.

I also suggest that we have ‘child friendly’ versions of these various apps. There is no point excluding children from the internet now; we have tried that and it has only revealed to us the extent to which social media has shaped, and will continue to shape, our lives. However, if we create versions of apps solely for children’s use, we will be able to monitor the content more easily and ensure that hateful, abusive content does not occur on the scale that it does currently.

Now, I am not for one minute suggesting that children do not have the power to share awful things as well. What I am saying, though, is that an app with fewer users can be monitored more closely. We can put a mass ban on harmful hashtags and make sure users cannot get on unless they are verified as being of a certain age, personal information and all.

This is extreme, you may be thinking?

I disagree.

What is extreme is allowing easily influenced children to be exposed to the horrors that lie within the internet, targeted by never-ending harmful or derogatory information with just the single tap of a button.

Billy's entry

The extent of video-sharing media hegemony in the youth entertainment industry has rendered its regulation more than troublesome. With TikTok alone racking up over 1 billion active users, and many of these users posting multiple short videos a day, it seems unrealistic – and unreasonable – to suggest that one single body could moderate these platforms. But in regulation, social media's greatest hindrance – its monolithic user count – can also be its greatest asset.

Why then, has the current system of reporting harmful content failed to safeguard users?

As cynical as it may seem, reluctance to use the report functions may come down to indifference. Other than a desire to protect other users – which may be both transient and unenthusiastic – there exists no incentive for taking action against damaging content.

In designing an alteration to video-sharing media to combat this moral inertia, we can look to Nobel Prize-winning economist Richard Thaler and his thesis regarding ‘Nudge Theory’. His theory asserts that changing regular behaviour relies on small, nudge-like prompts and positive reinforcement via small, regular affirmations.

We can apply this principle to our quest for moderation by utilising small rewards when users report harmful content. A points-based system whereby users who actively participate in the protection of the online community are prioritised in search algorithms and have their posted content promoted above others with lower community protection scores could incentivise moderation on an individual scale. This collective action could form the basis of a bottom-up attack on dangerous content.

To ensure the efficacy of this policy however, it would need to be immune to exploitation.

Firstly, for some, the introduction of this system would seem a fruitful opportunity for artificially gaining undeserved attention through increasing their community protection score (perhaps by reporting safe content). The point-allocation protocols must then ensure that only content that is confirmed to be dangerous – through existing mechanisms – leads to the granting of points.

Secondly, the community protection score must not interfere too strongly with current algorithms regarding content distribution. While the community protection score system should provide a substantial incentive for individual moderation, it must not supersede the accepted arrangement of content based on views and followers. It could, however, act as a sub-criterion through which content is aligned within groups of accounts with comparable audiences.

Lastly, to ensure this policy fully encourages active protection in online communities, implementation of this change must be both vocal, in the form of extensive advertising campaigns, and adaptive, with the points’ impact on media frequently adjusted in line with analytical data.

In conclusion, motivating online users to actively participate in the moderation of content can only be done with the use of incentives. The system I have proposed can be the most effective deployment of the economic principles discussed. However, these changes must accompany a radical shift in societal attitudes around the responsibilities of social media platforms. As John Maynard Keynes asserted, ‘The difficulty lies, not in the new ideas, but in escaping the old ones.’

You can view Dom's video entry on Vimeo.

The winners have each won a £100 voucher prize and will be invited to spend a day at Ofcom to find out more about the work that we do.

Please note – these competition entries reflect the views of our entrants and do not reflect the view of Ofcom.
