Deepfakes are audio-visual content that has been generated or manipulated using AI, and that misrepresents someone or something. New generative AI tools allow users to create wholly new content that can be lifelike, making it significantly easier for anyone with modest technical skill to create deepfakes.
Deepfakes are causing harm to well-known public figures, celebrities and political candidates, as well as ordinary people. They may humiliate or abuse a victim-survivor – overwhelmingly women and girls – by falsely depicting them in a non-consensual sexual act. They may misrepresent a loved one's identity to assist financial scams. They may also spread disinformation to influence opinion on key political or societal issues.
Ofcom’s recent poll on deepfakes found that almost half of respondents – both children aged 8-15 and teenagers and adults aged 16+ – believed they had encountered a deepfake at least once in the last six months, with more than one in ten believing they had encountered deepfakes over ten times in this period.
Under the Online Safety Act, regulated online services are required to take steps to address the sharing of illegal and harmful content, which may include certain types of deepfakes.
In our discussion paper, Deepfake Defences, we:
- explore the impact of GenAI on the proliferation of deepfakes;
- map out the different types of deepfakes in existence, including those which demean, defraud and disinform;
- share new survey results looking at children’s and adults’ experiences of deepfakes online;
- analyse measures that actors across the technology supply chain can take to respond to deepfakes, from deploying watermarking techniques and deepfake detection classifiers, to filtering content and supporting media literacy efforts.
We will continue to examine the measures outlined in our paper as we regulate online services via the Online Safety Act.
Discussion papers allow us to share our research and encourage debate in areas of our remit. This discussion paper does not constitute official guidance.
Read our research
Deepfake Defences: Mitigating the Harms of Deceptive Deepfakes (PDF)