The case of Cara Hunter starkly illustrates the dangers of deepfakes. Her face was superimposed onto another person's body in a pornographic video that circulated just weeks before an election, threatening her reputation and political career. The episode shows how a single deepfake can upend a life, and it raises urgent questions about digital security, legislation, and online harassment. At a time when manipulative technologies are spreading rapidly, understanding deepfakes has become essential.
The Cara Hunter Case: Trauma Caused by a Deepfake

In April 2022, Cara Hunter discovered a deepfake featuring her face circulating on social media. The video spread via WhatsApp and Messenger, causing immediate shock. The consequences were devastating: threats, insults, and public humiliation. The deepfake was not only an invasion of privacy but a direct attack on her integrity and dignity, demonstrating how effective a tool deepfakes can be for harassment. For Cara Hunter, the experience was profoundly traumatic.
The timing was strategic: released just weeks before the election, the video could influence public opinion and damage her political career. Cara Hunter described the experience as a nightmare, feeling powerless against the video's viral spread. At the time, neither technology nor legislation offered effective means of combating this kind of content. The case illustrates how technology can be weaponised to attack the reputation of public figures.
Beyond its personal impact, the case highlights the broader phenomenon of pornographic deepfakes targeting women. Such deepfakes have become a systematic tool for harassment and manipulation. Cara Hunter used her experience to speak out against the misuse of deepfakes and to call for better protection for victims. This case underlines the urgent need for improved detection and regulation of deepfakes.
Online Harassment and a Threat to Democracy
A deepfake is not just a doctored video: it is a tool that can destroy a person's social and professional life. In Cara Hunter's case, the deepfake triggered massive harassment, with severe consequences for her emotional wellbeing. The overwhelming majority of pornographic deepfakes target women, giving the phenomenon a clear gendered dimension. A deepfake is therefore more than an image; it becomes a vector of digital and social violence, and its rapid spread amplifies the destructive effect.
When public figures are targeted, deepfakes pose a real threat to democracy. A widely circulated deepfake can sow doubt among the public and manipulate voters. Cara Hunter emphasises that deepfakes can undermine trust in institutions and the electoral process. In the age of artificial intelligence, deepfakes are multiplying, making control increasingly difficult. Understanding their role in misinformation is crucial to protecting democratic systems.
Governments are beginning to respond. In the UK, creating and distributing sexually explicit deepfakes are now criminal offences. Yet legislation alone is not enough: awareness, the development of detection tools, and citizen vigilance are all essential. Deepfakes represent a major technological and social challenge, and Cara Hunter's case demonstrates that combating them requires coordinated efforts across technology, law, and education.
Towards Regulating Deepfakes: Law, Technology, and Education
Strengthening legislation is essential in the fight against deepfakes. Creating, sharing, or commissioning pornographic deepfakes can now lead to criminal penalties. This legal shift aims to protect victims and deter perpetrators. However, enforcing the law requires specialist training for investigators, adapted judicial procedures, and international cooperation. Deepfakes must be treated as a matter of public safety and individual protection.
Technology also plays a key role. AI-powered tools capable of detecting manipulated videos are emerging, able to identify deepfakes even when faces are partially altered. Yet the rapid spread of deepfakes and easy access to creation tools complicate enforcement. It is an ongoing race between innovation and digital crime.
Finally, awareness and education are crucial to mitigating the impact of deepfakes. Citizens need to learn how to spot manipulated content and avoid sharing it. Social media platforms must act quickly to moderate and remove doctored videos. Victims also need psychological and legal support. When legislation, technology, and education work together, it is possible to limit the harm caused by deepfakes and protect individual dignity.