The Dangers of Deepfakes

1st March 2023

Deepfake /ˈdiːpfeɪk/: a video of a person in which their face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information.

Developments in AI and technology are often a double-edged sword. Whilst we benefit from advances in computer-generated imagery and enhanced visuals in the content we consume, including face-altering apps and special effects in films, bad actors (not in the films, but in the online space) are ready to capitalise on this technology, to the detriment of others. In the age of smartphones and easily accessible online tech and apps, the scope and reach of potential abuse is ever-increasing.

At its worst, “deepfake” technology is being misused to superimpose the face of an individual onto the body of another engaged in a sex act, with the result then shared online. This has devastating and irreversible effects on the individual whose likeness is being shared, especially if the content looks realistic. “Nudifying” is the use of AI technology to digitally remove clothing from an image or video, making the person falsely appear naked. There are already a number of “deepfake” porn sites and programs which allow users to create their own deepfakes or “nudified” images/videos. Unfortunately, this type of abuse disproportionately affects women, and in some cases is a form of revenge porn, for example where an ex-partner creates a sexually explicit deepfake with the intent to cause embarrassment or distress. Revenge porn was criminalised in 2015 (under section 33 of the Criminal Justice and Courts Act 2015), but the creation/sharing of deepfakes and nudification is not explicitly covered under existing legislation.

In July 2022 the Law Commission recommended reforms to create a clearer and broader legal framework for the taking or sharing of intimate images without consent. The proposed reforms include the criminalisation of deepfake pornography and nudified imagery created or shared without consent. The proposed ‘base’ offence would be the taking or sharing of intimate images without consent, even if there is no intent to cause harm or distress. This offence would be rendered more serious if the intimate image was taken or shared (i) with the intention of causing the victim humiliation, alarm or distress, (ii) with the intention that the image will be looked at for the purpose of obtaining sexual gratification, or (iii) accompanied by a threat to share it. The non-consensual sharing of manufactured intimate images (deepfakes) would be a standalone offence. The victims of these new offences would automatically be eligible for lifetime anonymity.

In November 2022 the Government announced that it would make the sharing of pornographic deepfakes, and “downblousing”, criminal offences which could carry a custodial sentence. A series of amendments will be introduced to the Online Safety Bill (OSB) to include these new offences.

What does this mean?

The criminalisation of deepfakes is limited to intimate/pornographic images/videos. Companies such as Meta and Twitter will have to identify the infringing content and take appropriate action. In most cases, we expect the nudity/pornographic element to be flagged by the measures tech companies already have in place to detect infringing content (including AI classifiers, human moderators and user reporting). The difficulty will come with more borderline content: for example, ‘toileting’, where the image (whether it is a deepfake or not) depicts a person on the toilet, or engaged in a similar intimate act, which does not overtly involve nudity or a sexual act.
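
As a rough illustration of that layered approach, the Python sketch below shows a triage step combining an automated classifier with human review, where user reports lower the bar for escalation. The `classify_intimate_content` scorer and the threshold values are hypothetical placeholders, not any platform’s actual system; the point is that borderline content, like the ‘toileting’ example, is routed to a human moderator rather than decided automatically.

```python
from dataclasses import dataclass


# Hypothetical stand-in for a trained nudity/intimacy classifier.
# Returns a confidence score between 0.0 (benign) and 1.0 (intimate).
def classify_intimate_content(image_bytes: bytes) -> float:
    return 0.0  # placeholder; a real model would score the image


@dataclass
class ModerationDecision:
    action: str        # "remove", "human_review" or "allow"
    score: float
    user_reports: int


# Illustrative thresholds only; tuning these is the hard part in practice.
REMOVE_THRESHOLD = 0.95
REVIEW_THRESHOLD = 0.60


def triage(image_bytes: bytes, user_reports: int = 0) -> ModerationDecision:
    score = classify_intimate_content(image_bytes)

    # User reports lower the escalation threshold, so borderline content
    # (e.g. intimate images with no overt nudity) that the model is
    # unsure about still reaches a human moderator.
    review_threshold = max(0.3, REVIEW_THRESHOLD - 0.05 * user_reports)

    if score >= REMOVE_THRESHOLD:
        action = "remove"          # high-confidence automated takedown
    elif score >= review_threshold:
        action = "human_review"    # queue for a moderator
    else:
        action = "allow"
    return ModerationDecision(action, score, user_reports)
```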

However, some deepfakes contain no intimacy/privacy element. Deepfakes have also gained notoriety as a vehicle for disseminating fake news and hoaxes. They can amount to ‘fake fake news’, and by their very nature it is often difficult to discern whether such content is genuine. It may also not be clear whether falsely attributing a statement to a different individual amounts to ‘misinformation’ or ‘disinformation’ (if, for example, the message being conveyed is not particularly harmful or nefarious). Meta, for example, already seeks to enforce against such content by way of its ‘Manipulated Media’ policy and its other Community Standards dealing with issues like hate speech, whilst still protecting parody and satire.

Under the OSB as currently drafted, companies have a duty to remove illegal disinformation (e.g. a direct incitement to violence) and to protect children from mis/disinformation, and Category 1 services must have clear policies on harmful disinformation accessed by adults. However, as the technology improves, deepfakes appear more genuine and become harder to detect. As early as 2017, the University of Washington created a realistic synthetic video of Barack Obama using AI. Now the technology is accessible to ordinary users through websites and apps which claim to be legal/ethical but are potentially open to abuse. To comply with both the forthcoming criminal offences and the amendments to the OSB, companies will have to devote significant resources to detection technology and other verification checks to stay ahead of the bad actors and protect online users from the harm caused by deepfakes.
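
What might such a verification check look like? The sketch below is a minimal illustration under stated assumptions, not any platform’s actual system: the `score_frame` detector is a hypothetical stand-in for a trained deepfake classifier, and the sampling rate and threshold are assumed values. It samples frames from a video and aggregates per-frame manipulation scores into a single video-level flag; real systems would typically also weigh audio, compression artefacts and provenance metadata.

```python
from statistics import mean


def score_frame(frame_bytes: bytes) -> float:
    """Hypothetical per-frame manipulation score: 0.0 = looks genuine,
    1.0 = looks synthetic. A real check would run a trained
    deepfake-detection model here."""
    return 0.0  # placeholder


def looks_like_deepfake(frames: list[bytes],
                        sample_every: int = 10,
                        threshold: float = 0.7) -> bool:
    if not frames:
        return False

    # Sampling frames keeps the check affordable on long videos.
    sampled = frames[::sample_every] or frames
    scores = [score_frame(f) for f in sampled]

    # Flag on a high average score or a run of suspicious frames:
    # face-swap artefacts are often intermittent rather than constant.
    suspicious_frames = sum(s > threshold for s in scores)
    return mean(scores) > threshold or suspicious_frames >= 3
```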