Deepfake Scams

4th March 2023

I recently watched The Masked Scammer on Netflix, which tells the story of Gilbert Chikli, one of the most notorious and successful con artists in French history. I first became intrigued by Chikli’s audacious scams after listening to the excellent nine-part Wondery podcast ‘Persona: The French Deception’.

Chikli’s scam was simple yet effective: he targeted people in middle management (often in French banks), calmly and confidently posing as a senior figure in their organisation. He convinced his targets that they had been specifically chosen to execute a top-secret anti-terrorism mission involving an important and urgent transfer of funds. The scam worked so well because it flattered its victims, giving them the impression they had been headhunted by a superior to deliver the money as part of an MI5-style operation (straight into the scammer’s hands). Who doesn’t want to be a real-life James Bond, fighting terrorism? Chikli’s scams later became more outrageous, including having a mask made of the then French Minister of Defence, Jean-Yves Le Drian: a rudimentary (yet successful) early incarnation of a ‘deepfake’, which enabled him to defraud individuals out of hundreds of millions of euros.

So far, so Netflix-worthy, but the reality is that anyone wanting to carry out such scams today has the benefit of deepfake technology, including tools that realistically mimic voices. AI voice-cloning is already being used to target employees, impersonating their boss to convince them to transfer money to the scammers. If somebody who sounded exactly like your boss called you up and told you to do something, would you just do it? Many people would, if the caller sounded like their boss, acted like their boss, and gave them no reason to suspect otherwise.

In my earlier post, I mainly discussed image and video deepfakes. Audio deepfakes rely on sound alone, using text-to-speech models to synthesise and clone a target’s voice, so they can be easier and quicker to generate. Whilst there are technical ways to detect artificial imposter voices, the most effective barrier is to educate people that this is happening and to put stricter verification protocols in place.
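To make ‘stricter protocols’ concrete: one common control is an out-of-band callback rule, under which a voice instruction alone never authorises a transfer; the request must be confirmed on a number taken from a directory verified in advance, never from the incoming call itself. Below is a minimal Python sketch of that idea. The names used (VERIFIED_NUMBERS, PaymentRequest and so on) are hypothetical, and this is an illustration of the principle, not a production control.

```python
import secrets
from dataclasses import dataclass

# Hypothetical directory of contact numbers, verified and maintained
# out of band (e.g. by HR), never taken from an incoming call.
VERIFIED_NUMBERS = {"cfo@example.com": "+44 20 7946 0000"}


@dataclass
class PaymentRequest:
    requester: str        # claimed identity of the person asking
    amount: float         # amount requested, in euros
    channel: str          # how the instruction arrived, e.g. "phone call"


def requires_callback(request: PaymentRequest, threshold: float = 1_000.0) -> bool:
    """A voice or email instruction alone never authorises a transfer
    above the threshold; it must be confirmed out of band."""
    return request.amount >= threshold or request.channel != "in person"


def callback_number(requester: str) -> str | None:
    """Look up the requester in the pre-verified directory. A cloned
    voice can supply any number it likes, so the inbound call is never
    trusted as a source of contact details."""
    return VERIFIED_NUMBERS.get(requester)


def issue_challenge() -> str:
    """One-time code the requester must read back on the verified line."""
    return secrets.token_hex(3)


if __name__ == "__main__":
    req = PaymentRequest("cfo@example.com", 250_000.0, "phone call")
    if requires_callback(req):
        number = callback_number(req.requester)
        code = issue_challenge()
        print(f"Do not transfer. Call back on {number} and confirm code {code}.")
```

The design point is simply that the trusted channel is established in advance, so however convincing a cloned voice sounds on an inbound call, it carries no authority by itself.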

Audio deepfakes do not just pose a security risk: the whole concept of deepfakes creates legal difficulties, as it can be hard to establish the author or publisher of the content, and therefore who can be held liable for it.

The so-called ‘Duck Test’ is being brought back into focus. Instead of Jacques de Vaucanson’s mechanical duck of 1739 (which looked, swam, quacked and even excreted like a duck), we have deepfakes. Our natural (‘abductive’) reasoning is to assume that something which bears the hallmarks of being genuine is genuine. We can no longer assume anything, especially as deepfakes and impersonation scams become more prevalent and sophisticated; these are not issues limited to podcasts and Netflix series.

“If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.”

If you need advice navigating this space, please get in touch.