Deepfakes are the most worrying AI crime, researchers warn

Deepfakes are the most concerning use of AI for crime and terrorism, according to a new report from University College London.

The research team first identified 20 different ways AI could be used by criminals over the next 15 years. They then asked 31 AI experts to rank them by risk, based on their potential for harm, the money they could make, their ease of use, and how hard they are to stop.

Deepfakes — AI-generated videos of real people doing and saying fictional things — earned the top spot for two major reasons. Firstly, they’re hard to identify and prevent. Automated detection methods remain unreliable, and deepfakes are also getting better at fooling human eyes. A recent Facebook competition to detect them with algorithms led researchers to admit it’s “very much an unsolved problem.”

Secondly, deepfakes can be used in a variety of crimes and misdeeds, from discrediting public figures to swindling cash out of the public by impersonating people. Just this week, a doctored video of an apparently drunken Nancy Pelosi went viral for the second time, while deepfake audio has helped criminals steal millions of dollars.

In addition, the researchers fear that deepfakes will make people distrust audio and video evidence — a societal harm in itself.

Study author Dr Matthew Caldwell said that the more our lives move online, the greater the dangers will become.

The study also identified five other major AI crime threats: driverless vehicles as weapons, AI-powered spear phishing, harvesting of online data for blackmail, attacks on AI-controlled systems, and fake news.

But the researchers weren’t overly alarmed by “burglar bots” that enter homes through letterboxes and cat flaps, as they’re easy to catch. They also ranked AI-assisted stalking as a crime of low concern — despite it being extremely harmful to victims — because it can’t operate at scale.

They were far more worried about the dangers of deepfakes. The tech has been grabbing alarm-raising headlines since the term emerged on Reddit in 2017, but few of the fears have been realized thus far. However, the researchers clearly think that is set to change as the tech develops and becomes more accessible.
