Would you act on an apparent stock tip from Martin Wolf, the FT’s veteran chief economics commentator?
His expert analysis features in many legitimate financial videos published on the FT’s social media accounts, but scammers have generated a spate of convincing deepfake videos on Instagram in which he appears to be offering investment advice.
“Right now, these three stocks are at a critical turning point and could see significant gains in the next two months,” says a convincing-looking, yet digitally manipulated Wolf, in a fake advert inviting people to join his “exclusive WhatsApp investment group” to find out more.
Meta, owner of WhatsApp and Instagram, has told the FT it has removed and disabled the ads, but readers would be wise to watch out for more scams of this nature as deepfake scams go mainstream.
What’s behind the rise in deepfake scams?
The rapid rise of generative AI (artificial intelligence). The technology needed to create synthetic videos and imagery is cheap, readily available and easy for scammers to use to create convincing-looking content.
Deepfakes of celebrities including Taylor Swift, Elon Musk and the stars of Dragons’ Den have been created to promote everything from kitchenware to crypto scams and diet pills. Last year, a British man lost £76,000 to a deepfake scam where Martin Lewis, the founder of Money Saving Expert, appeared to be promoting a non-existent bitcoin investment scheme.
The scammers now have one crucial advantage, said Nick Stapleton, presenter of the award-winning BBC series Scam Interceptors and author of the book How to Beat Scammers.
“Deepfakes are working like a charm for the scammers, because many social media users simply don’t know what generative AI is capable of when it comes to making convincing impersonation videos,” he said. “They see a video like Martin Wolf’s deepfake and believe it is real because they just don’t have the information to question it.”
Lewis, who claims to have the “weird accolade” of being the most-scammed face in Britain, warned about the rise in deepfake adverts on ITV’s Good Morning Britain this week.
“I wouldn’t trust an advert with a celebrity if you’ve only seen it on social media if it’s talking about investment or dieting or any of the other scam areas,” he said. “If it’s got me in it, I don’t ever do adverts, so it’s a fake. Anything rushing you to make money . . . fake, fake, fake, don’t trust them, they’re criminals.”
What other forms can deepfake videos take?
Celebrities are not the only ones whose images can be cloned. Fraud experts say deepfakes are increasingly being used on video calls to impersonate senior staff at corporate organisations, persuading other staff members to process payments which turn out to be fraudulent.
Last year, UK engineering firm Arup lost $25mn (£20mn) when a staff member in Hong Kong was persuaded to make 15 bank transfers after scammers digitally cloned the company’s chief financial officer on a video conference call.
Online influencers who post videos and images of their faces on social media platforms are particularly vulnerable, as the more content of an individual there is to train the AI, the more realistic the copycat will be.
As the technology progresses, deepfake videos have the potential to make romance scams even more convincing, and could potentially be used to manipulate images of friends and family members making requests for money.
What more should social media platforms do?
While social media platforms say they will use facial recognition to spot and take down fake ads, the ads don’t have to stay up for long to gain traction.
As Martin Wolf himself has said: “How is it possible that a company like Meta with its huge resources, including artificial intelligence tools, cannot identify and take down such frauds automatically, particularly when informed of their existence?”
Meta told the FT: “It’s against our policies to impersonate public figures and we have removed and disabled the ads, accounts, and pages that were shared with us.
“Scammers are relentless and continuously evolve their tactics to try to evade detection, which is why we’re constantly developing new ways to make it harder for scammers to deceive others — including using facial recognition technology,” Meta added.
“The simple fact is that when these videos are placed on social media as ads, they go through a vetting process,” Stapleton said. “If the likes of Meta would simply consider investing more of their vast profits into better vetting, and better moderation of general posts, this would become less of an issue very quickly.”
Under the UK’s new Online Safety Act, tech companies must set performance targets to remove illegal material quickly when they become aware of it and test algorithms to make illegal content harder to disseminate.
How can you spot whether a video is a deepfake?
Stapleton’s top tip for spotting digitally manipulated images is to look at the mouth of the person supposedly talking on camera: is it really making the shapes of the words? Next, look at their skin: is it flat in texture, without definition or wrinkles? And look at their eyes: do they blink at all, or far too much?
Finally, listen to the tone of their voice. “AI struggles with the range of human voices, so deepfakes will tend to sound very flat and even in tone and lacking emotion,” he said.
The deepfake video of the FT’s Wolf did not sound much like him, but as so many social media users watch videos on silent and read the captions, the scammers gain a further advantage.
Above all, be especially wary of adverts on social media. If you can’t find the information reported anywhere else, it’s almost certainly a fake.
What to do if you have been impersonated online
Report the scam to the social media outlet, using the platform’s reporting tools. Also, let your friends and followers know about the fake account to prevent them from being misled.
If you have been a victim of this deepfake scam, or continue to see the deepfake video of Martin Wolf on social media platforms, please share your experience with the FT at [email protected]