Five verified incidents that prove no corner office is safe from AI impersonation.
Case Study 1: The $25M Arup Catastrophe
Company: Arup (Global engineering firm behind Sydney Opera House)
Loss: $25.6 million
Method: Multi-person video conference with deepfaked CFO and staff
Date: Early 2024
Security experts call this one of the most sophisticated corporate deepfake attacks ever recorded. In Hong Kong, an Arup employee joined what looked like a routine video call with the company’s UK-based CFO and other colleagues. The meeting seemed legitimate: the participants looked and sounded exactly like the real executives.
During the call, the employee was instructed to execute 15 separate transactions, totaling HK$200 million, to five different bank accounts.
Only after checking with the company’s head office did the employee discover that every person on that video call had been an AI-generated deepfake.
This wasn’t a simple voice clone or a static image. The fraudsters had generated real-time, interactive video of multiple executives simultaneously, a level of complexity that marked a new evolution in corporate cybercrime.
“Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes. What we have seen is that the number and sophistication of these attacks have been rising sharply in recent months,” said Rob Greig, Arup’s Chief Information Officer.
Case Study 2: Ferrari’s $40 Million Question
Company: Ferrari
Target: CEO Benedetto Vigna
Attempted Loss: Undisclosed (rumors suggest millions)
Method: Deepfake AI voice with a Southern Italian accent
Date: July 2024
Outcome: Prevented
Ferrari executives received WhatsApp messages that appeared to come from their CEO, Benedetto Vigna, complete with his profile photo and company branding. The messages described a major upcoming acquisition and pressed for the immediate sharing of confidential financial information. In a follow-up call, the deepfake even replicated Vigna’s Southern Italian accent.
Fortunately, one executive grew suspicious and asked a simple question: ‘What was the title of the book you recommended to me last week?’ When the AI couldn’t answer, the scam fell apart.
Sometimes, the most sophisticated technology can be defeated by the simplest human protocols. Ferrari’s close call demonstrates both the quality of modern deepfakes and the power of personal verification methods.
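Ferrari’s ‘book question’ is a challenge only the real person can answer, raised outside the attacker’s control of the call itself. The sketch below shows the same out-of-band idea in code; it is illustrative only, and every name in it (`HIGH_RISK_ACTIONS`, the channel labels) is an assumption, not any company’s actual policy.

```python
# Minimal sketch of out-of-band verification, modeled on Ferrari's save.
# All names and thresholds here are illustrative assumptions.

HIGH_RISK_ACTIONS = {"wire_transfer", "share_financials", "vendor_account_change"}

def needs_second_channel(action: str, request_channel: str,
                         confirmed_on: str | None) -> bool:
    """True while a high-risk request still lacks confirmation on a
    separate, pre-registered channel (e.g., a callback to a known number)."""
    if action not in HIGH_RISK_ACTIONS:
        return False
    # Confirmation on the same channel proves nothing: an attacker
    # running a deepfake already controls that channel.
    return confirmed_on is None or confirmed_on == request_channel

# A "CEO" requests financials over WhatsApp; no callback has happened yet.
assert needs_second_channel("share_financials", "whatsapp", None)
# After a callback to the CEO's known office line, the request may proceed.
assert not needs_second_channel("share_financials", "whatsapp", "office_phone")
```

The key design choice is that verification must travel over a channel the requester did not pick; the Ferrari executive’s personal question is the human equivalent of that rule.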
Case Study 3: WPP’s Microsoft Teams Trap
Company: WPP (World’s largest advertising group)
Target: CEO Mark Read
Method: WhatsApp account + Teams meeting with voice clone and YouTube footage
Date: May 2024
Outcome: Prevented
Cybercriminals set up a fake WhatsApp account using publicly available photos of WPP’s CEO, Mark Read. Using that account, they scheduled a Microsoft Teams meeting with another senior executive and requested immediate funding and personal details for a “new business”.
During the video call, fraudsters used a combination of voice cloning technology and recorded YouTube footage to impersonate Read.
CEO Mark Read’s Response: “Fortunately, the attackers were not successful. We all need to be vigilant to the techniques that go beyond emails to take advantage of virtual meetings, AI, and deepfakes.”
The WPP case shows how executives with a large media presence are especially vulnerable: the wealth of publicly available photos and videos gives criminals perfect raw material for creating deepfakes.
Case Study 4: The Binance “AI Hologram” Scheme
Company: Binance (World’s largest cryptocurrency platform)
Target: Patrick Hillmann, Chief Communications Officer
Method: Video conference “hologram” using TV interview footage
Date: 2022 (early major case)
Outcome: Multiple crypto projects deceived
Sophisticated hackers built what Hillmann called an ‘AI hologram,’ using clips from his TV and news appearances. The deepfake was so convincing that it successfully deceived multiple crypto representatives during Zoom calls.
The criminals used this technology to impersonate Hillmann in meetings with projects seeking Binance listings, which are among the most valuable endorsements in the crypto industry.
According to Hillmann, “…other than the 15 pounds that I gained during COVID being noticeably absent, this deep fake was refined enough to fool several highly intelligent crypto community members.”
The Binance case marked an early warning that criminals were moving beyond simple voice clones to sophisticated video impersonations targeting specific business processes.
Case Study 5: LastPass’s Internal Wake-Up Call
Company: LastPass (Cybersecurity/Password Management)
Target: Company CEO
Method: WhatsApp calls, texts, and voicemail impersonation
Date: Early 2024
Outcome: Prevented
Deepfake scammers targeted LastPass over WhatsApp (calling, texting, and leaving voicemails) while convincingly posing as the company’s CEO.
The targeted employee noticed several red flags, which map directly onto a simple screening rule (sketched after this list):
- Communications occurred outside normal business hours
- The request carried unusual urgency (a common scam tactic)
- The channel and approach deviated from standard company communication protocols
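Those same signals can serve as a first-pass automated filter. A minimal sketch, assuming a simple message record; the business-hours window, keyword list, and approved channels are invented for illustration, not taken from LastPass.

```python
from datetime import datetime

# Hypothetical triage rule built from the three red flags above.
# Hours, keywords, and approved channels are all assumptions.
URGENCY_WORDS = {"immediately", "urgent", "asap", "confidential"}
APPROVED_CHANNELS = {"email", "slack"}

def red_flag_count(channel: str, sent_at: datetime, text: str) -> int:
    """Count how many of the three red flags a message trips."""
    flags = 0
    if not 9 <= sent_at.hour < 18:                 # outside business hours
        flags += 1
    if URGENCY_WORDS & set(text.lower().split()):  # unusual urgency
        flags += 1
    if channel not in APPROVED_CHANNELS:           # non-standard channel
        flags += 1
    return flags

# A late-night WhatsApp message demanding urgency trips all three.
print(red_flag_count("whatsapp", datetime(2024, 3, 1, 22, 15),
                     "Need this wire sent immediately"))  # -> 3
```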
If cybersecurity professionals can be successfully targeted, every organization must assume it is vulnerable.
The Pattern Behind the Cases
Analyzing these verified incidents reveals a consistent criminal methodology:
The Executive Vulnerability Profile
Research shows that certain executive characteristics increase deepfake targeting risk (a rough scoring sketch follows the list):
- Significant media presence (TV interviews, podcasts, conferences)
- Public speaking footage available online
- Financial authorization authority
- Cross-border business operations
- High-profile industry positions
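One way to make that profile concrete is a rough additive exposure score. This is a sketch under stated assumptions: the factor set mirrors the list above, and the equal weighting is mine, not a published risk model.

```python
from dataclasses import dataclass

# Illustrative exposure score over the five profile factors listed above.
# Equal weighting is an assumption, not a calibrated model.

@dataclass
class ExecutiveProfile:
    media_presence: bool           # TV interviews, podcasts, conferences
    public_speaking_footage: bool  # talks available online
    financial_authority: bool      # can authorize payments
    cross_border_ops: bool         # operates across jurisdictions
    high_profile_role: bool        # widely known industry position

    def exposure_score(self) -> int:
        """0-5 count of factors: each one means more training material
        for a deepfake, more incentive for an attack, or both."""
        return sum([self.media_presence, self.public_speaking_footage,
                    self.financial_authority, self.cross_border_ops,
                    self.high_profile_role])

# A CEO like those in the cases above hits the maximum score.
print(ExecutiveProfile(True, True, True, True, True).exposure_score())  # -> 5
```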
Geographic Hotspots:
- Hong Kong (financial hub, regulatory complexity)
- North America (high digital adoption, large economy)
- Europe (GDPR compliance creates verification challenges)
The Human Factor
Even with all the technological sophistication, many attacks still rely on basic human psychology:
- Authority bias (orders from perceived superiors)
- Urgency pressure (artificial time constraints)
- Confidentiality appeals (special access, insider information)
- Social proof (multiple “colleagues” present)
The executives who survived these attacks share common traits: healthy skepticism, verification protocols, and team communication about potential threats.
As Ferrari’s close call shows, sometimes a simple question can defeat million-dollar technology.
References
CNN (February 4, 2024) – “Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’”
Fortune (May 17, 2024) – “A deepfake ‘CFO’ tricked the British design firm behind the Sydney Opera House in $25 million fraud”
The Guardian – Rob Greig quote about Arup attacks
MIT Sloan Management Review (January 27, 2025) – “How Ferrari Hit the Brakes on a Deepfake CEO”
The Guardian (May 2024) – WPP CEO Mark Read deepfake attempt
Incode Blog (December 20, 2024) – “Top 5 Cases of AI Deepfake Fraud From 2024 Exposed”
Binance Blog – “Scammers Created an AI Hologram of Me to Scam Unsuspecting Projects” by Patrick Hillmann
Euronews (August 24, 2022) – “Binance executive says scammers created deepfake ‘hologram’ of him”
Eftsure US – “7 Deepfake Attacks Examples: Deepfake CEO scams” – LastPass case