The New Phishing: Executive Impersonation in the Age of Synthetic Media
How do you define “trust” in this new age?

Introduction
What happens when the person on a video conference looks like your CEO and sounds like your CEO, yet isn't the CEO at all? That scenario is no longer science fiction; it is the reality facing boards and executives around the world today. A wave of fraud built on synthetic audio and video, so-called deepfakes, is targeting C-suite executives, board members, and their direct reports, turning once relatively low-tech "CEO phishing" into something far more sinister.
The Threat Unfolds
In January 2024, the global engineering firm Arup became one of the most publicised victims of this trend. A finance employee joined a video conference, believed they were interacting with the company's senior officers, and ultimately authorised wire transfers totalling roughly USD 25 million. Every one of those "senior officers" was artificial: cloned voices, fabricated video, and a compelling narrative of secrecy (World Economic Forum; Trustpair).
In Singapore, authorities issued alerts after perpetrators posing as CFOs or CEOs recruited victims via messaging apps and video calls to execute wire transfers under the guise of urgent restructurings (Cyble).
And in the UK, another incident involved a voice deepfake that fooled a company into transferring several hundred thousand dollars (Avast Blog).
Cybersecurity professionals now warn that these attacks represent "the new phishing": not merely spoofed email but full-blown synthetic impersonation (Reality Defender).
Why It Works — and Why Boards Must Care
There are several reasons this trend hits executives and boards so hard:
- Authority + urgency = trust. When someone purportedly senior says "do this now and don't tell anyone", the pressure is real. Synthetic media simply amplifies it.
- Deepfakes lower the friction. Thanks to advances in AI, voice clones, synthetic video, and digital avatars are cheaper and easier to produce than ever; the barriers to entry for attackers are dropping (World Economic Forum).
- Humans take the bait. These attacks aren't about hacking technology per se; they're about hacking human psychology, exploiting the chain of command and the culture of confidentiality in corporate settings.
- Under-prepared leadership. Boards often focus on system intrusion, malware, and ransomware; deepfake-led social engineering is still under-appreciated in many risk frameworks.
Key Case Lessons
- At Arup, the fraud was not about a vulnerable IT system being breached; the systems were intact. The breach was human: an employee trusted a video call they believed to be genuine (World Economic Forum).
- In the WPP case, CEO Mark Read reported that scammers used a WhatsApp account, a voice clone and a manipulated video feed to impersonate him and a senior colleague in a virtual meeting; the attempt was ultimately unsuccessful (The Guardian).
- In Singapore, the approach often begins with a WhatsApp message, switches to video (Zoom/Teams), and ends with a request for secrecy and a transaction (Cyble).
Implications for Boards and Corporate Governance
- Board oversight of cyber risk needs to evolve. The notion of "cyber risk" must include not only system intrusion and data breach, but also synthetic manipulation of trusted leadership voices.
- Communication protocols must be revisited. When a senior leader calls or messages saying "urgent confidential transaction", the assumption should not simply be "this came from the CEO"; verification must still apply.
- Insurance and audit coverage may lag. Many Directors & Officers (D&O) and cyber insurance policies may not yet explicitly cover deepfake-based fraud. Boards should press insurers for clarity.
- Reputation risk looms large. Beyond the direct financial loss, the fact that a company's leadership voice was, quite literally, hijacked is a board-level reputational issue.

Tactical Mitigations
Here are practical steps companies can adopt:
- Multi-factor verification: If a senior executive requests a fund transfer or major decision via video or message, require secondary confirmation through an independent channel (in person, a verified phone call, or secure identity-verified messaging). A minimal sketch of such a verification gate follows this list.
- "Pause and help" culture: Instil a protocol where employees are encouraged to stop and verify when something feels off, especially under pressure of urgency or secrecy.
- Board-level tabletop exercises: Simulate a deepfake scenario in a board meeting (with legal, IT and audit present) so that the organisation experiences the response live.
- Update cyber-risk frameworks: Ensure deepfake and social-engineering scenarios appear in risk registers, internal audit scopes and incident response plans.
- Technology support: Leverage vendor tools that can detect voice cloning, deepfake video anomalies and other synthetic-media manipulation (arXiv).
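
To make the verification step concrete, here is a minimal sketch in Python of an out-of-band verification gate for payment requests. It is illustrative only: the threshold, the directory contents, and the channel labels are hypothetical assumptions for this sketch, not a description of any vendor's product or any company's actual controls.

```python
from dataclasses import dataclass

# Hypothetical threshold: any request at or above this amount needs
# out-of-band confirmation, regardless of who appears to be asking.
VERIFICATION_THRESHOLD_USD = 10_000

# Channels that can plausibly be synthesised with today's tools.
IMPERSONATABLE_CHANNELS = {"video", "voice", "chat"}

@dataclass
class PaymentRequest:
    requester: str              # who appears to be asking, e.g. "CEO"
    amount_usd: float
    channel: str                # "video", "voice", "chat", "email", ...
    marked_confidential: bool   # "don't tell anyone" is itself a red flag

def requires_out_of_band_check(req: PaymentRequest) -> bool:
    """Flag requests that are large, secret, or arrived on a channel
    that a deepfake can convincingly imitate."""
    return (
        req.amount_usd >= VERIFICATION_THRESHOLD_USD
        or req.marked_confidential
        or req.channel in IMPERSONATABLE_CHANNELS
    )

def verification_instruction(req: PaymentRequest, directory: dict) -> str:
    """Tell the employee how to verify: call back on a number sourced
    independently from the company directory, never one supplied in
    the request itself (an attacker controls that channel)."""
    contact = directory.get(req.requester)
    if contact is None:
        return f"No directory entry for '{req.requester}': escalate, do not pay."
    return (f"Before releasing USD {req.amount_usd:,.0f}, call {req.requester} "
            f"back on directory number {contact} and confirm verbally.")

if __name__ == "__main__":
    # An Arup-style request (video call, large sum, sworn to secrecy)
    # trips every rule in the gate.
    directory = {"CEO": "+1-555-0100"}  # hypothetical directory entry
    req = PaymentRequest("CEO", 250_000, "video", marked_confidential=True)
    if requires_out_of_band_check(req):
        print(verification_instruction(req, directory))
```

The design point is that verification must travel over a channel the requester did not choose: a phone number pasted into the suspicious chat proves nothing, while a number pulled independently from the company directory forces the attacker to compromise a second system.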
M2 Take
The era of "CEO email from Africa"-style fraud has evolved. What boards now confront is far more subtle and far more dangerous: the synthetic impersonation of leadership. When your own CEO's face and voice can be cloned and used to authorise multi-million-dollar transfers, the traditional safeguards (email filters, phishing training, firewalls) are no longer sufficient. Boardrooms must treat deepfake fraud as a strategic risk, not just an IT problem. The leadership voice is the next frontier of corporate attack.