The Deepfake Threat to Salesforce: When Trust Becomes a Weapon

In June 2025, a Google employee received what appeared to be a routine call from IT support.

The voice on the line sounded professional, confident, and completely familiar. 

The technician asked the employee to approve a new app in the company’s Salesforce system.

Within minutes, the attackers had access and stole 2.55 million customer records from Google’s CRM.

What made this possible was deepfake audio technology: AI-generated voices so convincing that they fooled one of the most trusted forms of authentication, recognizing a colleague’s voice.

This incident, tied to the ShinyHunters group, shows how attackers are now using artificial intelligence to break into company systems.

As the backbone of customer relationship management for millions of organizations worldwide, Salesforce has become one of the main targets for a new generation of AI-powered social engineering attacks.

Why Salesforce Became a Deepfake Target

Salesforce’s growth has also made it a prime target for data theft.

Since everything is centralized, a single breach can expose millions of customer records from many different companies.

As WithSecure’s Head of Threat Intelligence Tim West notes:


“Hacking groups like Scattered Spider deploy social engineering to gain access to SaaS environments. Their attacks may look technically simple, but that doesn’t make them any less dangerous.”


According to new WithSecure research, malicious activity inside Salesforce environments rose sharply in the first quarter of 2025, with a twenty-fold increase in detections compared to late 2024.

How Deepfakes Fueled a Salesforce Breach

What makes recent attacks especially dangerous is how deepfake technology has shifted from a niche tool to something anyone can use as a weapon.

Unlike traditional data breaches that break directly into databases, cybercriminals are now using voice-based social engineering, or “vishing.” With the rise of deepfakes and AI voice cloning, these attacks are becoming much harder to spot.


The ShinyHunters campaign against Salesforce customers follows an effective playbook that combines traditional social engineering with cutting-edge AI deception:

Phase 1: Voice Intelligence Gathering

Attackers begin by harvesting audio samples from public sources, executive presentations, conference calls, company videos, or social media posts. 

With as little as 20-30 seconds of clear audio, they can create convincing voice clones.

Phase 2: The Deepfake Vishing Call

During a vishing call, the attacker convinces the victim to go to Salesforce’s connected app setup page and approve a fake version of the Data Loader app, disguised with slightly altered branding.

With that, the victim unknowingly lets the attackers steal sensitive data from Salesforce.

The sophistication is remarkable. In some cases, attackers used deepfake audio to impersonate employees and persuade help desk staff to authorize rogue access. 

This represents a significant evolution from traditional voice phishing, where attackers simply pretended to be authority figures, to AI-enhanced impersonation, where they can actually sound like specific individuals.

Phase 3: OAuth Exploitation

Once the malicious application is authorized, attackers obtain long-lived OAuth tokens that bypass multi-factor authentication entirely, letting them operate without setting off normal security alerts.
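The token abuse described above can be audited from the defender’s side. Below is a minimal, hypothetical sketch: the record shape loosely mirrors fields on Salesforce’s OauthToken object (AppName, UseCount), but the allowlist, thresholds, and sample data are illustrative assumptions, not a production control.

```python
# Illustrative audit of connected-app OAuth tokens. The record fields
# loosely mirror Salesforce's OauthToken object (AppName, UseCount);
# the allowlist and the sample data below are hypothetical.
APPROVED_APPS = {"Salesforce CLI", "Data Loader"}

def flag_suspicious_tokens(tokens, approved=APPROVED_APPS):
    """Return token records whose app name is not on the allowlist."""
    return [t for t in tokens if t["AppName"] not in approved]

tokens = [
    {"AppName": "Data Loader", "UseCount": 12},
    {"AppName": "Data Loadr", "UseCount": 4800},  # look-alike branding
]
print([t["AppName"] for t in flag_suspicious_tokens(tokens)])
# prints ['Data Loadr']
```

An allowlist of exact app names is deliberately strict here: the ShinyHunters playbook relies on near-identical branding, which fuzzy matching would be more likely to wave through.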

Phase 4: Silent Data Extraction

Google’s Threat Intelligence Group warned that a threat actor used a Python tool to automate the data theft process for each organization that was targeted, with researchers aware of over 700 potentially impacted organizations.
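Automated extraction at this scale tends to stand out against a user’s normal export volume. A minimal detection sketch, assuming a hypothetical log shape (per-user daily row counts) rather than Salesforce’s actual Event Monitoring schema:

```python
from statistics import mean, stdev

# Hypothetical anomaly heuristic: flag users whose export volume today
# sits far above their own historical baseline. Log shape and threshold
# are illustrative assumptions, not a real Event Monitoring schema.
def flag_bulk_exports(history, today, z_threshold=3.0):
    """history: {user: [daily row counts]}; today: {user: row count}."""
    flagged = []
    for user, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        if sigma and (today.get(user, 0) - mu) / sigma > z_threshold:
            flagged.append(user)
    return flagged

history = {"alice": [200, 250, 180, 220], "bob": [100, 90, 110, 95]}
today = {"alice": 240, "bob": 500_000}  # bob's export looks like a dump
print(flag_bulk_exports(history, today))
# prints ['bob']
```

Because the fake Data Loader operates with a legitimate OAuth token, volume-based baselining of this kind is often one of the few signals left once authentication has already been defeated.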

Why We’re Vulnerable to AI Voices

The success of these attacks exploits a fundamental human trait: our tendency to trust what we hear.

According to one recent global study, 70 percent of people say they’re not confident they can identify a real versus a cloned voice.

This vulnerability is compounded in corporate environments where help desk staff are trained to be helpful and accommodating, and remote work has normalized audio-only interactions.

According to CrowdStrike’s 2025 Global Threat Report, there was a 442% increase in voice phishing (vishing) attacks between the first and second halves of 2024, driven by AI-generated phishing and impersonation tactics.

The $25 Million Wake-Up Call


The implications of deepfake-enhanced attacks extend far beyond Salesforce. 

The $25 million deepfake heist at engineering firm Arup in early 2024, where attackers used AI to impersonate multiple executives on a video call, demonstrated that no organization is immune to this threat. 

Similar attacks have also targeted executives in different industries, including an attempt to impersonate Ferrari CEO Benedetto Vigna using AI-cloned voice calls that mimicked his southern Italian accent.

These incidents represent what security experts call “CEO fraud 2.0”: attacks that go beyond simple email impersonation to create multi-sensory deceptions that can fool even experienced executives.

Platform Security vs. Human Vulnerability

Salesforce has been quick to emphasize that these breaches don’t stem from vulnerabilities in the platform itself.

The company acknowledged UNC6040’s campaign in March 2025, warning that attackers were impersonating IT support to trick employees into giving away credentials or approving malicious connected apps, and stressed that the incidents did not involve or originate from any flaw in its platform.

This shows a major challenge for all SaaS providers: figuring out how to stop attacks that play on human trust instead of technical flaws.

Salesforce has implemented several defensive measures:

  • Connected app hardening: Automatically disabling non-installed connected apps for new users
  • OAuth flow restrictions: Disabling connections obtained using certain authorization processes
  • Enhanced monitoring: Improved detection of suspicious application authorization patterns
  • User education: Publishing guidance on recognizing social engineering attempts

In August 2025, Salesforce shut down all integrations with Salesloft technologies, including the Drift app, after finding that OAuth tokens had been stolen in related attacks. 

Security experts now warn that teams need to listen for subtle clues, like odd pauses, background noise, or audio glitches, that can reveal when a voice has been generated by AI.

The Enterprise Response: Beyond Technology Solutions

Organizations are beginning to recognize that defending against deepfake-enhanced attacks requires more than technological solutions; it demands a fundamental rethinking of trust and verification processes.

Zero-Trust Communication

RealityCheck by Beyond Identity gives every participant a visible, verified identity badge that’s backed by cryptographic device authentication and continuous risk checks, currently available for Zoom and Microsoft Teams. 

Such solutions represent a shift toward “never trust, always verify” models for enterprise communication.

Enhanced Training Programs

Organizations using Resemble AI’s deepfake simulation platform, which uses hyper-realistic simulations to show how deepfake attacks unfold in the real world, report up to a 90% reduction in successful attacks.

Multi-Channel Verification

Leading organizations are implementing protocols that require verification through multiple channels for any high-risk requests, regardless of how authentic the initial communication appears.
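One way to make multi-channel verification concrete is a one-time code echoed back across a second channel. The sketch below is a minimal illustration, assuming a hypothetical out-of-band delivery function (e.g., SMS or push to a pre-registered device); it is not any vendor’s actual protocol.

```python
import secrets

# Minimal sketch of a multi-channel verification gate. A high-risk
# request made by voice is approved only after the caller echoes back
# a single-use code delivered over a separate, pre-registered channel.
class VerificationGate:
    def __init__(self):
        self._pending = {}

    def start(self, request_id, send_code):
        code = f"{secrets.randbelow(10**6):06d}"
        self._pending[request_id] = code
        send_code(code)  # hypothetical out-of-band delivery

    def confirm(self, request_id, code):
        # pop() makes each code single-use, even if guessed later
        return self._pending.pop(request_id, None) == code

gate = VerificationGate()
sent = []
gate.start("req-42", sent.append)        # code goes out on channel two
print(gate.confirm("req-42", sent[0]))   # True: caller read the code back
print(gate.confirm("req-42", sent[0]))   # False: the code was single-use
```

The point of the design is that a cloned voice alone cannot complete the loop: the attacker would also need control of the victim’s registered second channel.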

When Voice Is No Longer Truth

The deepfake threat to Salesforce and other enterprise systems represents an inflection point in cybersecurity history. 

For the first time, attackers can not only impersonate authority figures; they can actually sound like them, look like them, and convince even trained security professionals to take actions that compromise organizational security.

In an age of AI-generated deception, trust has to be earned through verification, not assumed through familiarity.

Companies that understand this and invest in the right tools, processes, and culture will be in the best position to protect both security and trust in an increasingly uncertain digital world.

In this new reality, the voice on the other end of the line may not be who it seems.

In the age of deepfakes, constant vigilance is the price of security.

Copyright © 2025 TruthScan. All Rights Reserved