Concise Cyber

AI Supercharges Social Engineering: A $25M Deepfake Scam Reveals New Business Threats

Cybercriminals are now leveraging artificial intelligence to dramatically increase the sophistication of social engineering attacks, moving beyond simple phishing emails to highly convincing deepfakes. A recent, high-profile incident illustrates the severity of this threat: a finance worker at a multinational firm was deceived into transferring $25 million after participating in a deepfake video conference. Attackers used AI to digitally recreate the company’s UK-based chief financial officer and other employees, successfully persuading the target to authorize the payment.

Initially, the employee had been suspicious of an email request but accepted its legitimacy after the video call, in which the voices and appearances of colleagues were faithfully impersonated. This event demonstrates a significant evolution in attack methods, making it far harder for employees to identify fraudulent communications.

The Escalation of AI-Powered Attacks

The use of generative AI allows attackers to craft social engineering campaigns at an unprecedented scale and level of personalization. AI tools can create phishing emails that are free of the spelling and grammatical errors that once served as red flags. These tools analyze public data from sources like LinkedIn to tailor messages that are specific and believable to the intended victim. According to a 2024 report from Darktrace, there was a 135% increase in novel social engineering attacks between 2022 and 2023.

Beyond text-based attacks, AI-powered voice cloning is being used in vishing (voice phishing) campaigns. The hacking group known as Scattered Spider utilized this technique in its attack on MGM Resorts, successfully impersonating employees to gain access to internal systems. AI can replicate a person’s voice with only a small audio sample, enabling attackers to bypass voice-based identity verification and deceive individuals over the phone.

Defensive Strategies for Businesses

In response to these AI-driven threats, businesses are advised to adopt a multi-layered security posture. This involves deploying advanced technological defenses, such as email security gateways that use their own AI to detect anomalies and malicious content that traditional filters might miss. These systems can analyze communication patterns to flag unusual requests, even if they appear to originate from a legitimate source.
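To make the idea of pattern-based flagging concrete, here is a deliberately simplified sketch of anomaly scoring for an inbound payment request. Commercial AI gateways use learned models over far richer signals; every feature, weight, and name below (such as `anomaly_score` and `URGENCY_WORDS`) is invented for illustration only.

```python
import re

# Toy illustration of pattern-based anomaly scoring for inbound email.
# Real AI email security gateways use trained models; these hand-picked
# features and weights are purely illustrative.

URGENCY_WORDS = {"urgent", "immediately", "confidential", "wire"}

def anomaly_score(sender_domain: str, usual_domain: str,
                  body: str, requests_payment: bool,
                  sender_seen_before: bool) -> int:
    score = 0
    if sender_domain != usual_domain:
        score += 2  # look-alike or unexpected external domain
    if not sender_seen_before:
        score += 1  # first contact from this address
    if requests_payment:
        score += 2  # financial request raises the stakes
    words = set(re.findall(r"\w+", body.lower()))
    score += len(URGENCY_WORDS & words)  # pressure language
    return score

msg = "Please wire the funds immediately, this is confidential."
# Spoofed look-alike domain, unknown sender, payment request, urgent tone:
score = anomaly_score("examp1e.com", "example.com", msg, True, False)
print(score)  # high score -> hold the message for human review
```

The point of the sketch is that no single signal is decisive; it is the combination of an unusual sender, a financial request, and pressure language that pushes a message over a review threshold, mirroring how gateways flag requests that "appear to originate from a legitimate source."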

Employee education remains a critical component of defense. Security awareness training must evolve to address these new threats. This includes conducting phishing simulations that utilize AI-generated content to better prepare staff for real-world scenarios. Additionally, establishing and enforcing strict internal processes is vital. Adopting a Zero Trust security framework, which operates on the principle of “never trust, always verify,” helps mitigate risk. For high-stakes transactions, businesses should mandate out-of-band verification, such as requiring a phone call to a pre-verified number before any funds are transferred.
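The out-of-band verification rule above can be sketched as a simple approval gate. This is a minimal illustration, not any real system's workflow: the threshold, the `VERIFIED_CONTACTS` registry, and the function names are all hypothetical.

```python
# Hypothetical sketch of an out-of-band verification gate for
# high-value transfers. All names and values here are illustrative.

THRESHOLD_USD = 10_000

# Pre-verified phone numbers, maintained separately from any request.
# Crucially, the callback number is NEVER taken from the email or video
# call itself -- the attacker controls everything inside the request.
VERIFIED_CONTACTS = {
    "cfo@example.com": "+44 20 7946 0000",
}

def approve_transfer(requester: str, amount_usd: float,
                     callback_confirmed: bool) -> bool:
    """Approve a transfer only if out-of-band verification succeeded."""
    if amount_usd < THRESHOLD_USD:
        return True   # below threshold: normal controls apply
    if requester not in VERIFIED_CONTACTS:
        return False  # no registered out-of-band channel -> deny
    # callback_confirmed must reflect a human calling the pre-verified
    # number and confirming the request with the real person.
    return callback_confirmed

# A $25M request is always held until the callback succeeds:
print(approve_transfer("cfo@example.com", 25_000_000, False))  # False
print(approve_transfer("cfo@example.com", 25_000_000, True))   # True
```

The design choice worth noting is that the verified contact list lives outside the communication channel being attacked; a deepfake can impersonate a CFO on a call, but it cannot answer the CFO's pre-registered phone number.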

Source: https://www.techradar.com/pro/how-ai-is-supercharging-social-engineering-and-what-businesses-can-do-about-it