The Weak Link: Us
In the era of GenAI, both the volume and the sophistication of social engineering attacks that exploit human vulnerabilities have risen sharply. With deepfake technology, it’s easier than ever to fall for scams, and criminals are taking full advantage. For example, in the second half of 2024, voice phishing (vishing) attacks that clone real voices grew by 442%. When these attacks succeed, the consequences can be particularly severe: recently, an employee at a financial firm in Hong Kong transferred $25.5 million to attackers following a fake video call with the CFO. So, what can we do? Strengthen our defenses!
In 2024, there was a significant rise in social engineering attacks, where criminals exploit human psychology to trick people into revealing sensitive information, clicking on malicious links, or transferring money to fake accounts. Unlike traditional hacking methods, the focus here is on the person, not the computer: the hacker doesn’t bypass security systems; they bypass the victim’s judgment.
The troubling part is that generative artificial intelligence has handed attackers new capabilities with vast potential for abuse, letting them extend their reach and sharpen their tactics. According to a report by SlashNext, since the widespread adoption of GenAI platforms in 2022, there has been a 1,265% increase in phishing attacks via email and a 967% rise in phishing attempts aimed at stealing login credentials.
AI has made cybercrime more accessible and appealing than ever. It’s easy to see why: today, anyone can draft fluent, error-free emails in multiple languages and weave in personal details lifted from a target’s social media profiles. With tools like ChatGPT, not only can more people create these emails, they can also generate multiple versions of them, each tailored to a specific target’s online presence.
AI doesn’t necessarily create new criminals, but it allows those already involved in other types of crime to transition into cybercrime. Small crime groups, once deterred by the technical complexities of cyber attacks, are now adopting AI-driven tactics. This allows individuals with limited technical knowledge to launch convincing phishing campaigns or create malicious code with minimal effort. AI has made cybercrime not only more accessible but also more attractive—thanks to the lower risks and costs compared to traditional criminal activities. Now, everyone is a target: individuals as well as organizations.
According to a report from the World Economic Forum, in collaboration with Accenture, published in January 2025, approximately 72% of organizations report an increase in cyber risks, with ransomware remaining a major concern. Nearly 47% of organizations cite GenAI-powered advances by malicious actors as their top concern.
With Deepfake Technology, Scams Are Easier Than Ever to Fall For
Deepfake technology enables the manipulation of sound, images, and videos to impersonate others. While it was once complex and expensive, it’s now far more accessible. Criminals can replicate voices using just an hour of YouTube recordings and an $11 subscription, which explains the surge in phone scams. For instance, in the second half of 2024, voice phishing attacks (vishing) using AI to replicate real voices saw a 442% increase, with attackers mimicking the voices of executives, family members, or technical support staff to deceive victims.
The DFIR team from 2BSecure, a subsidiary of Matrix, was called in to investigate one such case, in which an employee of a major financial institution fell victim. It began with a flood of spam emails that overwhelmed a salesperson’s inbox. Naturally, he reached out to the IT department for assistance. The following day, he received a call from someone claiming to be an IT staff member offering to fix the spam issue. When the real IT staff member contacted him later, the employee responded: “Everything’s already sorted, thanks anyway.” The investigation revealed that the employee had downloaded a file named spam_filter.txt from the attacker and run it through PowerShell. This gave the attackers control of the computer and could have escalated into a double-extortion ransomware attack and significant damage to the organization. You can read more about the case here.
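The pattern in that incident, a benign-looking .txt file whose contents end up executed by PowerShell, leaves traces defenders can hunt for. The sketch below is illustrative rather than 2BSecure’s actual tooling: it assumes PowerShell script block logs (Event ID 4104) have been exported as JSON lines with ScriptBlockText, Computer, and TimeCreated fields, and it flags common download-and-execute indicators.

```python
import json
import re
import sys

# Heuristic indicators of a "download a file and run it through PowerShell" pattern,
# like the spam_filter.txt case: download cradles, in-memory execution, script
# content read from files whose extension hides their purpose, and encoded commands.
SUSPICIOUS_PATTERNS = [
    r"Invoke-Expression|IEX\s*\(",                                  # in-memory execution
    r"DownloadString|DownloadFile|Invoke-WebRequest|curl\s+http",   # download cradles
    r"Get-Content\s+.*\.(txt|log|jpg)\b",                           # script body hidden in a non-script file
    r"-EncodedCommand|-enc\s",                                      # obfuscated command lines
]

def flag_events(path):
    """Scan an exported log (one JSON object per line, assumed to carry a
    ScriptBlockText field as in Event ID 4104 exports) and print suspicious hits."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            event = json.loads(line)
            text = event.get("ScriptBlockText", "")
            hits = [p for p in SUSPICIOUS_PATTERNS if re.search(p, text, re.IGNORECASE)]
            if hits:
                print(f"[!] host={event.get('Computer', '?')} "
                      f"time={event.get('TimeCreated', '?')} matched={hits}")

if __name__ == "__main__":
    flag_events(sys.argv[1])
```

None of this replaces endpoint protection; it simply shows that the behavior described above is detectable when script block logging is enabled and actually reviewed.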
The use of deepfake videos has also spread: fabricated clips of famous figures (such as Elon Musk or former presidents) delivering lines they never said. These videos have been used for investment fraud, fake fundraising, and spreading disinformation with political and social implications. In a serious case uncovered in February this year, attackers staged an entire video meeting, deepfaking the CFO and other employees of a financial company in Hong Kong, and convinced an employee to transfer $25.5 million in an urgent bank transaction.
So, What’s the Solution? Build Both Human and Technological Resilience
The combination of advanced technology and emotional manipulation has made defense a complex challenge. As AI improves, the ability to create convincingly fake communications—emails, phone calls, text messages, and videos that look completely real—grows stronger. Cybersecurity experts emphasize the need for increased awareness, the development of new detection tools, and targeted employee training to counter the next wave of social engineering attacks.
It’s crucial to combine technological measures with awareness and clear organizational protocols. First, multi-factor authentication (MFA) should be implemented, and any unusual request for money transfers, even from a senior person, should be verified via a separate communication channel—such as a phone call or in-person meeting. It’s important to understand that even a video call isn’t necessarily proof of the speaker’s identity in an era where faces and voices can be convincingly faked.
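To make the “separate channel” rule concrete, here is a minimal sketch of such a policy check. The names (PaymentRequest, OOB_THRESHOLD, the channel labels) and the threshold are illustrative assumptions, not any specific organization’s procedure.

```python
from dataclasses import dataclass

OOB_THRESHOLD = 10_000                               # above this, require a second channel
TRUSTED_CHANNELS = {"phone_callback", "in_person"}   # independent channels for confirmation

@dataclass
class PaymentRequest:
    requester: str
    amount: float
    request_channel: str    # channel the request arrived on, e.g. "email", "video_call"
    verified_via: set       # channels on which it was independently confirmed

def approve(req: PaymentRequest) -> bool:
    """Approve large transfers only when confirmed on a separate, trusted channel;
    a video call alone is never treated as proof of identity."""
    if req.amount <= OOB_THRESHOLD:
        return True
    independent = req.verified_via & (TRUSTED_CHANNELS - {req.request_channel})
    return bool(independent)

# A $25.5M request that arrived over a video call and was never confirmed
# by phone callback or in person is rejected.
print(approve(PaymentRequest("CFO", 25_500_000, "video_call", set())))   # False
```

The point of the design is that confirmation must travel over a channel the attacker does not control, which is exactly what was missing in the Hong Kong case.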
Additionally, it pays to train employees to recognize suspicious signs: language that is unusual for the sender, pressure to expedite a process, or secretive requests that don’t match established procedures. Alongside this, organizations should adopt technological solutions that help detect fake content, such as deepfake detection tools, behavioral analysis, and anomaly detection systems for organizational communication. At Matrix, we have a dedicated team, CyberShield_AI, that develops automation to strengthen the organization’s first line of defense, and our DFIR team is available for defense consulting and on hand if you suspect a breach. It’s worth remembering that even in the age of artificial intelligence, human vigilance remains the first line of defense.
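As a closing illustration, the “suspicious signs” listed above can even be approximated in a few lines of code. This is a toy heuristic scorer, not a product: the keyword lists, the flag for first-time senders, and the idea that two or more signs warrant escalation are all assumptions for the sake of the example.

```python
import re

# Toy heuristics for the warning signs described above: urgency, secrecy,
# and payment language, plus a sender never seen in prior correspondence.
URGENCY = re.compile(r"\b(urgent|immediately|right now|within the hour)\b", re.I)
SECRECY = re.compile(r"\b(confidential|keep this between us|do not tell|discreet)\b", re.I)
PAYMENT = re.compile(r"\b(wire|transfer|bank account|invoice|payment)\b", re.I)

def risk_score(body: str, sender_is_new: bool) -> int:
    """Count independent warning signs; two or more suggests a manual, out-of-band check."""
    score = sum(bool(p.search(body)) for p in (URGENCY, SECRECY, PAYMENT))
    if sender_is_new:
        score += 1
    return score

msg = "This is urgent, please wire the payment today and keep this between us."
print(risk_score(msg, sender_is_new=True))   # 4 -> escalate for verification
```

Real detection systems are far more sophisticated, but the underlying idea is the same: the signs employees are trained to notice can also be encoded and monitored at scale.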
By: Nomy Borenstein, Matrix