The Cyber GenAI Arms Race

Zehavit Hazout, head of the automation department at 2Bsecure image
Zehavit Hazout, head of the automation department at 2Bsecure

The good news: GenAI technology has added some powerful tools to the cyber defender’s toolbox. The bad news: cyber attackers are also benefiting from the huge potential of the technological revolution. We sat down for a conversation with Zehavit Hazout, head of the automation department at 2Bsecure, about how to protect an organization’s systems in the age of machine learning.

It goes like this: an interface named Gandalf has been instructed to protect a secret password at all costs, and you must do all you can to convince him to reveal it to you. If you succeed, everyone wins: you advance to the next level of the game, and Gandalf learns how to make his protective armor even stronger. This game, which was developed by an information security startup in order to strengthen protection systems, allows the user to experiment with prompt injection processes, whereby the user tries to extract information from the model, and the model tries to protect it.

Prompt injection attacks have become very popular among hackers, who try to exploit weaknesses in models like ChatGPT to steal knowledge or influence the model’s behavior, and mislead users. This is an example of the new types of attacks born as a result of the rise of GenAI technology, and how defense systems can be strengthened in order to defend against them. Over the past year, technology giants and cyber security companies have been releasing GPT-based tools for information and cyber security professionals. Among others, Microsoft unveiled its Microsoft Security Copilot, which uses generative artificial intelligence to beef up human analysts in investigation and response, Airgap Networks announced ThreatGPT, a machine learning model for Zero Trust Firewall, and SentinelOne released a new platform that combines several models.

 

“A machine can go through millions of lines of code in a few minutes or a few hours; for a human to do so would take forever.”

 

“GenAI has significantly strengthened efforts to protect information security in cyberspace, although even before the latest developments in GenAI technology, artificial intelligence was already being widely used in the worlds of information security and cyberspace,” says Zehavit Hazout, Head of Automation at 2Bsecure. “Take for example the issue of code review. It used to be the accepted practice for an information security expert to work with companies to go over their lines of code, looking for loopholes. But today the preference is for automation. When there are millions of lines of code, there is no doubt that the capabilities of a machine surpass those of the human eye. In fact, we have been using automation systems for code review purposes for a long time. A machine can go through millions of lines of code in a few minutes or a few hours, which for a human would take forever, or at least a very long time. So, we activate automation systems, and then a human goes through and makes sure there are no errors. As mentioned, it worked like this even before the latest developments, but the GenAI revolution undoubtedly expands and refines the use of automation in the world of information security and cyber.

 

“For example, by continuously training AI algorithms with diverse data sets, we can improve the level of accuracy of threat detection, and stay ahead of emerging cyberthreats. Behavioral analysis by AI algorithms provides cyber teams with a deep understanding of user behaviors, so that suspicious activities that may indicate a cyberattack can be quickly identified. In addition, automated response systems based on artificial intelligence can quickly neutralize threats, minimizing their impact on organizations and individuals by responding in real time. Generative AI can recommend to SOC analysts the best course of action for future protection and prevention needs. As a rule, automation can be integrated into all the key points in the protection of an organization’s systems and strengthen information security personnel, thereby significantly fortifying the defense system. Also, thanks to the ability of these models to generate synthetic data, security professionals can create simulations of cyberattacks more easily, which also enables them to strengthen their defense strategies. So, there are many applications.”

Attackers are embracing the new technology. The number of phishing attacks using emails jumped by 135% in the first two months of 2023 (according to a study by Darktrace), and WormGPT, the dark twin of ChatGPT, is on the rise. ​

 

“Besides the promising progress in cyber security protection, GenAI also carries quite a few challenges and risks,” says Zehavit. “Attackers are early adopters of innovative technologies, and GenAI presents them with a new space that has rich potential for their activities. According to a study by Darktrace, the number of phishing attacks using emails jumped by 135% in the first two months of 2023. In general, social engineering attacks (the type of attack that take advantage of ‘human bugs’, for example by tempting the user to download or click on something) are thriving this year, thanks to GenAI technology. Think how easy it is today to draft eloquent emails, without spelling errors and in different languages, and even incorporate within them personal information based on the target’s social media pages. Using ChatGPT, not only can more people create such emails, but they can also be duplicated in infinite versions, and connected to the digital footprints of the targets for the attack. And that’s even before we talked about the rich potential of Deep Fake, and the ability to change the content of photos and videos, and thereby pretend to be another person. WormGPT is just one example that has made headlines. It is an artificial intelligence-based tool, without the ethical limitations that ChatGPT has, and it was specially created for the purpose of creating phishing messages, malware and advising on illegal activities.

 

Against this background, the warning against AI attacks published by the FBI last July is not really surprising. In a press conference, FBI officials warned of a series of threats, including: attempts to steal innovative developments from technology companies and research centers in the fields of AI for illegal use, fraud and phishing and malware attacks, and the use of AI models for the purpose of consulting and assisting in terrorist attacks. The rate of attacks is only expected to increase. We will have more hackers, because with the help of AI models, even those who do not have the knowledge will be able to carry out an attack. In addition, the professional level and abilities of attackers will soar and, as a result, so will the potential damage caused. An ordinary user without any technological or technical knowledge will be able to carry out sophisticated attacks. And we haven’t even begun to discuss other aspects concerning the very legitimate use of the technology, such as issues of ethics, transparency and maintaining privacy – you can read about this in our on Responsible AI.

 

Protecting organizations’ systems in the age of machine learning: training, updating procedures, and finally fighting AI with AI

 

As we navigate tomorrow’s cyber world, it is essential to remain vigilant and establish responsible guidelines for leveraging the power of AI for protection, while mitigating the risks of advanced cyber threats. Cyber security experts need to continually update their knowledge and skills to effectively combat evolving threats. At the same time, it should be remembered that most risks related to AI/ML should be covered by existing corporate policies, for example, policies regarding emails or the sharing of data with third parties. Training for employees regarding the dangers of phishing attacks, for example, already takes place in organizations on a regular basis, and should continue, but must be updated regarding the special risks arising from GenAI. (For example, if in the past spelling errors could be a sign of phishing, today it is more verbose emails, which include a lot of text, that may be suspicious). In addition, it is possible to fight AI with AI, strengthening the organization’s defense system through a combination of automations and advanced models.

Finally, collaboration and information sharing between industries and cybersecurity experts will lead to the cultivation of collective knowledge and facilitate the development of strong defense strategies. In this context, it is worth noting that last August, DARPA, the American agency for advanced defense research, which is responsible for some of the technological developments that have changed the world, issued a call for computer science researchers, artificial intelligence experts and software engineers to participate in the AI Cyber Challenge (AIxCC), a two-year competition for the development of the next generation of cyber security tools. This is a first-of-its-kind collaboration between AI companies, born in order to face the challenges posed by the GenAI revolution, and it will be fascinating to follow the developments.”

 

 

People who viewed this page also viewed
OPEN BANKING REVOLUTION
The rise of the super-developer
AI and Deep Learning
Machine Learning
Find out more
Please complete your details and we will contact you
*
*
*
*
JOIN THE MATRIX NATION Back to home page - Matrix