As the old saying goes, “speed kills,” and the world of cybersecurity is no different. Artificial intelligence (AI) cyber attacks enable hackers to break into networks and find critical data assets before security analysts can spot them.
Unfortunately, AI-driven attacks aren’t a science fiction invention but a reality that security teams face daily.
For instance, the widespread adoption of generative AI tools like ChatGPT and Bard appears to have led to a dramatic increase in phishing attacks. A report produced by cybersecurity vendor SlashNext found a 1,265% increase in malicious phishing emails since the launch of ChatGPT.
The State of AI in Cyber Attacks in 2024
For years, defenders have discussed how AI could be used in cyber attacks, and the rapid development of large language models (LLMs) has heightened concerns about the risks they present.
In March 2023, anxiety over automated attacks was high enough that Europol issued a warning about the criminal use of ChatGPT and other LLMs. Meanwhile, NSA cybersecurity director Rob Joyce warned companies to “buckle up” for the weaponization of generative AI.
Since then, threat activity has been on the rise. One study released by Deep Instinct, which surveyed over 650 senior security operations professionals in the U.S., including CISOs and CIOs, found that 75% of respondents had witnessed an increase in attacks over the past 12 months.
Furthermore, 85% of respondents attributed this increase to bad actors using generative AI.
If 2023 was the year that generative AI-led cyber attacks moved from a theoretical to an active risk, then 2024 is the year organizations need to be prepared to adapt to them at scale. The first step toward that is understanding how hackers use these tools.
How Generative AI Can Be Used for Bad
There are several ways threat actors can exploit LLMs, from crafting phishing emails and social engineering scams to generating malicious code, malware, and ransomware.
Mir Kashifuddin, data risk and privacy leader at PwC US, told Techopedia:
“The accessibility of GenAI has lowered the barrier to entry for threat actors to leverage it for malicious purposes. According to PwC’s latest Global Digital Trust Insights Survey, 52% of executives say they expect GenAI to lead to a catastrophic cyber attack in the next year.
“Not only does it allow them to rapidly identify and analyze the exploitability of their targets, but it also enables an increase in attack scaling and volume. For example, using GenAI to quickly mass triage a basic phishing attack is easy for adversaries to identify and entrap susceptible individuals.”
Phishing attacks are popular with attackers because all they need to do is jailbreak a legitimate LLM, or use a purpose-built dark LLM like WormGPT, to generate an email convincing enough to trick an employee into visiting a compromised website or downloading a malware attachment.
Using AI for Good
As concerns over AI-generated threats rise, more organizations are looking to invest in automation to protect against the next generation of fast-moving attacks.
According to a study by the Security Industry Association (SIA), 93% of security leaders expect generative AI to impact their business strategies within the next five years, and 89% have active AI projects in their research and development (R&D) pipelines.
In the future, AI will be an integral part of enterprise cybersecurity. This is demonstrated by research from Zipdo, which finds that 69% of enterprises believe they cannot respond to critical threats without AI.
After all, if cybercriminals can create phishing scams at scale via language models, defenders need to scale up their ability to defend against them, as relying on human users to spot every scam they encounter simply isn’t sustainable in the long term.
At the same time, more organizations are investing in defensive AI because these solutions offer security teams a way to reduce the time it takes to identify and respond to data breaches while cutting down the manual administration needed to keep a security operations center (SOC) functioning.
Organizations can’t afford to manually monitor and analyze threat data in their environments without the assistance of automated tools because it’s simply too slow – particularly when the cybersecurity workforce faces a shortfall of roughly 4 million people.
Thus, AI provides security teams with a way to automate tasks ranging from threat hunting and malware analysis to vulnerability detection, network inventorying, and phishing email containment – or even entire workflows.
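To make that concrete, here is a minimal sketch of what automating one of those tasks – phishing email triage and containment – might look like. It assumes the OpenAI Python SDK (v1+) with an API key in the environment; the model name, prompt, and quarantine logic are illustrative assumptions, not a production design or any vendor’s actual product.

```python
# Hypothetical sketch: LLM-assisted phishing triage, so that only risky
# mail reaches a human analyst. Model and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a phishing-triage assistant. Given a raw email, reply with "
    "exactly one word: PHISHING, SUSPICIOUS, or BENIGN."
)

def triage_email(raw_email: str) -> str:
    """Return a coarse verdict for an inbound email."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": raw_email},
        ],
        temperature=0,  # keep verdicts consistent across runs
    )
    return response.choices[0].message.content.strip().upper()

# Usage: quarantine anything the model does not explicitly call benign.
if triage_email("Subject: Urgent! Verify your payroll account...") != "BENIGN":
    print("Routing message to quarantine for analyst review")
```

In practice, a verdict like this would feed an email gateway’s quarantine rules rather than a print statement, with the model acting as one signal among several.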
Part of these defenses can involve using generative AI to sift through threat signals – one of the core value propositions of LLM-driven security products released by vendors including Microsoft, Google, and SentinelOne.
The Role of LLMs in the Cybersecurity Market
One of the most significant developments in cybersecurity AI came in April 2023, when Google announced Sec-PaLM, an LLM designed specifically for cybersecurity use that can process threat intelligence data to provide detection and analytics capabilities.
The launch underpinned two notable tools: VirusTotal Code Insight, which analyzes and explains the behavior of scripts to help users identify malicious code, and Breach Analytics for Chronicle, which automatically alerts users to active breaches in their environment, along with contextual information they can follow up on.
Likewise, Microsoft Security Copilot uses GPT-4 to process threat signals taken from across a network and generate a written summary of potentially malicious activity so that human analysts can investigate further.
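As an illustration of that general pattern (not Microsoft’s implementation), a defender could feed correlated alert lines to any capable chat model and get back a first-pass incident narrative. The sketch below assumes the OpenAI Python SDK with an API key in the environment; the log lines and prompt are invented for the example.

```python
# Hypothetical sketch: turning raw threat signals into an analyst-readable
# summary. The log lines and prompt are invented for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_signals(log_lines: list[str]) -> str:
    """Condense correlated alerts into a short incident summary for a SOC analyst."""
    prompt = (
        "Summarize the following security events in plain English, name the "
        "likely attack technique, and suggest one next investigative step:\n\n"
        + "\n".join(log_lines)
    )
    response = client.chat.completions.create(
        model="gpt-4",  # the article notes Copilot is built on GPT-4
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # mostly factual output, small room for phrasing
    )
    return response.choices[0].message.content

print(summarize_signals([
    "08:01 203.0.113.7 -> HOST-FIN-02: failed RDP login for 'admin' (x200)",
    "08:03 203.0.113.7 -> HOST-FIN-02: successful RDP login for 'admin'",
    "08:05 HOST-FIN-02: new service 'updater.exe' registered and started",
]))
```

The value is in the framing step: a brute-forced login followed by a suspicious new service is obvious to an experienced analyst, but a generated summary gives a junior responder the same starting point in seconds.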
While these are just a handful of the products using LLMs in a security context, they highlight the broader role such models have to play in the defensive landscape as tools that reduce administrative burdens and enhance contextual understanding of active threats.
The Bottom Line
Whether AI proves a net positive or negative for the threat landscape will come down to who uses it better: the attackers or the defenders.
If defenders aren’t prepared for a rise in automated cyber attacks, they will be vulnerable to exploitation. However, organizations that embrace these technologies to optimize their SOCs can not only stave off those threats but also automate the less rewarding manual work in the process.