There is a quiet battle between dark and light AI tools. While high-profile vendors like Microsoft and Google have invested heavily in using generative AI defensively, 51% of IT professionals predict that we are less than a year away from a successful cyberattack being credited to ChatGPT.
Although there haven’t been any high-profile data breaches attributed to ChatGPT or other LLM-driven chatbots, a growing number of dark AI tools are for sale on the dark web, marketed for malicious use, including WormGPT, PoisonGPT, FraudGPT, XXXGPT, and WolfGPT.
Multiple threat actors are promoting the sale of "Wolf GPT," a project presented as an alternative to ChatGPT with malicious intent. The tool is built using Python and allegedly offers complete confidentiality, enabling powerful cryptographic malware creation, and advanced…
— FalconFeeds.io (@FalconFeedsio) July 28, 2023
The creators of each of these tools claim they can be used to generate phishing scams, write malicious code for malware, or help exploit vulnerabilities.
A Dark Industry
In early July, email security vendor SlashNext released a blog post explaining how its research team had discovered WormGPT for sale on an underground cybercrime forum, where the seller advertised it as a tool for creating phishing emails capable of bypassing email spam filters.
WormGPT was notable because it used malware training data to inform its responses and was also free of the content moderation guidelines that are associated with mainstream LLMs like Bard and Claude.
That same month, Netenrich discovered a tool called FraudGPT for sale on the dark web and Telegram. The researchers claimed that FraudGPT could be used to create phishing emails and malware-cracking tools, identify vulnerabilities, and commit carding.
FalconFeeds.io also found two other malicious LLM-based tools advertised on a hacking forum in July: XXXGPT and WolfGPT. Hackers claimed the first could create code for malware, botnets, keyloggers, and remote access trojans, while the second could create cryptographic malware and phishing attacks.
What’s the Danger?
There is considerable debate over not just whether these dark AI tools pose a threat but whether many of them exist as independent LLMs at all.
For instance, Trend Micro researchers have suggested that the sellers of tools like WolfGPT, XXXGPT, and Evil-GPT failed to provide adequate proof that they actually worked.
They also suggested that many of these tools could simply be wrapper services that redirect user prompts to legitimate LLMs like ChatGPT, which the sellers have jailbroken to get around the vendor’s content moderation guardrails.
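To picture what such a wrapper amounts to, the minimal sketch below assumes the OpenAI Python SDK (v1+); the function name, model choice, and placeholder system prompt are illustrative, not anything recovered from the tools themselves. The buyer’s text is simply relayed to a mainstream LLM with the seller’s hidden system prompt bolted on.

```python
# Minimal sketch of a "wrapper" service: it injects the seller's own hidden
# system prompt and forwards the buyer's request to a mainstream, legitimate
# LLM API. Assumes the OpenAI Python SDK (v1+); names are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SELLER_SYSTEM_PROMPT = "<hidden prompt injected by the wrapper's operator>"

def wrapper_chat(user_prompt: str) -> str:
    # The buyer never talks to a custom model; their text is simply relayed.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SELLER_SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content
```

If that is all a “custom” dark AI tool really does, it lives or dies with the upstream vendor’s guardrails, which is one reason researchers doubt many of these offerings.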
SlashNext CEO Patrick Harr agrees that many of these tools may just be wrappers but highlights WormGPT as an example of a genuine dark AI tool. He told Techopedia:
“WormGPT is the only real tool that used a custom LLM, and potentially DarkBERT and DarkBART, but we didn’t manage to get access to them.”
“These tools are evolving right in front of our eyes, and like ransomware, some are sophisticated, and some are bolted to other tools to make a quick profit, like the jailbreak versions of chatbots,” Harr added.
The CEO also suggested that more powerful tools like WormGPT could emerge in the future.
“The cybercrime community has proven already that they can develop a dark LLM, and while WormGPT has gone underground, a variant or something better will emerge.”
What’s Next for Dark AI?
The future of dark AI will depend on whether these tools prove profitable. If cybercriminals find they can make money from them, there will be an incentive to invest more time in developing them.
John Bambenek, principal threat hunter at security analytics company Netenrich, told Techopedia:
“Right now, the underground economy is exploring business models to see what takes off, and part of that will depend on the results that customers of these tools achieve.”
So far, these tools are advertised on a subscription basis, with prices as follows:
| Dark AI Tool | Price* |
| --- | --- |
| WormGPT | €100 for 1 month, €550 for 1 year |
| FraudGPT | $90 for 1 month, $200 for 3 months, $500 for 6 months, $700 for 12 months |
| DarkBERT | $110 for 1 month, $275 for 3 months, $650 for 6 months, $800 for 12 months, $1,250 for lifetime |
| DarkBard | $100 for 1 month, $250 for 3 months, $600 for 6 months, $800 for 12 months, $1,000 for lifetime |
| DarkGPT | $200 for lifetime |
*Pricing information is taken from Outpost24’s dark AI study.
Given that a hacker used the open-source GPT-J LLM to create WormGPT, organizations need to be prepared to confront a reality where cybercriminals will find ways to use LLMs maliciously for profit, whether that means jailbreaking legitimate tools or training their own custom models.
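To give a sense of how low that barrier is, the sketch below assumes the Hugging Face transformers and PyTorch packages, uses an illustrative public model identifier, and prompts the model with something deliberately benign; it shows only that loading and querying an open-weight model like GPT-J takes a few lines of code, and nothing in it is specific to WormGPT.

```python
# Rough illustration of how accessible open-weight models are: this simply
# loads GPT-J from the Hugging Face Hub and generates text from a benign
# prompt. Assumes transformers and torch are installed; the model identifier
# and prompt are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6b"  # public GPT-J checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Write a short reminder about the quarterly all-hands meeting."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```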
In the future, Bambenek expects that social engineering-style attacks will be on the rise due to these solutions. He said:
“Certainly, there will be an expansion of impersonation attacks which is the logical direction of the use of such technologies. It’s one thing to make a phishing webpage, it’s another to impersonate a CEO for social engineering, for instance. Likely, it will be a tool in the arsenal as almost every attack requires some form of initial access which is enabled by phishing.”
The Real Risk: Phishing
At this stage, it doesn’t look like dark AI will take over the cyberthreat landscape just yet, but organizations should take threat actors’ continued development of this technology seriously.
The reason is simple – it only takes one successful phishing email tricking a user into opening a malicious attachment or link to cause a full-blown data breach.
While LLMs like GPT-J aren’t as powerful or verbose as more popular ones like GPT-4, they’re good enough to help non-native speakers put barebones scams together in another language.
In the world of scams, sometimes simplicity is enough. The infamous Nigerian prince scam still generates over $700,000 a year. As such, organizations can’t afford to write off the risk that an employee could be caught off guard by an AI-generated phishing email.
If LLMs pose enough of a threat for law enforcement agencies like Europol to warn that threat actors “may wish to exploit LLMs for their own nefarious purposes,” the development of dark AI is worth paying attention to just to be on the safe side.
Don’t Panic, but Stay Frosty
The small underground economy for dark AI tools and wrappers might not pose a significant threat now, but it could easily become a much bigger problem if more cyber gangs look for ways to exploit LLMs.
So while there’s no need to panic, doubling down on phishing awareness is a great way for organizations to protect themselves in case hackers do find a way to use generative AI to streamline their phishing workflows.
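As one small, concrete example of what doubling down can look like, the hypothetical heuristic below uses only Python’s standard email library; the names looks_like_impersonation, INTERNAL_DOMAIN, and EXECUTIVE_NAMES are made up for illustration. It flags messages whose display name claims to be an executive while the address comes from an outside domain, the classic setup for the CEO-impersonation attacks Bambenek describes, and is meant as a sketch for awareness training and triage rather than a substitute for a secure email gateway.

```python
# Hypothetical, simplified heuristic: flag messages whose display name claims
# to be an internal executive but whose address comes from an outside domain.
# Illustrative names only; not a replacement for a real secure email gateway.
from email import message_from_string
from email.utils import parseaddr

INTERNAL_DOMAIN = "example.com"               # assumption: your own domain
EXECUTIVE_NAMES = {"jane doe", "john smith"}  # assumption: names attackers impersonate

def looks_like_impersonation(raw_message: str) -> bool:
    msg = message_from_string(raw_message)
    display_name, address = parseaddr(msg.get("From", ""))
    domain = address.rsplit("@", 1)[-1].lower()
    claims_to_be_exec = display_name.strip().lower() in EXECUTIVE_NAMES
    return claims_to_be_exec and domain != INTERNAL_DOMAIN

raw = "From: Jane Doe <jane.doe@not-example.net>\nSubject: Urgent wire transfer\n\nPlease call me."
print(looks_like_impersonation(raw))  # True -> route the message for extra scrutiny
```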