5 ‘State-Backed AI Threat Actors’ Identified by Microsoft & OpenAI


Microsoft and OpenAI have shone a spotlight on the growing use of artificial intelligence (AI) by state-affiliated cyber threat actors.

Groups from Russia, China, North Korea, and Iran are alleged to be leveraging large language models (LLMs) such as ChatGPT to support their malicious campaigns.

OpenAI says it has partnered with Microsoft to identify and neutralize threat actors probing the capabilities and limits of systems like ChatGPT, DALL-E, and Copilot, with the two companies aiming to disrupt these operations and push the frontiers of AI safety.

Microsoft also announced that it will map identified adversary tactics, techniques, and procedures (TTPs) into the MITRE ATT&CK framework.

Key Takeaways

  • Microsoft and OpenAI have identified alleged state-affiliated cyber threat actors from Russia, China, North Korea, and Iran, who stand accused of using LLMs like ChatGPT for malicious campaigns.
  • These actors are Charcoal Typhoon and Salmon Typhoon from China, Forest Blizzard from Russia, Emerald Sleet from North Korea, and Crimson Sandstorm from Iran.
  • They have been accused of using LLMs for various malicious activities, including reconnaissance, deceptive communications, malware development, and spear-phishing campaigns.
  • “The genie is out of the bottle”: Techopedia speaks to a wide panel of experts to investigate the issue of state-backed threats coupled with AI.

How Five Nation-State Threat Actors are Weaponizing AI

Although no successful AI-enabled attacks have been attributed to these threat groups, Microsoft believes their activities are malicious and may represent an attempt to test the waters and work out how best to employ generative AI in future operations.

The malicious activities Microsoft is investigating include victim reconnaissance, learning and using targets' native languages to facilitate deceptive communications, software scripting, and malware development.


The profiles of the five state actors show they share similar traits in how they probe LLMs to support malicious attacks.

1. Charcoal Typhoon

Two Chinese state-sponsored groups — Charcoal Typhoon and Salmon Typhoon — are accused of exploiting AI models.

Charcoal Typhoon, also known as Aquatic Panda, stands accused of using ChatGPT to gather information on companies, generate social engineering texts, debug code, and develop scripts.

Microsoft said: “They are known for targeting sectors that include government, higher education, communications infrastructure, oil and gas, and information technology. Their activities have predominantly focused on entities within Taiwan, Thailand, Mongolia, Malaysia, France, and Nepal, with observed interests extending to institutions and individuals globally who oppose China’s policies.

“Charcoal Typhoon has been observed interacting with LLMs in ways that suggest a limited exploration of how LLMs can augment their technical operations. This has consisted of using LLMs to support tooling development, scripting, understanding various commodity cybersecurity tools, and generating content that could be used to social engineer targets.”

“All associated accounts and assets of Charcoal Typhoon have been disabled, reaffirming our commitment to safeguarding against the misuse of AI technologies,” the company added.

2. Salmon Typhoon

Salmon Typhoon is accused of leveraging generative AI for a variety of tasks, including translating technical documents, gathering publicly accessible data on various intelligence agencies, and assisting with coding tasks.

Microsoft said:

“This threat actor has demonstrated its capabilities through the deployment of malware, such as Win32/Wkysol, to maintain remote access to compromised systems.”

“Notably, Salmon Typhoon’s interactions with LLMs throughout 2023 appear exploratory and suggest that this threat actor is evaluating the effectiveness of LLMs in sourcing information on potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs.”

The company added: “Salmon Typhoon’s engagement with LLMs aligns with patterns observed by Microsoft, reflecting traditional behaviors in a new technological arena. In response, all accounts and assets associated with Salmon Typhoon have been disabled.”

3. Forest Blizzard

According to Microsoft’s findings, Forest Blizzard, a group it claims is affiliated with Russia’s GRU military intelligence unit, uses LLMs for basic scripting tasks, gathering intelligence on Ukrainian targets, and researching satellite and radar technologies that likely relate to Russia’s war efforts.

“Forest Blizzard’s use of LLMs has involved research into various satellite and radar technologies that may pertain to conventional military operations in Ukraine, as well as generic research aimed at supporting their cyber operations.”

Microsoft added that the group has been “Interacting with LLMs to understand satellite communication protocols, radar imaging technologies, and specific technical parameters. These queries suggest an attempt to acquire in-depth knowledge of satellite capabilities.”

4. Emerald Sleet

Identified as a North Korean threat actor, Emerald Sleet allegedly harnessed LLMs for various nefarious activities, including conducting reconnaissance efforts to gather intelligence on think tanks and experts related to North Korea, as well as generating content for spear-phishing campaigns.

Microsoft said it “Observed Emerald Sleet impersonating reputable academic institutions and NGOs to lure victims into replying with expert insights and commentary about foreign policies related to North Korea. Emerald Sleet overlaps with threat actors tracked by other researchers as Kimsuky and Velvet Chollima,” and again closed the accounts it identified with the group.

5. Crimson Sandstorm

Crimson Sandstorm, an Iranian threat actor affiliated with the Islamic Revolutionary Guard Corps, is accused of employing generative AI for generating spear-phishing emails and scripting tasks to evade detection.

Microsoft said: “Interactions have involved requests for support around social engineering, assistance in troubleshooting errors, .NET development, and ways in which an attacker might evade detection when on a compromised machine”, and again closed suspected accounts.

Microsoft Categorizes Nine LLM-Themed TTPs

Microsoft has identified nine specific LLM-themed TTPs used by the threat actors and is collaborating with MITRE to support research into how hackers are leveraging AI.

The TTPs cover a wide range of activities, such as LLM-driven reconnaissance, scripting techniques, optimized payload development, social engineering, vulnerability research, enhanced anomaly detection evasion, and more.

To help the broader security community analyze and defend against these risks, Microsoft is working to integrate these nine LLM-themed TTPs into the MITRE ATT&CK industry framework.

These new additions will form a knowledge base that enables the AI community to track the malicious use of LLMs with a common taxonomy, collaborate, and create countermeasures.
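As a rough illustration of what such a shared knowledge base could look like, below is a minimal Python sketch of LLM-themed TTP records keyed to a common taxonomy. The identifiers, field names, and actor mappings are illustrative assumptions drawn from the behaviors described above, not official MITRE ATT&CK entries or Microsoft's actual schema.

    from dataclasses import dataclass, field

    @dataclass
    class LLMThemedTTP:
        # Illustrative record for an LLM-themed TTP entry; the IDs below are
        # hypothetical placeholders, not official MITRE ATT&CK technique IDs.
        ttp_id: str
        name: str
        description: str
        observed_actors: list[str] = field(default_factory=list)

    knowledge_base = [
        LLMThemedTTP(
            ttp_id="LLM-TTP-0001",  # hypothetical identifier
            name="LLM-informed reconnaissance",
            description="Using an LLM to gather and summarize open-source "
                        "information about target organizations and individuals.",
            observed_actors=["Forest Blizzard", "Emerald Sleet"],
        ),
        LLMThemedTTP(
            ttp_id="LLM-TTP-0002",  # hypothetical identifier
            name="LLM-enhanced scripting techniques",
            description="Asking an LLM to write or debug scripts that support "
                        "tooling development and basic automation.",
            observed_actors=["Charcoal Typhoon", "Crimson Sandstorm"],
        ),
    ]

    # A common taxonomy lets defenders query observed behavior by actor
    for ttp in knowledge_base:
        print(f"{ttp.ttp_id}: {ttp.name} (observed: {', '.join(ttp.observed_actors)})")

A shared, structured format along these lines is what allows different vendors and researchers to exchange observations about LLM misuse without talking past each other.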

AI Safety Still A Mirage

While AI holds incredible promise for future developments, these recent findings underline how the technology remains unsafe in key ways.

Speaking to Techopedia, Joseph Thacker, principal AI engineer and security researcher at AppOmni, notes his fears about the proficiency of threat actors leveraging AI.

“Threat actors that are effective enough to be tracked by Microsoft are likely already proficient at writing software.”

Thacker believes that while AI has significant vulnerabilities that malicious actors are eager to exploit, those vulnerabilities have not yet produced a novel attack. However, he warns that if malicious actors do succeed in unlocking a novel attack vector, it might take time before companies detect it.

“If a threat actor found a novel attack use case, it could still be in stealth and not detected by these companies yet, so it’s not impossible.

I have seen fully autonomous AI agents that can “hack” and find real vulnerabilities, so if any bad actors have developed something similar, that would be dangerous. And open-source models like Mixtral [an LLM released by Mistral AI] are high quality and could be used at scale in novel ways.”

Another safety concern is that today’s large language models do not understand content and context at a deep level. They are trained to statistically generate persuasive outputs without discernment of truth, ethics, or safety.

Loris Degioanni, CTO and Founder at Sysdig, explains that this is why “generative AI can take natural language prompts and convert them to system queries and go as far as generating very convincing natural language phishing emails.”
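As a benign, hedged sketch of that first capability (translating natural language into a structured system query), the snippet below uses the OpenAI Python client; the model name, table schema, and prompt are assumptions made for illustration, not anything tied to the activity described in this article.

    # Requires the openai package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    question = "List the five most recent failed login attempts per user."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name for this sketch
        messages=[
            {"role": "system",
             "content": "Translate the user's request into a single SQL query "
                        "against the table auth_events(user_id, event_type, ts). "
                        "Return only the SQL."},
            {"role": "user", "content": question},
        ],
    )

    print(response.choices[0].message.content)

The same fluency that makes this useful to an analyst is what makes LLM-drafted phishing emails so convincing, which is Degioanni's point.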

Although the buzz around AI safety has seen initiatives like the United States AI Safety Institute (US AISI) and the UK's AI Safety Institute spring up, Irina Tsukerman, president of Scarab Rising, told Techopedia that AI safety might remain elusive if complex issues are not addressed.

These include the “complicated regulatory, governance, and private sector market landscape, exponential innovation growth, various geopolitical security crises, and the vested interests of the tech sector and the ambitions of individual companies and leaders.”

How to Combat AI-Driven Cyber Operations

The solution lies not in restricting AI development but in incentivizing responsibility, says Thacker.

“The genie is out of the bottle when it comes to generative AI. No regulations or bans will put it back in.

Instead, we must double down on instilling ethics in computer science education and broader technology culture. Initiatives like the Institute for Ethical AI and Machine Learning can nurture values-driven technologists.”

“And whistleblower policies and watchdog groups can counter toxic organizational dynamics that prioritize profit over public good,” he explained.

Tsukerman concurs with Thacker, pointing out there will be better results if OpenAI, Microsoft, and other big fish in the LLM market “engage with cyber ethicists, law enforcement professionals, digital forensics experts, cyber pathologists, psychologists, and lawyers to craft proactive, forward-looking policies.”

As with every new technology, some missteps in generative AI development are inevitable, Dane Sherrets, Solutions Architect at HackerOne, told Techopedia: “It is important to note that we are in new territory, and missteps are bound to happen.”

Sherrets outlined two key factors crucial for leading AI companies like Microsoft, OpenAI, and Google to advance AI technology responsibly:

“Commitment to transparency for what they have built, how they have built it, how they have tested it, and the results of that testing; and commitment to coordination with other organizations building AI.”

Fred Kwong, CISO at DeVry University, underscored the need for organizations to bake AI-based defenses against AI-enabled threats into their operations. He advocates focusing on three critical areas: integrating AI and machine learning into the security stack, emphasizing fundamentals such as multi-factor authentication and regular cybersecurity training, and planning for a future without sole reliance on passwords.
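As one hedged example of what integrating machine learning into the security stack can look like in practice, the sketch below flags anomalous sign-ins with scikit-learn's IsolationForest. The features, thresholds, and data are entirely synthetic illustrations, not DeVry's setup or any vendor's product.

    # A minimal anomaly-detection sketch over synthetic sign-in telemetry.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Features per sign-in: [hour_of_day, failed_attempts, new_device_flag]
    normal = np.column_stack([
        rng.integers(8, 18, 500),   # business-hours logins
        rng.poisson(0.2, 500),      # occasional failed attempts
        rng.integers(0, 2, 500),    # sometimes a new device
    ])
    suspicious = np.array([
        [3, 12, 1],                 # 3 a.m., many failures, new device
        [2, 9, 1],
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # predict() returns -1 for points the model considers anomalous
    print(model.predict(suspicious))   # likely [-1 -1] for these outliers
    print(model.predict(normal[:5]))   # mostly 1 (normal)

Signals like these would feed an alerting pipeline alongside the fundamentals Kwong mentions, such as multi-factor authentication, rather than replace them.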

The Bottom Line

Just as highlighted in our 2024 AI predictions, AI will become the next battleground in the cybersecurity race. Organizations need to invest in AI not just to compete in the market but to tighten their defenses against emerging threats.

This is a critical time when purposeful leadership and multilateral cooperation are needed to steer AI toward the common good. Collaborations aimed at disrupting state-backed AI threats should extend beyond Microsoft and OpenAI to include Google, Anthropic, and other leading AI developers, sending a resounding message of commitment to a safety-first approach to AI development.

Franklin Okeke
Technology Journalist

Franklin Okeke is an author and tech journalist with over seven years of IT experience. Coming from a software development background, his writing spans cybersecurity, AI, cloud computing, IoT, and software development. In addition to pursuing a Master's degree in Cybersecurity & Human Factors from Bournemouth University, Franklin has two published books and four academic papers to his name. His writing has been featured in tech publications such as TechRepublic, The Register, Computing, TechInformed, Moonlock and other top technology publications. When he is not reading or writing, Franklin trains at a boxing gym and plays the piano.
