As generative AI technologies become more advanced, so do the cyberattacks that exploit them. That’s according to Microsoft and OpenAI, which have shared research findings on the malicious use of large language models (LLMs) by nation-state-backed adversaries.
On Wednesday, Microsoft published its Cyber Signals 2024 report, which details nation-state attacks it has detected and disrupted alongside OpenAI from Russian, North Korean, Iranian, and Chinese-backed adversaries, as well as the actions that individuals and organizations can take to prepare for potential attacks.
The two tech companies tracked state-affiliated attacks from Forest Blizzard, Emerald Sleet, Crimson Sandstorm, Charcoal Typhoon, and Salmon Typhoon. Each group used LLMs to augment its cyber operations in some capacity, including assistance with research, troubleshooting, and generating content.
For example, Emerald Sleet, a North Korean threat actor, leveraged LLMs to research think tanks and experts on North Korea, generate content that would likely be used in spear-phishing campaigns, understand publicly known vulnerabilities, troubleshoot technical issues, and even assist with using various web technologies, according to the report.
Similarly, Crimson Sandstorm, an Iranian threat actor, used LLMs for technical assistance, including support for social engineering, help troubleshooting errors, and more.
If you are interested in reading more about each nation-state threat, including its affiliation and its use of LLMs, you can check out the report, which includes a section dedicated to individual threat briefings.
Microsoft also warns that AI-powered fraud is an emerging and increasingly concerning threat. Voice synthesis, for example, allows actors to train a model to sound like anyone from a sound bite as short as three seconds.
While the report shows that malicious actors are using generative AI, defenders such as Microsoft can use the same technology to develop smarter protections and stay ahead in the constant cat-and-mouse chase that is cybersecurity.
Microsoft detects over 65 million cybersecurity signals every day, and AI helps analyze those signals to surface the information most valuable for stopping threats, according to the report.
Microsoft also shares other ways it is using AI, including, “AI-enabled threat detection to spot changes in how resources or traffic on the network are used; behavioral analytics to detect risky sign-ins and anomalous behavior; machine learning (ML) models to detect risky sign-ins and malware; Zero Trust models where every access request must be fully authenticated, authorized, and encrypted; and device health verification before a device can connect to a corporate network.”
To conclude the report, Microsoft says that continued employee and public education is pivotal in combating social-engineering techniques, which succeed only when humans fail to identify them, and that prevention, whether AI-enabled or not, is key to combating all cyber threats.