Microsoft and OpenAI have discovered that hackers are already using large language models (LLMs) such as ChatGPT to refine and improve their existing cyberattacks.
In a recent investigation, Microsoft and OpenAI uncovered attempts by Russian, North Korean, Iranian, and Chinese groups to use ChatGPT to research targets, improve scripts, and develop social engineering techniques.
“Cybercrime groups, nation-state threat actors, and other adversaries are testing various emerging AI technologies to try to understand their potential value to their operations and the security controls they may need to bypass,” Microsoft said.
The Strontium group, which is linked to Russian military intelligence, used LLMs to “understand satellite communication protocols, radar imaging technologies and specific technical parameters.” The hacking group, also known as APT28 or Fancy Bear, has been active during the Russia-Ukraine war and previously interfered in Hillary Clinton’s 2016 presidential campaign.
The group also used LLMs to help with “basic scripting tasks, including file manipulation, data selection, regular expressions, and multiprocessing, to potentially automate or optimize technical operations,” according to Microsoft.
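The “basic scripting tasks” Microsoft lists are mundane automation chores. As a hypothetical illustration only (not taken from any observed attacker tooling), a minimal sketch of one such task, regex-based data selection over text, might look like this:

```python
import re

def select_matching_lines(text, pattern):
    """Return the lines of `text` that match the regular expression `pattern`.

    This is the kind of generic "data selection" helper an LLM can
    produce on request; the function name and sample data below are
    illustrative assumptions, not anything cited in the report.
    """
    rx = re.compile(pattern)
    return [line for line in text.splitlines() if rx.search(line)]

# Example: pull out lines containing IPv4-looking tokens.
sample = "ok 10.0.0.1\nno address here\nerror 192.168.1.5\n"
hits = select_matching_lines(sample, r"\b\d{1,3}(?:\.\d{1,3}){3}\b")
print(hits)
```

The point of the sketch is how ordinary the task is: nothing here is offensive tooling, which is consistent with Microsoft’s framing that the observed use was about automating or optimizing routine technical work.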
The North Korean hacking group Thallium uses LLMs to research vulnerabilities and create content for phishing campaigns. Iran’s Curium hackers also use LLMs to generate phishing emails and even code designed to evade detection by antivirus applications.
Chinese state-linked hackers also use LLMs for research, scripting, translations and refinement of their existing tools.
There are concerns about the use of AI technology in cyber attacks, especially since AI tools such as WormGPT and FraudGPT have emerged, which help create malicious emails and cracking tools. A senior National Security Agency official has warned that hackers are using AI to make their phishing emails more convincing.
Microsoft and OpenAI have not detected any “significant attacks” using LLMs, but the companies have shut down all accounts and assets associated with these hacking groups.
“We believe this is important research to publish in order to expose these threat actors’ moves and to share information on how we block and counter them,” Microsoft said.
While the use of AI in cyberattacks appears to be currently limited, Microsoft warns of future use cases such as voice spoofing.
“AI fraud is another big concern. Voice synthesis is an example: a three-second voice sample can train a model to sound like anyone. Even something as innocuous as your voicemail greeting can provide a sufficient sample,” the company adds.
As expected, Microsoft’s solution is to use AI technology in response to AI attacks.
“AI can help hackers add more sophistication to their attacks, and they have the resources to do that. We’ve seen it with more than 300 threat actors that Microsoft monitors, and we use AI to protect, detect and respond,” said Homa Hayatyfar, principal manager of detection analytics at Microsoft.
Microsoft is developing Security Copilot, a new AI assistant designed to help cybersecurity professionals identify issues and better understand the vast amount of signals and data generated daily by cybersecurity tools. The software giant is also shoring up its software security following major attacks on its Azure cloud, as well as incidents in which Russian hackers spied on Microsoft executives, Klix.ba writes.