Google report finds state-based hackers are using AI for research and content generation – SiliconANGLE
A new report released today by Google LLC’s Threat Intelligence Group details how advanced persistent threat groups and coordinated information operations actors from countries such as China, Iran, Russia and North Korea are using generative artificial intelligence in their campaigns — but despite some headlines to the contrary, it’s not quite as bad as it could be.
The report, which focuses on interactions with Google’s AI assistant Gemini, found that allegedly government-backed threat actors have primarily used Gemini for common tasks such as reconnaissance, vulnerability research and content generation. Notably absent were more malicious uses, such as developing new AI-driven attack techniques or attempting to bypass Gemini’s built-in safety mechanisms.
The Google analysts found that rather than using AI to revolutionize their attacks, APT and IO actors appear to be leveraging it to speed up routine tasks instead of creating novel threats. The report highlights that Gemini’s safeguards blocked direct misuse, preventing the model from being used for phishing, malware development or infrastructure attacks.
Among the notable findings, Iranian APT and IO actors were the most frequent users of Gemini, using it for research and content creation, while Russian APT actors showed limited interaction with the AI model. Chinese and Russian IO actors, on the other hand, were found to be using Gemini primarily for localization and messaging strategy rather than direct cybersecurity threats.
“For skilled actors, generative AI tools provide a helpful framework, similar to the use of Metasploit or Cobalt Strike in cyber threat activity,” the report notes. “For less skilled actors, they also provide a learning and productivity tool, enabling them to more quickly develop tools and incorporate existing techniques.”
The report adds that current large language models on their own are not a game-changer for cybercriminals but acknowledges that this could change with the evolving nature of AI development. As new AI models and agent-based systems emerge, the researchers believe that threat actors will continue to experiment with generative AI, requiring continuous monitoring and updates to security frameworks.
To mitigate both current and future risks, Google is actively refining Gemini’s security measures and sharing intelligence with the broader cybersecurity community. The report stresses the need for cross-industry collaboration to ensure AI remains a tool for security rather than exploitation.