How Threat Actors Exploit GenAI

I’ve spent years in government and tech watching threat actors adapt faster than the policies and safeguards built to stop them. GenAI isn’t just a new tool in their toolbox. It lowers barriers and accelerates the pace of abuse in ways prior platforms did not. Here are a few key risks we’re already observing, followed by a few that would benefit from additional research and analysis.

What’s Already Happening

  • High-quality propaganda at scale in multiple languages. Content creation no longer requires skilled graphic designers. With GenAI, threat actors can churn out endless variants of posters, memes, and videos, and insert synthetic voices of public figures. [1] They can further scale production by using fake personas for automated social media content generation. This repurposes a legitimate, intended use case for GenAI: creating marketing content in the “brand voice” of a corporation. [2] Propaganda can also be instantly translated into multiple languages for global distribution at almost no cost in time or money.

  • Operational guides for online and real-world activities. Despite safeguards, research shows that jailbreak techniques enable users to prompt models into producing “how-to” content on weapons, tactics, coordination, and attack planning (both real-world and cyber). [2]

  • Detection evasion. The sheer volume of content produced can overwhelm automated content moderation methods, such as hash matching. Even minor variations to content - while preserving the theme or message - can skirt these platform protections. This also undermines cooperative industry efforts to combat the spread of online extremist content, like the Global Internet Forum to Counter Terrorism (GIFCT) hash-sharing database.
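To illustrate why exact hash matching is so brittle, here is a minimal Python sketch (an illustration I've added, not from any cited source) using a standard cryptographic hash. Changing a single character produces an entirely different digest, so a trivially edited variant no longer matches a known-bad hash. Production systems mitigate this with perceptual hashing, which tolerates small changes, but the avalanche behavior shown below is exactly what high-volume variant generation exploits.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Exact cryptographic hash: any single-byte change yields a new digest."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical propaganda text and a near-identical variant (one character changed).
original = b"Join us. The time to act is now."
variant = b"Join us. The time to act is now!"

h_original = sha256_hex(original)
h_variant = sha256_hex(variant)

# The variant's digest shares nothing with the original's, so a database of
# known-bad exact hashes will not flag it.
print(h_original == h_variant)  # False
```

This is why the field has moved toward perceptual hashes (e.g., PDQ for images), which score similarity rather than demand byte-for-byte identity; even those can be defeated with enough automated variation.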

What’s Coming Next

Some risks on the immediate horizon are being discussed, but present complex threat vectors that will require additional policy and technical safeguards.

  • Dual-use challenges. A recent article by OpenAI flagged an important dual-use risk as models become increasingly capable research assistants in the sciences, such as biology and chemistry. On the one hand, GenAI tools can supercharge research into medicine. On the other hand, the same capabilities could provide violent extremists the knowledge to develop biological or chemical weapons. [3] This emerging risk space warrants a deeper dive, to be addressed in a future post.

  • Chatbot radicalizers. Chatbots are becoming increasingly conversational and can read and respond to emotion and sentiment. This helps explain their rapid expansion across domains as virtual professional coaches, therapists, and religious advisors. [4] However, threat actors can use these same capabilities to micro-target potential recruits for radicalization, preying on the fears and insecurities of individuals in vulnerable or marginalized populations who seek support and community online. [5]

  • False flag content. This kind of content is nothing new in online misinformation and disinformation operations, which in the past were fueled in part by cheap fakes or deepfakes. However, earlier content could often be identified and debunked through visual or audio clues. Increasingly realistic GenAI images, videos, and audio present a novel capability to sow confusion and distrust in situations such as elections and active conflicts. [6]

  • Financial gain. Threat actors have already demonstrated success in using synthetic audio and video to steal millions. [7] Violent extremist groups may similarly exploit these capabilities to support operations.

Why This Matters

Effective risk mitigation starts with disciplined identification and analysis of existing and emerging threat vectors. From this base, we can then evaluate what interventions - policy changes, classifier improvements, and others - will best target specific risks.


Endnotes

  1. “Early terrorist experimentation with generative artificial intelligence services,” Tech Against Terrorism, https://techagainstterrorism.org/hubfs/Tech%20Against%20Terrorism%20Briefing%20-%20Early%20terrorist%20experimentation%20with%20generative%20artificial%20intelligence%20services.pdf.

  2. “Generating Terror: The Risks of Generative AI Exploitation,” West Point Combating Terrorism Center, https://ctc.westpoint.edu/generating-terror-the-risks-of-generative-ai-exploitation/.

  3. “Preparing for future AI capabilities in biology,” OpenAI, https://openai.com/index/preparing-for-future-ai-capabilities-in-biology/.

  4. Lauren Jackson, “Finding God in the App Store,” The New York Times, https://www.nytimes.com/2025/09/14/us/chatbot-god.html.

  5. Erin Saltman & Skip Gilmour, “Artificial Intelligence: Threats, Opportunities, and Policy Frameworks for Countering VNSAs,” Global Internet Forum to Counter Terrorism, https://gifct.org/wp-content/uploads/2025/04/GIFCT-25WG-0425-AI_Report-Web-1.1.pdf, p. 6.

  6. Clarisa Neru, “Exploitation of Generative AI by Terrorist Groups,” International Centre for Counter-Terrorism, https://icct.nl/publication/exploitation-generative-ai-terrorist-groups.

  7. Heather Chen and Kathleen Magramo, “Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’,” CNN, https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk.
