How Extremists Exploit GenAI
I’ve spent years in government and tech watching threat actors adapt faster than the policies and safeguards built to stop them. GenAI isn’t just a new tool in their toolbox; it’s one that lowers barriers and accelerates the pace of abuse in ways we haven’t seen before. Here are a few top-of-mind risks we’re already facing, followed by several that would benefit from additional research and analysis.
What’s Already Happening
High-quality propaganda at scale in multiple languages. Extremists no longer need skilled graphic designers. With GenAI, they can churn out endless variants of posters, memes, and videos, and insert synthetic voices of prominent figures. [1] These adversaries can further scale production with fake personas trained to embody a terrorist group’s mission and values, generating social media content automatically. This repurposes a legitimate GenAI use case: creating marketing content in the “brand voice” of a corporation. [2] Propaganda can also be instantly translated into multiple languages for global distribution at almost no cost in time or money.
Operational guides for online and real-world activities. Despite safeguards, research shows that jailbreak techniques enable users to prompt models into producing “how-to” content on weapons, tactics, coordination, and attack planning, both physical and cyber. [2]
Detection evasion. For threat actors, an added benefit of easy, cheap content modification is that sheer volume can overwhelm automated content moderation methods such as hash matching. Even minor variations to a piece of content, while preserving its theme or message, can skirt these platform protections. This can also undermine cooperative industry efforts to combat the spread of online extremist content, such as the Global Internet Forum to Counter Terrorism (GIFCT) hash-sharing database.
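As a minimal sketch of why exact matching is so brittle (this is illustrative Python, not any platform’s or GIFCT’s actual pipeline; the tiny eight-value “image” and the hash functions are invented for the example), a single-pixel edit produces a completely different cryptographic hash, while a toy perceptual-style average hash does not change at all:

```python
import hashlib

def exact_hash(pixels: list[int]) -> str:
    """Cryptographic hash of the raw pixel bytes: any change flips the digest."""
    return hashlib.sha256(bytes(pixels)).hexdigest()

def average_hash(pixels: list[int]) -> int:
    """Toy perceptual hash: one bit per pixel, set when the pixel exceeds the mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

original = [10, 200, 30, 220, 40, 240, 50, 250]  # stand-in for grayscale image data
variant = original.copy()
variant[0] += 1                                  # a trivial, effectively invisible edit

# Exact matching fails on the edited copy...
print(exact_hash(original) == exact_hash(variant))                    # False
# ...while the perceptual-style hash shows zero differing bits.
print(bin(average_hash(original) ^ average_hash(variant)).count("1"))  # 0
```

Production hash-sharing systems rely on far more robust perceptual hashes for exactly this reason, which is why evaders move on to larger edits, or wholesale regeneration with GenAI, that still preserve the underlying message.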
What’s Coming Next
Some risks on the immediate horizon are being discussed, but present complex threat vectors that will require additional development of policies and technical safeguards.
Dual-use challenges. A recent OpenAI article flagged an important threat-actor risk as models become increasingly capable research assistants in sciences such as biology and chemistry. On the one hand, GenAI tools can supercharge medical research. On the other, the same capabilities could give violent extremists the knowledge to develop biological or chemical weapons. [3]
Chatbot radicalizers. Chatbots are becoming increasingly conversational and can read and respond to emotion and sentiment. That helps explain their rapid expansion across domains as virtual professional coaches, therapists, and religious advisors (God chatbots! [4]). However, these same capabilities can be exploited by extremists to build more engaging chatbots that micro-target potential recruits for radicalization, playing on the fears and insecurities of individuals in vulnerable or marginalized populations who seek support and community online. [5]
False flag content. This kind of content is nothing new in the world of online misinformation and disinformation operations, which in the past were fueled in part by cheap fakes and deepfakes. Earlier content, however, could often be identified and debunked through visual or audio clues. Increasingly realistic GenAI images, video, and audio present a novel threat capability that violent extremists will soon exploit to sow confusion and distrust in situations such as elections and active conflicts. [6]
Financial gain. Threat actors have already demonstrated success in using synthetic audio and video to steal millions. [7] It stands to reason that extremist groups will borrow this tactic to fund operations, exploiting GenAI to enable scams and cyberattacks.
Why This Matters
Risk mitigation starts with identifying and analyzing existing and emerging threats. From this base, we can then explore which interventions (policy changes, classifier improvements, and others) will best target specific risks.
Endnotes
1. “Early terrorist experimentation with generative artificial intelligence services,” Tech Against Terrorism, https://techagainstterrorism.org/hubfs/Tech%20Against%20Terrorism%20Briefing%20-%20Early%20terrorist%20experimentation%20with%20generative%20artificial%20intelligence%20services.pdf.
2. “Generating Terror: The Risks of Generative AI Exploitation,” Combating Terrorism Center at West Point, https://ctc.westpoint.edu/generating-terror-the-risks-of-generative-ai-exploitation/.
3. “Preparing for future AI capabilities in biology,” OpenAI, https://openai.com/index/preparing-for-future-ai-capabilities-in-biology/.
4. Lauren Jackson, “Finding God in the App Store,” The New York Times, https://www.nytimes.com/2025/09/14/us/chatbot-god.html.
5. Erin Saltman and Skip Gilmour, “Artificial Intelligence: Threats, Opportunities, and Policy Frameworks for Countering VNSAs,” Global Internet Forum to Counter Terrorism, https://gifct.org/wp-content/uploads/2025/04/GIFCT-25WG-0425-AI_Report-Web-1.1.pdf.
6. Clarisa Neru, “Exploitation of Generative AI by Terrorist Groups,” International Centre for Counter-Terrorism, https://icct.nl/publication/exploitation-generative-ai-terrorist-groups.
7. Heather Chen and Kathleen Magramo, “Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’,” CNN, https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk.