Dual-Use Risks Aren’t New - But GenAI Changes the Game
In June 2025, OpenAI published research acknowledging that advanced GenAI tools carry dual-use risks, offering the potential to both accelerate legitimate biological research and significantly lower the barrier for people with minimal expertise to create biological threats. The report called for new safeguards designed specifically to mitigate these risks. [1]
“Dual-use” content - that which has legitimate use cases and can enable harmful outcomes - is not a new challenge for platforms. Search engines and online marketplaces, in existence for decades, create enormous opportunities for innovation, business growth, and research. At the same time, they make available information that extremists could exploit to develop weapons, acquire dangerous chemicals, or coordinate real-world attacks, or that vulnerable users could apply for self-harm or even suicide.
Platforms mitigate portions of this risk through content policies, prohibited products, and keyword-based detection. However, a large swath of high-risk dual-use content remains online as assumed residual risk.
So the natural question is: Are dual-use risks for GenAI tools just another risk space we can partially mitigate and otherwise tolerate?
The answer is a clear no. GenAI fundamentally shifts the threat landscape by lowering expertise requirements, providing adaptive guidance, and scaling the speed and reach of harmful outcomes.
The Traditional Dual-Use Problem
Traditional search engines and online marketplaces have built-in friction points that significantly limit the likelihood of success for a user who lacks domain expertise. They require:
knowing what to search for
crafting correct queries
evaluating disparate sources
understanding domain terminology
discerning credible guidance from misinformation
combining and operationalizing the information successfully.
Platforms mitigate risks by restricting the most egregious content and products (e.g. ghost gun kits), building classifiers, applying proactive and reactive enforcement, and assuming residual risk. It is important to remember that these residual risks are real. In 2022 the Buffalo, NY shooter killed 10 people at a supermarket after learning about and purchasing weapons and equipment online. [2] The same year, a teen died by suicide after finding information about, and then purchasing, an industrial chemical (sodium nitrite) online. [3,4]
Why GenAI Is NOT “Just Another Dual-Use Tool”
GenAI removes the friction points that significantly reduce risk of real-world harm. It is far more than a new search engine providing information. It is an interactive system that can teach, translate, and guide a user regardless of their starting level of expertise. This isn’t incremental change, but rather an ecosystem shift that radically transforms the threat landscape.
Removes the Expertise Barrier
GenAI removes the friction points previously mentioned. A user no longer needs domain expertise to make effective queries, combine information across disparate sources, or evaluate information credibility. A single prompt (or a few turns) can collapse the entire research process. A user who previously needed months or years of training may now achieve similar outcomes through guided prompting, and can even use the model to generate the necessary prompts to find the desired information.
Search engines deliver links. Online marketplaces offer products. Social media allows interaction between users but provides limited direct guidance. GenAI is an interactive collaborator and personalized tutor for a threat actor or vulnerable user. Even when guardrails prevent a model from answering a dangerous query directly, the attempt can still provide:
extensive enabling information for the intended harmful activity
alternative phrasing
contextual explanations
correction of misunderstandings
translations
steers to previously unknown but relevant knowledge.
This is a massive capability shift.
Scales Potential Harm
When friction points disappear AND the dual-use nature of the information may not trigger guardrails, more threat actors and vulnerable users can reach harmful capability: more attempts, more sophisticated attempts, higher success rates, and fewer opportunities for platforms or law enforcement to detect risk.
This is a massive capacity shift.
What This Means for Mitigating Dual-Use Risks
Build Intent-Aware Protections
Classifiers are ineffective when threat actor queries look nearly identical to those from users seeking information for legitimate reasons (e.g. medical research, personal protection, or weapons safety). Mitigation must shift toward:
turn-by-turn intent modeling
behavioral signal analysis
context-aware guardrails
connected risk scoring.
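The shift from per-query classification to session-level intent modeling can be sketched in code. The following is a minimal illustration, not a production design: the signal names, weights, decay factor, and threshold are all hypothetical assumptions chosen to show the idea that risk should accumulate across a conversation trajectory rather than be judged one prompt at a time.

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    text: str
    topic_risk: float      # 0-1 output of a (hypothetical) topic classifier
    evasion_signal: float  # 0-1 signal, e.g. rephrasing right after a refusal

@dataclass
class SessionRiskScorer:
    """Accumulates per-turn risk so intent emerges from the trajectory.

    A single ambiguous query stays below threshold; repeated probing,
    rephrasing after refusals, and escalating topic risk push the
    session score upward. All weights below are illustrative.
    """
    escalation_threshold: float = 0.75
    decay: float = 0.8  # older turns contribute less to the running score
    score: float = 0.0
    turns: list = field(default_factory=list)

    def add_turn(self, turn: Turn) -> float:
        # Weighted combination of per-turn signals (weights are assumptions)
        turn_score = 0.6 * turn.topic_risk + 0.4 * turn.evasion_signal
        # Exponentially decayed running total over the whole session
        self.score = self.decay * self.score + turn_score
        self.turns.append(turn)
        return self.score

    def should_escalate(self) -> bool:
        return self.score >= self.escalation_threshold
```

The design choice worth noting is the decayed running score: it lets a benign user ask an isolated dual-use question without tripping enforcement, while a pattern of evasive follow-ups crosses the threshold even though no single turn would.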
Increase Cross-Platform Cooperation (Even When It’s Uncomfortable)
Companies protect the “secret sauce” of products and tooling, as well as the data behind classification methods, model evaluations, and deplatforming signals. They do so even when their findings point to risk on another company’s platform (e.g. OpenAI deplatforming, for violent extremism policy violations, a user whose login is an Outlook address). That approach is now counterproductive. Effective mitigation requires:
shared indicators of malicious intent
shared methodologies for turn/prompt analysis
inter-platform escalation pathways
the ability to connect high-risk activities of a user across multiple products.
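One concrete form such cooperation could take is a shared indicator record, exchanged between platforms in the spirit of existing hash-sharing consortia. The sketch below is a hypothetical schema, not any real program’s format: field names are assumptions, and the key idea is sharing privacy-preserving identifiers (hashed, normalized login emails) plus a risk category and confidence level, so a receiving platform can match the signal against its own accounts without either side exposing raw user data.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class SharedIndicator:
    """A hypothetical cross-platform risk indicator record."""
    hashed_identifier: str  # SHA-256 of the normalized login email
    risk_category: str      # e.g. "violent_extremism" (illustrative label)
    confidence: str         # "high" | "medium" | "low"
    source_platform: str

def _normalize_and_hash(email: str) -> str:
    # Both sender and receiver must normalize identically for hashes to match
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def make_indicator(email: str, category: str,
                   confidence: str, source: str) -> SharedIndicator:
    return SharedIndicator(_normalize_and_hash(email), category,
                           confidence, source)

def matches(indicator: SharedIndicator, candidate_email: str) -> bool:
    # A receiving platform hashes its own account emails the same way
    return indicator.hashed_identifier == _normalize_and_hash(candidate_email)
```

Hashing gives only a minimal privacy layer (it prevents casual exposure of raw identifiers but remains vulnerable to dictionary attacks), so a real exchange would need legal agreements and stronger matching protocols; the point here is that connecting high-risk activity across products requires agreeing on a common record and normalization scheme at all.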
Re-Evaluate the Level of Acceptable Residual Risk
Traditional online platforms operate with an accepted level of residual risk for dual-use content, even when it creates opportunities for misuse by threat actors or harm to vulnerable users. The massive shift in capability and capacity enabled by GenAI makes that approach untenable. It requires a move away from risk acceptance toward active risk mitigation, particularly for categories with immediate and life-threatening consequences.
Conclusion
Dual-use challenges are not new, but GenAI removes friction points, expands capability and capacity, and accelerates harm in ways past technologies did not. We must recognize the ecosystem shift that has already occurred and the danger of continuing to accept dual-use risks as residual, evolving our protective arsenal to include intent-based guardrails and even closer cross-platform cooperation.
End Notes
[1] OpenAI, “Preparing for Future AI Capabilities in Biology,” https://openai.com/index/preparing-for-future-ai-capabilities-in-biology/.
[2] Office of the New York State Attorney General Letitia James, “Investigative Report on the role of online platforms in the tragic mass shooting in Buffalo on May 14, 2022,” https://ag.ny.gov/sites/default/files/buffaloshooting-onlineplatformsreport.pdf.
[3] Rep. Lori Trahan, “Suicides Spur Suits on Amazon Sales of Legal-But-Lethal Chemicals,” https://trahan.house.gov/news/documentsingle.aspx?DocumentID=2740.
[4] Joe Hernandez, “A parents' lawsuit accuses Amazon of selling suicide kits to teenagers,” https://www.npr.org/2022/10/09/1127686507/amazon-suicide-teenagers-poison.