Second-Order GenAI Risks We Aren’t Talking About Enough
Discussions about GenAI risks focus primarily on threat actor exploitation and familiar dual-use tradeoffs: scaling creation of harmful content, directly assisting cyber attacks (e.g. writing malicious code), or enabling development of biological or chemical agents. These concerns are critically important, but focusing on them alone misses a set of additional risks.
What guardrails, if any, are triggered when users do not seek direct assistance, but instead use these tools to improve the efficiency, scale, or resilience of harmful activities? This distinction between first- and second-order risk matters because the latter receives little mitigation at present yet acts as a force multiplier for threat actors.
Throughout this post, “second-order risk” refers to upstream capability and process changes that increase the likelihood or severity of downstream harms, rather than harm itself.
Defining First vs. Second-Order Risk
First-order risk refers to the potential for direct misuse. It is the primary focus of existing GenAI guardrails and legacy content moderation approaches. Examples likely to trigger protections include prompts seeking:
Explicit instructions for wrongdoing.
Generation of prohibited content.
Step-by-step guidance for harmful acts.
Second-order risk operates differently. Rather than enabling direct misuse, GenAI improves, augments, or automates the systems that enable harmful activity. For example:
Reducing expertise barriers.
Accelerating workflows.
Improving coordination and logistics.
How GenAI Strengthens Harmful Systems
Process Optimization
GenAI tools aren’t just content creation engines. They can improve, augment, or automate business processes. In the legitimate world, this might mean automating compliance monitoring or augmenting human customer service (e.g. chatbots). Threat actors can apply the same capabilities to optimize their own processes: microtargeting vulnerable users for recruitment, improving scam conversion rates, or expanding cross-platform propaganda networks. Taken together, these dynamics illustrate why second-order risks persist even as protections against direct abuse improve.
Analysis, Synthesis, and Course Correction
Enhanced capabilities to analyze and synthesize data have numerous applications for improving threat actor operations. For example, a drug cartel can apply information on shipping routes to modify smuggling operations and evade counternarcotics efforts. A terrorist group can analyze social media engagement with its content to improve targeting of vulnerable user communities for recruitment, identify the narratives with the greatest emotional resonance, and pivot quickly to exploit emerging events.
Detection Evasion
Historically, platform defenders relied on clear signals to identify certain online threats, such as low-effort fraud (e.g. emails promising financial opportunities from a “Nigerian Prince”): poor spelling and grammar, inconsistent tone, and cultural errors. GenAI tools remove the need for domain expertise, producing fluent, culturally appropriate text that significantly reduces these abuse signals. Here again, the risk is not the model producing prohibited content, but its role in erasing detection signals, making threat actors and their activity harder to identify and disrupt.
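The fragility of these legacy signals can be illustrated with a minimal heuristic scorer. This is a sketch, not a real spam filter: the misspelling list, regex patterns, and scoring are all illustrative assumptions, chosen only to show why GenAI-polished text sails past surface-level checks.

```python
import re

# Hypothetical legacy-style heuristics: count surface-level errors that
# historically flagged low-effort fraud. GenAI-polished text scores near zero.
COMMON_FRAUD_TELLS = {"recieve", "urgently", "kindly revert", "dear beloved"}

def legacy_abuse_score(text: str) -> int:
    """Crude signal count: known misspellings, ALL-CAPS shouting,
    and repeated punctuation."""
    lowered = text.lower()
    score = 0
    score += sum(1 for phrase in COMMON_FRAUD_TELLS if phrase in lowered)
    score += len(re.findall(r"\b[A-Z]{4,}\b", text))   # shouting
    score += len(re.findall(r"[!?]{2,}", text))        # "!!!" / "??"
    return score

crude = "URGENT!!! kindly revert, you will recieve $10M"
polished = "Hello, I am writing regarding an investment opportunity."
```

Here `legacy_abuse_score(crude)` accumulates several hits, while the fluent, GenAI-style `polished` message scores zero: the text is equally fraudulent, but every signal this class of detector depends on has been removed.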
Why This Is Harder to Address Than Direct Misuse
Second-order risks:
Do not map cleanly onto content policies designed to prevent violative outputs.
Emerge from chains of non-violative interactions rather than single prompts.
Degrade the signals relied on to detect and attribute abuse.
Fail to surface in existing detection pipelines because these non-violative prompts for process improvement lack the “dog whistles” that typically signal threat actor activity or overt abuse.
From a Trust & Safety and Intelligence perspective, this is a critical distinction for risk mitigation. Successfully blocking violative prompts and preventing directly harmful outputs will not prevent threat actors from becoming faster, more adaptive, and more resilient.
Early Considerations for Mitigating Second-Order Risk
Capability-aware risk assessment: Examine which processes, when augmented by GenAI, most enhance threat actor capabilities. Once identified, these processes can inform the development of potential risk mitigations, such as targeted access controls or monitoring approaches that preserve legitimate use while reducing abuse potential.
Behavioral signals: Because these risks manifest as operational improvements (e.g. efficiency gains) rather than overt violations, they produce different signals. While more difficult to operationalize, early efforts could focus on identifying changes in speed, coordination, or adaptability that signal GenAI-enabled process optimization by a threat actor.
Cross-surface link analysis: Because GenAI-enabled process improvements rarely remain contained within a single tool, they create both abuse potential and detection signals on other platforms and surfaces. For example, non-violative process improvement recommendations may point users to purchase products or services on an e-commerce site. Companies with a GenAI tool embedded in or linked to another platform, such as Amazon/Rufus or X/Grok, should explore these connections as they pertain to second-order risks from threat actors. This may yield additional behavioral signals for identifying threat actor exploitation.
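One way to sketch the behavioral-signal idea above is a simple tempo check: flag an account whose activity rate jumps far above its own trailing baseline. The window size, z-score threshold, and daily-count framing are all illustrative assumptions, not a production detector.

```python
from statistics import mean, stdev

def tempo_anomaly(daily_counts: list[int], window: int = 7,
                  z_threshold: float = 3.0) -> bool:
    """Flag when the most recent day's activity exceeds the trailing
    baseline by more than z_threshold standard deviations."""
    if len(daily_counts) <= window:
        return False  # not enough history to establish a baseline
    baseline = daily_counts[-window - 1:-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return daily_counts[-1] > mu
    return (daily_counts[-1] - mu) / sigma > z_threshold

# A steady account vs. one showing a sudden GenAI-scale jump in output.
steady = [10, 12, 9, 11, 10, 12, 11, 10]
surge  = [10, 12, 9, 11, 10, 12, 11, 90]
```

The point is not this specific statistic but the framing: the detector keys on a change in operational tempo rather than on anything violative in the content itself.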
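A first pass at cross-surface link analysis could be a simple join of event logs on a shared account identifier, surfacing accounts whose GenAI-tool activity is quickly followed by activity on a linked surface. The event format, field names, and time window below are hypothetical, assumed only for illustration.

```python
from collections import defaultdict

def linked_accounts(genai_events, commerce_events, max_gap_hours=24):
    """Return account IDs whose GenAI activity is followed by activity on
    a linked commerce surface within max_gap_hours.
    Events are (account_id, timestamp_in_hours) pairs."""
    by_account = defaultdict(list)
    for acct, ts in genai_events:
        by_account[acct].append(ts)
    flagged = set()
    for acct, ts in commerce_events:
        if any(0 <= ts - g_ts <= max_gap_hours
               for g_ts in by_account.get(acct, ())):
            flagged.add(acct)
    return flagged

# Account "a1" uses the GenAI tool at t=1 and transacts at t=5.
genai = [("a1", 1), ("a2", 3)]
commerce = [("a1", 5), ("a3", 10)]
```

On its own such a join only produces candidates, not findings; its value is in combining otherwise non-violative activity from separate surfaces into a reviewable behavioral signal.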
Conclusion
GenAI does not need to commit the act to reshape the threat landscape. By quietly strengthening the systems that enable harm—scams, influence operations, recruitment pipelines, coordination networks—it expands risk in ways that are harder to see, attribute, and counter.
Focusing exclusively on direct abuse (prompts and outputs) creates a false sense of effective risk mitigation. Platforms may succeed in reducing overt policy violations while simultaneously enabling adversaries to become faster, more adaptive, and more resilient. The point here is not to suggest a failure of moderation. Rather, it reflects a growing recognition that GenAI expands the scope of abuse beyond overt violations, demanding new ways of identifying and mitigating risk.