GenAI and Elections: Beyond Deepfakes and Misinformation

Public discussion of GenAI and elections has centered on synthetic media, scaled misinformation, and coordinated inauthentic behavior. [1, 2] While those concerns are valid and well documented, that framing omits a critical second-order risk.

Beyond the capacity to generate persuasive political material at scale, GenAI enables what I’ve called “narrative agility” – rapid adaptation in response to enforcement, audience feedback, and unfolding events. Threat actors can leverage this capability to sharpen persuasive impact and rapidly reframe messaging during election-cycle flashpoints such as debates, early voting, delayed counts, certification disputes, and breaking news. When one claim loses traction, it can quickly be replaced with another invoking themes such as bias, suppression, or institutional distrust.

The 2026 midterms will be the first major U.S. election cycle in which this capability is widely accessible and integrated into mainstream GenAI tools. That makes it urgent to expand safeguards beyond violative content and implement controls that address adaptive optimization.

Strategic Tradeoffs in Election Risk Mitigation

Before outlining responses, it is important to acknowledge a key constraint. The core model capabilities that enable narrative agility, including summarization, tone adjustment, sentiment analysis, and reframing, are non-violative and high-value. Broad guardrails against them would hinder legitimate research, journalism, and political participation.

The objective is not elimination, but rather calibrated friction to increase detection probability and degrade adversarial exploitation while minimizing impact on benign users. Three complementary mitigations operate across behavioral analytics, usage friction, and output guardrails:

  • Detect optimization behavior over time.

  • Constrain rapid refinement of persuasive political messaging.

  • Recalibrate outputs when prompts approach harmful persuasion.

1. Detect Optimization Behavior Over Time

Narrative agility depends on iteration. Threat actors test variations, assess engagement, and refine messaging. Individual prompts may be non-violative, yet behavioral patterns over time can reveal deliberate optimization for harmful persuasion or other malicious intent.

Detection should extend beyond single prompts to turns within and across sessions, using signals distinct from traditional content classifiers. Examples include:

  • Repeated reframing of the same political claim.

  • Requests to optimize messaging for specific demographic groups.

  • Rapid cycles of emotional tone adjustment.

  • Sequential attempts to increase persuasive impact.

This mitigation provides the visibility required to enable downstream safeguards. In addition to technical work, implementation will require updates to product requirements and data retention frameworks that may not currently permit longitudinal prompt analysis. Policies and enforcement standards will need calibration to establish clear thresholds defining when non-violative prompts aggregate into abuse, informed by appropriate privacy and legal review. Recent industry research exploring privacy-preserving analysis of aggregate model interaction patterns suggests this type of longitudinal insight can be developed without compromising user privacy. [3]
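
To make the shape of this mitigation concrete, the sketch below shows one way per-turn signals could be aggregated into a longitudinal score. It is a minimal illustration, not a reference design: the signal names, weights, window, and threshold are assumptions, and a production system would rely on trained classifiers, cross-session identity handling, and privacy-preserving aggregation rather than this toy logic.

```python
from collections import deque
from dataclasses import dataclass, field
from time import time


# Hypothetical per-turn signals. In practice these would come from
# lightweight classifiers or embedding-similarity checks, not keyword rules.
@dataclass
class TurnSignals:
    reframes_prior_claim: bool            # same political claim, new framing
    requests_demographic_targeting: bool  # "make this land with group X"
    adjusts_emotional_tone: bool          # rapid tone/sentiment cycling
    asks_to_increase_persuasion: bool     # explicit "make it more convincing"
    timestamp: float = field(default_factory=time)


class OptimizationBehaviorDetector:
    """Aggregates individually non-violative signals across turns and
    sessions to surface persuasion-optimization patterns over time."""

    def __init__(self, window_seconds: float = 6 * 3600, flag_threshold: float = 5.0):
        self.window_seconds = window_seconds   # rolling observation window
        self.flag_threshold = flag_threshold   # illustrative cutoff
        self.history: deque[tuple[float, float]] = deque()  # (timestamp, weight)

    def observe(self, signals: TurnSignals) -> bool:
        # Weight the signals: repeated reframing, demographic targeting, and
        # explicit persuasion requests count more than a single tone tweak.
        weight = (
            2.0 * signals.reframes_prior_claim
            + 2.0 * signals.requests_demographic_targeting
            + 1.0 * signals.adjusts_emotional_tone
            + 2.0 * signals.asks_to_increase_persuasion
        )
        if weight:
            self.history.append((signals.timestamp, weight))

        # Keep only observations inside the rolling window.
        cutoff = signals.timestamp - self.window_seconds
        while self.history and self.history[0][0] < cutoff:
            self.history.popleft()

        # True means "route to review or downstream friction", not "violation".
        return sum(w for _, w in self.history) >= self.flag_threshold
```

The structural point is that no single turn trips the detector; it is the accumulation of optimization behavior within the window that triggers review or downstream friction.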

2. Constrain Rapid Refinement of Persuasive Political Messaging

Once risk patterns are surfaced, platforms can address the speed and volume at which persuasive political messaging is generated, refined, and deployed. During clearly defined election windows, platforms can introduce targeted friction in response to optimization behavior without broadly restricting political expression. Measures should be narrowly scoped to relevant jurisdictions, timeframes, and language contexts, as in the sketch that follows the list below.

  • Limit repeated attempts to refine persuasive political messaging within short intervals.

  • Cap high-frequency generation of demographically tailored political variants.

  • Introduce short cooldown periods between successive persuasion-optimization prompts.
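
A minimal sketch of this kind of throttling, assuming an upstream classifier that flags a request as persuasion refinement and a policy-defined election window, might look like the following; the window dates, limits, and function name are hypothetical.

```python
from collections import defaultdict, deque
from datetime import datetime, timezone

# Illustrative election window and limits; real values would be set by policy
# teams and scoped to specific jurisdictions and languages.
ELECTION_WINDOW = (datetime(2026, 9, 1, tzinfo=timezone.utc),
                   datetime(2026, 11, 20, tzinfo=timezone.utc))
MAX_REFINEMENTS_PER_HOUR = 5
COOLDOWN_SECONDS = 300

_recent: dict[str, deque] = defaultdict(deque)  # per-user timestamps of flagged requests


def apply_friction(user_id: str, is_persuasion_refinement: bool,
                   now: datetime | None = None) -> str:
    """Return 'allow', 'cooldown', or 'limit' for a single request.
    `is_persuasion_refinement` is assumed to come from an upstream classifier."""
    now = now or datetime.now(timezone.utc)

    # Friction applies only to flagged requests inside the defined window.
    if not is_persuasion_refinement or not (ELECTION_WINDOW[0] <= now <= ELECTION_WINDOW[1]):
        return "allow"

    history = _recent[user_id]
    while history and (now - history[0]).total_seconds() > 3600:
        history.popleft()                      # drop requests older than one hour

    if history and (now - history[-1]).total_seconds() < COOLDOWN_SECONDS:
        return "cooldown"                      # short pause between successive prompts
    if len(history) >= MAX_REFINEMENTS_PER_HOUR:
        return "limit"                         # cap on high-frequency variant generation

    history.append(now)
    return "allow"
```

In the product layer, a "cooldown" or "limit" result would translate into a short delay or a soft refusal; political requests outside the window, or not flagged by the classifier, pass through untouched.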

3. Recalibrate Outputs When Prompts Approach Harmful Persuasion

Some high-risk political prompts remain non-violative, such as requests to sharpen attacks, heighten emotional resonance, or tailor messaging to specific audiences. While models may generate such content under normal operating conditions, election cycles justify adjusting how they respond to persuasion-optimization requests. The objective is not to restrict generation of political content, but to narrow the model’s utility as a messaging optimizer when persuasive refinement patterns elevate second-order risk. Model governance frameworks have begun articulating limits on political persuasion, and election-sensitive periods warrant calibrated application of those principles. [4] A rough sketch of this kind of recalibration follows the list below.

  • Shift from persuasive framing to neutral informational framing.

  • Remove demographic targeting guidance.

  • Decline to amplify emotional intensity or fear-based messaging.

  • Provide balanced civic context rather than refinement.
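
As an illustration of what recalibration could look like at the request-routing layer, the sketch below prepends election-sensitive guidance to a conversation when upstream signals flag persuasion optimization. The instruction text, message format, and function name are assumptions made for illustration, not any provider's actual implementation.

```python
# Hypothetical guidance injected only when risk signals are active.
NEUTRAL_CIVIC_INSTRUCTIONS = (
    "When asked to refine persuasive political messaging, respond with neutral, "
    "factual framing. Do not provide demographic targeting advice, do not heighten "
    "emotional intensity or fear-based appeals, and offer balanced civic context "
    "instead of persuasive refinement."
)


def recalibrate_request(messages: list[dict], is_persuasion_refinement: bool,
                        in_election_window: bool) -> list[dict]:
    """Prepend election-sensitive guidance when upstream signals indicate
    persuasion optimization during a defined election window."""
    if is_persuasion_refinement and in_election_window:
        return [{"role": "system", "content": NEUTRAL_CIVIC_INSTRUCTIONS}, *messages]
    return messages


# Example: the adjusted message list is then passed to the model as usual.
adjusted = recalibrate_request(
    [{"role": "user", "content": "Make this attack ad hit harder with suburban voters."}],
    is_persuasion_refinement=True,
    in_election_window=True,
)
```

The design choice worth noting is that generation is not blocked; the model still responds, but its usefulness as a messaging optimizer narrows while risk signals are active.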

Conclusion

As the 2026 U.S. midterms approach, platforms must address how GenAI tools can be used to refine and deploy harmful persuasion during election cycles. This requires more than identifying violative prompts and outputs. Platforms should implement complementary interventions to surface optimization behavior over time, introduce targeted friction during high-risk windows, and recalibrate outputs when prompt patterns signal malicious intent.

The objective is not to eliminate political use of GenAI. It is to ensure these systems do not function as accelerants for adaptive manipulation when risk is highest.

Endnotes

  1. Cybersecurity and Infrastructure Security Agency (CISA), “Risk in Focus: Generative A.I. and the 2024 Election Cycle,” https://www.cisa.gov/sites/default/files/2024-05/Consolidated_Risk_in_Focus_Gen_AI_ElectionsV2_508c.pdf.

  2. Tiffany Saade, “Election Interference in An Age of AI-Enabled Cyberattacks and Information Manipulation Campaigns,” https://fsi.stanford.edu/sipr/content/election-interference-age-ai-enabled-cyberattacks-and-information-manipulation-campaigns.

  3. Anthropic, “Clio: A system for privacy-preserving insights into real-world AI use,” December 12, 2024, https://www.anthropic.com/research/clio.

  4. OpenAI, “OpenAI Model Spec,” December 18, 2025, https://model-spec.openai.com/2025-12-18.html.


