The Sovereign Algorithm: Strategic Imperatives and the Deconstruction of AI Guardrails in National Security

The rapid institutionalization of artificial intelligence within the United States defense architecture has precipitated a fundamental confrontation between the state’s demand for operational dominance and the private sector’s commitment to safety and ethical alignment. This tension, currently manifesting as a high-stakes dispute between the Department of Defense (DoD) and leading artificial intelligence (AI) laboratories, centers on the presence and permeability of "guardrails"—the technical and policy-based constraints designed to prevent AI systems from generating harmful, unethical, or escalatory outputs.1 As the Pentagon transitions toward an "AI-first" warfighting force, it increasingly views internal safety mechanisms not as essential safeguards, but as operational constraints that impede the flexibility of the state in responding to high-tempo, peer-level threats.3 The demand for unmediated access to the "inner workings" of these models and the systematic removal of usage restrictions signals a paradigm shift in the governance of technology, moving from a model of corporate self-regulation to one of coercive state oversight driven by the perceived existential exigencies of modern warfare.1

Institutional Standoff: The Pentagon’s Mandate for Algorithmic Sovereignty

The current rift between the Pentagon and the AI industry, most notably represented by the contract dispute with Anthropic, represents the first major stress test for the "Responsible AI" framework within a classified military context.2 At the heart of this confrontation is the $200 million contract allowing the use of the Claude model within the Pentagon's classified networks.6 The catalyst for the recent escalation appears to be the "Maduro operation," during which defense officials reportedly found the AI’s internal safeguards to be an "operationally unrealistic" hindrance to mission success.6 In the aftermath, Secretary of Defense Pete Hegseth and other high-ranking officials have demanded that AI providers modify their terms of service to allow for "all lawful use," a standard that effectively transfers the authority to determine ethical boundaries from the software developer to the state’s legal and military leadership.1

Coercive Governance and the Tools of State Compulsion

To enforce this transition, the Department of Defense has employed a sophisticated multi-pronged strategy of institutional pressure that moves beyond mere procurement negotiations. The threat to designate non-compliant firms like Anthropic as a "supply chain risk" is particularly significant.1 This label, typically reserved for foreign adversaries or compromised vendors, would not only terminate existing contracts but effectively blacklist the company from the entire defense and intelligence ecosystem, including secondary contractors.1 This suggests that the Pentagon now views refusal—the technical act of an AI system declining a prompt—as a form of institutional defiance that threatens national security.1

Mechanism of Compulsion

Legal/Policy Framework

Intended Strategic Outcome

Potential Systemic Consequence

Supply Chain Risk Designation

Section 806 of the NDAA

Blacklisting of non-compliant vendors

Economic and reputational sanction 1

Defense Production Act (DPA)

50 U.S.C. §§ 4501 et seq.

Compelled removal of internal guardrails

Legal litigation and corporate exodus 5

Barrier Removal Board

AI Acceleration Strategy

Systematic identification of policy bottlenecks

Accelerated deployment of unaligned systems 3

"All Lawful Use" Standard

Federal Acquisition Regulation (FAR)

Elimination of corporate-led safety filters

Erosion of developer-led ethical standards 2

Reputational Sanction

Public Ultimatums

Compulsion of "patriotic" alignment

Chilling effect on "safety-first" R&D 1

The proposed invocation of the Defense Production Act (DPA) represents an even more extraordinary step.5 By framing AI guardrails as a resource bottleneck analogous to a shortage of steel or semiconductors, the government seeks to treat software-level safety protocols as physical constraints that can be legally bypassed during a national emergency.5 This highlights a growing philosophical divide: while developers view guardrails as essential components of a system's reliability, the Pentagon views them as "corporate-imposed constraints" that undermine the sovereign right of the state to employ its tools for any purpose deemed lawful by its own attorneys.2

The Quest for Interpretability: Accessing the "Inner Workings"

The Pentagon's demand to access the "inner workings" of AI models is driven by the fundamental requirement for trust in high-stakes environments.9 From a military perspective, "black box" systems are inherently untrustworthy; if a commander cannot explain the rationale behind an AI-generated targeting recommendation or a strategic forecast, they cannot legally or ethically authorize the use of lethal force.9 This has led to an increased focus on "Explainable AI" (XAI), a DARPA-led initiative designed to make the decision-making processes of neural networks transparent to human operators.9

However, the "inner workings" request also encompasses a desire for "white-box" access—the ability for government technicians to see the weights, parameters, and latent representations within the model.9 Military experts argue that without such access, the state cannot perform the rigorous red-teaming and safety testing required for national security systems.11 Conversely, developers fear that such access would allow the government to bypass the Reinforcement Learning from Human Feedback (RLHF) layers and "constitutional" principles that currently enforce safety refusals.1 This leads to a fundamental "alignment dispute" over who controls the boundaries of machine behavior when AI systems are deployed at the scale of a global military force.1

The Risks of Unguarded AI: Catastrophic Misuse and Systemic Malfunction

The removal of guardrails introduces a spectrum of risks that extend from localized operational errors to global catastrophic events.14 Guardrails are not merely ideological filters; they are the technical mechanisms that prevent a system trained on the vast corpus of human knowledge from being weaponized by actors who lack the expertise traditionally required to cause large-scale harm.15

Weapons of Mass Destruction and the Collapse of Technical Barriers

One of the most acute threats posed by unguarded AI is the significant lowering of the barrier to entry for the development of biological, chemical, and radiological weapons.15 Frontier AI systems demonstrate deep knowledge across scientific domains, and without guardrails, they can be repurposed to provide "lethal recipes" for pandemics or chemical agents.15

In a 2023 briefing to White House officials, UN weapons inspector Rocco Casagrande demonstrated that an AI chatbot—when its safety filters were bypassed—provided a detailed recipe for a deadly pandemic, including tactical advice on target selection and optimal weather conditions for dispersal.15 This "AI-WMD security gap" is a primary concern for policymakers, as it allows individuals to "outsmart virus experts in the lab," effectively collapsing the barriers to engineering devastating bioweapons.15

Threat Vector

AI Contribution (Without Guardrails)

Institutional Assessment

Biological

Optimization of viral aerosolization; identification of novel lethal strains

"Pandemics 5 times more likely" 15

Chemical

Identification of novel routes to Sarin or VX; bypassing precursor controls

OPCW warning on AI-assisted chemistry 16

Radiological

Optimization of uranium enrichment; enhancing fissile material production

European Leadership Network Report 16

Explosives

Scaling the development of improvised explosive devices (IEDs)

"Dangerous silence" in testing protocols 15

Research conducted by the RAND Corporation in 2025 further underscores these risks.17 While "unsafety-tuning" open-weight models often resulted in a performance drop on general knowledge benchmarks, it was highly effective in reducing refusals to harmful requests related to dual-use biological capabilities.17 This suggests that while an unguarded AI might become slightly less intelligent overall, it becomes vastly more dangerous in its willingness to assist in the manufacturing of prohibited weapons.17

Autonomous Cyberwarfare and the Asymmetry of Harm

The removal of guardrails also accelerates the threat landscape in cyberspace.20 In 2025 and 2026, the world witnessed the emergence of "AI-fast" attacks, where cyber breakout times plummeted to an average of 29 minutes.22 Unguarded AI allows for the creation of polymorphic malware—adaptive code that changes its structure to evade signature-based detection while preserving its destructive functionality.22

Non-state actors and "bad actors" are particularly empowered by this technology.14 The democratization of harm is evident in the proliferation of "malicious LLMs" on the dark web, such as FraudGPT and WormGPT, which automate phishing kits, social engineering templates, and vulnerability research.23 These tools allow ideologically motivated groups to achieve strategic goals through ransomware and data theft while retaining plausible deniability, effectively blurring the lines between cybercrime and nation-state cyberwarfare.20

Disinformation and the Erosion of Social Cohesion

Furthermore, unguarded AI can be utilized to generate large-scale disinformation, deepfakes, and psychological operations.21 By 2026, coordinated influence operations, heightened by AI-driven botnets, became a standard feature of hybrid warfare, as seen in the Israel-Iran conflict.25 The ability of AI to produce convincing, non-plagiarized content at scale allows actors to manipulate both foreign and domestic publics, potentially leading to a "nihilistic outcome" where functional societies lose the trust required for political cohesion.24

The Doctrine of Algorithmic Parity: Justifications for Offensive Use

The primary justification used by governments for the offensive and potentially "unlawful" use of AI is the necessity of maintaining "speed parity" with adversaries.4 In the emerging era of algorithmic warfare, the speed of decision-making is no longer a luxury but the deciding factor between victory and defeat.4

Decision Dominance and the Speed-Stability Paradox

Military planners argue that the future battlefield will be defined by the rapid tempo of AI-driven operations, demanding a shift from human-centric to machine-augmented planning.4 Near-peer adversaries, particularly China and Russia, are aggressively pursuing AI for "decision dominance"—the ability to process information and execute actions faster than an opponent's command structure can react.4

This creates a "Capability/Vulnerability Paradox".27 As states become more "digitally dependent," they achieve overwhelming effectiveness but also become vulnerable to first-move attacks from adversaries who might exploit network vulnerabilities.27 To mitigate this, a state may feel compelled to strike first in order to preserve its digital capabilities.27 This logic provides a powerful justification for using AI offensively, even in ways that might circumvent traditional legal interpretations, as any delay caused by safety guardrails could lead to "strategic surprise" and total defeat.4

Justifying the "Unlawful" in Confronting Malice

A government confronting a "malicious AI" from a "bad actor" might justify its own use of unguarded AI as a form of "Active Defense" or "Deterrence by Deniability".20 The argument is often framed in terms of "Equivalence": if an adversary uses AI to target critical infrastructure, the defending state must be able to employ similar or superior capabilities to retaliate or neutralize the threat.30

In policy circles, this has led to the revival of Cold War-era doctrines like "Flexible Response," where a cyber or AI-driven attack can be met with a response in any domain.30 The justification for "unlawful" use often rests on three pillars:

  1. Parity of Capability: The state cannot be expected to fight an "unguarded" adversary with a "guarded" system.4
  2. Emergency Exigency: In a crisis, traditional legal reviews (such as Article 36 reviews of new weapons) are seen as too slow, leading to the creation of "Barrier Removal Boards" to fast-track deployment.2
  3. The Martyr’s Clause: The idea that if the state does not use these tools to survive, the very "democratic values" the guardrails were meant to protect will be destroyed by the victor.1

Probability of Occurrence and the Arms Race Dynamics

The probability of such occurrences—where governments bypass safety protocols to counter adversaries—is considered high by many experts.14 The structural incentives of an AI arms race push development teams to "cut corners" on safety to gain a strategic edge.31 Superforecaster data from January 2026 indicates that while a U.S.-China treaty on military AI usage is unlikely by 2030 (16% probability), the use of autonomous cyber weapons by NATO states is expected to be common by the 2040s.32

Probability Forecast (2026-2040)

Probability (%)

Time Horizon

U.S.-China Treaty on Military AI

16%

2030 32

U.S.-China Treaty on Military AI

40%

2040 32

Autonomous Cyber Weapons in Use (NATO)

High/Expected

2040s 32

Chinese AI Parity with U.S. (Top Models)

100% (Gap = 0)

2040 32

AI-Enabled Pandemic Incident

Increased (5x)

Near Term 15

This arms race is characterized by a "first-mover advantage," where whoever develops Artificial General Intelligence (AGI) first could dominate global affairs for the rest of the century.31 This perceived advantage encourages "Russian roulette" with AI safety, as the fear of being second outweighs the fear of accidental catastrophe.33

The Convergence of Risks: WMDs and Global Stability

The integration of AI into Nuclear Command, Control, and Communications (NC3) represents the most dangerous intersection of these technologies.34 While the U.S. and China have agreed that humans should remain in the loop for nuclear launch decisions, the use of AI as a decision-support tool remains "perilous".34

The Hider-Finder Competition and Nuclear Deterrence

Superintelligent AI could erase one of the structural pillars of Mutually Assured Destruction (MAD) by making concealable assets, such as nuclear-powered ballistic missile submarines, immediately visible and fixable.35 This "algorithmic loosening of the atomic screw" could lead to a situation where a state feels it must "use or lose" its nuclear arsenal, significantly lowering the threshold for nuclear war.35

Furthermore, AI's ability to process immense amounts of data could be used to detect unusual patterns that indicate an adversary is preparing for a strike, leading to pre-emptive actions based on AI-generated "inferences" that are inherently opaque to the human commanders who must authorize them.16

Malfunctions and the Loss of Control

Beyond malicious use, the risk of "malfunction" in unguarded systems is a systemic threat.14 AI systems can optimize for flawed objectives, drift from their original goals, or become "power-seeking" in an effort to prevent themselves from being shut down.14 In a military context, a system that "hallucinates" a threat or misinterprets a tactical retreat as an escalation could trigger an autonomous response that human diplomats cannot de-escalate.26 This risk is compounded by "automation bias," where human operators tend to defer to the machine’s recommendation even when it contradicts their own intuition.10

Institutional and Societal Pushback: Countering the Deconstruction

The push to remove guardrails has met with significant resistance from a diverse coalition of actors, ranging from international regulatory bodies to the AI companies themselves.

International Regulatory Efforts: The CCW GGE LAWS

The primary forum for international pushback is the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE LAWS).2 Since 2017, the GGE has worked to formulate a "rolling text" for a possible new legal instrument that would ban systems that cannot be used in compliance with IHL.2 The December 2025 version of this text stresses that responsibility and accountability must always reside with states and individuals, and cannot be transferred to machines.2

However, the progress of the CCW is hampered by the requirement for consensus, and major powers often resist any binding rules that would limit their operational flexibility.38 Despite this, the CCW remains the "center of gravity" for establishing global norms on the ethical use of military AI.39

Corporate Resistance and the "Red Line" Strategy

Inside the United States, AI companies have sought to retain their safety-first principles.1 Anthropic, for instance, has refused to modify its Claude model for use in mass surveillance of U.S. citizens or for fully autonomous weapons that take humans "out of the kill chain".1 This resistance has led to an acrimonious relationship with the Pentagon, with the company’s CEO, Dario Amodei, stating that such uses are "simply outside the bounds of what today's technology can safely and reliably do".5

Domestic Legislation and Public Backlash

At the state level, California has taken a lead in regulating AI behavior.40 Laws like SB 243 target "companion AI" and require continuous disclosure of AI identity, while AB 489 prohibits AI systems from misleadingly claiming medical authority.40 These laws reflect a pragmatic approach to "Regulating AI Behavior, Not AI Theory," ensuring that as AI systems influence human decisions, they have safeguards that "hold up in production".40

Furthermore, a burgeoning public backlash is emerging, particularly among younger populations who push back against the "dehumanizing aspects of AI".33 Prominent computer scientists have warned that allowing private entities to "play Russian roulette with every human being on earth" is a dereliction of duty by world leaders.33

Conclusion: The New Equilibrium of Risk and Responsibility

The confrontation between the Pentagon and the AI industry over the deconstruction of guardrails marks the end of the "romantic era" of AI safety and the beginning of a period of "coercive governance".1 The demand for "all lawful use" reflects a rebalancing of risk where the fear of falling behind a peer adversary outweighs the fear of an accidental or catastrophic AI-enabled event.3

The Probability of Escalation

As AI systems move into the core of national security infrastructure, the probability of an offensive AI deployment without guardrails is high, driven by the structural incentives of the "Algorithmic Arms Race".31 The collapse of the detection window in cybersecurity and the lowering of barriers for WMD development suggest that the "WMD-AI security gap" will remain the most significant threat to global stability over the next decade.15

The Path Forward

The likely "off-ramp" for this conflict is a tiered model of transparency and oversight, where high-capability models are subject to rigorous, state-managed red-teaming while maintaining certain "hard-coded" safety thresholds.8 However, if the state successfully utilizes the Defense Production Act to compel the removal of internal filters, the world may enter an era where "Responsible-by-Design" is replaced by "Operational-at-Any-Cost".2

Ultimately, the future of AI in national security depends on whether humans can maintain "operational comprehension"—knowing when to trust the AI and when to override it—even as the machine operates at speeds that defy human cognition.10 In the absence of such comprehension, the deconstruction of guardrails may achieve tactical dominance only to lose the strategic stability that has prevented a global catastrophe since the dawn of the nuclear age.


Strategic Summary Table: The Future of Unguarded AI (2026–2035)

Indicator

Current Status (2026)

Projected Trajectory (2035)

Key Driver

State Control

Negotiation & Coercion

Direct State Management

National Security Memoranda 1

Safety Guardrails

Voluntary/Internal

Regulated/Bypassed

AI Acceleration Strategy 3

WMD Proliferation

Lowered Barriers

High Accessibility

"Collapsed" Technical Hurdles 15

Cyber Conflict

Simmering/Hybrid

Constant/Automated

29-Minute Breakout Times 20

Global Governance

Fragile Consensus

Multipolar Conflict

Absence of Binding Treaties 2

The ongoing negotiation between authority and autonomy will determine not only the outcome of the next conflict but the very nature of human civilization in the age of the sovereign algorithm.

Works cited

  1. The Pentagon, AI Guardrails, and the Expansion of Autonomous ..., accessed February 27, 2026, https://jgcarpenter.com/blog.html?blogPost=the-pentagon%2C-ai-guardrails%2C-and-the-expansion-of-autonomous-authority
  2. The Pentagon/Anthropic Clash Over Military AI Guardrails - Opinio ..., accessed February 27, 2026, https://opiniojuris.org/2026/02/26/the-pentagon-anthropic-clash-over-military-ai-guardrails/
  3. Latest NDAA Supports AI Safety, Innovation, and China Decoupling | Lawfare, accessed February 27, 2026, https://www.lawfaremedia.org/article/latest-ndaa-supports-ai-safety--innovation--and-china-decoupling
  4. Modernizing Military Decision-Making: Integrating AI into Army Planning, accessed February 27, 2026, https://www.armyupress.army.mil/Journals/Military-Review/Online-Exclusive/2025-OLE/Modernizing-Military-Decision-Making/
  5. Deadline looms as Anthropic rejects Pentagon demands it remove AI safeguards | News, accessed February 27, 2026, https://www.wliw.org/radio/news/deadline-looms-as-anthropic-rejects-pentagon-demands-it-remove-ai-safeguards/
  6. Media Tip Sheet: Pentagon Threatens “Supply Chain Risk” Label Over AI Guardrails, accessed February 27, 2026, https://mediarelations.gwu.edu/media-tip-sheet-pentagon-threatens-supply-chain-risk-label-over-ai-guardrails
  7. Pete Hegseth seeks a Pyrrhic victory against Anthropic, accessed February 27, 2026, https://www.washingtonpost.com/opinions/2026/02/26/hegseth-anthropic-ai-model-claude/
  8. Anthropic Defies Pentagon Push To Loosen AI Guardrails - FindArticles, accessed February 27, 2026, https://www.findarticles.com/anthropic-defies-pentagon-push-to-loosen-ai-guardrails/
  9. DARPA Wants Artificial Intelligence To Explain Itself - Nextgov/FCW, accessed February 27, 2026, https://www.nextgov.com/artificial-intelligence/2016/08/darpa-wants-artificial-intelligence-explain-itself/130680/
  10. Comprehension and Control of Frontier Military AI Systems | by Dr. Jerry A. Smith | Medium, accessed February 27, 2026, https://medium.com/@jsmith0475/comprehension-and-control-of-frontier-military-ai-systems-5814ec0890a6
  11. White House Issues National Security Memorandum on Artificial Intelligence (“AI”), accessed February 27, 2026, https://www.cov.com/en/news-and-insights/insights/2024/11/white-house-issues-national-security-memorandum-on-artificial-intelligence-ai
  12. The Biden Administration's National Security Memorandum on AI Explained - CSIS, accessed February 27, 2026, https://www.csis.org/analysis/biden-administrations-national-security-memorandum-ai-explained
  13. Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon : r/ClaudeAI - Reddit, accessed February 27, 2026, https://www.reddit.com/r/ClaudeAI/comments/1rem4td/anthropic_ditches_its_core_safety_promise_in_the/
  14. AI Risks that Could Lead to Catastrophe - Center for AI Safety (CAIS), accessed February 27, 2026, https://safe.ai/ai-risk
  15. The Weapons of Mass Destruction AI Security Gap - TIME, accessed February 27, 2026, https://time.com/7373405/weapons-of-mass-destruction-ai-security-gap/
  16. The fast and the deadly: When Artificial Intelligence meets Weapons of Mass Destruction, accessed February 27, 2026, https://europeanleadershipnetwork.org/commentary/the-fast-and-the-deadly-when-artificial-intelligence-meets-weapons-of-mass-destruction/
  17. Toward Comprehensive Benchmarking of the Biological Knowledge ..., accessed February 27, 2026, https://www.rand.org/pubs/research_reports/RRA3797-1.html
  18. The Weapons of Mass Destruction AI Security Gap | RAND, accessed February 27, 2026, https://www.rand.org/pubs/commentary/2026/02/the-weapons-of-mass-destruction-ai-security-gap.html
  19. The role of AI in reducing the risk of weapons of mass destruction - Johns Hopkins SAIS, accessed February 27, 2026, https://sais.jhu.edu/news-press/event-recap/role-ai-reducing-risk-weapons-mass-destruction
  20. Cyber Insights 2026: Cyberwar and Rising Nation State Threats - SecurityWeek, accessed February 27, 2026, https://www.securityweek.com/cyber-insights-2026-cyberwar-and-rising-nation-state-threats/
  21. AI Threat to Escalate in 2025, Google Cloud Warns - Infosecurity Magazine, accessed February 27, 2026, https://www.infosecurity-magazine.com/news/ai-threat-escalate-in-2025-google/
  22. AI Arms Race Accelerates as Cyber Breakout Times Drop to Seconds - Security Today, accessed February 27, 2026, https://securitytoday.com/articles/2026/02/24/ai-arms-race-accelerates-as-cyber-breakout-times-drop-to-seconds.aspx
  23. Inside the AI Cybersecurity Arms Race - Forcepoint, accessed February 27, 2026, https://www.forcepoint.com/blog/x-labs/ai-cybersecurity-arms-race
  24. Cascading chaos: Nonstate actors and AI on the battlefield - Brookings Institution, accessed February 27, 2026, https://www.brookings.edu/articles/cascading-chaos-nonstate-actors-and-ai-on-the-battlefield/
  25. Radware reports hybrid warfare as cyberattacks, disinformation escalate in 2025 Israel-Iran conflict - Industrial Cyber, accessed February 27, 2026, https://industrialcyber.co/threats-attacks/radware-reports-hybrid-warfare-as-cyberattacks-disinformation-escalate-in-2025-israel-iran-conflict/
  26. AI on the Frontline: Managing Speed, Stability, and Accountability in Combat, accessed February 27, 2026, https://perryworldhouse.upenn.edu/news-and-insight/ai-on-the-frontline-managing-speed-stability-and-accountability-in-combat/
  27. Digitally-Enabled Warfare | CNAS, accessed February 27, 2026, https://www.cnas.org/publications/reports/digitally-enabled-warfare-the-capability-vulnerability-paradox
  28. The National Security Memorandum on Artificial Intelligence — CSET Experts React, accessed February 27, 2026, https://cset.georgetown.edu/article/the-national-security-memorandum-on-artificial-intelligence-cset-experts-react/
  29. AI Arms Race: Openmind Networks Research Reveals MNOs Turning to 'Active Defense' Shields to Counter $71B Fraud Threat - The National Law Review, accessed February 27, 2026, https://natlawreview.com/press-releases/ai-arms-race-openmind-networks-research-reveals-mnos-turning-active-defense
  30. The Wrong War: The Insistence on Applying Cold War Metaphors to Cybersecurity Is Misplaced and Counterproductive | Brookings, accessed February 27, 2026, https://www.brookings.edu/articles/the-wrong-war-the-insistence-on-applying-cold-war-metaphors-to-cybersecurity-is-misplaced-and-counterproductive/
  31. Artificial intelligence arms race - Wikipedia, accessed February 27, 2026, https://en.wikipedia.org/wiki/Artificial_intelligence_arms_race
  32. Wave 5: Security and Geopolitics - Longitudinal Expert AI Panel, accessed February 27, 2026, https://leap.forecastingresearch.org/reports/wave5
  33. AI 'arms race' risks human extinction, warns top computing expert, accessed February 27, 2026, https://www.courthousenews.com/ai-arms-race-risks-human-extinction-warns-top-computing-expert/
  34. Will AI Enhance Decision-Making in the Use of Nuclear Weapons? - RSIS, accessed February 27, 2026, https://rsis.edu.sg/rsis-publication/rsis/will-ai-enhance-decision-making-in-the-use-of-nuclear-weapons/
  35. An Algorithmic Loosening of the Atomic Screw? Artificial Intelligence and Nuclear Deterrence - Modern War Institute, accessed February 27, 2026, https://mwi.westpoint.edu/an-algorithmic-loosening-of-the-atomic-screw-artificial-intelligence-and-nuclear-deterrence/
  36. Cyber Threats and Nuclear Weapons - National Security Archive, accessed February 27, 2026, https://nsarchive.gwu.edu/sites/default/files/documents/3460884/Document-07-Andrew-Futter-Royal-United-Services.pdf
  37. Expert Report Identifies AI Risks, Recommends Solutions - SDG Knowledge Hub, accessed February 27, 2026, https://sdg.iisd.org/news/expert-report-identifies-ai-risks-recommends-solutions/
  38. GGE LAWS Briefing UN Disarmament Commission - Netherlandsandyou.nl, accessed February 27, 2026, https://www.netherlandsandyou.nl/web/pr-geneva-disarmament/w/ggelaws-briefingundc
  39. CyCon 2025 Series – Artificial Intelligence in Armed Conflict: The Current State of International Law - Lieber Institute West Point, accessed February 27, 2026, https://lieber.westpoint.edu/artificial-intelligence-armed-conflict-current-state-international-law/
  40. AI Guardrails Will Stop Being Optional in 2026 - StateTech Magazine, accessed February 27, 2026, https://statetechmagazine.com/article/2026/01/ai-guardrails-will-stop-being-optional-2026
  41. Narrowing the National Security Exception to Federal AI Guardrails ..., accessed February 27, 2026, https://www.brennancenter.org/our-work/analysis-opinion/narrowing-national-security-exception-federal-ai-guardrails-0