The Global Synthesis of Computational Game Theory, Quantum Intelligence, and the Erosion of Legal Guardrails: A Strategic Analysis of Modern Statecraft
The integration of advanced mathematical modeling, high-performance computing, and geopolitical strategy has fundamentally transformed the landscape of international relations. At the heart of this transformation lies game theory—the mathematical study of strategic decision-making where the outcome for one actor depends on the choices of others. In the contemporary era, the state-of-the-art in this field has moved beyond the simple, static matrices of the mid-twentieth century into the realm of neural mean-field games, recursive large language model (LLM) agents, and quantum-enhanced optimization. These tools allow governments and think-tanks to simulate global affairs with unprecedented resolution, yet they also provide the technical justification for bypassing traditional legal and constitutional guardrails under the banner of realpolitik.
State-of-the-Art Modeling: Neural Mean-Field Games and Stochastic Complexity
The current frontier of game theory is dominated by Mean-Field Game (MFG) theory, a framework designed to handle strategic interactions within extremely large or infinite populations of agents.1 Traditionally, modeling the strategic interactions of millions of actors—whether they be traders in a global market, citizens in a pandemic, or drones in a swarm—was computationally intractable due to the "curse of dimensionality," where the complexity of finding a Nash equilibrium grows exponentially with the number of players.1 MFGs resolve this by approximating the collective behavior of agents as a continuous distribution, or "mean-field," rather than tracking every individual interaction.2
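To make the dimensionality argument concrete, here is a minimal sketch (our own illustration, not drawn from the cited papers) contrasting the joint strategy space an exact finite-player solver faces with the single population distribution a mean-field model tracks:

```python
# Illustration (not from the source): an N-player game with A actions per
# player has A**N pure-strategy profiles, so exhaustive equilibrium search
# scales exponentially in N. A mean-field model replaces the N-1 opponents
# with one distribution over the A actions, whose size does not depend on N.
def joint_profiles(num_players: int, num_actions: int) -> int:
    """Number of pure-strategy profiles an exact solver must consider."""
    return num_actions ** num_players

def mean_field_state_size(num_actions: int) -> int:
    """A discretized mean-field is one probability per action, for any N."""
    return num_actions

print(joint_profiles(10, 4))     # 1_048_576 profiles for just 10 players
print(joint_profiles(20, 4))     # ~1.1e12: already intractable
print(mean_field_state_size(4))  # 4, independent of population size
```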
The Mechanism of Neural Mean-Field Equilibrium
The mathematical structure of a mean-field game typically involves two coupled differential equations. The first is the Hamilton-Jacobi-Bellman (HJB) equation, which describes the optimal control and decision-making of a representative agent moving backward in time.1 The second is the Fokker-Planck (FP) equation, which models the forward temporal evolution of the entire population's density.2 Recent breakthroughs have introduced "neural mean-field games," which combine these classical equations with deep learning architectures like neural ordinary differential equations (Neural ODEs) and neural stochastic differential equations (Neural SDEs).1
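In one common textbook formulation (our notation; the cited papers use variants), the coupled system for the representative agent's value function u(t, x) and the population density m(t, x) reads:

```latex
% Hamilton-Jacobi-Bellman equation (backward in time, terminal condition):
-\partial_t u - \nu \Delta u + H\!\left(x, \nabla_x u, m\right) = 0,
\qquad u(T, x) = g\!\left(x, m(T)\right)

% Fokker-Planck equation (forward in time, initial condition):
\partial_t m - \nu \Delta m
  - \operatorname{div}\!\left( m \, \partial_p H\!\left(x, \nabla_x u, m\right) \right) = 0,
\qquad m(0, \cdot) = m_0
```

The backward/forward coupling is exactly what the neural variants parameterize: the HJB side with a learned value function, and the FP side with a Neural ODE/SDE governing the evolution of the density.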
This hybridization allows models to be data-driven and model-free, alleviating the inherent modeling bias found in predefined dynamics.1 By leveraging automatic differentiation, these neural models can learn complex strategic interactions from sparse observations, making them robust to noise and varying levels of observability.1 For instance, neural MFGs have been successfully applied to simulate viral dynamics using real-world COVID-19 data, providing a more objective tool for public health policy than traditional epidemiological models.2
Model Uncertainty and Stochastic Stability
State-of-the-art research also addresses "model uncertainty" within mean-field games. In discrete-time, finite-state MFGs, agents often face ambiguity regarding state transition probabilities.4 Unlike classical models that assume perfect knowledge of the environment's rules, modern models incorporate "worst-case" transition sets, where agents maximize their expected payoff against adversarial environmental shifts.4 This leads to the emergence of strategies that are contingent not only on individual states but on the realized distribution of the entire population.4
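A minimal sketch of this worst-case logic, with illustrative action names and numbers of our own choosing (not from the cited paper): each action is scored by its minimum expected payoff over an ambiguity set of transition models, and the agent picks the action with the best guarantee, a max-min rule.

```python
# Hypothetical "worst-case transition set" evaluation. Each action maps to
# a list of candidate transition models (dicts of state -> probability);
# the agent assumes the environment picks the model that hurts it most.
def robust_value(actions, reward):
    """Return (guaranteed payoff, action) under the max-min rule."""
    def expected(model):
        return sum(p * reward[s] for s, p in model.items())
    # max over actions of the min over transition models in the ambiguity set
    return max(
        (min(expected(m) for m in models), a)
        for a, models in actions.items()
    )

reward = {"good": 10.0, "bad": 0.0}
actions = {
    "aggressive": [{"good": 0.9, "bad": 0.1},   # nominal model
                   {"good": 0.3, "bad": 0.7}],  # adversarial shift
    "cautious":   [{"good": 0.6, "bad": 0.4},
                   {"good": 0.5, "bad": 0.5}],
}
value, choice = robust_value(actions, reward)
print(choice, value)  # "cautious" guarantees 5.0; "aggressive" only 3.0
```

Note how ambiguity flips the choice: under the nominal model alone, "aggressive" would win with an expected 9.0.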
Model Type | Population Scope | Mathematical Foundation | Key Innovation (2024-2025) |
Classical | Finite, small | Payoff Matrices/Game Trees | Multi-agent Reinforcement Learning (MARL) integration.5 |
Mean-Field Game (MFG) | Infinite limit (N → ∞) | HJB and Fokker-Planck Equations | Convergence of ε-Nash equilibria.3 |
Neural MFG | Data-driven/Infinite | Neural ODEs/SDEs | Model-free analysis of extensive strategic interactions.1 |
MFG of Controls | Joint state/control | Reflected state process | Relaxed formulation for exogenous stochastic boundaries.6 |
These advancements enable governments to model "trade crowding" and "algorithmic liquidation" in financial markets, where the actions of a single liquidator are balanced against a background of high-frequency and noise traders.7 By passing to the infinite limit, these models become tractable, allowing for the calculation of ε-Nash equilibria that provide "good enough" strategies for real-world application in highly complex systems.3
Advanced Considerations in Model-on-Model Strategic Interactions
When sophisticated game theory models are deployed to "play" against one another—a scenario increasingly common in algorithmic trading, cybersecurity, and autonomous warfare—the dynamics shift from static optimization to a recursive battle of higher-order beliefs.9 In these environments, agents do not merely react to the state of the world; they react to their perception of the other agent's perception of them.9
Recursive Reasoning and the Level-K Framework
The "Level-K" framework of behavioral economics serves as a foundational pillar for understanding model-on-model play in Large Language Model (LLM) agents.9 This hierarchy classifies agents by their depth of strategic thinking:
- Level-0 (k = 0): Non-strategic agents that act according to simple heuristics or random distributions.10
- Level-1 (k = 1): Agents that assume their rivals are Level-0 and act to maximize their own utility accordingly.11
- Level-2 (k = 2): Agents that model their rivals as Level-1, incorporating a first-order theory of mind.11
- Level-k: Agents that recursively apply this logic to k levels of depth.9
Recent research into "K-Level Reasoning with LLMs" (K-R) has demonstrated that frontier models can achieve a strategic depth of approximately 1.89—nearly matching the depth exhibited by high-performing human participants in financial competitions (1.91).11 These models use recursive mechanisms to anticipate rival moves, significantly improving win rates in games like "Guessing 0.8 of the Average" and "Survival Auctions".10 In these contexts, K-R models have shown a win rate of 0.67, vastly outperforming standard reasoning methods like Chain-of-Thought (0.18).10
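The recursion in "Guessing 0.8 of the Average" can be sketched in a few lines (our own toy model, not the K-R implementation from the cited paper): a level-0 agent anchors on the midpoint 50, and a level-k agent best responds to a population it assumes is entirely level-(k-1).

```python
# Toy level-k reasoning for the p-beauty contest ("Guess 0.8 of the
# Average"). Illustrative anchor value; real populations mix levels.
def level_k_guess(k: int, p: float = 0.8, anchor: float = 50.0) -> float:
    guess = anchor
    for _ in range(k):
        guess = p * guess  # best response to an all-level-(k-1) average
    return guess

for k in range(3):
    print(k, level_k_guess(k))  # 0 50.0 / 1 40.0 / 2 32.0
```

Each added level shrinks the guess by the factor p; as k grows the guesses converge toward the Nash equilibrium of 0, which real (boundedly rational) players never quite reach.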
Emergent Heuristics and Opponent Modeling
In model-on-model interactions, agents often develop "emergent heuristics"—stable, model-specific rules of choice that allow them to handle increasing complexity without collapsing into pure recursion.9 This is critical because, as the number of agents and variables grows, explicit recursion becomes computationally expensive. Instead, agents form differentiated conjectures about human vs. synthetic opponents, a form of "meta-reasoning" that allows them to adjust their strategic depth dynamically.9
Strategic communication and signaling also play a vital role. Agents in a Multi-Agent System (MAS) use structured messages to reveal private information, such as task intent or resource capacity, to influence the collective outcome.13 This mirrors the game-theoretic concept of "signaling," where players use costly or verifiable actions to prove their "type" or intentions to a rival, thereby stabilizing a Nash equilibrium.13
Dynamic Feature | Description | Strategic Implication |
Best-Response Dynamics | Iterative strategy adjustment based on the latest moves of all other agents. | Leads to self-balancing systems and "Nash-like" stability.13 |
Non-Stationarity | The environment changes because other agents are learning and adapting simultaneously. | Undermines traditional Markov assumptions, requiring adaptive online planning.15 |
Adaptation Index | A metric measuring how quickly a model adjusts its strategy to a changing opponent. | K-Level models show lower (better) adaptation indices (0.31 vs 0.71 for CoT).10 |
Opponent Modeling | Monitoring historical actions to identify patterns in a rival's decision-making. | Essential for avoiding conflicts and resource clashes in decentralized systems.13 |
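The table's first row can be illustrated with a standard textbook example (our choice, not from the cited sources): iterated best responses in a linear Cournot duopoly, which settle into the Nash equilibrium quantity a/3.

```python
# Best-response dynamics in a linear Cournot duopoly. With inverse demand
# P = a - (q1 + q2) and zero cost, firm i's best response to its rival's
# quantity is q_i = (a - q_j) / 2; the Nash equilibrium is q* = a / 3.
def best_response(q_other: float, a: float = 120.0) -> float:
    return (a - q_other) / 2.0

q1, q2 = 0.0, 0.0
for _ in range(50):  # simultaneous iterative strategy adjustment
    q1, q2 = best_response(q2), best_response(q1)

print(round(q1, 6), round(q2, 6))  # both converge to 40.0 = a/3
```

The error halves on every round, which is the "self-balancing" stability the table describes; non-stationarity arises precisely when the rival's response rule itself keeps changing.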
Core Game Theory Concepts Explained
To understand the sophisticated models used in global affairs, one must grasp the fundamental concepts that govern strategic interactions. These concepts provide the "grammar" for the "language" of geopolitics.
Nash Equilibrium and Subgame Perfection
A Nash Equilibrium (NE) occurs in a game when no player can unilaterally improve their payoff by changing their strategy, assuming all other players keep their strategies fixed.14 While an NE represents a point of stability, it does not always represent the most efficient outcome for the group—a phenomenon known as Pareto suboptimality, famously illustrated by the Prisoner's Dilemma.16
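A brute-force check makes the definition concrete. The sketch below uses the conventional Prisoner's Dilemma payoffs (our choice of numbers, not from the source): mutual defection is the unique pure-strategy NE even though mutual cooperation is better for both players.

```python
# Pure-strategy Nash check for the Prisoner's Dilemma.
# (row, col) action profile -> (row payoff, col payoff).
PAYOFFS = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
ACTIONS = ("C", "D")

def is_nash(row, col):
    # No unilateral deviation by either player may improve their payoff.
    row_ok = all(PAYOFFS[(row, col)][0] >= PAYOFFS[(r, col)][0] for r in ACTIONS)
    col_ok = all(PAYOFFS[(row, col)][1] >= PAYOFFS[(row, c)][1] for c in ACTIONS)
    return row_ok and col_ok

equilibria = [(r, c) for r in ACTIONS for c in ACTIONS if is_nash(r, c)]
print(equilibria)  # [('D', 'D')] -- stable but Pareto suboptimal
```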
In multi-stage or sequential games, the concept of Subgame Perfect Nash Equilibrium (SPNE) is used to filter out "incredible threats".18 An SPNE requires that the players' strategies constitute a Nash equilibrium in every subgame of the larger game. For instance, in a "Chain-Store Paradox" game, a dominant firm might threaten to start a price war if a new competitor enters. However, if starting a price war would also bankrupt the dominant firm, the threat is incredible, and the SPNE would predict the competitor enters and the incumbent accommodates.18
The Folk Theorem and Infinite Horizon Games
The Folk Theorem is a crucial result for understanding international cooperation, such as trade agreements or arms control.18 It states that in an infinitely repeated game, any feasible payoff that is "individually rational" (meaning it gives each player at least the payoff they could guarantee for themselves) can be sustained as a Nash equilibrium if the players are sufficiently "patient" (meaning they have a discount factor δ close to 1).21
This is often achieved through a "Grim Trigger" strategy: players cooperate as long as everyone else cooperates, but if one player defects, everyone else defects forever, punishing the traitor.20 The Folk Theorem explains why nations maintain cooperation in the absence of a global sovereign: the long-term benefit of a stable relationship outweighs the short-term gain of a single betrayal, provided the "shadow of the future" is long enough.21
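The patience condition can be derived in one line for Grim Trigger in a repeated Prisoner's Dilemma (standard derivation; the payoff letters T > R > P are the usual temptation, reward, and punishment values, not figures from the source). Cooperating forever is worth R/(1-δ); a one-shot defection yields T now plus the punishment stream δP/(1-δ). Cooperation is sustainable iff δ ≥ (T-R)/(T-P).

```python
# Critical discount factor above which Grim Trigger sustains cooperation:
#   R/(1-d) >= T + d*P/(1-d)   <=>   d >= (T - R) / (T - P)
def critical_discount(T: float, R: float, P: float) -> float:
    return (T - R) / (T - P)

print(critical_discount(T=5, R=3, P=1))  # 0.5: players valuing the future
                                         # above 1/2 keep cooperating
```

Bigger temptation T raises the threshold; harsher punishment P lowers it. This is the "shadow of the future" made quantitative.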
Bayesian Games and Information Asymmetry
In global affairs, players rarely have "complete information" about their rivals' payoffs or intentions. Bayesian Games model these interactions by introducing "types" for players.18 For example, a state might be a "hawk" (with high payoffs for military action) or a "dove" (with high payoffs for diplomacy).20 Other players do not know the state's true type but have a "prior belief" based on a probability distribution. Strategic play in Bayesian games involves "signaling"—taking actions that a "dove" would find too expensive but a "hawk" would find worthwhile—to reveal or obscure one's true nature.14
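A toy calculation shows how the prior over types drives behavior (all numbers are illustrative assumptions of ours, not from the source): a challenger weighs escalation against a state that is a "hawk" with probability p.

```python
# Hypothetical Bayesian deterrence payoffs: escalating against a hawk
# costs -10, against a dove gains +5; backing down is worth 0 either way.
def expected_escalation_payoff(p_hawk: float,
                               vs_hawk: float = -10.0,
                               vs_dove: float = 5.0) -> float:
    return p_hawk * vs_hawk + (1.0 - p_hawk) * vs_dove

def best_response(p_hawk: float) -> str:
    return "escalate" if expected_escalation_payoff(p_hawk) > 0.0 else "back down"

print(best_response(0.2))  # escalate: 0.2*(-10) + 0.8*5 = 2.0 > 0
print(best_response(0.5))  # back down: expected payoff -2.5
```

With these payoffs the break-even belief is p = 1/3, which is exactly why signaling matters: a state that can credibly shift the challenger's prior above that threshold deters without firing a shot.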
Public Information on Government and Think-Tank Applications
Governments and prominent think-tanks utilize game theory not just as an academic exercise, but as a rigorous tool for crisis management, resource allocation, and long-term strategic planning.
The RAND Corporation: Geopolitics of AGI and Nuclear Risk
The RAND Corporation has used game theory for over seven decades to explore conflict, economics, and psychology.23 Currently, RAND’s Geopolitics of AGI Initiative uses the Infinite Potential platform to conduct "Day After AGI" exercises.24 These are two-hour simulations involving senior policymakers (role-playing as the National Security Advisor or Cabinet members) to evaluate the U.S. response to technological "moonshots" from adversaries like China.24
A primary focus is the Two Moonshots scenario, which explores the strategic dilemma of a simultaneous domestic and foreign breakthrough in Artificial General Intelligence.24 Game-theoretic insights from these exercises have highlighted that a perceived "first-mover advantage" in AGI can lead to "crisis instability," where nations are tempted to take aggressive, preemptive actions to ensure they control the technology first.24 RAND also maintains the ROMANCER (RAND Ontological Model for Assessing Nuclear Crisis Escalation Risk), which uses formal modeling to calculate the probability of escalation in nuclear-armed conflicts.23
SIPRI and the AI-Nuclear Nexus
The Stockholm International Peace Research Institute (SIPRI) has conducted extensive research on how machine learning and autonomy impact strategic stability.27 Their analysis identifies several game-theoretic risks associated with AI-enabled deterrence:
- Decapitation Risk: AI-enhanced surveillance could make mobile launchers and submarines more detectable, undermining the "survivability" of a second-strike capability—a cornerstone of Cold War-era stability.28
- Threshold Blurring: The use of unmanned combat aerial vehicles (UCAVs) and unmanned underwater vehicles (UUVs) may lower the threshold for conflict, as states might be more willing to attack an uncrewed platform than a crewed one, potentially triggering a "cascading series of nuclear risks".28
- OODA Loop Compression: AI systems can process information and execute actions faster than human "Observe-Orient-Decide-Act" loops. This leads to "behavioral risk," where the speed of AI-driven escalatory moves outpaces the ability of human leaders to de-escalate.28
China's "Active Defense" and Regional Security Order
Analysis of China's Defense White Papers (DWPs) from 2010 to 2019 reveals a strategic posture rooted in "active defense" and a preference for non-military measures like economic statecraft.29 Chinese strategic thought often frames regional security as a Chicken Game with asymmetric stakes: China views its claims (such as Taiwan) as essential to national identity, while it perceives the U.S. as prioritizing regional stability and ally credibility.30 By appearing "unstable" or "irrational" regarding its core interests—a tactic known as the "Madman Theory"—a leader can theoretically force a more rational opponent to swerve first in a high-stakes confrontation.30
European Union: Trade Coercion and Bargaining
The European Union utilizes game theory to model trade tensions and diplomatic strategy.31 The EU’s Anti-Coercion Instrument (ACI), adopted in 2023, is a strategic deterrent designed to counter economic pressure from third countries.33 By modeling trade relations as a "Threat Game," the EU identifies targeted customs duties, the blocking of foreign investment, or the suspension of intellectual property rights as credible deterrents to protect its political choices.33 EU think-tanks also use "strategic foresight" to "wind-tunnel" future policies against various plausible scenarios (e.g., "Storms," "Endgame," "Struggling Synergies") to ensure they are robust to global trade fragmentation.34
Entity | Primary Framework | Key Focus Area | Publicly Documented Objective |
RAND Corp | Day After AGI Exercises | Artificial General Intelligence | Developing national "playbooks" for AI crises.35 |
SIPRI | Strategic Stability Mapping | AI & Nuclear Weapons | Identifying behavioral risks in automated NC2.28 |
China (PLA) | Active Defense / Chicken Game | South China Sea / Taiwan | Securing "core interests" via brinkmanship.29 |
EU (EP Think Tank) | Anti-Coercion / Trade Warfare | Global Trade Fragmentation | De-risking supply chains and deterring economic blackmail.32 |
The Quantum Edge: AI and Global Strategic Power
The advent of quantum computing (QC) is set to provide an overwhelming "edge" to any government capable of running AI game-theoretic models on quantum hardware.36 Quantum mechanics introduces capabilities—such as superposition and entanglement—that allow for the processing of nature's inherent complexity in ways classical computers cannot.36
Exponential Speedup and Enhanced Exploration
Classical AI models frequently struggle with the "Traveling Salesman Problem" or complex logistics where the number of possible solutions explodes as the problem grows.37 Quantum algorithms can achieve dramatic speedups on certain structured problem classes, in the best cases an exponential advantage, although no known quantum algorithm solves general NP-hard problems in polynomial time.37
In the context of Quantum Reinforcement Learning (QRL), an agent can use superposition to explore millions of different strategies simultaneously.38 This is not merely "faster" computing; it is the ability to "read all chapters of a choose-your-own-adventure book at once" to find the winning path.38 For a government, this provides a decisive advantage in:
- Missile Defense: Integrating Riemannian geometry and quantum mechanics allows a "CAGI" (Cognitive Artificial General Intelligence) to coordinate multi-layered responses to hypersonic swarm attacks by "entangling" trajectories and environmental data across sensor networks.36
- Information Dominance: Quantum gradient descent speeds up the fine-tuning of strategies, allowing an AI to synthesize intelligence and anticipate adversary moves with "near-perfect interception rates" or "preemptive strike" planning.36
- Cryptography and Error Coding: Quantum algorithms can search for low-degree polynomials that pass through the largest number of points in a noisy data set, a decoding problem central both to constructing error-correcting codes and to breaking an adversary's communications.39
Computing Nash Equilibria with Rydberg Atom Arrays
One of the most profound developments is the use of neutral-atom quantum computers to compute Nash equilibria directly through physical simulation.40 By mapping a "Public Goods on Networks" game onto a 2D array of Rydberg atoms, researchers can exploit the Rydberg blockade.40
In this physical game, an atom excited to a high-energy state (representing a player who "contributes") prevents its neighbors from doing the same due to strong inter-atomic interactions.40 This mirrors the game-theoretic concept of strategic substitutes, where the incentive to act decreases as others act. By driving the system to its "ground state" (lowest energy configuration) through adiabatic annealing, the quantum computer naturally settles on the Maximum Independent Set (MIS).40 The MIS represents the Nash equilibrium that maximizes the number of contributors while minimizing free-riding—a result that is NP-hard to find classically for complex networks but manifests physically in a quantum array.40
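To make the quantum mapping concrete, here is the same combinatorial problem solved classically by brute force on a toy graph (our own example): each selected vertex is a "contributor," and independence encodes the blockade rule that no two neighbors both contribute.

```python
# Classical brute-force Maximum Independent Set (MIS). For large networks
# this exhaustive search is exactly what becomes intractable; the Rydberg
# array settles into the same answer as its physical ground state.
from itertools import combinations

def max_independent_set(vertices, edges):
    edge_set = {frozenset(e) for e in edges}
    for size in range(len(vertices), 0, -1):           # try largest first
        for subset in combinations(vertices, size):
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                return set(subset)                     # first MIS found
    return set()

# A 5-cycle: at most 2 non-adjacent vertices can "contribute".
vertices = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(max_independent_set(vertices, edges))  # {0, 2}
```

The exhaustive loop visits up to 2^N subsets, which is the NP-hardness the text refers to; the quantum array explores the configuration space physically instead.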
Bypassing Guardrails: Realpolitik and the National Security State
As governments gain access to these powerful modeling tools, a significant contradiction emerges between their public legal frameworks and their actual strategic behavior. Game-theoretic reasoning often reveals that the "optimal" strategy involves bypassing domestic and international laws that were originally put in place as guardrails.
The Mechanism of Executive Overreach
In periods of perceived national security threat, the rule of law often takes a "backseat" to executive initiative.41 Governments utilize several mechanisms to bypass legal constraints:
- Abuse of Emergency Powers: The National Emergencies Act grants presidents a "dizzying range of powers" once an emergency is declared.43 Contemporary examples include the declaration of "double-digit" national emergencies to freeze appropriations and redirect funds in open disregard of legislation forbidding the impoundment of funds.42
- Sidelining Independent Legal Advice: Recent trends in national security show the removal of career JAG (Judge Advocate General) officers from the decision-making loop.42 By replacing independent legal advisors with appointees who offer "executive autocracy" advice, administrations make the military justice system more "flexible" for domestic deployment and unilateral strikes.42
- Personalization of the Military: The deployment of the regular military and National Guard for domestic policing—such as immigration enforcement or "controlling crime waves" in cities—often ignores the Posse Comitatus Act, which prohibits using the military for domestic law enforcement.42
Realpolitik as a Philosophical Justification
The philosophical driver for this behavior is Realpolitik—the pursuit of vital state interests in a dangerous world where systemic constraints incentivize "egoistic" behavior over moral or legal adherence.45 Realpolitik presupposes "instrumental rationality"—doing the best one can for the country in a given situation, often involving "compensating trade-offs" where one goal (the rule of law) is sacrificed for a greater perceived benefit (national survival).45
Historical analysis shows that "constitutional reverence" has often emerged in tandem with the national security state, serving to legitimize government power rather than to limit it.46 For example, during the 1930s and World War II, elite groups used "constitutional loyalty" as a prerequisite for citizenship, which simultaneously enabled the "permanent extension of the reach of the federal government's coercive apparatus".46 The Supreme Court has historically deferred to the executive in matters of national security, upholding restrictions on personal liberties (as in Schenck v. United States) based on the logic that "the laws are no more and no less than what the courts will enforce".47
The Illusion of Inherent Power
Governments often invoke the "Sole Organ" doctrine or "Inherent Power" to justify what is explicitly illegal under statute.41 By claiming that the President has "energy, unity, secrecy, and dispatch," Hamilton and his successors argued for a grant of power that should exist "without limitation" because it is impossible to foresee the extent of future threats.47 This leads to the creation of "secret law" and the overreaching use of executive privilege to resist congressional oversight, effectively exalting the executive at the expense of the separation of powers.41
Mechanism of Bypass | Legal Guardrail Violated | Game-Theoretic Rationale |
National Emergencies Act abuse | Congressional Power of the Purse | Maximizing "energy and dispatch" in a crisis.47 |
Bypassing JAG Officers | Professional Ethics & Internal Legality | Reducing "friction" in the execution of strikes.42 |
Insurrection Act overreach | Posse Comitatus Act | Ensuring domestic stability via overwhelming force.43 |
Executive Autocracy Advice | Separation of Powers | Eliminating "veto players" in strategic decisions.42 |
Synthesis and Strategic Outlook
The state-of-the-art in game theory has created a world where strategic decisions are increasingly driven by "reality-centric AI agents" that can reason, plan, and adapt in uncertain environments.50 These agents move beyond passive prediction toward "active, goal-driven interaction" with both the physical and virtual worlds.51 When these agents are run on quantum hardware, they provide a government with a level of "information dominance" that can neutralize threats like hypersonic missiles or coordinated trade coercion before they fully materialize.36
However, the "Game Theory of the State" suggests that as these predictive tools become more accurate, the incentive to follow public laws decreases. If a government’s model predicts that a unilateral strike or a domestic emergency declaration is the "optimal" path to maintaining power or security, realpolitik dictates that the legal "guardrail" be treated as a mere obstacle to be circumvented.42 This creates a paradox: the very models designed to ensure a nation's stability may accelerate the erosion of the legal and constitutional foundations upon which that nation was built.
In the future, global power will not just be about who has the most missiles or the largest economy, but who can best navigate the "propagation of chaos" in large-scale strategic interactions while maintaining the most efficient—and often most ruthless—computational path to their goals.3 The integration of LLM-based recursive reasoning, quantum Nash mapping, and the systematic suspension of legal constraints represents the new "Total Warfare" of the 21st century.
Works cited
- Modelling Mean-Field Games with Neural Ordinary Differential Equations - arXiv, accessed January 31, 2026, https://arxiv.org/html/2504.13228v1
- Extending Mean-Field Game Theory with Neural Stochastic Differential Equations - arXiv, accessed January 31, 2026, https://arxiv.org/html/2504.13228v3
- Mean Field Game Theory: A Tractable Methodology for Large Population Problems | SIAM, accessed January 31, 2026, https://www.siam.org/publications/siam-news/articles/mean-field-game-theory-a-tractable-methodology-for-large-population-problems/
- [2601.12226] Mean-Field Games Under Model Uncertainty - arXiv, accessed January 31, 2026, https://arxiv.org/abs/2601.12226
- Game-Theoretic Multiagent Reinforcement Learning - arXiv, accessed January 31, 2026, https://arxiv.org/html/2011.00583v4
- Mean Field Game of Controls with State Reflections: Existence and Limit Theory - arXiv, accessed January 31, 2026, https://arxiv.org/abs/2503.03253
- Mean Field Game of Controls and An Application To Trade Crowding - ResearchGate, accessed January 31, 2026, https://www.researchgate.net/publication/309573292_Mean_Field_Game_of_Controls_and_An_Application_To_Trade_Crowding
- [PDF] Trading with the crowd - Semantic Scholar, accessed January 31, 2026, https://www.semanticscholar.org/paper/Trading-with-the-crowd-Neuman-Vo%C3%9F/af65b3cb1619684048a438365096d2916106ae2e
- K-Level Reasoning: Establishing Higher Order Beliefs in Large Language Models for Strategic Reasoning | Request PDF - ResearchGate, accessed January 31, 2026, https://www.researchgate.net/publication/392504549_K-Level_Reasoning_Establishing_Higher_Order_Beliefs_in_Large_Language_Models_for_Strategic_Reasoning
- [Quick Review] K-Level Reasoning: Establishing Higher Order Beliefs in Large Language Models for Strategic Reasoning - Liner, accessed January 31, 2026, https://liner.com/review/klevel-reasoning-establishing-higher-order-beliefs-in-large-language-models
- K-Level Reasoning: Establishing Higher Order Beliefs in Large Language Models for Strategic Reasoning - ACL Anthology, accessed January 31, 2026, https://aclanthology.org/2025.naacl-long.370.pdf
- [2402.01521] K-Level Reasoning: Establishing Higher Order Beliefs in Large Language Models for Strategic Reasoning - arXiv, accessed January 31, 2026, https://arxiv.org/abs/2402.01521
- How Game Theory Shapes Modern Multi-Agent AI Systems | by ..., accessed January 31, 2026, https://medium.com/@mukherjeetiyasa1998/game-theoretic-impact-on-multi-agent-systems-4307c3e8872f
- Quantum games: a review of the history, current state, and interpretation - OSTI, accessed January 31, 2026, https://www.osti.gov/servlets/purl/1513429
- Multi-Agent Reinforcement Learning in Games: Research and Applications - PMC, accessed January 31, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC12190516/
- Efficiency of Classical and Quantum Games Equilibria - PMC - NIH, accessed January 31, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC8147025/
- Understanding Quantum Game Theory: Strategic Thinking Redefined - BlueQubit, accessed January 31, 2026, https://www.bluequbit.io/blog/quantum-game-theory
- Advanced Game Theory, accessed January 31, 2026, https://www.ozyurtselcuk.com/home/teaching/advanced-game-theory
- Game Theory - Coursera, accessed January 31, 2026, https://www.coursera.org/learn/game-theory-1
- Week11 Game Theory Flipped Learning Lectures - Scribd, accessed January 31, 2026, https://www.scribd.com/document/981388694/Week11-Game-Theory-Flipped-Learning-Lectures
- (AGT3E12) [Game Theory] Folk Theorem - YouTube, accessed January 31, 2026, https://www.youtube.com/watch?v=dUEsI0qwMQs
- Game Theory 101 (#61): The Folk Theorem - YouTube, accessed January 31, 2026, https://www.youtube.com/watch?v=a8QAcwNcJPU
- Game Theory | RAND, accessed January 31, 2026, https://www.rand.org/topics/game-theory.html
- Infinite Potential—Insights from the Two Moonshots Scenario - RAND, accessed January 31, 2026, https://www.rand.org/content/dam/rand/pubs/research_reports/RRA4200/RRA4230-1/RAND_RRA4230-1.pdf
- Publications | RAND, accessed January 31, 2026, https://www.rand.org/global-and-emerging-risks/centers/geopolitics-of-agi/pubs.html
- Geopolitical Strategic Competition - RAND, accessed January 31, 2026, https://www.rand.org/topics/geopolitical-strategic-competition.html
- The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, Volume I, Euro-Atlantic perspectives | SIPRI, accessed January 31, 2026, https://www.sipri.org/publications/2019/research-reports/impact-artificial-intelligence-strategic-stability-and-nuclear-risk-volume-i-euro-atlantic
- The Impact of Artificial Intelligence on Strategic Stability and ... - SIPRI, accessed January 31, 2026, https://www.sipri.org/sites/default/files/2019-10/the_impact_of_artificial_intelligence_on_strategic_stability_and_nuclear_risk_volume_ii.pdf
- Full article: China's conception of regional security order: an analysis of Defence White Papers from 2010 to 2019 - Taylor & Francis, accessed January 31, 2026, https://www.tandfonline.com/doi/full/10.1080/10357718.2025.2502632
- The Taiwan Game. Navigating U.S.-China Tensions from Game Theory Perspective - Stosunki Międzynarodowe – International Relations, accessed January 31, 2026, https://internationalrelations-publishing.org/articles/5-14/pdf
- Game Theory and Trade Tensions between Advanced Economies - European Research Studies Journal, accessed January 31, 2026, https://ersj.eu/journal/1744
- List of publications from the EP Think Tank - European Union, accessed January 31, 2026, https://www.europarl.europa.eu/thinktank/en/research/advanced-search/pdf?policyAreas=INTTRA
- What Responses Can the European Union Offer to Donald Trump's Trade Aggression?, accessed January 31, 2026, https://www.iris-france.org/en/what-responses-can-the-european-union-offer-to-donald-trumps-trade-aggression/
- List of publications from the EP Think Tank - European Union, accessed January 31, 2026, https://www.europarl.europa.eu/thinktank/en/research/advanced-search/pdf?policyAreas=FORPLA
- Wargaming | RAND, accessed January 31, 2026, https://www.rand.org/topics/wargaming.html
- Missile Defense Through a Quantum Leap in Artificial Intelligence, accessed January 31, 2026, https://jstribune.com/gfoeller-rundell-missile-defense-through-a-quantum-leap-in-ai/
- How Quantum AI is Expanding the Boundaries of Computational Intelligence, accessed January 31, 2026, https://www.thepurplestruct.com/blog/quantum-ai-computational-intelligence-boundaries
- Quantum AI: How Quantum Reinforcement Learning Solves ..., accessed January 31, 2026, https://medium.com/a-microbiome-scientist-at-large/quantum-ai-a-review-of-two-dominant-algorithmic-paradigms-1ff786d12c55
- Quantum Speedup Found for Huge Class of Hard Problems | Quanta Magazine, accessed January 31, 2026, https://www.quantamagazine.org/quantum-speedup-found-for-huge-class-of-hard-problems-20250317/
- Mapping Game Theory to Quantum Systems: Nash Equilibria ... - arXiv, accessed January 31, 2026, https://arxiv.org/pdf/2511.09841
- WHTP - 25 Guide to National Security.docx - White House Transition Project, accessed January 31, 2026, https://whitehousetransitionproject.org/wp-content/uploads/2024/03/WHTP-25-Guide-to-National-Security.pdf
- National Security Law in the Second Trump Administration ..., accessed January 31, 2026, https://nationalsecurity.law.georgetown.edu/journal/2025/10/01/national-security-law-in-the-second-trump-administration/
- Executive Power | Brennan Center for Justice, accessed January 31, 2026, https://www.brennancenter.org/topics/government-power/executive-power
- National Guard, Presidential Power, and the Law: A Q&A with Stanford Law's Bernadette Meyler - Legal Aggregate, accessed January 31, 2026, https://law.stanford.edu/2025/12/04/national-guard-presidential-power-and-the-law-a-qa-with-stanford-laws-bernadette-meyler/
- The Rarity of Realpolitik: What Bismarck's Rationality Reveals about International Politics, accessed January 31, 2026, https://direct.mit.edu/isec/article/43/1/7/12204/The-Rarity-of-Realpolitik-What-Bismarck-s
- Constitutionalism and the Foundations of the Security State - Scholarship@Cornell Law, accessed January 31, 2026, https://scholarship.law.cornell.edu/cgi/viewcontent.cgi?article=2549&context=facpub
- National security and the Supreme Court | Research Starters - EBSCO, accessed January 31, 2026, https://www.ebsco.com/research-starters/law/national-security-and-supreme-court
- "The Limits of National Security" by Laura K. Donohue - Scholarship @ GEORGETOWN LAW, accessed January 31, 2026, https://scholarship.law.georgetown.edu/facpub/1010/
- Executive Power | Yale Law Journal, accessed January 31, 2026, https://yalelawjournal.org/explore/executive-power
- van der Schaar Lab at NeurIPS 2025: Explained, accessed January 31, 2026, https://www.vanderschaar-lab.com/neurips-2025-explained/
- NeurIPS 2025 Workshops, accessed January 31, 2026, https://neurips.cc/virtual/2025/events/workshop


