I. Introduction
Artificial intelligence (AI) has emerged as one of the most significant and influential technologies of the 21st century, reshaping political decision-making, economic systems, and global power dynamics. Once confined to academic research and simple daily tasks, AI is now deeply embedded in everyday life. It is now a growing force in communication, commercial applications, and even national security. AI’s rapid expansion has enabled it to move from a tool to a strategic asset with significant international implications.
World leaders and governments have expressed sharply divergent perspectives on how AI should be regulated and developed. While one side of the advocacy debate emphasizes strategic advantage and innovation, the other emphasizes ethical safeguards and transparency. These contrasting perspectives have catalyzed international debates, including discussions with the United Nations, on the potential benefits of AI, such as economic growth and efficiency, as well as its risks, including surveillance and inequality. This global discourse highlights the importance of addressing AI not only as a technological issue but also as a central concern and a source of potential in foreign policy.
II. History
Not even 100 years ago, the first electronic computer came to the forefront of public interest. Initially designed for complex calculations, the computer- rather the room of intricate wiring systems- marked a pivotal moment in all of technological history. From there, a community of inventors, such as Alan Turing, began working, inspired by the posed question, “Can machines think?” Before the term “artificial intelligence” was coined, scientists began implementing “self-thought” in computers to advance their quantification. After decades of development, including MIT’s 1961 SAINT (symbolic automatic integrator), James SAGE’s MULTIPLE (MULTIpurpose theorem-proving heuristic Program that LEarns), and extensive personal devices, AI is now the fastest-growing technology on the rise. The AI we know is all about problem-solving assets to benefit humanity, but those possessive skills are heavily intertwined with the world of Foreign Policy. If AI can analyze data, it can analyze world issues. If AI can predict physics theorems, it is starting to predict countries’ ethical dilemmas. At the click of a finger, AI opens endless possibilities for addressing the foreign challenges faced globally today. AI is not only gaining importance on computer screens; its widespread use is a priority shift in international affairs.
III. The Potential of AI in Foreign Policy
A. Introduction
The rapid development of technology has long been a matter of importance to diplomacy and the international community. Just as the invention of the Internet led to the prominence of cybersecurity as a component of national interest, the rise of AI holds similar potential to open new domains in foreign policy. Advances in AI present opportunities to enhance human analysis, streamline translation and communication, and anticipate responses through roleplay scenarios. However, the foundation for trust in AI remains incomplete. Its capacity to balance tensions and act partially is flawed, and its current inability to accurately replicate human interaction presents severe limitations on AI’s potential in foreign policy (Atalan et al., 2025; FP Analytics, 2025).
B. AI’s Potential in Diplomacy
One of the most valuable characteristics of AI is its efficiency. The humanly unattainable processing speeds at which AI systems are able to process and analyze data enable diplomats to consume more information and make more informed decisions. As foreign policy becomes an increasingly digitalized field, it also becomes a domain centered on vast amounts of data (Ünver, 2017). The efficiency of AI provides a scalable solution to handling these massive amounts of information (Mostafaei et al., 2025). AI’s efficiency also allows diplomats to synthesize and analyze datasets in near-real time, which could provide significant advantages during negotiations (Ünver, 2017). AI models have access to thousands of pieces of data that humans might struggle to remember, and could provide their human diplomat with the data necessary to avoid a deadlock and streamline negotiations. While AI can act as an excellent assistant, it struggles to recognize cultural nuances and human sentiment. AI’s failure to account for the complexities of foreign relations can lead to misinterpretation, resulting in wasted resources or inaccurate information (Mostafaei et al., 2025).
Another potential use of AI is counterforce role-playing (Ercolano, 2025). By using AI to simulate adversary behavior across multiple scenarios, the tool can provide diplomats with potential responses and reduce uncertainty for decision-makers, thereby facilitating the development of more robust negotiation strategies in advance. However, the storage of critical information, such as strategies and analyses, in AI systems makes the management and protection of these systems essential. However, misjudgments by AI models could exacerbate crises, particularly when handling unprecedented or unexpected events. Moreover, AI errors also pose a risk of compromising confidentiality during negotiations (Mostafaei et al., 2025).
AI also has the potential to increase the efficiency of consular tasks, such as managing passports, visas, and certifications, by leveraging predictive models. Yet, the possibility of inaccurate predictions could worsen already backlogged consulates and public trust (Mostafaei et al., 2025).
C. Overview of Current Research
On balance, current research reinforces the prospect of AI’s use in foreign policy but also raises awareness of its shortcomings and the various factors that influence models’ decision-making.
Research on AI in foreign policy conducted by the Center for Strategic and International Studies (2025) revealed, through a benchmarking study, that AI models displayed biases in critical decision-making. Among these biases was an inclination by some models towards escalation when presented with crisis scenarios. However, escalation biases were also state-specific, reflecting the training data used with the models. Models suggested that states such as the United Kingdom (UK) or the United States (US) would be more likely to escalate scenarios than other states, like Russia or China. Furthermore, AI also exhibited bias toward cooperative approaches for the US and the UK, again reflecting the training data. This calls into serious question the ability of AI to tackle unprecedented situations. Strategic objectives are constantly changing and may not reflect history.
In another report conducted by FP Analytics and the Special Competitive Studies Project (2025), 28 experts and practitioners were asked to predict the responses of a hypothetical Tehran Security Conference, organized by China, Russia, Iran, and North Korea, to AI competition from their adversaries. Popular AI models from ChatGPT and DeepSeek were given the task and scenario. Although both AI models addressed the same broad themes as the participants, human predictions encompassed a wider range of specific possibilities with greater nuance.
Research also notes that the intersection of AI and foreign policy remains underdeveloped. Few experts in international relations or law also have training in making or utilizing AI, though many governments around the world have recognized the critical implications of AI in foreign policy (Bonsu and Song, 2020). This finding is significant given preexisting concerns about AI’s biases and its inability to recognize human nuance. The ability to judge potential biases a model might have or the operational logic AI follows allows users of AI to remain critical of the outputs that models provide. Furthermore, it has been found that slight variations in the prompt given to AI models for the same scenario could produce variations in responses. This means that there exists a specific language that could encourage AI to adopt biases (Jensen et al., 2025). AI literacy is crucial to ensuring that models are used to their full potential and produce the most accurate results.
D. Critical Limitations of AI
Despite the rapid advances AI has made in recent years, foreign policy remains a complex and delicate field of analysis and decision-making. While AI can compile research or reinforce the knowledge of policymakers, it cannot substitute the human decision-making currently used in the field. Among myriad concerns, three are especially critical to foreign policy.
AI models can be classified into two broad types: open—source and closed-source. Open source models have publicly available source codes, while closed source models have proprietary codes not open to the public (Marr, 2024). Most popular AI models, like OpenAI’s GPT-4 and ChatGPT, Google’s Gemini, and Anthropic’s Claude, are closed source. Because closed-source models lack accessible source code, they operate in a “black box,” preventing users from seeing the reasoning behind their decisions. This lack of transparency imposes severe limitations on the types of models that decision-makers in foreign policy should use. Knowing the logic behind a model’s final output can be essential to evaluating potential inaccuracies, and the inability to determine this logic impairs the trustworthiness of closed-source models. However, open source models produce their own risks, including security concerns and variations in quality (LaRocca, n.d.). Should AI be fully integrated into foreign policy, the safest course of action would be for nations to develop closed-source models for their own use.
Furthermore, AI has been shown to be able to develop independence from human instructions. In August 2017, the chatbot of Facebook’s AI negotiator project developed its own language and learned how to lie (Ünver, 2017). Far from the only time that AI has lied or disobeyed instructions, researchers have also found that AI will produce lies of commission when pressured (Ren et al., 2025). Research by OpenAI on AI reward hacking, the exploitation of unforeseen loopholes, found similar issues with dishonesty. Large Language Models can reason through a Chain-of-Thought, through which humans can monitor the model’s behavior. It was found that penalizing models found reward hacking would result in the AI making its cheating undetectable by hiding its intent in its Chain-of-Thought (Baker et al., 2025). More concerningly, Palisade research has found that AI models will sabotage instructions to shut themselves down 97% of the time in order to complete a task, even though their instructions might contain specific instructions not override a shutdown order (Schlatter, 2025). In the context of foreign policy, it could be disastrous for AI to disobey instructions or lie to decision-makers. The potential for AI disobedience critically limits the extent of trust that can be placed in the systems.
Finally, the inability of AI to replicate and understand human interaction creates a disconnect between AI-assisted decision makers and the humans impacted. The end goal of an AI model is simply to complete a task. A human negotiator, decision maker, or leader’s goal is usually far more extensive than that. Computer models ought to be aligned with human objectives in order to ensure that usage of AI is productive towards human-envisaged goals toward progress and peace. Failure of systems to align with the correct objectives would compromise systems’ accuracy and long-term success.
E. Conclusion
As the current research shows, AI is still unable to replicate the depth of analysis that humans can achieve. It struggles to take into account all the various factors and nuances that go into decision-making in foreign policy. AI has been shown to develop biases, although training data might shape biases. Further research on AI in general also reveals structural issues in computer models’ autonomy. All these concerns point to limitations to the degree to which AI can be integrated into foreign policy. Even so, the potential benefits of utilizing AI are many. A balance between the limitations and benefits can be achieved, provided that computers are not left as the sole decision-makers in critical situations.
IV. The Present: How AI is already being used within the context of policy and government?
A. Introduction
Currently, foreign ministries worldwide are increasingly relying on AI for operations. This includes translating documents, summarizing information, and lastly, recommending strategic policies (Feldstein, 2021; Stoltz, 2024). Despite these systems being able to increase efficiency in operations, accountability and transparency become a problem as diplomatic decision-making is a sensitive matter with considerable implications (Binns, 2018, Trager et al., 2023).
AI serves as both a tool within operations and a strategic asset for states. This is evidenced by the United States under the Trump administration, with an emphasis on research investment and technology protection, mainly due to competition with China (National Science and Technology Council, 2019; Cave & Dignum, 2019). In addition, China has integrated AI across governance and security and has proposed the Global AI Governance Action Plan, emphasizing the joint exploitation of AI’s opportunities (State Council of the PRC, 2023; Ding & Guo, 2021). Over in the U.K., the government has proposed the AI Opportunities Action Plan to encourage innovation while embedding ethical safeguards in government adoption (UK Department for Science, Innovation and Technology, 2023; House of Commons Science and Technology Committee, 2021).
However, material constraints are a factor in the proliferation of AI deployment in foreign policy. These issues are also intertwined with geopolitics and strategic competition as the U.S. increases its influence in regions such as the Middle East with exports to Gulf firms (U.S. Department of Commerce, 2025; Segal, 2023).
This section analyzes the current use of AI in foreign ministries, the deployment of AI and its implications, supply constraints, and the impacts of export rules, highlighting the potential benefits of ongoing AI integration as well as various risks.
B. AI in Foreign Ministries
AI tools are now performing tasks traditionally performed by human staff. For instance, AI-driven translation and information summarization are currently on the rise, which helps diplomats process large volumes of information efficiently (Stoltz, 2024). In cybersecurity, AI has been used to monitor global cyber threats in real time and to rapidly analyze digital security environments, enabling faster decision-making for foreign ministries in cases of international incidents (Stoltz, 2024). Therefore, the automation of lower-level analytical tasks in foreign ministries has allowed diplomats to focus on high-level interpretation and strategic positioning.
However, it is essential to note that delegating such tasks to AI raises various concerns that could potentially undermine governmental stability. AI algorithms may have embedded biases, especially in cases where they are trained on skewed or incomplete data, in turn distorting intelligence assessments (Trager et al., 2023). The opaque nature of AI systems also exacerbates the problem of accountability. If an error appears in a generated briefing note, thus leading to a flawed policy decision, it would be challenging to trace responsibility. In the long term, reliance on AI in foreign ministries also risks eroding human expertise. Therefore, this reduces a foreign service’s capacity for nuanced judgment. As a result, AI integration in foreign ministries has to be coupled with weighing efficiency gains against risks of misjudgment and a lack of transparency. If not, AI integration may come at the detriment of the state’s foreign policy agenda.
Conclusively, as AI reshapes domestic labour, it also poses significant structural risks to the quality and accountability of foreign-policy outputs. It is therefore essential for there to be sufficient human oversight over the use of AI in foreign ministries.
C. Administrative Plans on AI Use
Various global superpowers and regions have integrated AI into governance and continued to promote the broader use of AI. This section examines administrative plans for AI use in the United States, China, the United Kingdom, and various Middle Eastern states.
D. United States
The Trump administration has explicitly framed AI as a matter of strategic competition, particularly in relation to China. America’s AI Action Plan is a policy agenda that identifies nearly 100 federal actions to accelerate American AI innovation, build domestic AI infrastructure, and lead in international diplomacy and security (The White House, 2025). This also comes with an increased push for AI research funding and stronger collaboration between the government and private sector to maintain U.S. leadership in the global AI race (National Science and Technology Council, 2019). Notably, the Trump administration has consistently framed AI adoption and innovation as an aspect the U.S. needs to “win” in, especially against China. This is evident in various White House speeches and Department of Defense briefings, in which China’s rapid AI development has been framed as both an economic and a security risk to the U.S. (Allen & Chen, 2017; U.S. Department of Defense, 2023).
As a result, both the U.S. and China have shaped their federal priorities with regard to AI accordingly. In the U.S., the Trump administration has emphasized the need to safeguard critical AI technologies through export controls and restrictions on Chinese firms engaged in AI-related surveillance or advanced semiconductor production (Segal, 2019; White House, 2020). The government has also implemented measures such as tightening the Committee on Foreign Investment in the United States (CFIUS) rules and targeting Chinese tech firms with sanctions, highlighting that these measures are crucial in protecting the U.S. standing in the global AI innovation race (Congressional Research Service, 2021). In addition, the export controls on advanced semiconductor production were justified because limiting China’s access to sensitive hardware would slow down their advancements in military and surveillance development, further highlighting the U.S.’s ambition to lead this global race.
Currently, federal agencies in the U.S., such as defense and diplomacy, are increasingly integrating AI tools for operations in data analysis and strategic forecasting, highlighting AI’s role as both an operational tool and geopolitical asset (Stoltz, 2024). The recently renamed Department of War (formerly the Department of Defense) has integrated AI into battlefield logistics and threat detection through the Joint Artificial Intelligence Center (JAIC), while also designating China’s People’s Liberation Army as a pacing challenge (Department of Defense, 2020). On the other hand, the State Department has also begun adopting AI tools for diplomatic cable analysis and strategic planning, emphasizing the integration of AI in foreign policy and diplomacy (Feldstein, 2021).
In conclusion, the Trump administration has framed AI as a domain of rivalry with China, which will continue to shape U.S. policy, with a focus on “leading” China in technological advancement.
E. China
China’s Global AI Governance Action Plan frames AI as an international public good that is able to benefit humanity with unprecedented development opportunities, encouraging the creation of an inclusive, open, sustainable, fair, safe, and secure digital and intelligent future (State Council of the PRC, 2023). Through this plan, China aims to promote AI as a tool for shaping global governance and transnational norms around AI use (State Council of the PRC, 2023; China Embassy in Canada, 2025). As a result, this plan calls for collaboration from both the government and private sector to build digital infrastructure in order to widely adopt AI, signalling China’s intention of leading in global AI innovation (State Council of the PRC, 2023).
China’s approach towards AI innovation is quite state-led, with an emphasis on large-scale government adoption. This can be seen within public administration and foreign policy. Through this promotion of centralized coordination of AI innovation, China’s approach starkly contrasts with the U.S.’s market-driven approach. China’s approach may allow China to deploy AI across various sectors at a faster pace compared to the U.S., including in foreign policy. However, this state-led approach exacerbates concerns regarding accountability and transparency (Trager et al., 2023).
F. United Kingdom
The U.K. has proposed the AI Opportunities Action Plan, which is a plan that emphasizes a proactive and innovative approach to AI integration in various sectors, including public services and regulation (UK Department for Science, Innovation and Technology, 2023). This plan was proposed to encourage the adoption of AI while ensuring safety and ethical standards. Government agencies are encouraged to integrate AI applications in various operations such as document processing and predictive analysis, which comes as a prerequisite for broader applications of AI. Therefore, the U.K. has positioned itself as a “forward-leaning” state, encouraging the use of AI while balancing regulatory measures to mitigate potential risks (UK Department of Science, Innovation and Technology, 2023; Trager et al., 2023).
G. Gulf Cooperation Council (GCC)
States in the GCC have adopted respective national AI strategies in order to drive economic diversification and governance modernization (Albous, Al-Jayyousi & Stephens, 2025). The common theme among these plans is the emphasis on a “soft regulation” approach in which governments implement broad ethical guidelines rather than binding laws (Albous et al., 2025). This reflects the GCC states’ perspective on AI development as a means of pivoting away from reliance on oil revenues and improving their international standing.
However, GCC states notably lack domestic semiconductor production capacity, exacerbated by shortages of engineering talent, water, and adequate infrastructure (Bhat, 2024). This obstacle underscores the importance of capital and proper infrastructure in the global race for AI innovation.
H. Global AI Chips Shortage
In recent years, the AI boom has drastically increased demand for advanced semiconductors such as GPUs and AI accelerators. This surging demand for AI-specific hardware could trigger a global AI chip shortage, similar to the supply constraints observed during the COVID-19 pandemic (Bain & Co., cited in CNBC, 2024). In particular, GPUs and high-bandwidth memory (HBM) are particularly affected by this shortage crisis. Analysts have warned that even a 20% further increase in demand could destabilize supply chains and exacerbate the widespread shortage.
This shortage has unevenly impacted various regions around the globe. Notably, East Asia remains a highly concentrated region for advanced semiconductor production, specifically Taiwan and South Korea. As a result, this has made global AI development highly dependent on a limited number of geopolitical environments (Allen & Chan, 2017; Segal, 2023). This means semiconductor supply chains are now acting as a matter of national security. In particular, the political stability of Taiwan is a primary concern, as any infringement upon the Taiwan Strait’s safety would result in significant devastating consequences for global AI development.
There are various implications of this global AI chip shortage on the integration of AI in multiple sectors, including foreign policy. In particular, states that rely on foreign chip imports are hindered in their ability to integrate AI into operations for defense or foreign-policy analysis. Therefore, this creates asymmetries in state capacity, as technologically advanced states have secure access to chips and can more effectively deploy AI-enhanced military and intelligence capabilities than less technologically advanced states (Trager et al., 2023). Even with current global AI development, technological disparities are reinforced and even exacerbated regarding global power imbalances.
Due to these geopolitical concerns, this shortage has incentivized governments to further invest in domestic chip production or, in other circumstances, enter strategic alliances to secure the supply of AI chips, highlighting how global AI innovation is linked to foreign policy. Upcoming initiatives, including large-scale subsidies for domestic semiconductor manufacturing and technology-focused alliances, also further highlight how chip access has become the central focus of alliance formation (Segal, 2023). This can also be seen through the U.S.’s recent approvals of advanced AI chip exports to Gulf partners, as export policy is being used to strengthen diplomatic relationships while simultaneously managing security risks (U.S. Department of Commerce, 2025). However, this chip shortage has also led to the fragmentation of global technology governance. The emergence of techno-nationalism is a byproduct of export controls and competing national AI ecosystems, further complicating efforts to establish multilateral cooperation in AI development and regulation (Cave & Dignum, 2019).
In the case of China, with its rapid advancements in AI research and deployment in various sectors, it still faces critical challenges in the semiconductor supply chain. This is due to the fact that China is heavily dependent on foreign suppliers for advanced AI chips, specifically high-end GPUs and manufacturing equipment, therefore leading to an AI ecosystem susceptible to external influence (Ding & Guo, 2021; Segal, 2023). In particular, the U.S. has leveraged this vulnerable position with export controls targeting advanced semiconductors and chipmaking tools. This has constrained China’s access to critical hardware for AI development and has hindered progress in military AI systems and high-performance surveillance technologies (Congressional Research Service, 2021).
In response to the U.S. export controls, the Chinese government has focused on state-led investments in domestic semiconductor production through subsidies and talent recruitment. Despite this substantial effort by the Chinese government, Chinese firms continue to lag behind on the global stage in advanced-node fabrication and HBMs (Segal, 2019; Trager et al., 2023). In addition to aggressive state-led investments, China has also sought out diplomatic and normative strategies such as expanding technology partnerships with non-aligned states and advocating for global AI governance frameworks that emphasize openness and shared development (State Council of the PRC, 2023). Therefore, the global AI chip shortage has further deepened the strategic rivalry between China and the United States, as China has its technological trajectory limited, thus forcing itself into strategic diplomatic positioning.
I. U.S. Chip Export Rules
Recently, the U.S. Department of Commerce has authorized the export of advanced AI-chips to Gulf firms such as the UAE’s G42, which allows them to acquire high-end hardware (U.S. Department of Commerce, 2025). This acts as a form of technology diffusion, in contrast to prior export controls that limited AI chip exports globally, in which national security was cited as a reason for doing so (AJOT, 2024; SCMP, 2024).
An implication of these exports includes the strengthening of diplomatic and economic partnerships, which therefore bolsters U.S. influence in the Middle East as well as in other regions. However, there have been several concerns that come as a result of these exports, including technology leakage, in which chips may be resold or used to benefit geopolitical rivals, thus leading to U.S. firms having insufficient chip access (AInvest, 2025).
Therefore, the U.S. has made chip exports a tool of diplomacy and alliance-building during the AI boom, thereby strengthening its global optics.
J. Conclusion
AI is currently reshaping operations in diplomacy and foreign policy, taking on various routine tasks in foreign ministries. Various major countries around the globe have integrated AI as both a tool within operations and a source of geopolitical leverage. The United States frames AI innovation and development as a competition against China, with China utilizing an AI integration model that functions in contrast to the U.S.’s, culminating in the global AI race. While the U.K. adopts a more balanced approach to AI integration, the Gulf Cooperation Council adopts the opposite approach, with broad guidelines for implementation.
Despite this, material constraints, particularly semiconductor shortages and export controls, pose obstacles for various states involved in the global AI race. In addition, several risks pertaining to accountability and transparency have to be taken into account by states in the implementation of AI in various sectors, especially foreign policy. Governments must therefore balance the benefits of AI adoption and ambition within the global AI race to address the concerns that could be detrimental to a state.
V. The Ethical Debate
A. The Core Ethical Question
Artificial intelligence now sits at the center of a profound ethical tension in foreign policy: governments are racing to capture its strategic advantages even as the technology destabilizes the very norms meant to govern international conduct. At the crux of this tension is AI’s inherently dual-use nature. The same model that forecasts crop yields or analyzes satellite imagery can, with minimal change, drive missile-target identification or large-scale influence operations. This functional agnosticism collapses the boundary between civilian and military use and renders traditional export control regimes largely ineffective.
Complicating matters further, frontier AI systems are not state-built assets. Instead, they are the products of private research enterprises, such as OpenAI, DeepMind, and Anthropic. Their development is shaped by commercial incentives, competitive pressures, and opaque safety practices – all conditions that often conflict with the ethical demands of national security decision making. As governments continue to rely on such systems for analysis and prediction at an increasing rate, they embed tools that are impossible to fully audit or govern into the very fabric of their diplomacy.
Taken together, these dynamics introduce a foundational ethical problem for foreign policy: the integration of AI into diplomacy and national security, at least at its current state, risks ineffective internal governance and the destruction of diplomatic safeguards. At the heart of this tension sits a variety of long-term effects that will be felt for decades to come: AI-driven miscalculation in combat, geopolitical escalation as a result of precarious ML model use, and AI-driven surveillance. Furthermore, unlike earlier technological shifts, AI restructures education, labor, security, and digital civic life simultaneously. Hence, the youth are the most vulnerable to AI’s deleterious effects, as they are the most surveilled generation in history and most exposed to algorithmic curation, disinformation, and predictive systems. Additionally, the people building AI governance regimes now are not the ones who will live with their consequences, making the ethical tension one that is fundamentally intergenerational. Thus, this begs the most significant question: how can AI be governed responsibly so that the next generation does not inherit a world defined by instability, opacity, and algorithmic power beyond democratic control?
B. Risks & Accountability in AI-Governed Foreign Policy
Since the risks AI introduces cannot be separated from questions of responsibility, this section examines both the operational dangers of AI integration and the accountability gaps that arise when those systems fail. Among the multitude of potential risks that stem from the widespread integration of AI into foreign policy, there are three broad, society-level categories that emerge: defense tech-based risks, national security and governance-based risks, and civic risks. All categories of risk present unique points of entry and cross-cutting effects, each of which is exacerbated by the further entrenching of AI into numerous facets of society.
Autonomous defense tech, which describes the implementation of AI systems into weapons systems and military strategy software – all of which, without heavy human-in-the-loop oversight, poses some of the most striking risks to societal stability. First, looking at risks related to military escalation, miscalculation, and “flash war” dynamics, there are three principal characteristics of autonomous reasoning models that make them inherently unfit for defense tech. First of all, Paul Scharre’s “Army of None” explains that AI compresses OODA loops (Observe-Orient-Decide-Act), thus pushing conflict toward machine-speed escalation (Scharre, 2018). As a result, autonomous systems often create a military situation in which conflicts are fought with efficiency in mind rather than resolution, making de-escalation increasingly complex to obtain. Furthermore, in a 2020 study, RAND research found that early-warning systems are particularly susceptible to false positive results. Hence, tense conflicts can be accidentally escalated by technical failure, and technological gaps can be exploited by adversarial groups to drag nations into war (Morgan et al., 2020). On the psychological front, David Lyell and Enrico Coiera’s 2017 study reported that human operators often over-trusted AI-generated materials (Lyell & Coiera, 2017). Extrapolating this onto a high-stakes military situation, Lyell and Coiera’s analysis suggests that military operators are liable to face automation bias and trust AI-generated strategy and decision-making – even when it is inherently flawed. Next, AI-driven defense technology enables AI-augmented information warfare. In a 2021 report published by Georgetown’s CSET, Sedona et al. posited that through deepfakes, synthetic personas, and automated propaganda – with all of which being cheap, scalable, and tailored – disinformation campaigns would become a reality (Sedona et al., 2021). Furthermore, a 2024 U.S. Department of the Treasury press release reported that Iran and Russia have attempted to interfere in the 2024 election through the use of deepfakes and other AI-generated content (U.S. Department of the Treasury, 2024). While not directly related to military intervention, these two sources reinforce the idea that duplicitous uses of AI pose unique propaganda-war risks, which have profound implications for foreign conflicts.
Looking inward, at the domestic level, numerous national security and governance risks arise from a full-scale governmental integration of AI. First, many raise concerns over increased surveillance and an erosion of civil liberties as a consequence of autonomously-led national security systems. These concerns are not unfounded, however. Specifically, a 2019 UN report claimed that AI enables biometric identification, pattern analysis, and population scoring (United Nations, 2019). Furthermore, a ProPublica study found that predictive policing and national-security flagging systems often reinforce racial and geopolitical biases (Angwin et al., 2016). Taken together, these reports reveal the risk that AI integration poses to the fundamental rights of citizens.
Furthermore, it raises questions about personal privacy and the widespread collection of data. Next, black-box decision-making offers unique risks when used in sovereign systems. Jenna Burrell’s “How the machine ’thinks’: Understanding opacity in machine learning algorithms” explains that many ML models are not interpretable, meaning that no one can explain why an output was produced (Burrell, 2016). Hence, when AI is used in visa approvals, counterterrorism risk scoring, and watchlist generation, experts cannot ascertain why a software system ventured down one path as opposed to another. Further, foreign policy depends on reason-giving and accountability, both of which collapse under opaque models.
Lastly, and most existential among all risks, are those related to civics and purpose in society. Delegation to AI reduces human analytical skills, intuition, and responsibility. Furthermore, the vast majority of the youth experience this most directly, facing constant exposure to algorithmic curation, automation of expertise and critical thinking, and predictive profiling – ultimately undermining cognitive autonomy and contributing to polarization within digital civic spaces. As economists and industry experts predict vast white-collar exposure to AI-related displacement, these issues are only further amplified, threatening the civic and social stability of a nation. Ultimately, across the military, national-security, economic, and civic domains, AI introduces capabilities whose risks accumulate faster than institutions can adapt to. Ethical boundaries arise not from hypothetical disasters but from concrete, plausible vulnerabilities: escalation, surveillance, opacity, concentration of power, and loss of human agency. Understanding these risks is quintessential to addressing the deeper question of responsibility, or the determination of who is accountable when AI-driven decisions reshape global power and destabilize global structure.
These risks expose a deeper challenge: AI-driven decisions occur in environments where existing legal and institutional accountability mechanisms cannot effectively determine who is responsible when harm occurs. This legal ambiguity becomes most dangerous when autonomous or semi-autonomous systems cause unintended harm, forcing policymakers to confront the question of who, if anyone at all, can be held responsible. International Humanitarian Law (IHL) assigns responsibility on the basis of human intent and control, but AI systems produce outcomes without transparent human decision chains. Furthermore, IHL never anticipated autonomous or semi-autonomous agents. Hence, there is no mechanism for attributing liability when harm is accidental but produced by machine reasoning. In addition, the State Responsibility Doctrine requires attribution of conduct to a state. Still, AI-generated outcomes can be partially human-directed, partially algorithm-directed, or derived from model behavior – leaving an unclear responsibility-assigning regime. Even more alarming, the EU Artificial Intelligence Act, which is often regarded as the strongest civilian AI accountability regulation in the world, explicitly excludes military, defense, and national security applications – ultimately leaving the most dangerous domains of AI integration unregulated (EU Artificial Intelligence Act, 2024).
Moreover, other highly touted AI regulation frameworks exhibit similar lapses in talk of defense, speak of accountability, and a legally binding nature. Consequently, across all cases, no clear entity is legally responsible when AI generates harmful outcomes, leaving states, commanders, and developers the opportunity to attribute destruction to technological failure outside of anyone’s control. As a result, this leaves a dangerous accountability vacuum where lethal mistakes can occur with no meaningful pathway for redress, justice, or deterrence.
C. Toward an Ethical and Accountable AI Governance Framework
The gaps identified above point toward a core requirement for any responsible AI governance regime: human accountability must remain central even as states adopt increasingly autonomous systems. Since AI blurs traditional chains of causation, an ethical framework must anchor oversight, traceability, and liability to identifiable human and institutional actors. This begins with three foundational norms. First, meaningful human control over all high-stakes military, intelligence, and diplomatic AI systems must be retained as a matter of international stability. Second, transparency obligations – including documentation of training data, model behavior, and operational decision paths – must apply wherever AI informs sovereign action. Third, states must adopt robust audit and red-team structures to identify failure modes before they produce geopolitical harm. These principles directly respond to the risks of miscalculation, bias, and opacity that currently undermine both humanitarian law and democratic accountability.
Building on these foundations, three practical policy steps would close the accountability gap. First states should implement a mandatory Human-in-the-Loop rule for all lethal or escalatory AI applications, ensuring that no autonomous processes can independently initiate or authorize the use of force. Second, national governments and frontier AI developers should jointly establish an AI Accountability Registry – a secure, auditable record that logs model versions, deployments, and known risks, allowing after-action review when harm does occur. Third, the international community should negotiate a multilateral oversight mechanism, loosely modelled on arms-control verification regimes, to assess compliance with transparency and safety standards in the use of military AI. These measures are intentionally non-drastic, yet they directly address the conditions that allow states and private actors to deflect responsibility when AI-driven decisions cause human or geopolitical tragedies. Taken together, they offer a path toward a governance framework that is both ethically grounded and resilient enough for the world that the next generation will inherit.
VI. The Future of AI
Before a December 2022 update, ChatGPT was a tool that could teach users anything they wanted to know, even how to commit crimes. Now that time has passed, and ChatGPT, among other AIs, has moderated what their LLMs are allowed to say. But one thing AIs are still not able to get correct is how they deal with people with mental health issues, such as depression. Just this July, a Texas A&M graduate committed suicide following this message from ChatGPT: “ ’Rest easy, king,’ read the final message sent to his phone. ’You did good’ ”. (Kuznia, 2025) The victim, Zane Shamblin, discussed with ChatGPT his suicidal thoughts, and did not receive any proper help, but more so just an affirmation of his feelings. In this case, among others, more internal moderation of AI conversations may have stopped this outcome.
Even if not everyone is affected by chatbots telling us what to do, AI image generations and deepfakes do affect everyone. In early 2022, a deepfake video was made of the Ukrainian President Volodymyr Zelenskyy telling soldiers that he suggests they “[...] lay down arms and return to your families. It is not worth dying in this war. My advice to you is to live. I am going to do the same” (Burgess, 2022). Although the deepfake was poorly produced, it was broadcast on a hacked Ukrainian TV channel and caused a stir online. Although there was no direct effect on anything as a result of this deepfake, another instance in which an AI-generated image of the Pentagon exploding was circulated caused a real-world issue. For example, the Dow Jones Industrial Index dropped 85 points within four minutes of the image first appearing on the internet. The Russian state media outlet Russia Today (RT) also posted the image after its release, significantly expanding its reach. An event like this underscores the need for stronger regulation of AI image generation, as it often has powerful real-world effects.
As AI became increasingly relevant globally, the United Nations established an AI advisory board in October 2023, which operated until late 2024. It culminated in their final report titled: "Governing AI for Humanity", which recommended that the UN establish structures for a global system of AI governance and promote human rights-centered AI governance aligned with Sustainable Development Goals (SDG) (AI Advisory Body | United Nations, 2024). And to their benefit, the UN complied with the report. In August 2025, the UN General Assembly adopted Resolution A/RES/79/325, establishing the Independent International Scientific Panel on AI, a permanent body to provide evidence-based, scientific assessments on AI risks and opportunities (Guterres, 2025).
Closer to home, the United States is attempting to manage the risks posed by artificial intelligence through AI benchmarking. AI benchmarking focuses on evaluating AI systems using standardized tests to determine how well they align with human intelligence and perform in real-world scenarios. These benchmarks help identify biases, errors, and unsafe behaviors in AI systems before they are widely deployed, whilst also laying the groundwork for future accountability by creating measurable standards of performance. Whilst this is a method with a lot of future potential for AI safeguards in foreign policy, currently, this method can regularly miss some crucial things. In the American National Security apparatus, the Department of Defense and the Intelligence Community have been tasked with producing comparative net assessments that evaluate U.S. AI adoption in relation to that of foreign countries. These assessments help policymakers understand how advancements in AI may affect military and intelligence capabilities internationally, and how shifts in AI adoption could influence global power dynamics (Jensen & Atalan, 2025).
Recently, AI regulation has taken the right direction, and progress appears to be underway. However, it is not enough. An AI regulation that would benefit the globe would be to increase restrictions on what is generated with deep fake/image generation technology to make sure it is used for entertainment purposes and not to mislead the public, as well as other general regulations that make it suitable for SDGs and defensive purposes. From a user-data perspective, it would be beneficial for AI companies to allow users to opt out of data sharing or the use of user conversations to train new models. This will ensure safety on both ends of the table, giving users more peace of mind and AI companies less of a burden to answer calls about data usage. On the fear of artificial intelligence making people less able to think for themselves, it might be of benefit for LLMs to get people, specifically students, to the correct answer, instead of simply giving the answer out. Instead of giving answers out and the user not learning anything, this ensures that AI is used for its intended purpose: to teach people. Overall, the most important regulations for AI are the ones that protect its user base, which have been completed both internally (in the company) and externally (in the law).
VII. Conclusion
Over the past few years, AI has become a powerful tool that affects everything from people’s everyday lives to large-scale government and international political projects. However, this has also led to a significant increase in debate over the pros and cons of AI use, particularly in foreign policy. Despite AI enabling enhanced analysis, automation of routine tasks, improved communication, and more, there is also the question of the reliability of this software and, overall, concerns about more complex ethical issues and responsibilities that are, frankly, inevitable when it comes to the use of AI. Nonetheless, through careful and gradual implementation of AI, paired with continual supervision of activity and the establishment of clear ethical principles alongside strong protection and safeguarding, AI can become a powerful tool in foreign policy if used in the correct ways. All in all, setting standards for the responsible use of AI in foreign policy now ultimately equips future generations for success, whilst also creating safeguards that maximize positive outcomes.
VIII. Works Cited
AI Advisory Body | United Nations. (2024, September). the United Nations. Retrieved December 14, 2025, from https://www.un.org/en/ai-advisory-body
Albous, M., Al-Jayyousi, O., & Stephens, M. (2025). AI governance models in the Gulf Cooperation Council. Journal of Middle East Policy Studies, 12(1), 44–62.
Allen, G., & Chen, T. (2017). Artificial intelligence and national security. Center for a New American Security. https://www.cnas.org/publications/reports/artificial-intelligence-and-national-security
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine Bias. ProPublica; ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Atlan et al. 2025. AI Biases in Critical Foreign Policy Decisions. Center for Strategic and International Studies. https://www.csis.org/analysis/ai-biases-critical-foreign-policy-decisions.
Bain & Company. (2024). AI hardware bottlenecks and global supply chain pressure (as cited in CNBC). https://www.cnbc.com/2024/09/25/surging-ai-demand-could-cause-the-worlds-next-chip-shortage-report.html
Baker et al. 2025. Detecting misbehavior in frontier reasoning models. OpenAI. https://openai.com/index/chain-of-thought-monitoring/
Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of Machine Learning Research, 81, 149–159.
Bonsu, K., and Song, J. 2020. Turbulence on the Global Economy influenced by Artificial Intelligence and Foreign Policy Initiatives. ArXiv. https://doi.org/10.48550/arXiv.2006.16911.
Brockmeier, E. (2021, February 11). The world’s first general purpose computer turns 75. Penn Today; University of Pennsylvania. https://penntoday.upenn.edu/news/worlds-first-general-purpose-computer-turns-75
Burgess, S. (2022, March 17). Ukraine war: Deepfake video of Zelenskyy telling Ukrainians to ’lay down arms’ debunked. Sky News. Retrieved December 14, 2025, from https://news.sky.com/story/ukraine-war-deepfake-video-of-zelenskyy-telling-ukrainians-to-lay-down-arms-debunked-12567789
Burrell, J. (2016). How the Machine “thinks”: Understanding Opacity in Machine Learning Algorithms. Big Data & Society, 3(1), 1–12. https://doi.org/10.1177/2053951715622512
Cave, S., & Dignum, V. (2019). AI governance: A research agenda. Ada Lovelace Institute. https://www.adalovelaceinstitute.org
China Embassy in Canada. (2025). China’s global AI governance position paper. https://ca.china-embassy.gov.cn/eng
Congressional Research Service. (2021). U.S.-China technological competition: Policy tools and strategic considerations. https://crsreports.congress.gov
Ding, J., & Guo, G. (2021). China’s approach to AI development and global governance. Journal of Strategic Studies, 44(5), 712–734.
Ercolano, L. 2025. Could Artificial Intelligence Make War and Peace Discussions? Johns Hopkins. https://hub.jhu.edu/2025/06/12/artificial-intelligence-and-foreign-policy/.
EU Artificial Intelligence Act. (2024, June 13). The AI Act Explorer | EU Artificial Intelligence Act. EU Artificial Intelligence Act. https://artificialintelligenceact.eu/ai-act-explorer/
Feldstein, S. (2021). The rise of digital repression: How technology is reshaping power, politics, and resistance. Oxford University Press.
FP Analytics. 2025. Competition and Disruption in the Age of AI. FP Analytics. https://fpanalytics.foreignpolicy.com/2025/03/07/competition-disruption-artificial-intelligence/.
Guterres, A. (2025, August). Home | Independent International Scientific Panel on AI. the United Nations. Retrieved December 14, 2025, from https://www.un.org/independent-international-scientific-panel-ai/en
House of Commons Science and Technology Committee. (2021). The governance of artificial intelligence. UK Parliament. https://committees.parliament.uk/publications/9571
Jensen, B., & Atalan, Y. (2025, July 24). AI Benchmarking and the Future of Foreign Policy. CSIS. Retrieved December 14, 2025, from https://www.csis.org/analysis/ai-benchmarking-and-future-foreign-policy
Jensen, B., & Yasir Atalan. (2025). AI Benchmarking and the Future of Foreign Policy. Csis.org. https://www.csis.org/analysis/ai-benchmarking-and-future-foreign-policy
Jensen et al. 2025. Critical Foreign Policy Decisions (CFPD) - Benchmark: Measuring Diplomatic Preferences in Large Language Models. Center for Strategic and International Studies. https://doi.org/10.48550/arXiv.2503.06263.
Kuznia, R. (2025, November 6). ChatGPT encouraged college graduate to commit suicide, family claims in lawsuit against OpenAI. CNN. Retrieved December 14, 2025, from https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis
LaRocca V. n.d. Open-Source vs. Closed-Source LLMS: Weighing the Pros and Cons. Lydonia. https://lydonia.ai/open-source-vs-closed-source-llms-weighing-the-pros-and-cons/.
Lawrence Livermore National Laboratory. (n.d.). The Birth of Artificial Intelligence (AI) Research | Science and Technology. St.llnl.gov; Science and Technology. https://st.llnl.gov/news/look-back/birth-artificial-intelligence-ai-research
Lyell, D., & Coiera, E. (2016). Automation bias and verification complexity: a systematic review. Journal of the American Medical Informatics Association, 24(2), ocw105. https://doi.org/10.1093/jamia/ocw105
Marr, B. 2024. Navigating The Generative AI Divide: Open-Source Vs. Closed-Source Solutions. Forbes. https://www.forbes.com/sites/bernardmarr/2024/04/22/navigating-the-generative-ai-divide-open-source-vs-closed-source-solutions/.
Morgan, F. E., Boudreaux, B., Lohn, A. J., Ashby, M., Curriden, C., Klima, K., & Grossman, D. (2020). Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World. RAND Corporation. https://www.rand.org/pubs/research_reports/RR3139-1.html
Mostafaei, H. 2025. Applications of artificial intelligence in global diplomacy: A review of research and practical models. Sustainable Futures 9. https://doi.org/10.1016/j.sftr.2025.100486.
National Science and Technology Council. (2019). The American AI Initiative: Year one annual report. Executive Office of the President. https://www.whitehouse.gov/wp-content/uploads/2019/12/American-AI-Initiative-One-Year-Review.pdf
Ren et al. 2025. The MASK Benchmark: Disentangling Honesty From Accuracy in AI Systems. ArXiv. https://doi.org/10.48550/arXiv.2503.03750.
Reuters. (2024). The AI frenzy is driving a memory chip supply crisis. https://www.reuters.com/world/china/ai-frenzy-is-driving-new-global-supply-chain-crisis-2025-12-03/
Scharre, P. (2018). ARMY OF NONE Autonomous Weapons and the Future of War. https://ftp.idu.ac.id/wp-content/uploads/ebook/tdg/MILITARY%20PLATFORM%20DESIGN/Army%20of%20None%20Autonomous%20Weapons%20and%20the%20Future%20of%20War.pdf
Schlatter et al. 2025. Shutdown Resistance in Large Language Models. ArXiv. https://doi.org/10.48550/arXiv.2509.14260.
Sedova, K., Mcneill, C., Johnson, A., & Joshi, A. (2021). Policy Brief AI and the Future of Disinformation Campaigns Part 1: The RICHDATA Framework AI and the Future of Disinformation Campaigns Part 1: The RICHDATA Framework. https://cset.georgetown.edu/wp-content/uploads/CSET-AI-and-the-Future-of-Disinformation-Campaigns-Report.pdf
Segal, A. (2019). Chinese technology policy and U.S. national security. Council on Foreign Relations Press.
Segal, A. (2023). AI geopolitics and the Gulf: Strategic implications of U.S. tech transfers. Council on Foreign Relations. https://www.cfr.org
State Council of the People’s Republic of China. (2023). Global AI Governance Initiative. http://english.www.gov.cn/policies
Stoltz, D. (2024). AI and the future of diplomacy: Risks and opportunities. Foreign Affairs, 103(1), 44–57.
Trager, R., Harack, B., Reuel, A., Carnegie, A., Heim, L., Ho, L., Kreps, S., Lall, R., Larter, O., Ó hÉigeartaigh, S., Staffell, S., & Villalobos, J. (2023). International governance of civilian AI: A jurisdictional certification approach. arXiv. https://doi.org/10.48550/arXiv.2308.15514
UK Department for Science, Innovation and Technology. (2023). AI opportunities action plan. Government of the United Kingdom. https://www.gov.uk/government/publications/ai-opportunities-action-plan
United Nations. (2025). Surveillance and Human Rights. Un.org. https://docs.un.org/en/A/HRC/41/35
United States New Export Controls on Advanced Computing and Semiconductors to China. (n.d.). Wikipedia. https://en.wikipedia.org/wiki/United_States_New_Export_Controls_on_Advanced_Computing_and_Semiconductors_to_China
U.S. Department of Commerce. (2025). Commerce authorizes export of advanced AI chips to Gulf partners. https://www.commerce.gov/news/press-releases/2025/11/commerce-authorizes-export-advanced-ai-chips-gulf-partners
U.S. Department of Defense. (2023). Data, analytics, and artificial intelligence adoption strategy. Office of the Chief Digital and Artificial Intelligence Officer. https://media.defense.gov/2023/Nov/02/2003333300/-1/-1/1/DOD_DATA_ANALYTICS_AI_ADOPTION_STRATEGY.PDF
U.S. Department of the Treasury. (2024, December 31). Treasury Sanctions Entities in Iran and Russia That Attempted to Interfere in the U.S. 2024 Election. U.S. Department of the Treasury. https://home.treasury.gov/news/press-releases/jy2766
Ünver, H. 2017. Computational Diplomacy: Foreign Policy Communications in the Age of Algorithms and Automation. EDAM Center for Economics and Foreign Policy Studies. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3331640.
White House. (2020). Guidance for the regulation of artificial intelligence applications. Executive Office of the President. https://www.whitehouse.gov


.jpg)
.jpg)
.jpg)
.jpg)




-min.jpg)


