
This article was automatically translated from the original Turkish version.


OpenAI - United States Department of Defense Agreement (2026)

OpenAI – U.S. Department of Defense Agreement (2026) is a contract between OpenAI and the U.S. Department of Defense that provides for the deployment of OpenAI’s advanced artificial intelligence models within the Department’s classified network environment. The agreement was publicly announced by OpenAI Chief Executive Officer Sam Altman on February 28, 2026.

OpenAI logo - (Anadolu Ajansı)

Path to the Agreement

Pentagon–Anthropic Dispute

Prior to the agreement with OpenAI, in February 2026, tensions arose between the U.S. Department of Defense and Anthropic over conditions for military use of artificial intelligence models.


It was reported that Anthropic refused to permit its developed AI models to be used for mass domestic surveillance or in fully autonomous weapon systems that identify and engage targets without human intervention. In response, Defense Secretary Pete Hegseth argued that the Department of Defense should not be restricted by a supplier’s internal usage rules and asserted that technology must be freely usable within a “lawful use” framework.


On February 24, U.S. Defense Secretary Pete Hegseth issued an ultimatum to Anthropic, stating that if the Pentagon was not permitted to use the company’s AI as it required by 5:01 p.m. on February 27, the company risked losing its government contracts. It was indicated that the Department of Defense would terminate its partnership with Anthropic and classify the company as a “supply chain risk”.


In a statement on February 27, U.S. Department of Defense spokesperson Sean Parnell clarified that the Pentagon had no intention of conducting mass domestic surveillance or deploying autonomous weapon systems. Parnell emphasized that the Department’s demand was simply that Anthropic’s models be available for all lawful purposes.


On February 27, U.S. President Donald Trump issued a directive to federal agencies to halt use of Anthropic’s technology. However, he granted the Pentagon a six-month transition period to phase out its existing use of the technology.

Anthropic CEO’s Statement

Anthropic CEO Dario Amodei stated that he and his team could not ethically comply with the Pentagon’s demand for “unrestricted use” of its technologies. Amodei emphasized that Anthropic was the first leading AI company to deploy its models within U.S. government networks, make them available in national laboratories, and provide custom models for national security clients, adding: “I sincerely believe that using artificial intelligence to defend the United States and other democracies while defeating our autocratic adversaries is of existential importance.”


Amodei noted that Anthropic’s AI models are widely used in U.S. national security institutions for critical mission applications such as intelligence analysis, modeling and simulation, operational planning, and cyber operations, and added: “We believe there are circumstances in which artificial intelligence could undermine rather than defend democratic values. Some use cases lie beyond the current limits of technology’s safe and reliable capabilities. Two such applications — mass domestic surveillance and fully autonomous weapons — were never included in our agreements with the Pentagon, and we believe they must never be included.”


Amodei emphasized that the Pentagon intends to sign contracts with AI companies that accept all uses and remove safeguards related to mass domestic surveillance and fully autonomous weapons, stating: “Threats do not change our stance; we cannot morally capitulate to this demand.”

Announcement of the Agreement

Sam Altman’s Statements

OpenAI Chief Executive Officer Sam Altman announced the agreement with the Pentagon in a post on the X platform on February 28, 2026. In his statement, Altman confirmed that OpenAI had signed an agreement to integrate its AI models into the Department of Defense’s network.


Altman stated that the agreement followed the Department of Defense’s demonstration of “deep respect for security”. He emphasized that OpenAI technology would not be used for “mass domestic surveillance” or “autonomous weapon systems”, and affirmed that human accountability would be preserved in the use of force. Altman said: “We are committed to serving all of humanity to the best of our ability” and added: “The world is complex, fragmented, and sometimes dangerous.” According to Axios, Altman stated: “Two of our most important security principles are the ban on mass domestic surveillance and human accountability in the use of force, including in autonomous weapon systems.” He confirmed that the Department of Defense had accepted these principles.

OpenAI CEO Sam Altman - (Anadolu Ajansı)

Deployment in Classified Networks

In its official statement, OpenAI indicated that the agreement falls under the category of “advanced AI systems in classified environments”. Reuters reported that OpenAI’s technology would be deployed within the U.S. Department of Defense’s classified network.


The agreement includes integration of AI systems into the Department of Defense’s network infrastructure. OpenAI’s statement clarified that deployment would be cloud-only and that models would not be distributed to “edge devices.”

Core Principles of the Agreement

OpenAI announced that it established three fundamental red lines in its agreement with the U.S. Department of Defense. It was stated that these three red lines were codified as binding provisions in the contract.

Prohibition on Use of OpenAI Technology for Mass Domestic Surveillance

OpenAI declared that its technology would not be used for mass domestic surveillance. The contract stipulates that processing of classified information for intelligence activities must comply with the Fourth Amendment of the U.S. Constitution, the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and relevant Department of Defense directives.


It was also stated that the system would not be used for unlimited monitoring of private data belonging to U.S. persons. Sam Altman noted that one of the most important security principles is the prohibition of mass domestic surveillance, and confirmed that the Department of Defense had accepted this principle.

Prohibition on Use of OpenAI Technology to Direct Autonomous Weapon Systems

OpenAI declared that its technology would not be used to direct autonomous weapon systems. The contract specifies that under any legal, regulatory, or policy requirement for human control, the AI system will not independently direct autonomous weapons.


The text also references Department of Defense Directive 3000.09 dated January 25, 2023, which requires that AI use in autonomous and semi-autonomous systems undergo rigorous validation, verification, and testing prior to deployment.

Prohibition on Use of OpenAI Technology in High-Risk Automated Decision Systems

OpenAI announced as its third red line the prohibition of high-risk automated decision systems. The statement cited social credit-like systems as an example.


The contract stipulates that high-risk decisions requiring human approval must not be made independently by AI systems. It was stated that the agreement prohibits the use of OpenAI technology in mass domestic surveillance, autonomous weapon systems, and high-risk automated decision mechanisms.

Layered Security Architecture

OpenAI announced that to safeguard its red lines, it adopted a multi-layered security approach, which it stated provides more comprehensive protections than previous classified AI deployments.

Cloud-Only Deployment

The agreement specifies that AI systems will be deployed exclusively in cloud environments. It was stated that models will not be distributed to “edge devices.” This architecture is designed to technically prevent direct integration of models with weapon systems, sensors, or operational hardware.


OpenAI clarified that edge deployment is technically required to operate fully autonomous weapon systems, and that the cloud-only deployment model outlined in the agreement would not permit such use. The company stated that this deployment architecture enables independent verification of whether red lines have been violated.

Safety Stack

OpenAI stated that it retains full discretion and control over its safety stack. The models will not be provided without their safety protections intact. The company affirmed that the safety stack will remain active and that safeguards will be regularly updated and enforced.


It was also noted that while some other AI laboratories have weakened or removed safety protections, OpenAI continues to maintain its technical safeguards.

Involvement of Security-Approved Personnel

Under the agreement, OpenAI engineers with security clearance will provide support to the government. It was also stated that security and alignment researchers will participate in the process and contribute to system development. The company emphasized that authorized OpenAI personnel will be “present throughout the process.”

Contractual Safeguards

OpenAI stated that the contract explicitly includes its red lines and references existing U.S. laws. It was specified that processing of classified information for intelligence activities must comply with current constitutional and legal frameworks.


OpenAI declared that it retains the right to terminate the contract if its terms are violated. It was also stated that the contract explicitly references current laws and policies regarding surveillance and autonomous weapons, ensuring that even if these laws change in the future, system usage must remain consistent with the standards established in the agreement.

Legal Framework

Constitutional and Legal Basis

The contract specifies that processing of classified information for intelligence activities must comply with the Fourth Amendment of the U.S. Constitution. It also references the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and relevant Department of Defense directives.


The text states that processing of sensitive data in foreign intelligence activities must be based on a defined foreign intelligence purpose. The contract also explicitly prohibits use of the system for unlimited monitoring of private data belonging to U.S. persons.


It was stated that the AI system may be used in domestic law enforcement only under circumstances permitted by the Posse Comitatus Act and other applicable statutes.

Binding Nature of Contractual Terms

OpenAI stated that the contract explicitly references current laws and policies regarding surveillance and autonomous weapons. It was specified that even if these laws or policies change in the future, OpenAI system usage must remain consistent with the standards established in the agreement.


The company confirmed that it retains the right to terminate the contract in the event of a breach.

OpenAI’s Official Statements

Following the agreement, OpenAI published an official statement and a frequently asked questions document.

Rationale for the Agreement

On its official website, OpenAI stated that the U.S. military requires powerful AI models to support its missions. The company noted that it initially declined to enter into classified deployment agreements, believing its security measures and systems were not yet ready, but later established a framework ensuring the protection of its red lines.


OpenAI also stated that it entered into this agreement to reduce tensions between the Department of Defense and AI laboratories. It declared that it requested the same conditions be offered to other AI laboratories and specifically urged the government to resolve its dispute with Anthropic.

Differences from Anthropic

OpenAI stated that its agreement contains stronger and more enforceable safeguards than previous contracts. The company emphasized that the binding nature of its red lines is reinforced by cloud-only deployment, preservation of the safety stack, and the presence of authorized OpenAI personnel throughout the process.


OpenAI said it did not know why Anthropic failed to reach a similar agreement and expressed hope that other laboratories would also consider adopting a similar model.

Public Reaction

Subscription Cancellations and Shifts to Claude

After the announcement of OpenAI’s agreement with the Pentagon, numerous ChatGPT users canceled their subscriptions. It was reported that users who rejected the agreement shared screenshots on social media of their migration to Anthropic’s AI model, Claude.


It was noted that American singer Katy Perry publicly announced her subscription to Claude, and her post received millions of views. Dutch historian and author Rutger Bregman, residing in the United States, also supported Claude in a post on X, stating: “Anthropic has shown heroic resolve. Let’s all switch to Claude today. Not only because it is the best AI model (which the Pentagon cannot use for mass surveillance or killer drones), but also because they are simply the good guys.”

“QuitGPT” Boycott Campaign

In response to the agreement, opponents launched a boycott campaign titled “QuitGPT” under the message “ChatGPT accepted Trump’s killer robot deal. It’s time to leave.” By March 1, over 1.5 million people had signed the campaign on quitgpt.org.


In addition to U.S. users, numerous users from other countries expressed their opposition to OpenAI’s agreement with the Pentagon through social media posts.

Author Information

Author: Edanur Karakoç, March 2, 2026 at 12:01 PM


