OpenAI Revises Pentagon AI Deal After Public Criticism
CEO Sam Altman admitted in an internal message that the agreement — finalized shortly after the Pentagon dropped its previous AI contractor, Anthropic — came across as poorly handled.
"We shouldn't have rushed to get this out," Altman wrote in a message later reposted on the social media platform X, adding that the issues surrounding military AI use are "super complex" and require clearer communication.
Altman described the rollout as "opportunistic and sloppy." Under the renegotiated terms, OpenAI will explicitly prohibit its technology from being deployed for domestic mass surveillance or accessed by defense intelligence bodies such as the National Security Agency (NSA). The San Francisco-based company also reaffirmed that a core "red line" remains the use of its models to direct autonomous weapons systems without human oversight.
The original deal had raised immediate red flags, in part because it followed Anthropic's removal as the Pentagon's AI provider, a dismissal that came after Anthropic declared that deploying AI for mass domestic surveillance would be incompatible with democratic values. U.S. President Donald Trump responded by denouncing Anthropic as "left-wing nutjobs" and ordering federal agencies to discontinue use of its technology. The State, Treasury, and Health and Human Services departments subsequently moved to drop Anthropic's products after the Department of Defense designated the company a supply chain risk. Following a directive from Defense Secretary Pete Hegseth, Trump has since ordered all U.S. government agencies to phase out Anthropic systems entirely.
The backlash against OpenAI's deal was swift and broad. Calls to "delete ChatGPT" spread across X and Reddit, while nearly 900 employees from OpenAI and Google jointly signed an open letter urging their employers to refuse Defense Department requests to deploy AI for domestic mass surveillance or autonomous lethal operations without human oversight. Of the signatories, 796 were Google employees and 98 were OpenAI staff. The letter warned that the U.S. government was pressuring companies individually, calling on corporate leaders to "stand together" against demands that risk eroding democratic norms.
The controversy also drew comparisons to the 2013 Snowden disclosures, which exposed sweeping NSA surveillance programs. Critics, including Miles Brundage, OpenAI's former head of policy research, asked publicly on X how OpenAI had secured an agreement around ethical boundaries that Anthropic had found untenable, suggesting some within the company may have yielded to government pressure, while acknowledging the organization's internal complexity and diverging viewpoints.
Despite OpenAI's insistence that the revised contract carries stronger ethical guardrails than previous classified AI arrangements, scrutiny of the company's judgment remains intense. OpenAI, which counts more than 900 million ChatGPT users globally, said it is committed to ensuring its technology is deployed in accordance with democratic principles while balancing legitimate national security needs.
Legal Disclaimer:
EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.