OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash
The deal came hours after President Trump had ordered federal agencies to stop using artificial intelligence technology made by Anthropic, an OpenAI rival.
By Cade Metz
Reporting from San Francisco
https://www.nytimes.com/2026/02/27/technology/openai-agreement-pentagon-ai.html
Published Feb. 27, 2026
Updated Feb. 28, 2026, 12:27 a.m. ET
OpenAI, the maker of ChatGPT, said on Friday that it had reached an agreement with the Pentagon to provide its artificial intelligence technologies for classified systems, just hours after President Trump ordered federal agencies to stop using A.I. technology made by rival Anthropic.
Under the deal, OpenAI agreed to let the Pentagon use its A.I. systems for any lawful purpose. The San Francisco company also said it had found a way to ensure that its technologies would not be applied for domestic surveillance in the United States or with autonomous weapons by installing specific technical guardrails on its systems.
“In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome,” Sam Altman, OpenAI’s chief executive, said in a social media post, using the initials for the Department of War, the administration’s preferred name for the Department of Defense.
The Department of Defense did not immediately respond to a request for comment.
The deal appeared to be a business and political coup for OpenAI, taking advantage of a rival’s troubles. Anthropic, which competes with OpenAI, had battled the Pentagon in recent weeks over how its A.I. could be used. In negotiations over a $200 million contract, the Pentagon had demanded that it be able to use Anthropic’s A.I. system for all lawful purposes, or it would cut the company off from government business.
But Anthropic said it needed terms that would ensure that its A.I. technology would not be used for domestic surveillance of Americans or for autonomous lethal weapons. The Pentagon, in turn, said a private contractor could not decide how its tools would be used for national security. Their disagreement erupted into public view this month and escalated as both dug in their heels.
Anthropic and the Pentagon failed to agree on terms by a 5:01 p.m. deadline on Friday. Defense Secretary Pete Hegseth then designated Anthropic a “supply-chain risk to national security,” a label that cuts the A.I. company off from business with the U.S. government. Mr. Trump also weighed in, calling the start-up a “radical Left AI company.”
Amid the maelstrom, OpenAI stepped in. This week, Mr. Altman publicly backed Anthropic’s position that A.I. should not be used for domestic surveillance or autonomous weapons. On CNBC on Friday, he said he mostly trusted Anthropic and that “they really do care about safety.”
At the same time, Mr. Altman engaged in talks with the Pentagon, starting on Wednesday, over a deal for its technology, said two people familiar with the discussions who spoke on the condition of anonymity.
Mr. Altman negotiated with the Department of Defense differently from Anthropic, agreeing to the use of OpenAI’s technology for all lawful purposes. Along the way, he also negotiated the right to put safeguards into OpenAI’s technologies that would prevent its systems from being used in ways the company did not want them to be.
OpenAI “will build technical safeguards to ensure our models behave as they should, which the DoW also wanted,” Mr. Altman said.
These moves allowed Mr. Altman to uphold safety principles around A.I. while still landing the Pentagon contract. He added that the Pentagon had agreed to have some OpenAI employees work alongside government personnel on classified projects “to help with our models and to ensure their safety.”
Anthropic did not respond to a request for comment on OpenAI’s deal.
(The New York Times sued OpenAI and Microsoft in 2023, accusing them of copyright infringement of news content related to A.I. systems. The two companies have denied those claims.)
Mr. Altman and Dario Amodei, the chief executive of Anthropic, have long been bitter rivals. Dr. Amodei and several other founders of Anthropic previously worked at OpenAI. But they left in 2021 after disagreements with Mr. Altman and others over how A.I. should be funded, built and released.
Last week, during an A.I. summit in India, Mr. Altman and Dr. Amodei were caught on video refusing to join hands during a photo session with Prime Minister Narendra Modi.
It may take time for OpenAI’s technology to be used by the Pentagon. The company is not yet approved for classified work in part because its technologies are not available from Amazon’s cloud computing services, which is how the government often accesses classified systems.
That could change after OpenAI signed a partnership with Amazon on Friday. Amazon, a new investor in OpenAI, is pouring $50 billion into the A.I. start-up as part of $110 billion in funding that OpenAI raised to pay for its continued growth and to fuel A.I. development.
The Pentagon may also use A.I. services from other Anthropic rivals. Google and Elon Musk’s xAI have contracts with the Defense Department, and the Pentagon said earlier this week that it had reached an agreement to use xAI’s technology for classified operations.
Google has had similar discussions, but it is unclear where those talks stand. In 2018, during the first Trump administration, Google backed away from a military contract after protests from employees. It has since agreed to work with the Pentagon again.
This week, as the Pentagon threatened to sever ties with Anthropic, dozens of OpenAI employees signed an open letter urging other A.I. companies to support the stance that the technologies not be used for domestic surveillance or with autonomous weapons.
“They’re trying to divide each company with fear that the other will give in,” the letter read, referring to the Pentagon. “That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.”
Cade Metz is a Times reporter who writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology.

