Pentagon Gives A.I. Company an Ultimatum
Anthropic insists on limits on how its technology is used and could be labeled a supply chain risk if it fails to accept the military’s demands.
By Julian E. Barnes and Sheera Frenkel
Julian E. Barnes reported from Washington, and Sheera Frenkel from San Francisco.
https://www.nytimes.com/2026/02/24/us/politics/pentagon-anthropic.html
Feb. 24, 2026
The Pentagon delivered an ultimatum to Anthropic, the only artificial intelligence company currently operating on classified military systems, ordering the firm to bend to its demands by Friday.
If the firm fails to agree by 5:01 p.m. on Friday, Defense Secretary Pete Hegseth said, the Trump administration would invoke the Defense Production Act, compelling the use of its model by the military and labeling the company a supply chain risk, according to a senior Pentagon official. That step would put Anthropic’s government contracts at risk.
The two threats are fundamentally at odds: One would prevent the government from using the company’s products, while the other would force the company to let the government use them.
Despite the contradiction, the threats reflect both the level of anger in the top ranks of the Pentagon toward Anthropic for resisting its demands and how important the company’s model has become to the military.
“The Pentagon knows they are issuing an extreme threat. They are using every button or lever they have,” said Jessica Tillipman, an associate dean at the George Washington University Law School. “The bigger issue here is that it waters down these designations. They are transforming what is designed to be national security tools into a point of leverage for business.”
Mr. Hegseth summoned Dario Amodei, the Anthropic chief executive, to the Pentagon on Tuesday for a morning meeting. The tone of the discussion was civil, but when Anthropic did not agree to Mr. Hegseth’s demands, he leveled the threats against it, according to people briefed on the meeting.
The New York Times spoke to people on both sides of the debate over Anthropic’s work with the military, but they spoke on the condition that their names not be used to discuss the sensitive negotiations.
Anthropic has argued that it was asking for reasonable assurances that its model would not be used for surveillance of Americans or in autonomous weapons, such as drone operations, that did not involve human oversight.
Anthropic’s supporters have contended that the company is being punished for being first on the classified system and for creating a special model, Claude Gov, that does not have the same guardrails and restrictions as its models available to the public.
Pentagon officials have said that using software and weapons lawfully is their responsibility, one they take seriously. But the officials say they cannot allow every contractor to dictate how the equipment it sells to the Pentagon will be used, and that lawful use must be the only constraint.
While the Defense Production Act gives the Pentagon wide-ranging powers, it is usually invoked in manufacturing contexts. Using it against a software company would be unusual, and it would force Anthropic to make its product available for free.
An Anthropic spokesman said that the company had continued good-faith conversations in the meeting at the Pentagon. The spokesman said the company wanted to support the government but needed to ensure that its models were used in line with what they could “reliably and responsibly do.”
But the senior Pentagon official rejected those demands and said the debate had nothing to do with those issues. The Pentagon wants all artificial intelligence contracts to stipulate that the military can use the models for any lawful purpose.
The official confirmed that the Pentagon has an agreement with Elon Musk’s company xAI to use its artificial intelligence model, Grok, on the classified system. But it will take time to integrate Grok onto classified cloud servers and into software from Palantir, a data analytics company that the military uses. More important, Anthropic’s Claude is considered a superior product to Grok, regularly yielding more accurate information.
The Pentagon is also close to an agreement with Google to bring its Gemini model onto the classified system, but the senior official said the deal was not complete.
A person briefed on the meeting said Anthropic would continue to demand assurances that its models are not used for autonomous weapons programs or mass surveillance.
Pentagon officials took issue with Anthropic after Palantir reported a conversation that one of its employees had had with a counterpart at the artificial intelligence company regarding the U.S. military operation last month to capture President Nicolás Maduro of Venezuela.
In the meeting on Tuesday, Mr. Amodei said there had been a misunderstanding and that his company had not reached out to Palantir or the Pentagon about the Maduro operation, according to a person briefed on the meeting.
Mr. Amodei insisted his company had never objected to or interfered with legitimate military operations.
A correction was made on Feb. 24, 2026: An earlier version of this article incorrectly described Anthropic’s artificial intelligence model Claude Gov. Its guardrails and restrictions are different from those of the company’s models available to the public, not from classified models.
Julian E. Barnes covers the U.S. intelligence agencies and international security matters for The Times. He has written about security issues for more than two decades.
Sheera Frenkel is a reporter based in the San Francisco Bay Area, covering the ways technology affects everyday lives with a focus on social media companies, including Facebook, Instagram, Twitter, TikTok, YouTube, Telegram and WhatsApp.