Pentagon Standoff Is a Decisive Moment for How
A.I. Will Be Used in War
The Pentagon’s contract dispute with Anthropic is
part of a wider clash about the use of artificial intelligence for national
security and who decides on any safeguards.
By Adam Satariano, Julian E. Barnes and Sheera Frenkel
Adam Satariano reported from London, Julian
Barnes from Washington and Sheera Frenkel from San Francisco.
https://www.nytimes.com/2026/02/27/technology/defense-department-anthropic-ai-safety.html
Feb. 27, 2026
Updated 12:44 p.m. ET
The fight between the Department of Defense and
the artificial intelligence company Anthropic has ostensibly been about a $200
million contract over the use of A.I. in classified systems.
But as the two sides careen toward a 5:01 p.m.
Friday deadline over terms of the contract, far more is at stake.
Amid the legalese and heated rhetoric are
questions being asked globally about how to use A.I., what the technology’s
risks are and who gets to decide on setting any limits — the makers of A.I. or
national governments.
Underlying it all is fear and awe over the
dizzying pace of A.I. progress and the technology’s uncertain impact on
society.
“Something like this dispute was inevitable,”
said Michael C. Horowitz, who worked on A.I. weapons issues in the Defense
Department during the Biden administration. “Because the technology is
advancing so quickly, we’re having these debates now. A.I. has moved from being
in a niche conversation to something really at the center of global power.”
The clash centers on the Pentagon’s use of a
classified version of Anthropic’s A.I. model, Claude. The company wants to
embed safeguards in its technology to prevent its use for mass domestic
surveillance of Americans or in fully autonomous weapons with no humans in the
loop.
The Pentagon has said that it has no plans to use
the technology for those purposes, but that a private contractor cannot decide
how its tools will be lawfully used for national security, just as a weapons
manufacturer does not determine where its missiles are dropped.
At the Pentagon, the dispute comes at an
important moment. Defense Secretary Pete Hegseth, the former Fox News
contributor who has lashed out at policies and companies he sees as too
liberal, wants to aggressively integrate A.I. in war planning and weapons
development. Mr. Hegseth is echoing his boss, President Trump, who has made the
expansion of A.I. a cornerstone of his policies.
But Anthropic, a five-year-old company worth
about $380 billion, has staked its reputation on A.I. safety and raised
concerns about the technology’s dangers, even as it has collaborated with U.S.
defense and intelligence agencies. It is the only A.I. company currently
operating on the Pentagon’s classified systems.
In recent days, the Pentagon and Anthropic have
shown no signs of backing down. Sean Parnell, the Pentagon spokesman, posted
on social media on Thursday that the Pentagon demanded that Anthropic allow it
to use A.I. “for all lawful purposes,” saying it was a “common-sense request.”
In response, Dario Amodei, Anthropic’s chief
executive, said the Pentagon’s “threats do not change our position: we cannot
in good conscience accede to their request.” Anthropic was prepared to lose its
government contract and help the Pentagon transition to another company’s
technology, he said.
Without a compromise, Mr. Hegseth has threatened
to invoke the rarely used Defense Production Act to force Anthropic to work
with the Pentagon on its terms, or to designate the company a supply chain
threat and block it from doing business with the government.
The confrontation has created new divisions
between Silicon Valley and Washington at a moment when the industry seemed in
step with President Trump’s tech-forward agenda, especially as Google, xAI and
OpenAI are also involved in A.I. work with the Pentagon.
On Thursday, nearly 50 OpenAI employees and 175
Google employees published a letter calling on their leaders to “refuse the
Department of War’s current demands.” More than 100 employees who work on
Google’s A.I. technology expressed concern in another letter to company leaders
about working with the Pentagon. Prominent technologists including Jeff Dean, a
top Google executive, have also said they are concerned about how A.I. can be
misused for surveillance.
(The New York Times has sued OpenAI and
Microsoft, accusing them of copyright infringement of news content related to
A.I. systems. The companies have denied those claims.)
A little over two years ago, A.I. safety and
regulation were top concerns. At a global summit hosted in Britain by Rishi
Sunak, then the prime minister, the United States, China and 26 other countries
signed a pledge to address some of the technology’s potential risks, such as
giving hackers new attack methods and accelerating disinformation.
But as the A.I. race has ramped up, the issue has
faded as a priority. Last year, the Trump administration revoked safety
policies imposed under President Biden. Mr. Trump signed an executive order in
December aimed at undercutting state laws that regulate A.I. He has also lifted
restrictions on exports of A.I. semiconductors, despite concerns that the
components could help rivals like China.
The European Union, which passed far-reaching
A.I. regulations in 2024, is now considering rolling some back. At the United
Nations, a yearslong effort to ban certain A.I. weapons has been stalled by
opposition from the United States, Russia and other countries.
On the battlefield, the war in Ukraine has
ushered in an era of drone warfare that turned autonomous weapons from a
futuristic possibility to a near-term reality.
“As A.I. becomes more powerful and more capable,
the incentives to use it also become much stronger,” said Helen Toner, an A.I.
policy expert at Georgetown University and former OpenAI board member. “At the
same time, people’s appetite to talk about risks and how to solve them has gone
down.”
Ms. Toner said the Anthropic-Pentagon dispute
showed a fundamental disconnect. In Washington, officials view A.I. as a new
tool that can be harnessed for specific goals. In Silicon Valley, creators of
the technology see it becoming more like an “entity” with sophisticated
reasoning that may behave in unexpected and dangerous ways without oversight
and refinement, she said.
The fight between the Pentagon and Anthropic
began on Jan. 9, when Mr. Hegseth published a memo calling for A.I. companies
to remove restrictions on their technologies.
“The time is now to accelerate A.I. integration,
and we will put the full weight of the Department’s leadership, resources, and
expanding corps of private sector partners into accelerating America’s Military
A.I. Dominance,” he wrote.
Underpinning Mr. Hegseth’s strategy was a
fundamental shift in military technology. Hardware is in an age of decline.
Military contractors have struggled to deliver ships and fighter planes on time
and on budget.
Software has become an increasingly powerful
tool. Tech executives including Alex Karp, the chief executive of the data
analytics company Palantir, which works closely with the federal government,
have argued that America’s competitive edge over adversaries will be found in
its advances with software.
Anthropic has been a willing partner, providing
the government with a special version of Claude that has fewer restrictions.
Yet some in the Pentagon viewed the start-up with suspicion. Its openness to
talking about safety risks put off some in the department’s leadership, who
have called the San Francisco company “woke.”
When talks between the Pentagon and Anthropic
began over a $200 million contract for use of A.I. in classified systems,
lawyers from both sides quietly traded emails over contract language, said two
people involved in the discussions.
Anthropic asked for two things. The company said
it was willing to loosen its restrictions on the technology, but it wanted
guardrails to stop its A.I. from being used for mass surveillance of Americans
or from being deployed in autonomous weapons with no humans involved. Without
those guardrails, Anthropic risked damaging its safety-first reputation.
“This is really about the power of the state to
determine how A.I. is being deployed in the world versus companies,” said
Robert Trager, co-director of Oxford University’s Martin A.I. Governance
Initiative.
Cordula Droege, the chief lawyer for the
International Committee of the Red Cross, which has called for global limits on
A.I. weapons, said the violent risks of introducing swarms of autonomous
weapons on battlefields are being lost in the wider debate.
“Throughout history, warfare goes in parallel
with the development of technology,” she said.
Adam Satariano is a technology correspondent for
The Times, based in London.
Julian E. Barnes covers the U.S. intelligence
agencies and international security matters.

