Anthropic sues US government for calling it a risk
11 hours ago
Kali Hays, Technology reporter
https://www.bbc.com/news/articles/cq571w5vllxo
Artificial intelligence (AI) firm Anthropic has filed a first-of-its-kind lawsuit against the US government over claims that it is a "supply chain risk".
The AI firm's chief executive Dario Amodei and Defence Secretary Pete Hegseth have been publicly rowing over the company's refusal to allow the military unfettered use of its AI tools.
The Pentagon retaliated by making Anthropic the first US company to be labelled a "supply chain risk". In its lawsuit, filed on Monday against a number of US government agencies, Anthropic said the government's action was "unprecedented and unlawful".
A spokesman for the US Department of Defense declined to comment, citing a policy on active litigation.
"The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech", Anthropic wrote. "No federal statute authorizes the actions taken here."
Anthropic's lawsuit is against President Donald Trump's executive office; several government leaders, including Hegseth, Secretary of State Marco Rubio, and Secretary of Commerce Howard Lutnick; and 16 government agencies, including the Department of War, the Department of Homeland Security and the Department of Energy.
The Department of War is a secondary name given by Trump to the Department of Defense.
Liz Huston, a spokeswoman for the White House, told the BBC that Anthropic is "a radical left, woke company" attempting to control military activity.
"Under the Trump Administration, our military will obey the United States Constitution – not any woke AI company's terms of service," Huston said.
Anthropic argued against this in its legal complaint, filed Monday morning in California federal court.
The company said that Hegseth demanded it remove all usage restrictions from its defence contract, even though limitations on "lethal autonomous warfare" and "surveillance of Americans en masse" had always been part of its government contracts.
Anthropic's tools have been used by the US government and military since 2024, and it was the first advanced AI company to have its tools deployed in government agencies doing classified work.
'Public castigation'
Anthropic said that it did work with Hegseth on revising contract language in order to meet military use needs. While it was nearing a successful negotiation to continue working with the department, one that would include limitations regarding surveillance and weaponry, those talks were abruptly undercut.
Instead, the Department of Defense "met Anthropic's attempts at compromise with public castigation".
As Anthropic was negotiating with defence officials, Trump berated the company as run by "left wing nut jobs" and directed all government agencies to stop using Anthropic tools.
Hegseth quickly followed up on Trump's announcement by labelling Anthropic a "supply chain risk", meaning tools like Claude were suddenly considered not secure enough for government use. He also prohibited any company doing work with the government from using Anthropic tools.
Claude is one of the most popular AI tools in the world, with Claude Code an almost ubiquitous part of work done by some of the biggest technology firms in the US, including Google, Meta, Amazon and Microsoft.
Those companies also do work with the government. Last week, Microsoft, Google and Amazon said they would continue to use Claude outside of any work for defence agencies.
Nevertheless, Anthropic claims that it has been "irreparably" harmed as a result of Trump and Hegseth's comments.
"Current and future contracts with private parties are also in doubt, jeopardizing hundreds of millions of dollars in the near-term", the company said. "On top of those immediate economic harms, Anthropic's reputation and core First Amendment freedoms are under attack."
Anthropic also said the Trump Administration's retaliation is having a "chilling effect" on the free speech of other entities.
By Monday afternoon, nearly 40 Google and OpenAI employees had filed with the court a brief supporting Anthropic and its efforts to limit improper uses of AI, offering their expertise on the dangers posed by the technology being used at scale.
"As a group, we are diverse in our politics and philosophies, but we are united in the conviction that today's frontier AI systems present risks when deployed to enable domestic mass surveillance or the operation of autonomous lethal weapons systems without human oversight, and that those risks require some kind of guardrails, whether via technical safeguards or usage restrictions," the signatories of the brief said.
Google and OpenAI are both considered rivals to Anthropic when it comes to AI tools, and both companies also have such tools in government use.
OpenAI CEO Sam Altman admitted last week to rushing through the company's new contract with the Department of Defense in the wake of Anthropic's fallout with the government.
Anthropic is not seeking monetary damages in its lawsuit. Instead, it is asking the court to immediately declare that Trump's directive "exceeds the president's authority" and violates the Constitution, and to set aside its designation as a "supply chain risk".
Carl Tobias, who holds a chair at the University of Richmond School of Law, said that while a quick settlement of the lawsuit is possible, he expects the Trump Administration to take "a scorched earth" approach.
"Anthropic may very well win in federal court, but this government is not shy about appealing," Tobias said. "It will probably go to the Supreme Court."