Tuesday, March 10, 2026
Dario Amodei
Dario Amodei (born 1983) is the CEO and co-founder of Anthropic, the AI safety and research company behind the Claude large language model series.

OpenAI Tenure: Before founding Anthropic, Amodei was the Vice President of Research at OpenAI, where he played a pivotal role in the development of GPT-2 and GPT-3.

Previous Roles: He also held research positions at Google (focused on natural language processing and safety) and Baidu, where he helped lead the Deep Speech 2 project.

Education: He holds a PhD in physics from Princeton University and completed his undergraduate studies in physics at Stanford University (after transferring from Caltech).

Founding Vision: Amodei co-founded Anthropic in 2021 with several former OpenAI colleagues, including his sister Daniela Amodei. The move was driven by a desire to prioritize AI safety, steerability, and interpretability over rapid commercialization.

Scaling Laws: He was one of the early researchers to document "scaling laws," observing that AI performance improves predictably as compute and training data increase.

Recent Predictions (2026): Amodei has recently gained attention for his "apocaloptimist" views, predicting that AI could reach a level equivalent to a "country of geniuses in a data center" within the next few years.

Economic Impact: He warns that AI could disrupt up to 50% of entry-level white-collar jobs in fields like law and finance by 2030, while simultaneously boosting annual GDP growth by 10-20%.

As of early 2026, Anthropic's annualized revenue is reported to be approximately $14 billion, and the company plans to spend $50 billion on AI infrastructure in the U.S. Amodei continues to advocate for transparency legislation and international cooperation to manage the national security risks of advanced AI.
Anthropic sues US government for calling it a risk

On March 9, 2026, the AI firm Anthropic filed a high-stakes lawsuit against the Trump administration after the Pentagon officially designated the company a "supply chain risk". This designation effectively blacklists the company from federal contracts and is the first time such a label has been applied to a U.S.-based firm.
Core of the Dispute

The conflict stems from Anthropic's refusal to grant the U.S. military "unfettered" or "unrestricted" use of its AI models, specifically Claude. Anthropic maintains "red lines" around two specific use cases:

Mass Surveillance: Refusal to allow the use of its technology for the mass surveillance of U.S. citizens.

Autonomous Weapons: Refusal to allow the use of its technology in fully autonomous lethal weapons systems.
Government Retaliation

In response to these restrictions, Defense Secretary Pete Hegseth labeled the company a supply chain risk, and President Donald Trump directed all federal agencies to "immediately cease" using Anthropic's technology. The administration has publicly criticized Anthropic as a "radical left, woke company" for imposing these safety guardrails.
The Lawsuit Details

Anthropic's legal challenge, filed in both California and Washington D.C., characterizes the government's move as "unprecedented and unlawful" retaliation.

Legal Arguments: Anthropic argues that the designation violates its First Amendment rights to protected speech (referring to its stated safety principles) and exceeds the President's statutory authority.

Financial Impact: The company states the blacklisting could jeopardize billions of dollars in 2026 revenue and cause irreparable harm to its reputation as a trusted partner.

Industry Support: Nearly 40 employees from rivals Google and OpenAI have filed a court brief in support of Anthropic, backing the need for AI safety guardrails.
Current Status

Anthropic is seeking an immediate court order to vacate the "supply chain risk" designation and halt the administration's ban. While the Pentagon has declined to comment on active litigation, the White House is reportedly preparing a formal executive order to further solidify Anthropic's exclusion from the federal government.
Anthropic sues US government for calling it a risk
Kali Hays, Technology reporter
https://www.bbc.com/news/articles/cq571w5vllxo
Artificial intelligence (AI) firm Anthropic has filed a first-of-its-kind lawsuit against the US government over claims that it is a "supply chain risk".

The AI firm's chief executive Dario Amodei and Defence Secretary Pete Hegseth have been publicly rowing over the company's refusal to allow the military unfettered use of its AI tools.
The Pentagon retaliated by making Anthropic the first US company to be labelled a "supply chain risk", but Anthropic said in its lawsuit on Monday against a list of US government agencies that the government's action was "unprecedented and unlawful".

A spokesman for the US Department of Defense declined to comment, citing a policy on active litigation.
"The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech", Anthropic wrote. "No federal statute authorizes the actions taken here."

Anthropic's lawsuit is against President Donald Trump's executive office; several government leaders, including Hegseth, Secretary of State Marco Rubio, and Secretary of Commerce Howard Lutnick; and 16 government agencies, including the Department of War, Department of Homeland Security and the Department of Energy.

The Department of War is an alternative name given by Trump to the Department of Defense.
Liz Huston, a spokeswoman for the White House, told the BBC that Anthropic is "a radical left, woke company" attempting to control military activity.

"Under the Trump Administration, our military will obey the United States Constitution – not any woke AI company's terms of service," Huston said.

Anthropic argued against this in its legal complaint, filed Monday morning in California federal court.
The company said that Hegseth demanded it remove any usage restrictions from its defence contract, despite limitations on "lethal autonomous warfare" and "surveillance of Americans en masse" always having been part of its government contracts.

Anthropic's tools have been used by the US government and military since 2024, and it was the first advanced AI company to have its tools deployed in government agencies doing classified work.
'Public castigation'

Anthropic said that it did work with Hegseth on revising contract language to meet military use needs. While it was nearing a successful agreement to continue working with the department, one that would include limitations regarding surveillance and weaponry, those talks were abruptly undercut.

Instead, the Department of Defense "met Anthropic's attempts at compromise with public castigation".
As Anthropic was negotiating with defence officials, Trump berated the company as run by "left wing nut jobs" and directed all government agencies to stop using Anthropic tools.

Hegseth quickly followed up on Trump's announcement by labelling Anthropic a "supply chain risk", meaning tools like Claude were suddenly considered not secure enough for government use. He also prohibited any company doing work with the government from using Anthropic tools.

Claude is one of the most popular AI tools in the world, with Claude Code being an almost ubiquitous part of work done by some of the biggest technology firms in the US, including Google, Meta, Amazon and Microsoft.
Those companies also do work with the government. Last week, Microsoft, Google and Amazon said they would continue to use Claude outside of any work for defence agencies.

Nevertheless, Anthropic claims that it has been "irreparably" harmed as a result of Trump and Hegseth's comments.

"Current and future contracts with private parties are also in doubt, jeopardizing hundreds of millions of dollars in the near-term", the company said. "On top of those immediate economic harms, Anthropic's reputation and core First Amendment freedoms are under attack."
Anthropic also noted the "chilling effect" on free speech that the retaliation by the Trump Administration is having on other entities.

But by Monday afternoon, nearly 40 Google and OpenAI employees had filed with the court a brief supporting Anthropic and its efforts to limit improper uses of AI, offering their expertise on the dangers posed by the technology being used at scale.

"As a group, we are diverse in our politics and philosophies, but we are united in the conviction that today's frontier AI systems present risks when deployed to enable domestic mass surveillance or the operation of autonomous lethal weapons systems without human oversight, and that those risks require some kind of guardrails, whether via technical safeguards or usage restrictions," the signatories of the brief said.
Google and OpenAI are both considered rivals to Anthropic when it comes to AI tools, and both companies also have such tools in government use.

OpenAI CEO Sam Altman admitted last week to rushing through the company's new contract with the Department of Defense in the wake of Anthropic's fallout with the government.

Anthropic is not seeking monetary damages in its lawsuit, but it is asking the court to immediately declare that Trump's directive "exceeds the president's authority" and violates the Constitution, and to immediately vacate its designation as a supply chain risk.
Carl Tobias, who holds a chair at the University of Richmond School of Law, said that while a quick settlement of the lawsuit is possible, he expects the Trump Administration to take "a scorched earth" approach.

"Anthropic may very well win in federal court, but this government is not shy about appealing," Tobias said. "It will probably go to the Supreme Court."
Monday, March 9, 2026
Is the US low on missile interceptors?
The United States is facing significant shortages of missile interceptors as of March 2026, primarily due to high-intensity conflicts in the Middle East. Experts and Pentagon officials have warned that sustained combat could exhaust high-end supplies within days or weeks.
Current Stockpile Status

THAAD (Terminal High Altitude Area Defense): Between 20% and 50% of the entire U.S. THAAD inventory may have been expended during the recent 12-day conflict with Iran.

Patriot (PAC-3 MSE): Reports indicate the U.S. currently has only about 25% of the Patriot interceptors required for its full military plans.

SM-3 (Standard Missile-3): Approximately 20% of the expected 2025 inventory was fired during engagements, raising concerns about long-term capacity.

Naval Interceptors (SM-2/SM-6): Roughly 200 interceptors were used defending against Houthi attacks in the Red Sea over a one-year period.
Key Challenges

Production vs. Usage Gap: The U.S. is expending interceptors at a rate that "vastly outpaces production". For example, Lockheed Martin produced 600 PAC-3 MSE interceptors in 2025, but regional allies reportedly expended over 800 in just three days.

High Replacement Cost: Advanced interceptors like THAAD cost approximately $15.5 million each, while the drones they often target (such as Iranian Shaheds) cost only $20,000 to $35,000.

Industrial Constraints: Increasing production is difficult due to long lead times, supply chain dependencies (including rare earth metals from China), and the transition to newer missile variants, which sacrifices immediate capacity.
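A quick back-of-the-envelope sketch puts the figures above in perspective. It uses only the numbers reported in this section (interceptor and drone costs, 2025 PAC-3 MSE production, and the three-day expenditure by regional allies); the derived ratios are illustrative, not official estimates.

```python
# Back-of-the-envelope arithmetic from the reported figures above.
THAAD_COST = 15_500_000                 # $ per THAAD interceptor (reported)
DRONE_COST_LOW = 20_000                 # $ per Shahed-type drone, low end
DRONE_COST_HIGH = 35_000                # $ per Shahed-type drone, high end

# Cost-exchange ratio: dollars spent per dollar of target destroyed.
ratio_best = THAAD_COST / DRONE_COST_HIGH
ratio_worst = THAAD_COST / DRONE_COST_LOW
print(f"Cost-exchange ratio: {ratio_best:.0f}x to {ratio_worst:.0f}x")

# Production vs. usage: 600 PAC-3 MSE built in 2025, 800+ expended
# by regional allies in three days.
annual_production = 600
expended, days = 800, 3
daily_burn = expended / days
days_of_fighting = annual_production / daily_burn
print(f"A full year's production covers about {days_of_fighting:.2f} days "
      f"of fighting at that expenditure rate")
```

Even at the cheapest drone estimate, each THAAD shot costs several hundred times its target, and a year of PAC-3 MSE output covers barely two days of combat at the reported burn rate, which is the asymmetry driving the shortage concerns.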
Conflicting Perspectives

While defense analysts and some Democratic lawmakers express "intense and alarmed" concerns about "dangerously low" stocks, President Donald Trump has dismissed these claims, stating that U.S. stockpiles have "never been higher or better" and that the supply is "virtually unlimited".
