Pentagon Gives A.I. Company an Ultimatum
In a significant escalation of tensions between the U.S. government and the technology sector, the Pentagon has issued an ultimatum to the AI company Anthropic.
The Core Conflict

Defense Secretary Pete Hegseth has ordered Anthropic to lift its internal "safety guardrails" and grant the military unrestricted access to its Claude AI models by Friday, February 27, 2026, at 5:01 PM.
The dispute centers on Anthropic's refusal to allow its technology to be used for:

- Fully autonomous weapons systems that can target and kill without human intervention.
- Mass domestic surveillance of U.S. citizens.
The Pentagon argues that private companies should not dictate how the military uses its tools, insisting they must be available for "all lawful purposes."
The Ultimatum's Stakes

If Anthropic does not comply by the deadline, the Pentagon has threatened several severe punitive measures:
- Contract Cancellation: Terminating Anthropic's $200 million defense contract.
- Supply Chain Risk Designation: Labeling the company a "supply chain risk," a move typically reserved for foreign adversaries like China, which would effectively blacklist it from working with any major defense contractors (e.g., Boeing, Lockheed Martin).
- Defense Production Act (DPA): Invoking emergency federal powers to legally compel the company to tailor its AI for military requirements.
The Spark: The Maduro Operation

Tensions reached a breaking point following reports that Claude was used, via a partnership with data firm Palantir, in the January 2026 military operation to capture former Venezuelan leader Nicolás Maduro. Anthropic reportedly questioned this use case, sparking the Pentagon's current "review" of the relationship.
While competitors like OpenAI, Google, and xAI have reportedly moved closer to meeting the Pentagon's "all lawful purposes" standard, Anthropic has remained the sole holdout on ethical grounds.
