European regulators sidelined on Anthropic superhacking model
New AI tech could pose major cybersecurity risks, but many European regulators have limited oversight.
April 13, 2026 6:11 pm CET
By Pieter Haeck and Sam Clark
https://www.politico.eu/article/anthropic-apple-microsoft-europe-left-in-the-dark-superhacking-ai/
BRUSSELS — Regulators in Europe have been left out of the loop as U.S. firm Anthropic restricts the release of a new, powerful artificial intelligence model.
Anthropic last week announced it was limiting the release of its latest model, Mythos, to dozens of technology partners to allow them to patch systems, claiming the model outperformed most humans in finding and exploiting tech vulnerabilities. The decision sent shockwaves through the global AI and cyber community amid concerns it could enable large-scale cybersecurity breaches.
Anthropic triaged trusted partners and organizations in granting access: It handpicked 12 tech companies — all headquartered in the U.S. — including Apple, Microsoft and Amazon as its closest circle. It said it had granted access to another 40 organizations but did not name them, adding that it has been in “ongoing discussions with U.S. government officials” about the model.
POLITICO spoke to officials from eight national European cyber agencies. Only the German agency said it had entered into conversations with Anthropic about Mythos, and it had not yet been able to test the model. Several government institutions in Europe suggested that they had gotten only piecemeal access.
That contrasts starkly with the U.K., where AI minister Kanishka Narayan confirmed Friday that the U.K.’s AI Security Institute had recently tested Mythos and had already “taken action on our findings.” The institute released its assessment on Monday.
According to Daniel Privitera, founder of the Berlin-based AI non-profit KIRA, “Mythos gives us an early taste of how crucial access to frontier AI capabilities is going to be in the years to come … Europe currently does not have a plan for how to secure that access.”
The divisions are also a stark reminder that the world has failed to establish a global system to address the risks of AI, despite countless warnings from technologists that AI will have severe repercussions for the economy and the labor market — and could even wipe out humanity.
Yoshua Bengio, of the Université de Montréal and one of the three godfathers of AI, told POLITICO it was “deeply concerning” that it’s up to tech companies — rather than regulators — to decide how to handle the risks. It showed how “essential” it was to set up ways for governments or third parties to run checks on the technology “to protect the public,” he added.
For continental Europe, the events have laid bare a lack of influence over the American companies leading AI development and dented the EU’s identity as a superregulator of tech.
Job Holzhauer, a spokesperson for the Dutch cybersecurity agency, one of the eight agencies contacted by POLITICO, said “the actual impact of the vulnerabilities found is difficult to verify without technical details.”
A “pressing” question is whether tools “of such extraordinary power” like Mythos will in future be on the open market, Germany’s chief cybersecurity official Claudia Plattner, who leads the national cybersecurity agency BSI, said in a statement to POLITICO.
“That question, in turn, has profound implications for national and European security and sovereignty,” Plattner said.
Plattner said the BSI is in “active dialogue” with Anthropic and that, though the agency has not yet directly tested the tool, conversations it has had with developers have given it “meaningful insight” into how it works. A spokesperson said the agency was in touch with Anthropic about the new model a few weeks before the announcement.
EU cyber agency ENISA declined to comment on whether it has been in contact with Anthropic about Mythos.
The Commission’s AI Office has a dialogue with Anthropic under the EU’s code of practice, which helps the company comply with the requirements of the bloc’s AI Act. But European Commission spokespeople didn’t comment on whether the new model was part of those talks or whether the office has had access.
Anthropic didn’t respond to POLITICO’s multiple requests for comment on what access it had granted to European regulators.
AI safety police
When announcing Mythos, Anthropic said the model demonstrated that AI models have reached a point where they can “surpass all but the most skilled humans” at spotting and exploiting cyber flaws. That makes Mythos a powerful defensive tool, but also a dangerous weapon in the hands of malicious hackers.
Because the model has not yet been released for testing, much is still not known about exactly how it works and the types of flaws it is particularly good at finding.
In recent years, several initiatives have been launched to set up global norms around AI safety, including the G7’s Hiroshima Process, the United Nations’ Global Dialogue on AI Governance and a network of AI Safety and Security Institutes. But the initiatives have lacked the political backing to deliver actual results.
AI safety issues have also taken a back seat as governments around the world have pivoted to focus on winning the global AI race. The result is that there's no global mechanism to scrutinize and police what Anthropic and its peers do with risky technology.
“The fact that models with far-reaching impact are governed by a private company is concerning,” said Marietje Schaake, a former European Parliament lawmaker and former adviser to the European Commission who shepherded an EU code of practice for developers of the most advanced AI models to help them comply with the EU’s AI rules.
“Now is a good moment” for the world to agree on how to disclose “sensitive corporate information and oversight,” Schaake said.
Laura Caroli, an independent AI researcher who was a key adviser on the drafting of the European Union’s 2023 Artificial Intelligence Act, said the EU was “sidelined … because the model is not released on the market.” If it were, Anthropic would be subject to binding rules and commitments under EU law.
But, Caroli said, the EU could keep some form of oversight through the network of AI safety institutes, of which the Commission’s AI Office is part.
The European Commission’s digital spokesperson Thomas Regnier said the executive was “currently assessing possible implications” with regard to EU legislation and was keeping tabs on the “security implications” of the technology in general.
Under the AI Act, providers like Anthropic have to address cyber risks stemming from their models, and the bloc’s Cyber Resilience Act imposes mandatory cybersecurity requirements “for all products with digital elements placed on the EU market,” Regnier said.
Still, European officials are at the mercy of Anthropic and its tech peers, said Caroli. “It makes you wonder … if it wasn’t Anthropic, but [China’s] DeepSeek?”