Thursday, April 16, 2026
European Regulators Sidelined on Anthropic Super-Hacking Model

European regulators have expressed concern over being "sidelined" regarding Anthropic’s unreleased AI model, Claude Mythos, which possesses advanced "super-hacking" capabilities.
While the model has been shared with a select group of 12 cybersecurity firms and 40 other organizations for defensive stress-testing, many European oversight bodies have not been granted direct access.

Key Tensions with European Regulators

Lack of Direct Access: Germany’s national cybersecurity agency, BSI, and other EU cyber officials have noted they have not yet directly tested the tool, receiving only "meaningful insight" through dialogues with developers.

Jurisdictional Limits: Because the model has not been officially "placed on the market" in the EU, it does not yet trigger many of the binding rules under the EU AI Act.

Security Implications: Claudia Plattner, head of the BSI, warned that the model’s power has "profound implications for national and European security and sovereignty."

Concerns Over Precedent: Experts like Laura Caroli worry that this sets a precedent where European officials are "at the mercy" of private U.S. tech firms for security oversight.

Regulatory Response & Endorsements

Staged Rollout Endorsed: Despite the lack of direct oversight, the European Commission has publicly welcomed Anthropic’s decision to delay the general release of Mythos, given its potential for large-scale cyber risk.

Active Dialogue: The EU's AI Office is reportedly in contact with Anthropic under the EU's code of practice to ensure future compliance with European standards once the model eventually hits the market.
Opinion
Anthropic’s Restraint Is a Terrifying Warning Sign
April 7, 2026
https://www.nytimes.com/2026/04/07/opinion/anthropic-ai-claude-mythos.html
By Thomas L. Friedman
Opinion Columnist
Normally right now I would be writing about the geopolitical implications of the war with Iran, and I am sure I will again soon. But I want to interrupt that thought to highlight a stunning advance in artificial intelligence — one that arrived sooner than expected and that will have equally profound geopolitical implications.

The artificial intelligence company Anthropic announced Tuesday that it was releasing the newest generation of its large language model, dubbed Claude Mythos Preview, but only to a limited consortium of roughly 40 technology companies, including Google, Broadcom, Nvidia, Cisco, Palo Alto Networks, Apple, JPMorgan Chase, Amazon and Microsoft. Some of its competitors are among these partners because this new A.I. model represents a “step change” in performance that has some critically important positive and negative implications for cybersecurity and America’s national security.
The good news is that Anthropic discovered in the process of developing Claude Mythos that the A.I. could not only write software code more easily and with greater complexity than any model currently available, but as a byproduct of that capability, it could also find vulnerabilities in virtually all of the world’s most popular software systems more easily than before.

The bad news is that if this tool falls into the hands of bad actors, they could hack pretty much every major software system in the world, including all those made by the companies in the consortium.

This is not a publicity stunt. In the run-up to this announcement, representatives of leading tech companies have been in private conversation with the Trump administration about the implications for the security of the United States and all the other countries that use these now vulnerable software systems, technologists involved told me.
For good reason. As Anthropic said in a written statement on Tuesday, in just the past month, “Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of A.I. progress, it will not be long before such capabilities proliferate, potentially beyond actors who committed to deploying them safely. The fallout — economics, public safety and national security — could be severe.”

Project Glasswing, Anthropic’s name for the consortium, is an undertaking to work with the biggest and most trusted tech companies and critical infrastructure providers, including banks, “to put these capabilities to work for defensive purposes,” the company added, and to give the leading technology firms a head start in finding and patching those vulnerabilities.

“We do not plan to make Claude Mythos Preview generally available, but our eventual goal is to enable our users to safely deploy Mythos-class models at scale — for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring,” Anthropic said.
My translation: Holy cow! Superintelligent A.I. is arriving faster than anticipated, at least in this area. We knew it was getting amazingly good at enabling anyone, no matter how computer literate, to write software code. But even Anthropic reportedly did not anticipate that it would get this good, this fast, at finding and exploiting flaws in existing code.

Anthropic said it found critical exposures in every major operating system and web browser, many of which run power grids, waterworks, airline reservation systems, retailing networks, military systems and hospitals all over the world.
If this A.I. tool were, indeed, to become widely available, the ability to hack any major infrastructure system — a hard and expensive effort that was once essentially the province only of private-sector experts and intelligence organizations — would be available to every criminal actor, terrorist organization and country, no matter how small.

I’m really not being hyperbolic when I say that kids could deploy this by accident. Mom and Dad, get ready for:

“Honey, what did you do after school today?”

“Well, Mom, my friends and I took down the power grid. What’s for dinner?”

That is why Anthropic is giving carefully controlled versions to key software providers so they can find and fix the vulnerabilities before the bad guys do — or your kids.
At moments like this I prefer to do a deep dive with my technology tutor, Craig Mundie, a former director of research and strategy at Microsoft, a member of the President’s Council of Advisors on Science and Technology under President Barack Obama and an author, with Henry Kissinger and Eric Schmidt, of a book on A.I. called “Genesis.”

In our view, no country in the world can solve this problem alone. The solution — this may shock people — must begin with the two A.I. superpowers, the U.S. and China. It is now urgent that they learn to collaborate to prevent bad actors from gaining access to this next level of cyber capability.
Such a powerful tool would threaten them both, leaving them exposed to criminal actors inside their countries and terrorist groups and other adversaries outside. It could easily become a greater threat to each country than the two countries are to each other.

Indeed, this is potentially as fundamental and significant a turning point as was the emergence of mutually assured destruction and the need for nuclear nonproliferation. The U.S. and China need to work together to protect themselves, as well as the rest of the world, from humans and autonomous A.I.s using this technology — a lot more than they need to worry about Russia.
This is so important and urgent that it should be a top subject on the agenda for the summit between President Trump and President Xi Jinping in Beijing next month.

“What used to be the province of big countries, big militaries, big companies and big criminal organizations with big budgets — this ability to develop sophisticated cyberhacking operations — could become easily available to small actors,” explained Mundie. “What we are about to see is nothing short of the complete democratization of cyberattack capabilities.”

It means that responsible governments, in concert with the companies that build these A.I. tools and software infrastructure, need to do three things urgently, Mundie argues.
For starters, he says, we need to “carefully control the release of these new superintelligent models and make sure they only go to the most responsible governments and companies.”

Then we need to use the time this buys us to distribute defensive tools to the good actors “so that the software that runs their key infrastructure can have all their flaws found and fixed before hackers inevitably get these tools one way or another.” (By the way, the cost of fixing the vulnerabilities that are sure to be discovered in legacy software systems, like those of telephone companies, will be significant. Then multiply that across our whole industrial base.)

Finally, Mundie argues, we need to work with China and all responsible countries to build safe, protected working spaces, within all the key networks, both public and private, into which trusted companies and governments “can move all their critical services — so they will be protected against future hacking attacks.”

It will be interesting to see what history remembers most about April 7, 2026 — the postponed U.S. release of bombs over Iran or the carefully controlled release of the Claude Mythos Preview by Anthropic and its technical allies.
Opinion
Guest Essay
It’s the End of the Internet as We Know It
April 15, 2026
https://www.nytimes.com/2026/04/15/opinion/mythos-open-souce-internet.html
By Raffi Krikorian
Mr. Krikorian is the chief technology officer at Mozilla.
Last week, Anthropic announced that its newest artificial intelligence model, Claude Mythos Preview, would not be released to the public, after the company learned it was capable of finding and exploiting vulnerabilities that have gone undetected in critical software systems for decades. Instead, Anthropic gave access to Mythos — and $100 million in credits to use it — to more than 50 of the world’s largest organizations, including Amazon, Apple, Microsoft, Google and JPMorgan Chase, as part of a defensive cybersecurity initiative called Project Glasswing.

Even before the announcement, publicly available A.I. models were already finding security vulnerabilities in commonly used software. Anthropic’s researchers acknowledged that other labs are six to 18 months from building something comparable. These capabilities, and the threats they pose to cybersecurity, will proliferate. From streaming platforms to online banking services to search engines that answer everyday questions, broad swaths of the internet could become unusable.

If we don’t respond carefully and decisively, then the millions of people who stand to gain the most from A.I.’s progress as a programming tool will also be the ones most exposed to attack. Leaving them to fend for themselves could erode the internet as we know it.
You might already be familiar with the concept of vibe coding: using A.I. tools to turn plain-language descriptions into working software. A shop owner describes the inventory system she needs, and A.I. creates it. A dentist describes a patient portal, and A.I. delivers it. Millions of people who never thought of themselves as software developers — small business owners, clinicians, nonprofit directors — are creating software for the first time without any training. But these applications are often written without security review. Potential flaws, increasingly easy to find as A.I. improves, could let someone access customer data, take over accounts or shut the entire application down.

For decades, two kinds of scarcity kept the internet safe — or safe enough. Writing software was hard, so the people who did it were trained, careful and few. Finding bugs was also hard, so the worst flaws stayed hidden, sometimes for decades. It wasn’t a great system. But the difficulty on both sides created a kind of détente that held.

Now, thanks to new A.I. tools, anyone can write code. Soon, bad actors could use those same tools to find out what’s wrong with code. The détente is over.
Most of the internet was built from open-source software. For example, much of the video you stream online is quietly delivered by FFmpeg, a free, open-source program maintained by volunteers whose combined budget is modest by any corporate standard. OpenBSD, an operating system that runs the firewalls and gateways protecting sensitive networks from outside attack, and which Anthropic calls “one of the most security-hardened operating systems in the world,” runs on donations. Unlike the proprietary software developed by the big firms in Project Glasswing, these projects exist because someone decided the work mattered more than the paycheck. They are built by people who have given years of their lives to code that powers products most of us use every day without knowing it.

According to Anthropic, Mythos found a 27-year-old vulnerability in OpenBSD and a 16-year-old vulnerability in FFmpeg, buried in a line of code that, Anthropic says, other automated security tools had glossed over five million times. (Both organizations say they have fixed the issues identified.) Even Firefox, the web browser my own organization builds, wasn’t spared: When Anthropic ran its previous model against Firefox, it was able to weaponize an already discovered bug just twice out of several hundred attempts. When Anthropic ran Mythos, it succeeded nearly every time. Across all these projects and many more, the model identified thousands of vulnerabilities in code. These are the types of issues that can allow ransomware to shut down hospitals. They’re how cyberattacks can disrupt critical infrastructure. And they’re how foreign intelligence services can compromise government networks.
Beyond detecting problems in lines of code, Mythos found the seams in the informal social contract that holds the internet together. It’s long been understood that developers would share their work openly, help one another fix what’s broken and maintain the software that all of us depend on — not for pay, but because that’s how the community has worked. The veteran programmer who has been patching critical code for 20 years in his spare time is in the same position as the shop owner who vibe coded her first app last Tuesday. Both are exposed. Neither has a security team. Neither currently has access to Mythos.

To its credit, Anthropic is among the first major A.I. companies to decide the responsible thing was to slow down. The company says it is committing $4 million to open-source security organizations. That’s more than anyone else in this industry has done.

And yet the underlying economics haven’t changed; the most valuable software infrastructure in the world continues to be maintained by people working for free, while the companies building fortunes on top of it never had to pay for its upkeep. Now a powerful new capability has arrived — and as we’ve seen repeatedly in tech, there’s the risk that organizations with resources will receive it first and learn to protect themselves, while others are left vulnerable.
The programmer who gave 20 years of his life to maintain code that runs inside products used by billions of people? He doesn’t have access to Mythos yet. He should. The organizations that steward open-source infrastructure know who these maintainers are and how to reach them, and are ready to help. That’s a short list and a solvable problem. The shop owner is different. She shouldn’t need Mythos or a tool just as powerful to defend herself from a cyberattack, just the confidence that the tools she used were built to protect her from the start.

So, let’s change the default. Every company that ships open-source code in its products — which is most of the technology industry — should invest in the essential workers who maintain it. That means funding, but it also means that A.I. firms contribute engineering time, security expertise and staff to the projects we all depend on. A.I. companies that are building tools like Mythos, beyond Anthropic, should put them into the hands of these workers. And all of us who benefit from open-source infrastructure need to treat it as what it has always been: as critical as any road, bridge or power line.

And for the millions of new creators building software for the first time, we need to make it easy for them to build safely. Integrate security into the tools they’re already using. Make sure the A.I. that writes the code also protects the code. Not as an add-on and not as a premium feature, but as a default. The détente is over. The flaws are visible. The creators are everywhere. The only question is whether we protect all of them — or just the ones who can afford to protect themselves.