European regulators sidelined on Anthropic's "super-hacking" model
European regulators have expressed concern at
being "sidelined" from Anthropic's unreleased AI model, Claude
Mythos, which is said to possess advanced "super-hacking" capabilities.
While the model has been shared with a select
group of 12 cybersecurity firms and 40 other organizations for defensive
stress-testing, many European oversight bodies have not been granted direct
access.
Key Tensions with European Regulators
Lack of Direct Access: Germany's national
cybersecurity agency, the BSI, and other EU cyber officials note that they have
not yet directly tested the tool, receiving only "meaningful insight"
through dialogue with its developers.
Jurisdictional Limits: Because the model has not
been officially "placed on the market" in the EU, it does not yet
trigger many of the binding rules under the EU AI Act.
Security Implications: Claudia Plattner, head of
the BSI, warned that the model’s power has "profound implications for
national and European security and sovereignty".
Concerns Over Precedent: Experts like Laura
Caroli worry that this sets a precedent where European officials are "at
the mercy" of private U.S. tech firms for security oversight.
Regulatory Response & Endorsements
Staged Rollout Endorsed: Despite the lack of
direct oversight, the European Commission has publicly welcomed Anthropic’s
decision to delay the general release of Mythos, given its potential for
large-scale cyber risk.
Active Dialogue: The EU's AI Office is reportedly
in contact with Anthropic under the EU's code of practice to ensure future
compliance with European standards once the model eventually hits the market.
