Tuesday, 14 April 2026
US-Iran peace talks could resume in next two days, Trump says
US president says negotiations could restart in Islamabad under ‘fantastic’ Pakistani army chief Asim Munir
Julian Borger and Saeed Shah in Islamabad
Tue 14 Apr 2026 19.01 BST
Donald Trump has said that US-Iranian peace talks could resume in Islamabad over the next two days, and complimented the work of Pakistan’s army chief as mediator.
The US president was speaking on Tuesday to a New York Post reporter who had gone to Islamabad for the first round of ceasefire talks over the weekend. After an interview discussing prospects for negotiations, the reporter said the president had called her back “with an update”.
“You should stay there, really, because something could be happening over the next two days, and we’re more inclined to go there,” Trump said. He added that Pakistan’s army chief, Field Marshal Asim Munir, was doing a “great job” in arranging the talks.
“He’s fantastic, and therefore it’s more likely that we go back there,” Trump said.
Munir is a powerful figure in Pakistan and has good relations with Trump, who has called him his “favourite field marshal”, and with Iran’s Revolutionary Guards.
A Pakistani official said on Tuesday that he expected talks to restart soon, but that it might take a day or two longer than Trump suggested. “The game is on,” the official said.
Islamabad is racing to arrange a meeting date that provides enough time for negotiations before the two-week ceasefire ends on Wednesday 22 April.
Trump’s comments followed a wave of speculation about a new round of negotiations, after 21 hours of talks over the weekend. Those ended with the US vice-president, JD Vance, walking out on Sunday morning, claiming that Iran had failed to make an “affirmative commitment that they will not seek a nuclear weapon”.
After the talks ended, Trump declared a US naval blockade on ships using Iranian ports in the Gulf, in an effort to increase pressure on the country’s economy and as a counter to Iran’s near-total closure of the strait of Hormuz to ships using other Gulf ports soon after the US-Israeli attack began on 28 February.
US Central Command reported that over a 24-hour period “no ships made it past the US blockade and six merchant vessels complied with direction from US forces to turn around to re-enter an Iranian port on the Gulf of Oman”.
Independent reports confirmed that some tankers that had been approaching the strait on Monday had turned around; one tanker, the Rich Starry, reversed course again and passed through the waterway.
The closure of the strait, a gateway through which a fifth of the world’s oil and liquefied natural gas flows, had led to a spike in oil prices well above $100 a barrel. Crude prices dipped to about $95 after reports of a possible second round of talks on Tuesday.
Meanwhile, Israel and Lebanon have held unprecedented negotiations in Washington about the cross-border conflict, which erupted as a consequence of the US-Israeli attack on Iran. Hezbollah sided with Iran and launched rockets at Israel, which responded with intense bombardment of Beirut and other cities, and launched an invasion of southern Lebanon.
Hezbollah has said it will not abide by any agreements made by Israeli and Lebanese government negotiators in Washington.
Asked about the possible restart of US-Iranian talks, Vance appeared open to the possibility. “The big question from here on out is whether Iranians will have enough flexibility,” he told Fox News on Monday evening. He said Iran had shown some flexibility in Islamabad but “didn’t move far enough”.
On the question of whether there would be additional talks, he replied that it was a question that would be “best put to the Iranians”.
US reports on the Islamabad talks said the key sticking point had been the demand from Vance’s delegation for a 20-year suspension of Iran’s enrichment of uranium. Iran was reportedly offering a shorter moratorium of less than 10 years.
An Iranian official accused the US delegation of making maximalist demands at the Islamabad talks. “Iran did not surrender at the battlefield, neither will it surrender behind the table,” the official said.
It is unclear where negotiations stood when the Islamabad meeting broke up over the other major proliferation concern: Iran’s stockpile of highly enriched uranium (HEU). It is close to weapons-grade purity and is believed to be buried in deep shafts under mountains in central Iran.
At negotiations in Geneva before the war, Iran offered to dilute the HEU, which would extend the period it would take to produce a nuclear warhead, but the US has called for its complete removal.
A Pakistani official said Iran was insisting that Vance lead the US delegation at any future talks, as Tehran does not trust Trump’s special envoy, Steve Witkoff, or the president’s son-in-law, Jared Kushner, as reliable interlocutors.
Senior officials from Saudi Arabia, Egypt and Turkey were in Islamabad on Tuesday for talks with Pakistani officials on the next moves in mediating the conflict.
Pakistan’s prime minister, Shehbaz Sharif, is due to depart on Wednesday on a trip to Saudi Arabia, Turkey and Qatar to build support for the peace process, to seek help with proposals to reopen the strait of Hormuz, and to discuss Iran’s demand for war reparations. Sharif’s regional tour might have to be cut short, however, if there is a quick return to the negotiating table.
Anthropic’s Restraint Is a Terrifying Warning Sign
Opinion
By Thomas L. Friedman, Opinion Columnist
April 7, 2026
https://www.nytimes.com/2026/04/07/opinion/anthropic-ai-claude-mythos.html
Normally right now I would be writing about the geopolitical implications of the war with Iran, and I am sure I will again soon. But I want to interrupt that thought to highlight a stunning advance in artificial intelligence — one that arrived sooner than expected and that will have equally profound geopolitical implications.
The artificial intelligence company Anthropic announced Tuesday that it was releasing the newest generation of its large language model, dubbed Claude Mythos Preview, but only to a limited consortium of roughly 40 technology companies, including Google, Broadcom, Nvidia, Cisco, Palo Alto Networks, Apple, JPMorganChase, Amazon and Microsoft. Some of its competitors are among these partners because this new A.I. model represents a “step change” in performance that has some critically important positive and negative implications for cybersecurity and America’s national security.
The good news is that Anthropic discovered in the process of developing Claude Mythos that the A.I. could not only write software code more easily and with greater complexity than any model currently available, but as a byproduct of that capability, it could also find vulnerabilities in virtually all of the world’s most popular software systems more easily than before.
The bad news is that if this tool falls into the hands of bad actors, they could hack pretty much every major software system in the world, including all those made by the companies in the consortium.
This is not a publicity stunt. In the run-up to this announcement, representatives of leading tech companies have been in private conversation with the Trump administration about the implications for the security of the United States and all the other countries that use these now vulnerable software systems, technologists involved told me.
For good reason. As Anthropic said in a written statement on Tuesday, in just the past month, “Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of A.I. progress, it will not be long before such capabilities proliferate, potentially beyond actors who committed to deploying them safely. The fallout — economics, public safety and national security — could be severe.”
Project Glasswing, Anthropic’s name for the consortium, is an undertaking to work with the biggest and most trusted tech companies and critical infrastructure providers, including banks, “to put these capabilities to work for defensive purposes,” the company added, and to give the leading technology firms a head start in finding and patching those vulnerabilities.
“We do not plan to make Claude Mythos Preview generally available, but our eventual goal is to enable our users to safely deploy Mythos-class models at scale — for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring,” Anthropic said.
My translation: Holy cow! Superintelligent A.I. is arriving faster than anticipated, at least in this area. We knew it was getting amazingly good at enabling anyone, no matter how computer literate, to write software code. But even Anthropic reportedly did not anticipate that it would get this good, this fast, at finding and exploiting flaws in existing code.
Anthropic said it found critical exposures in every major operating system and web browser, many of which run power grids, waterworks, airline reservation systems, retailing networks, military systems and hospitals all over the world.
If this A.I. tool were, indeed, to become widely available, it would mean the ability to hack any major infrastructure system — a hard and expensive effort that was once essentially the province only of private-sector experts and intelligence organizations — would be available to every criminal actor, terrorist organization and country, no matter how small.
I’m really not being hyperbolic when I say that kids could deploy this by accident. Mom and Dad, get ready for:
“Honey, what did you do after school today?”
“Well, Mom, my friends and I took down the power grid. What’s for dinner?”
That is why Anthropic is giving carefully controlled versions to key software providers so they can find and fix the vulnerabilities before the bad guys do — or your kids.
At moments like this I prefer to do a deep dive with my technology tutor, Craig Mundie, a former director of research and strategy at Microsoft, a member of President Barack Obama’s President’s Council of Advisers on Science and Technology and an author, with Henry Kissinger and Eric Schmidt, of a book on A.I. called “Genesis.”
In our view, no country in the world can solve this problem alone. The solution — this may shock people — must begin with the two A.I. superpowers, the U.S. and China. It is now urgent that they learn to collaborate to prevent bad actors from gaining access to this next level of cyber capability.
Such a powerful tool would threaten them both, leaving them exposed to criminal actors inside their countries and terrorist groups and other adversaries outside. It could easily become a greater threat to each country than the two countries are to each other.
Indeed, this is potentially as fundamental and significant a turning point as was the emergence of mutually assured destruction and the need for nuclear nonproliferation. The U.S. and China need to work together to protect themselves, as well as the rest of the world, from humans and autonomous A.I.s using this technology — a lot more than they need to worry about Russia.
This is so important and urgent that it should be a top subject on the agenda for the summit between Trump and President Xi Jinping in Beijing next month.
“What used to be the province of big countries, big militaries, big companies and big criminal organizations with big budgets — this ability to develop sophisticated cyberhacking operations — could become easily available to small actors,” explained Mundie. “What we are about to see is nothing short of the complete democratization of cyberattack capabilities.”
It means that responsible governments, in concert with the companies that build these A.I. tools and software infrastructure, need to do three things urgently, Mundie argues.
For starters, he says, we need to “carefully control the release of these new superintelligent models and make sure they only go to the most responsible governments and companies.”
Then we need to use the time this buys us to distribute defensive tools to the good actors “so that the software that runs their key infrastructure can have all their flaws found and fixed before hackers inevitably get these tools one way or another.” (By the way, the cost of fixing the vulnerabilities that are sure to be discovered in legacy software systems, like those of telephone companies, will be significant. Then multiply that across our whole industrial base.)
Finally, Mundie argues, we need to work with China and all responsible countries to build safe, protected working spaces, within all the key networks, both public and private, into which trusted companies and governments “can move all their critical services — so they will be protected against future hacking attacks.”
It will be interesting to see what history remembers most about April 7, 2026 — the postponed U.S. release of bombs over Iran or the carefully controlled release of the Claude Mythos Preview by Anthropic and its technical allies.
European regulators sidelined on Anthropic superhacking model
European regulators have expressed concern over being "sidelined" regarding Anthropic’s unreleased AI model, Claude Mythos, which possesses advanced "super-hacking" capabilities.
While the model has been shared with a select group of 12 cybersecurity firms and 40 other organizations for defensive stress-testing, many European oversight bodies have not been granted direct access.
Key Tensions with European Regulators
Lack of Direct Access: Germany’s national cybersecurity agency, BSI, and other EU cyber officials have noted they have not yet directly tested the tool, receiving only "meaningful insight" through dialogues with developers.
Jurisdictional Limits: Because the model has not been officially "placed on the market" in the EU, it does not yet trigger many of the binding rules under the EU AI Act.
Security Implications: Claudia Plattner, head of the BSI, warned that the model’s power has "profound implications for national and European security and sovereignty".
Concerns Over Precedent: Experts like Laura Caroli worry that this sets a precedent where European officials are "at the mercy" of private U.S. tech firms for security oversight.
Regulatory Response & Endorsements
Staged Rollout Endorsed: Despite the lack of direct oversight, the European Commission has publicly welcomed Anthropic’s decision to delay the general release of Mythos, given its potential for large-scale cyber risk.
Active Dialogue: The EU’s AI Office is reportedly in contact with Anthropic under the EU’s code of practice to ensure future compliance with European standards once the model eventually hits the market.



