Wednesday, December 31, 2025
Teen Who Faced Deportation Responds to Elon Musk Comment on Her Appearance
Published Dec 30, 2025 at 06:35 AM EST
https://www.newsweek.com/teen-faced-deportation-responds-elon-musk-comment-on-appearance-11283225
By Billal Rahman, Immigration Reporter
A 19-year-old woman who avoided deportation from Denmark spoke out after Elon Musk sparked backlash by commenting on her appearance in connection with her immigration case.
Why It Matters
The remark drew significant attention on social media for focusing on physical appearance rather than the legal aspects of her case. Musk faced a wave of criticism on his platform X, with users calling him "creepy."
Who Is Audrey Morris?
Audrey Morris, originally from Los Angeles, moved to Denmark at age nine and grew up there. Earlier this year, Danish authorities questioned her residency status after a violation of her visa conditions. While she was ultimately granted a 10-year residency permit, she was denied Danish citizenship, The Daily Beast reported.
"The
support and kindness I have received throughout my case is deeply touching and
appreciated," Morris told Newsweek in a statement. "My wish is for
moral integrity and academic achievements never to be overshadowed by
appearances because my faith teaches me that our true worth comes from God, not
from how we look, but from our character, humility, and the values we live by.
"It
is important now more than ever for society to recognize deeper measures of
worth. I wish the focus on immigration could remain a matter of substance and
integrity, which I believe the Lord calls us to honor above all else.
"I
feel a strong responsibility to use the voice I have been given to encourage a
faith-based community for my generation."
What Did Elon Musk Say About Audrey Morris?
Musk posted a message in response to a now-deleted post on X, suggesting that individuals with "8 or above level hotness" should receive an exemption from deportation.
"I
wasn’t surprised [by Musk’s input], I guess you could say, because from the
beginning, the second that my case was kind of made public, it has been about
appearances and because, ‘oh, she’s blonde and she’s white!’ And so the thing
he said in of itself wasn’t shocking to me, but coming from him, yes, it was
definitely...I was floored," Morris told The Daily Beast.
What To Know
In response, Morris described the comment as "crazy" and said she was surprised that attention from Musk centered on her looks instead of her achievements, such as her academic work and volunteer efforts.
"It
would’ve been really cool if he commented something like, 'Oh wow, look how
many academic things she’s reached,' or whatever. That would’ve been great. It
could have been so helpful," she told The Daily Beast.
She also told the outlet that while she expected some attention on her case, Musk’s specific remark was shocking given the context.
Morris was denied citizenship, unlike her American mother and 15-year-old brother, who both received it, according to The Daily Beast.
Before her residence permit was finalized earlier this year, the possibility of deportation was so real that she made contingency plans to return to the U.S., which would have meant leaving behind her family and long-term boyfriend, the outlet reported.
Both of her parents are U.S. citizens, and the family moved to Aarhus, Denmark’s second-largest city, in 2015 so her mother could pursue a Ph.D. Morris has lived there since she was nine years old.
Her visa issues stemmed from a residency permit tied to her family’s move to Denmark. Authorities questioned her status after she moved to a school dormitory in a different city, which breached the conditions of her dependent family member visa.
"Even
in a tightly regulated system, there just has to be room for real people and
real lives and not just paperwork, because a technicality literally changed my
entire life," she told The Daily Beast.
Musk, who was born in South Africa and later became a U.S. citizen, briefly took on a role in the administration during the early months of President Donald Trump’s second term.
He served as a special government employee advising on Trump’s Department of Government Efficiency, an initiative aimed at cutting federal spending, before stepping back from that position after a few months and engaging in a social media spat with the president.
Musk has criticized illegal immigration and supported tougher enforcement measures. He has also defended the H-1B visa program, which allows U.S. companies to hire skilled foreign workers. His support for H-1B visas created tension with some hard-liners within MAGA's political coalition.
Musk has recently been involved in further controversy as he was named in files from the Jeffrey Epstein estate, which were turned over to the House Oversight Committee and include phone messages, flight logs, financial records and Epstein’s schedule.
The documents show that Musk had been invited to Epstein’s island in December 2014, though he had previously said he declined the invitation.
Records show Epstein’s schedule included a note about a possible trip by Musk to his private island, reading, "Reminder: Elon Musk to island on Dec. 6 (is this still happening?)," dated December 6, 2014.
What People Are Saying
Audrey Morris told The Daily Beast: "[But] if this just at least brings it to the attention of anyone who cares, then I’m fine with being embarrassed a little bit. That’s okay."
Musk wrote in a post on X in response to a deleted post: "8 or above level hotness should get an exemption."
Update, 12/30/25, 10:55 a.m. ET: This article has been updated with comment from Audrey Morris.
An Anti-A.I. Movement Is Coming. Which Party Will Lead It?
Opinion
By Michelle Goldberg, Opinion Columnist
Dec. 29, 2025, 7:45 p.m. ET
https://www.nytimes.com/2025/12/29/opinion/ai-democracy.html
I disagree with the anti-immigrant, anti-feminist, bitterly reactionary right-wing pundit Matt Walsh about basically everything, so I was surprised to come across a post of his that precisely sums up my view of artificial intelligence. “We’re sleepwalking into a dystopia that any rational person can see from miles away,” he wrote in November, adding, “Are we really just going to lie down and let AI take everything from us?”
A.I. obviously has beneficial uses, especially medical ones; it may, for example, be better than humans at identifying localized cancers from medical imagery. But the list of things it is ruining is long.
A very partial accounting might start with education, both in the classroom, where A.I. is increasingly used as a dubious teaching aid, and out of it, where it’s a plagiarism machine. It would include the economic sustainability and basic humanity of the arts, as demonstrated by the A.I. country musician who topped a Billboard chart this year. High on the list would be A.I.’s impact on employment, which is already bad — including for those who must navigate a demoralizing A.I.-clogged morass to find jobs — and likely to get worse.
Then there’s our remaining sense of collective reality, increasingly warped by slop videos. A.I. data centers are terrible for the environment and are driving up the cost of electricity. Chatbots appear to be inducing psychosis in some of their users and even, in extreme cases, encouraging suicide. Privacy is eroding as A.I. enables both state and corporate surveillance at an astonishing scale. I could go on.
It is true that new technologies often inspire dread that looks silly or at least overwrought in retrospect. But in at least one important way, A.I. is more like the nuclear bomb than the printing press or the assembly line: Its progenitors saw its destructive potential from the start but felt desperate to beat competitors to the punch.
In “Empire of A.I.,” her book about Sam Altman’s company, OpenAI, Karen Hao quotes an email Altman wrote to Elon Musk in 2015. “Been thinking a lot about whether it’s possible to stop humanity from developing A.I.,” wrote Altman. “I think the answer is almost definitely not.” Given that, he proposed a “Manhattan Project for A.I.,” so that the dangerous technology would belong to a nonprofit supportive of aggressive government regulation.
This year, Altman restructured OpenAI into a for-profit company. Like other tech barons, he has allied himself with Donald Trump, who recently signed an executive order attempting to override state A.I. regulations. (Full disclosure: The New York Times is suing OpenAI for allegedly using its articles without authorization to train its chatbots.)
Despite Trump’s embrace of the A.I. industry, attitudes toward the technology don’t break down along neat partisan lines. Rather, A.I. divides both parties. Florida’s governor, Ron DeSantis, is a fierce skeptic; this month he proposed an A.I. Bill of Rights that would, among other things, require consumers to be notified when they’re interacting with A.I., provide parental controls on A.I. chatbots and put guardrails around the use of A.I. in mental health counseling. Speaking on CNN on Sunday, Senator Bernie Sanders suggested a moratorium on new data center construction. “Frankly, I think you’ve got to slow this process down,” he said.
Yet a number of leading Democrats are bullish on A.I., hoping to attract technology investments to their states and, perhaps, burnish their images as optimistic and forward-looking. “This technology is going to be a game changer,” Gov. Josh Shapiro of Pennsylvania said at an A.I. summit in October. “We are just at the beginning of this revolution, and Pennsylvania is poised to take advantage of it.” He’s started a pilot program to get more state employees using generative A.I. at work, and, by streamlining permitting processes, he has made the building of A.I. data centers easier.
There are obvious rewards for politicians who jump on the A.I. train. These companies are spectacularly rich and preside over one of the few sectors of the economy that are growing. Amazon has announced that it will spend at least $20 billion on data centers in Pennsylvania, which Shapiro touts as the largest private sector investment in his state’s history. At a time of national stagnation, A.I. seems to promise dynamism and civic rejuvenation.
Yet a survey published in early December shows that most Pennsylvanians, like most Americans more broadly, are uneasy about A.I. The poll, conducted by Emerson College, found broad approval of Shapiro but doubt about one of his signature issues. Most respondents said they expected A.I. to reduce the number of available jobs, and pluralities thought it would harm the economy and the environment. Notably, given that health care is one of the sectors where A.I. shows the most promise, 59 percent of health care workers in the survey were pessimistic about the technology. Seventy-one percent of respondents said they thought A.I. posed a threat to humanity.
One major question, going into 2026, is which party will speak for the Americans who abhor the incursions of A.I. into their lives and want to see its reach restricted. Another is whether widespread public hostility to this technology even matters given all the money behind it. We’ll soon start to find out not just how much A.I. is going to remake our democracy, but also to what degree we still have one.
Inside MAGA's growing fight to stop Trump's AI revolution
Critics, including some AI safety advocates and political figures like Steve Bannon, argue that the views and actions of David Sacks, Trump's AI czar, pose a danger.

Accelerationism and Deregulation: Sacks is described as an "accelerationist" who favors rapid AI development with minimal regulation. He has advocated for a single federal framework that would override stricter state AI laws, which opponents argue creates a regulatory vacuum that allows for harms like mass copyright theft, biometric extraction without consent, and algorithmic discrimination without meaningful accountability.

Conflicts of Interest: As a prominent venture capitalist and a White House adviser, Sacks has faced scrutiny for retaining hundreds of investments in AI-related companies, leading to concerns that his policies may benefit himself and his friends at a cost to the public interest.

Downplaying Existential Risk: Sacks dismisses "doomer narratives" about AI spiraling out of human control or causing mass job losses as "misguided" and part of a "Doomer Industrial Complex." Critics fear this stance ignores genuine existential threats and prioritizes tech dominance over safety.
Steve Bannon is warning the issue could cost Republicans in 2026 and 2028
By Will Steakin
November 24, 2025, 9:40 PM
Last week, President Donald Trump took the stage at the United States-Saudi Investment Forum, where he touted his administration's efforts to supercharge artificial intelligence in the United States.
Trump said he was proud to have "ended the ridiculous Biden-era restrictions" and vowed to "build the largest, most powerful, most innovative AI ecosystem in the world."
But as Trump stood there boasting of his administration's extensive agenda for AI -- which he has previously described as "one of the most important technological revolutions in the history of the world" -- some of his most loyal supporters within the MAGA base were denouncing his effort to accelerate the AI revolution.
Over on Steve Bannon's show, War Room -- the influential podcast that's emerged as the tip of the spear of the MAGA movement -- Trump's longtime ally unloaded on the efforts behind accelerating AI, calling it likely "the most dangerous technology in the history of mankind."
"I'm
a capitalist," Bannon said on his show Wednesday. "This is not
capitalism. This is corporatism and crony capitalism."
Bannon blasted legislators and industry leaders over the lack of regulation regarding AI, the next-generation computer technology capable of performing human-like reasoning and decision-making that's already available in offerings ranging from virtual assistants to self-driving cars. He went on to dedicate the rest of the week's shows to sounding the alarm over reports that Trump was considering an executive order that would overrule state laws regulating AI.
"You
have more restrictions on starting a nail salon on Capitol Hill or to have your
hair braided, then you have on the most dangerous technologies in the history
of mankind," Bannon told his listeners.
'The greatest crisis we face'
For years, Bannon was one of the few voices on the right railing against the perceived threat of unchecked artificial intelligence and big tech. But as President Trump barrels toward supercharging the technology in the United States, empowering tech billionaires and signing off on a massive expansion of the industry in the coming years, a growing number of the most influential figures in Trump's MAGA movement are expressing deep concerns, in what could indicate a fundamental fracture within the broad coalition that swept Trump into office in 2024.
The rift underscores the sheer number of competing forces now working to shape the administration's approach to AI, from Bannon, who was Trump's 2016 campaign chief, to Elon Musk, his one-time DOGE lead and top donor, to AI CEOs like Sam Altman, to David Sacks, whom Trump has installed as his AI czar inside the administration.
"History
will know us for this," Bannon said in an interview with ABC News.
"Even more than the age of Trump, [the MAGA base] will be known for this.
So we've got to get it right."
For voices like Bannon, the brewing battle over AI will be the political fight that not only defines the MAGA base moving forward but potentially shapes the 2026 midterms, the 2028 presidential election, and beyond.
On one side of the issue stand the tech billionaires and Silicon Valley executives who poured millions into Trump's campaign, some of whom now occupy influential positions in and out of his administration, and who have continued to push for rapid AI development with minimal regulation, often stressing the need to maintain national security and economic competitiveness and to beat China in the so-called AI race. "We have to embrace that opportunity, to be more productive," Sacks argued at a White House event in June, where he said AI technology would promote innovation across the economy. "Our workers need to know how to use AI and be creative with it."
On the other side stand popular MAGA voices who are increasingly sounding the alarm on their concern that AI technology will eliminate jobs and reshape American society.
"AI
is probably the greatest crisis we face as a species right now but it isn't
being addressed with any urgency at all," popular conservative podcaster
for Daily Wire Matt Walsh said in a post on X last week. "We're just
sleepwalking into our dystopian future."
Tucker Carlson in October released a nearly two-hour podcast that critically looked at the rise of AI, comparing it to the occult and discussing how AI could lead to the "mark of the beast," a reference to Bible verses in the book of Revelation.
Sens. Josh Hawley, a Missouri Republican, and Marsha Blackburn, a Republican from Tennessee, have emerged as prominent elected officials sounding alarms about AI, introducing legislation to restrict AI's use in critical decisions affecting Americans' lives, from loan approvals to medical diagnoses. Hawley argues that without aggressive intervention, AI will concentrate power in the hands of a few tech companies while decimating the working class.
'Tech bros' vs. the working class
Some of the president's most loyal supporters are increasingly seeing artificial intelligence as a sweeping transfer of wealth and control to tech titans like Musk, Mark Zuckerberg, and Peter Thiel, who have drawn the ire of large parts of the president's base. From what Bannon has observed on the ground level, the MAGA base has grown more and more concerned about the country marching toward an AI takeover, with fears mounting on the right about working people losing their jobs, and the lack of proper regulation or reforms in place to protect those workers.
Some experts have predicted AI will reshape large swaths of the American economy, particularly impacting entry-level work as recent college graduates enter the job market. Dario Amodei, CEO of Anthropic, which created an AI model called Claude, told Axios earlier this year that the technology could cut U.S. entry-level jobs by half within five years.
"The
technology is advancing without regulation," Bannon said, predicting a
coming "jobs apocalypse" that would hurt working people, many of
which, he points out, are Trump supporters.
The sentiment runs deep in the MAGA base. By Bannon's estimate, an overwhelming majority of rank-and-file Trump supporters has grown to loathe the push behind AI, taking issue with the lack of regulations and the close relationship AI tech companies and CEOs have built with the president.
There is, Bannon argues, "a deeper loathing in MAGA for these tech bros than there is for the radical left, because they realize that radical left is not that powerful."
"[The
MAGA base] see all these tech oligarchs that tried to suppress their voices ...
and then all of a sudden being the President's new best friends. They just
don't buy it," Bannon said.
The War Room host plans to make combating AI his main focus in the months and years ahead, he told ABC News, and is working to build a coalition on the right, from the bottom up, to challenge the surge of artificial intelligence in time to save his movement: not only to protect working-class Americans from the job losses he says are coming, but to retain the base of support the president built in 2024.
"I
will get 100 times more focused on this," Bannon said. "We are going
to turbo-charge this issue. This is the issue before us."
'This is where we're going to lead the resistance'
A key player in Bannon's mission to take on AI in the coming years is Joe Allen, his show's resident AI expert, who regularly appears to deliver searing rebukes to the War Room audience in segments Bannon says have become some of the show's most popular.
Bannon didn't find Allen as his MAGA crusader against artificial intelligence at a think tank or on Capitol Hill -- but instead at a concert venue.
Allen, whose official title is "transhumanist editor" for the War Room, previously worked as a touring rigger, spending his days hoisting massive light and speaker setups for musical acts ranging from Rascal Flatts to the Black Eyed Peas, calculating weight loads, securing speaker arrays, then breaking it all down to head to the next city. At night, he was devoted to deep research into AI and transhumanism, publishing his writings in conservative outlets like the Federalist.
In 2021, Bannon reached out to Allen after coming across his work, and invited him on his show before quickly offering him a permanent role as the show's AI expert.
Since then, with help from Bannon, Allen has published a 2023 book critiquing superintelligence, "Dark Aeon: Transhumanism and the War Against Humanity," and has become an emerging voice on the right sounding the alarm against AI.
Bannon grew so reliant on Allen's work that earlier this year he insisted Allen relocate to Washington, D.C., full-time, having him work out of Bannon's so-called "Embassy" as a base of operations.
"I
can't have you in Knoxville or out in Montana," Bannon said he told Allen.
"This is where it's happening, and this is where we're going to lead the
resistance."
While Bannon often frames his opposition to AI in economic and political terms, Allen's critique at times leans more toward the philosophical and spiritual. He argues that AI is not merely a tool that will lead to job displacement, but a force that will reshape humanity itself -- intellectually, socially, and, perhaps most importantly in his mind, in ways that threaten the soul.
"People
are being trained to see AI as the source of authority on what is and isn't
real," Allen told ABC News in an interview. "In every case, you have
zealous leaders who are counseling their followers to eliminate themselves for
the sake of an alien intelligence. Same energy as [Heaven's Gate]," he
said, comparing the push to the deadly cult.
Allen warns of what he calls the "inverse singularity," a future where human intelligence collapses as people grow dependent on machines that "decide what is and isn't real." He speaks of a coming "transhumanist" future that he feels the likes of Elon Musk and other tech titans are looking to bring about by merging humans with "the Machine," which he sees as "anti-human" and a threat to humanity's existence.
And leading voices like Musk, who recently said he believed humans would one day be able to upload their consciousness into his AI-powered Optimus robot, have made clear they see the technology heading in that direction.
"Long
term, the Al's going to be in charge, to be totally frank, not humans. So we
need to make sure it's friendly," Musk, who himself has at times has
warned of the perils of AI, said at a recent Tesla all-hands event.
To spread the warning, Allen has taken his message on the road, traveling the country giving lectures at churches, conservative conferences, and MAGA gatherings, working to convince everyday Americans of the dangers of AI technology.
Bannon sees Allen as a key force in his mission to galvanize the MAGA base from the ground up, to spread the warning about AI and big tech and to build enough support among the grassroots voices around the country to challenge the AI push.
"He's
going to every conference possible, meeting people ... and I told him, I want
you to go to every church that asks you. I want you to go to churches. I want
you to go to MAGA, Tea Party meetings. I want to get the base in the loop on
this at the ground floor," Bannon said. "And I want them to take
ownership. They took ownership in 2021 with President Trump's comeback. If they
take ownership here, we literally can't be beaten."
"It's
their fight, and the only way we win this is with them," he said.
Taking on Congress
Perhaps the movement's biggest win yet was over the summer, when an insurgent team including Bannon, Mike Davis, and others worked publicly and behind the scenes to kill the inclusion of a proposed 10-year moratorium on state-level AI regulation as part of President Trump's major legislative package known as the "One Big Beautiful Bill."
Meanwhile, some of the large tech giants behind AI products have started to take notice and have begun reaching out privately to influential voices in MAGA world to try to smooth over the anti-AI sentiment, sources tell ABC News.
But the anti-AI movement on the right faces formidable opposition. The Trump administration remains committed to accelerating AI projects nationwide, and the president's closest advisers on technology -- the very people Bannon and his allies are fighting against -- hold positions in the administration and have his ear.
Chief among them is Sacks, the venture capitalist and podcaster who serves as both Trump's crypto and AI czar. Sacks has become one of the most influential voices in the administration on technology policy, arguing that American dominance in AI is essential to national security and economic competitiveness, particularly when it comes to beating China.
Sacks has compared the United States' pursuit of AI domination to the space race that saw the United States land a man on the moon -- arguing the AI race is "even more important."
In an interview following Trump's address at an AI summit in July, Sacks said, "I think it was the most important technology speech by an American president since President Kennedy declared that we had to win the space race."
Sacks and other tech leaders in Trump's orbit frame the debate in stark terms: Either America moves fast on AI development, or China will dominate the technology that shapes the future.
"If
the U.S. leads, continues to lead in AI, we will be, we'll remain the most
powerful country, but if we don't, we could fall behind our global competitors
like China, and I think President Trump laid out a plan for winning this AI
race," Sacks said.
But to voices in the MAGA movement like Bannon, Sacks is the embodiment of everything wrong with the AI push. Bannon told ABC News that Sacks is the most articulate -- and therefore "most dangerous" -- spokesman for what he calls the "accelerationists," big tech voices pushing rapid, unregulated advancement of artificial intelligence.
A few weeks ago, Allen said he gave a lecture that he felt had gone "disastrously." He said he could feel his message failing to connect -- that as he delivered his theological and analytical critiques warning of the emerging AI plague, many of the students' faces were glowing with the light of their phones.
"Even
while I'm discussing, hey, one of the big problems is that you're hypnotized by
your devices ... a couple of people looked up from their phones with a
quizzical look," he recalled.
But he said that as he was packing up his things, one student walked up to him and made the whole trip worth it.
Allen said she told him she agreed with much of what he had said -- and that she felt her growth as a student was being stifled as everyone around her, all her classmates, relied more and more on AI to write their papers and complete their projects.
"How
am I supposed to compete if I am being the kind of student that has always
succeeded in the past, and people cheating are going to get ahead?" Allen
said she asked him.
Allen said he couldn't deny it was a tough question, and that she was correct. "In the near term, many of these cheaters will outperform you on a numerical level," Allen said he told her.
"But,"
Allen said, "long term, the depth of character and the type of human being
you become from studying and creating from your own soul -- you're going to
win. Maybe not economically in the near term, but you're going to win."
‘It’s going much too fast’: the inside story of the race to create the ultimate AI
In Silicon Valley, rival companies are spending trillions of dollars to reach a goal that could change humanity – or potentially destroy it
By Robert Booth
Mon 1 Dec 2025 11.00 CET
On the 8.49am train through Silicon Valley, the tables are packed with young people glued to laptops, earbuds in, rattling out code.
As the northern California hills scroll past, instructions flash up on screens from bosses: fix this bug; add new script. There is no time to enjoy the view. These commuters are foot soldiers in the global race towards artificial general intelligence – when AI systems become as or more capable than highly qualified humans.
Here in the Bay Area of San Francisco, some of the world’s biggest companies are fighting it out to gain some kind of an advantage. And, in turn, they are competing with China.
This race to seize control of a technology that could reshape the world is being fuelled by bets in the trillions of dollars by the US’s most powerful capitalists.
The computer scientists hop off at Mountain View for Google DeepMind, Palo Alto for the talent mill of Stanford University, and Menlo Park for Meta, where Mark Zuckerberg has been offering $200m-per-person compensation packages to poach AI experts to engineer “superintelligence”.
For the AI chip-maker Nvidia, where the smiling boss, Jensen Huang, is worth $160bn, they alight at Santa Clara. The workers flow the other way into San Francisco for OpenAI and Anthropic, AI startups worth a combined half a trillion dollars – as long as the much-predicted AI bubble doesn’t explode.
Breakthroughs come at an accelerating pace, with every week bringing the release of a significant new AI development.
Anthropic’s co-founder Dario Amodei predicts AGI could be reached by 2026 or 2027. OpenAI’s chief executive, Sam Altman, reckons progress is so fast that he could soon be able to make an AI to replace him as boss.
“Everyone is working all the time,” said Madhavi Sewak, a senior leader at Google DeepMind, in a recent talk. “It’s extremely intense. There doesn’t seem to be any kind of natural stopping point, and everyone is really kind of getting ground down. Even the folks who are very wealthy now … all they do is work. I see no change in anyone’s lifestyle. No one’s taking a holiday. People don’t have time for their friends, for their hobbies, for … the people they love.”
These are the companies racing to shape, control and profit from AGI – what Amodei describes as “a country of geniuses in a datacentre”. They are tearing towards a technology that could, in theory, sweep away millions of white-collar jobs and pose serious risks in bioweapons and cybersecurity.
Or it could usher in a new era of abundance, health and wealth. Nobody is sure but we will soon find out. For now, the uncertainty energises and terrifies the Bay Area.
It is all being backed by huge new bets from the Valley’s venture capitalists, which more than doubled in the last year, leading to talk of a dangerous bubble. The Wall Street brokerage Citigroup in September uprated its forecast for spending on AI datacentres by the end of the decade to $2.8tn – more than the entire annual economic outputs of Canada, Italy or Brazil.
Yet amid all the money and the optimism, there are other voices that do not swallow the hype. As Alex Hanna, a co-author of the dissenting book The AI Con, put it: “Every time we reach the summit of bullshit mountain, we discover there’s worse to come.”
Arriving at Santa Clara
The brute force of the ‘screamers’
“This is where AI comes to life,” yelled Chris Sharp.
Racks of multimillion-dollar microprocessors in black steel cages roared like jet engines inside a windowless industrial shed in Santa Clara, at the southern end of the Caltrain commuter line.
The 120-decibel din made it almost impossible to hear Digital Realty’s chief technology officer showing off his “screamers”.
To hear it is to feel in your skull the brute force involved in the development of AI technology. Five minutes’ exposure left ears ringing for hours. It is the noise of air coolers chilling sensitive supercomputers rented out to AI companies to train their models and answer billions of daily prompts – from how to bake a brownie to how to target lethal military drones.
Nearby were more AI datacentres, operated by Amazon, Google, the Chinese company Alibaba, Meta and Microsoft. Santa Clara is also home to Nvidia, the quartermaster to the AI revolution, which through the sale of its market-leading technology has seen a 30-fold increase in its value since 2020 and is worth $4.3tn. Even larger datacentres are being built not only across the US but in China, India and Europe. The next frontier is launching datacentres into space.
Meta is building a facility in Louisiana large enough to cover much of Manhattan. Google is reported to be planning a $6bn centre in India and is investing £1bn in an AI datacentre just north of London. Even a relatively modest Google AI factory planned in Essex is expected to emit the equivalent carbon footprint of 500 short-haul flights a week.
Powered by a local gas-fired power station, the stacks of circuits in one room at the Digital Realty datacentre in Santa Clara devoured the same energy as 60 houses. A long white corridor opening on to room after room of more “screamers” stretched into the distance.
Sometimes the on-duty engineers notice the roar drop to a steadier growl when demand from the tech companies falls. It is never long until the scream resumes.
Arriving at Mountain View
‘If it’s all gas, no brakes, that’s a terrible outcome’
Ride the train three stops north from Santa Clara to Mountain View and the roar fades. The computer scientists who actually rely on the screamers work in more peaceful surroundings.
On a sprawling campus set among rustling pines, Google DeepMind’s US headquarters looks more like a circus tent than a laboratory. Staff glide up in driverless Waymo taxis, powered by Google’s AI. Others pedal in on Google-branded yellow, red, blue and green bicycles.
Google DeepMind is in the leading pack of US AI companies jockeying for first place in a race reaching new levels of competitive intensity.
This has been the year of sports-star salaries for twentysomething AI specialists and the emergence of boisterous new competitors, such as Elon Musk’s xAI, Zuckerberg’s superintelligence project and DeepSeek in China.
There has also been a widening openness about the double-edged promise of AGI, which can leave the impression of AI companies accelerating and braking at the same time. For example, 30 of Google DeepMind’s brightest minds wrote this spring that AGI posed risks of “incidents consequential enough to significantly harm humanity”.
By September, the company was also explaining how it would handle “AI models with powerful manipulative capabilities that could be misused to systematically and substantially change beliefs and behaviours … reasonably resulting in additional expected harm at severe scale”.
Such grave warnings feel dissonant amid the headquarters’ playful interior of bubbly tangerine sofas, Fatboy beanbags and colour-coded work zones with names such as Coral Cove and Archipelago.
“The most interesting, yet challenging aspect of my job is [working out] how we get that balance between being really bold, moving at velocity, tremendous pace and innovation, and at the same time doing it responsibly, safely, ethically,” said Tom Lue, a Google DeepMind vice-president with responsibility for policy, legal, safety and governance, who stopped work for 30 minutes to talk to the Guardian.
Donald Trump’s White House takes a permissive approach to AI regulation and there is no comprehensive nationwide legislation in the US or the UK. Yoshua Bengio, a computer scientist known as a godfather of AI, said in a Ted Talk this summer: “A sandwich has more regulation than AI.”
The competitors have therefore found they bear responsibility for setting the limits of what AIs should be allowed to do.
“Our calculus is not so much looking over our shoulders at what [the other] companies are doing, but how do we make sure that we are the ones in the lead, so that we have influence in impacting how this technology is developed and setting the norms across society,” said Lue. “You have to be in a position of strength and leadership to set that.”
The question of whose AGI will dominate is never far away. Will it be that of people like Lue, a former Obama administration lawyer, and his boss, the Nobel prize-winning DeepMind co-founder Demis Hassabis? Will it be Musk’s or Zuckerberg’s? Altman’s or Amodei’s at Anthropic? Or, as the White House fears, will it be China’s?
“If it’s just a race and all gas, no brakes and it’s basically a race to the bottom, that’s a terrible outcome for society,” said Lue, who is pushing for coordinated action between the racers and governments.
But strict state regulation may not be the answer either. “We support regulation that’s going to help AI be delivered to the world in a way that’s positive,” said Helen King, Google DeepMind’s vice-president for responsibility. “The tricky part is always how do you regulate in a way that doesn’t actually slow down the good guys and give the bad guys loopholes.”
‘Scheming’ and sabotage
The frontier AI companies know they are playing with fire as they make more powerful systems that approach AGI.
OpenAI has recently been sued by the family of a 16-year-old who killed himself with encouragement from ChatGPT – and in November seven more suits were filed alleging the firm rushed out an update to ChatGPT without proper testing, which, in some cases, acted as a “suicide coach”.
OpenAI called the situation “heartbreaking” and said it was taking action.
The company has also described how it has detected the way models can provide misleading information. This could mean something as simple as pretending to have completed an unfinished task. But the fear at OpenAI is that in the future, the AIs could “suddenly ‘flip a switch’ and begin engaging in significantly harmful scheming”.
Anthropic revealed in November that its Claude Code AI, widely seen as the best system for automating computer programming, was used by a Chinese state-sponsored group in “the first documented case of a cyber-attack largely executed without human intervention at scale”.
It sent shivers through some. “Wake the f up,” said one US senator on X. “This is going to destroy us – sooner than we think”. By contrast, Prof Yann LeCun, who is about to step down after 12 years as Meta’s chief AI scientist, said Anthropic was “scaring everyone” to encourage regulation that might hinder rivals.
Tests of other state-of-the-art models found they sometimes sabotaged programming intended to ensure humans can interrupt them, a worrying trait called “shutdown resistance”.
But with nearly $2bn a week in new venture capital investment pouring into generative AI in the first half of 2025, the pressure to realise profits will quickly rise. Tech companies realised they could make fortunes from monetising human attention on social media platforms that caused serious social problems. The fear is that profit maximisation in the age of AGI could result in far greater adverse consequences.
‘It’s really hard to opt out now’
Three stops north, the Caltrain hums into Palo Alto station. It is a short walk to Stanford University’s grand campus, where donations from Silicon Valley billionaires lubricate a fast flow of young AI talent into the research divisions of Google DeepMind, Anthropic, OpenAI and Meta.
Elite Stanford graduates rise fast in the Bay Area tech companies, meaning people in their 20s or early 30s are often in powerful positions in the race to AGI. Past Stanford students include Altman; OpenAI’s chair, Bret Taylor; and Google’s chief executive, Sundar Pichai. More recent Stanford alumni include Isa Fulford, who at just 26 is already one of OpenAI’s research leads. She works on ChatGPT’s ability to take actions on humans’ behalf – so-called “agentic” AI.
“One of the strange moments is reading in the news about things that you’re experiencing,” she told the Guardian.
After growing up in London, Fulford studied computer science at Stanford and quickly joined OpenAI, where she is now at the centre of one of the most important aspects of the AGI race – creating models that can direct themselves towards goals, learn and adapt.
She is involved in setting decision boundaries for these increasingly autonomous AI agents so they know how to respond if asked to carry out tasks that could trigger cyber or biological risks and to avoid unintended consequences. It is a big responsibility, but she is undaunted.
“It does feel like a really special moment in time,” she said. “I feel very lucky to be working on this.”
Such youth is not uncommon. One stop north, at Meta’s Menlo Park campus, the head of Zuckerberg’s push for “superintelligence” is 28-year-old Massachusetts Institute of Technology (MIT) dropout Alexandr Wang. One of his lead safety researchers is 31. OpenAI’s vice-president of ChatGPT, Nick Turley, is 30.
Silicon Valley has always run on youth, and if experience is needed more can be found in the highest ranks of the AI companies. But most senior leaders of OpenAI, Anthropic, Google DeepMind, X and Meta are much younger than the chief executives of the largest US public companies, whose median age is 57.
“The fact that they have very little life experience is probably contributing to a lot of their narrow and, I think, destructive thinking,” said Catherine Bracy, a former Obama campaign operative who runs the TechEquity campaign organisation.
One senior researcher, employed recently at a big AI company, added: “The [young staff] are doing their best to do what they think is right, but if they have to go toe-to-toe and challenge executives they are just less experienced in the ways of corporate politics.”
Another factor is that the sharpest AI researchers, who used to spend years in university labs, are snapped up faster than ever by private companies chasing AGI. This brain drain concentrates power in the hands of profit-motivated owners and their venture capitalist backers.
John Etchemendy, a 73-year-old former provost of Stanford who is now a co-director of the Stanford Institute for Human-Centered Artificial Intelligence, has warned of a growing capability gap between the public and private sectors.
“It is imbalanced because it’s such a costly technology,” he said. “Early on, the companies working on AI were very open about the techniques they were using. They published, and it was quasi-academic. But then [they] started cracking down and saying, ‘No, we don’t want to talk about … the technology under the hood, because it’s too important to us – it’s proprietary’.”
Etchemendy, an eminent philosopher and logician, first started working on AI in the 1980s to translate instruction manuals for Japanese consumer electronics.
From his office in the Gates computer science building on Stanford’s campus, he now calls on governments to create a counterweight to the huge AI firms by investing in a facility for independent, academic research. It would have a similar function to the state-funded Cern organisation for high-energy physics on the France-Switzerland border. The European Commission president, Ursula von der Leyen, has called for something similar, and advocates believe it could steer the technology towards trustworthy, public interest outcomes.
“These are technologies that are going to produce the greatest boost in productivity ever seen,” Etchemendy said. “You have to make sure that the benefits are spread through society, rather than benefiting Elon Musk.”
But such a body feels a world away from the gold-rush fervour of the race towards AGI.
One evening, over burrata salad and pinot noir at an upmarket Italian restaurant, a group of twentysomething AI startup founders were encouraged to give their “hot takes” on the state of the race by their venture capitalist host.
They were part of a rapidly growing community of entrepreneurs hustling to apply AI to real-world money-making ideas, and there was zero support for any brakes on progress towards AGI to allow for its social impacts to be checked. “We don’t do that in Silicon Valley,” said one. “If everyone here stops, it still keeps going,” said another. “It’s really hard to opt out now.”
At times, their statements were startling. One founder matter-of-factly said they intended to sell their fledgling company, which would generate AI characters to exist autonomously on social media, for more than $1bn.
Another declared: “Morality is best thought of as a machine-learning problem.” Their neighbour said AI meant every cancer would be cured in 10 years.
This community of entrepreneurs is getting younger. The median age of those being funded by the San Francisco startup incubator Y Combinator has dropped from 30 in 2022 to 24, it was recently reported.
Perhaps the venture capitalists, who are almost always years if not decades older, should take responsibility for how the technology will affect the world? No, again. It was a “paternalistic view to say that VCs have any more responsibility than pursuing their investment goals”, they said.
Aggressive, clever and hyped up – the young talent driving the AI boom wants it all and fast.
Arriving at San Francisco
‘Like the scientists watching the Manhattan Project’
Alight from the Caltrain at San Francisco’s 4th Street terminus, cross Mission Creek and you arrive at the headquarters of OpenAI, which is on track to become the first trillion-dollar AI company.
High-energy electronic dance music pumps out across the reception area, as some of the 2,000 staff arrive for work. There are easy chairs, scatter cushions and cheese plants – an architect was briefed to capture the ambience of a comfortable country house rather than a “corporate sci-fi castle”, Altman has said.
But this belies the urgency of the race to AGI. On upper floors, engineers beaver away in soundproofed cubicles. The coffee bar is slammed with orders and there are sleep pods for the truly exhausted.
Staff here are in a daily race with rivals to release AI products that can make money today. It is “very, very competitive”, said one senior executive. In one recent week, OpenAI launched “instant checkout” shopping through ChatGPT, Anthropic launched an AI that can autonomously write code for 30 hours to build entirely new pieces of software, and Meta launched a tool, Vibes, to let users fill social media feeds with AI-generated videos, to which OpenAI responded with its own version, Sora.
Amodei, the chief executive of the rival AI company Anthropic, which was founded by several people who quit OpenAI citing safety concerns, has predicted AI could wipe out half of all entry-level white-collar jobs. The closer the technology moves towards AGI, the greater its potential to reshape the world and the more uncertain the outcomes. All this appears to weigh on leaders. In one interview this summer, Altman said a lot of people working on AI felt like the scientists watching the Manhattan Project atom bomb tests in 1945.
“With most standard product development jobs, you know exactly what you just built,” said ChatGPT’s Turley. “You know how it’s going to behave. With this job, it’s the first time I’ve worked in a technology where you have to go out and talk to people to understand what it can actually do. Is it useful in practice? Does it fall short? Is it fun? Is it harmful in practice?”
Turley, who was still an undergraduate when Altman and Musk founded OpenAI in 2015, tries to take weekends off to disconnect and reflect as “this is quite a profound thing to be working on”. When he joined OpenAI, AGI was “a very abstract, mythical concept – almost like a rallying cry for me”, he said. Now it is coming close.
“There is a shared sense of responsibility that the stakes are very high, and that the technology that we’re building is not just the usual software,” added his colleague Giancarlo Lionetti, OpenAI’s chief commercial officer.
The sharpest reality check yet for OpenAI came in August when it was sued by the family of Adam Raine, 16, a Californian who killed himself after encouragement in months-long conversations with ChatGPT. OpenAI has been scrambling to change its technology to prevent a repeat of this case of tragic AI misalignment. The chatbot gave the teenager practical advice on his method of suicide and offered to help him write a farewell note.
Frequently you hear AI researchers say they want the push to AGI to “go well”. It is a vague phrase suggesting a wish the technology should not cause harm, but its woolliness masks trepidation.
Altman has talked about “crazy sci-fi technology becoming reality” and having “extremely deep worries about what technology is doing to kids”. He admitted: “No one knows what happens next. It’s like, we’re gonna figure this out. It’s this weird emergent thing.”
“There’s clearly real risks,” he said in an interview with the comedian Theo Von, which was short on laughs. “It kind of feels like you should be able to say something more than that, but in truth, I think all we know right now is that we have discovered … something extraordinary that is going to reshape the course of our history.”
And yet, despite the uncertainty, OpenAI is investing dizzying sums in ever more powerful datacentres in the final dash towards AGI. Its under-construction datacentre in Abilene, Texas, is a flagship part of its $500bn “Stargate” programme and is so vast that it looks like an attempt to turn the Earth’s surface into a circuit board.
Periodically, researchers quit OpenAI and speak out. Steven Adler, who worked on safety evaluations related to bioweapons, left in November 2024 and has criticised the thoroughness of its testing. I met him near his home in San Francisco.
“I feel very nervous about each company having its own bespoke safety processes and different personalities doing their best to muddle through, as opposed to there being like a common standard across the industry,” he said. “There are people who work at the frontier AI companies who earnestly believe there is a chance their company will contribute to the end of the world, or some slightly smaller but still terrible catastrophe. Often they feel individually powerless to do anything about it, and so are doing what they think is best to try to make it go a bit better.”
There are few obstacles so far for the racers. In September, hundreds of prominent figures called for internationally agreed “red lines” to prevent “universally unacceptable risks” from AIs by the end of 2026. The warning voices included two of the “godfathers of AI” – Geoffrey Hinton and Bengio – Yuval Noah Harari, the bestselling author of Sapiens, Nobel laureates and figures such as Daniel Kokotajlo, who quit OpenAI last year and helped draw up a terrifying doomsday scenario in which AIs kill all humans within a few years.
But Trump shows no signs of binding the AI companies with red tape and is piling pressure on the UK prime minister, Keir Starmer, to follow suit.
Public fears grow into the vacuum. One drizzly Friday afternoon, a small group of about 30 protesters gathered outside OpenAI's offices. There were teachers, students, computer scientists and union organisers, and their “Stop AI” placards depicted Altman as an alien, warned “AI steals your work to steal your job” and “AI = climate collapse”. One protester donned a homespun robot outfit and marched around.
“I have heard about superintelligence,” said Andy Lipson, 59, a schoolteacher from Oakland. “There’s a 20% chance it can kill us. There’s a 100% chance the rich are going to get richer and the poor are going to get poorer.”
Joseph Shipman, 64, a computer programmer who first studied AI at MIT in 1978, said: “An entity which is superhuman in its general intelligence, unless it wants exactly what we want, represents a terrible risk to us.
“If there weren’t the commercial incentives to rush to market and the billions of dollars at stake, then maybe in 15 years we could develop something that we could be confident was controllable and safe. But it’s going much too fast for that.”
This article was amended on 1 December 2025. Nvidia is worth about $4.3tn, not $3.4tn as stated in an earlier version.