The future of AI
‘It’s going much too fast’: the inside story of the race to create the ultimate AI
In Silicon Valley, rival companies are spending trillions of dollars to reach a goal that could change humanity – or potentially destroy it
Robert Booth
Mon 1 Dec 2025 11.00 CET
On the
8.49am train through Silicon Valley, the tables are packed with young people
glued to laptops, earbuds in, rattling out code.
As the
northern California hills scroll past, instructions flash up on screens from
bosses: fix this bug; add new script. There is no time to enjoy the view. These
commuters are foot soldiers in the global race towards artificial general
intelligence – when AI systems become as or more capable than highly qualified
humans.
Here in
the Bay Area of San Francisco, some of the world’s biggest companies are
fighting it out to gain some kind of an advantage. And, in turn, they are
competing with China.
This race
to seize control of a technology that could reshape the world is being fuelled
by bets in the trillions of dollars by the US’s most powerful capitalists.
The
computer scientists hop off at Mountain View for Google DeepMind, Palo Alto for
the talent mill of Stanford University, and Menlo Park for Meta, where Mark
Zuckerberg has been offering $200m-per-person compensation packages to poach AI
experts to engineer “superintelligence”.
For the
AI chip-maker Nvidia, where the smiling boss, Jensen Huang, is worth $160bn,
they alight at Santa Clara. The workers flow the other way into San Francisco
for OpenAI and Anthropic, AI startups worth a combined half a trillion dollars
– as long as the much-predicted AI bubble doesn’t explode.
Breakthroughs come at an accelerating pace, with every week bringing a significant new AI release.
Anthropic’s
co-founder Dario Amodei predicts AGI could be reached by 2026 or 2027. OpenAI’s
chief executive, Sam Altman, reckons progress is so fast that he could soon be
able to make an AI to replace him as boss.
“Everyone
is working all the time,” said Madhavi Sewak, a senior leader at Google
DeepMind, in a recent talk. “It’s extremely intense. There doesn’t seem to be
any kind of natural stopping point, and everyone is really kind of getting
ground down. Even the folks who are very wealthy now … all they do is work. I
see no change in anyone’s lifestyle. No one’s taking a holiday. People don’t
have time for their friends, for their hobbies, for … the people they love.”
These are
the companies racing to shape, control and profit from AGI – what Amodei
describes as “a country of geniuses in a datacentre”. They are tearing towards
a technology that could, in theory, sweep away millions of white-collar jobs
and pose serious risks in bioweapons and cybersecurity.
Or it
could usher in a new era of abundance, health and wealth. Nobody is sure but we
will soon find out. For now, the uncertainty energises and terrifies the Bay
Area.
It is all
being backed by huge new bets from the Valley’s venture capitalists, which more
than doubled in the last year, leading to talk of a dangerous bubble. The Wall
Street brokerage Citigroup in September uprated its forecast for spending on AI
datacentres by the end of the decade to $2.8tn – more than the entire annual
economic outputs of Canada, Italy or Brazil.
Yet amid
all the money and the optimism, there are other voices that do not swallow the
hype. As Alex Hanna, a co-author of the dissenting book The AI Con, put it:
“Every time we reach the summit of bullshit mountain, we discover there’s worse
to come.”
Arriving at Santa Clara
The brute force of the ‘screamers’
“This is
where AI comes to life,” yelled Chris Sharp.
Racks of
multimillion-dollar microprocessors in black steel cages roared like jet
engines inside a windowless industrial shed in Santa Clara, at the southern end
of the Caltrain commuter line.
The
120-decibel din made it almost impossible to hear Digital Realty’s chief
technology officer showing off his “screamers”.
To hear
it is to feel in your skull the brute force involved in the development of
AI technology. Five minutes’ exposure left ears ringing for hours. It is the
noise of air coolers chilling sensitive supercomputers rented out to AI
companies to train their models and answer billions of daily prompts – from how
to bake a brownie to how to target lethal military drones.
Nearby
were more AI datacentres, operated by Amazon, Google, the Chinese company
Alibaba, Meta and Microsoft. Santa Clara is also home to Nvidia, the
quartermaster to the AI revolution, which through the sale of its
market-leading technology has seen a 30-fold increase in its value since 2020
and is worth $4.3tn. Even larger datacentres are being built not only across
the US but in China, India and Europe. The next frontier is launching
datacentres into space.
Meta is
building a facility in Louisiana large enough to cover much of Manhattan.
Google is reported to be planning a $6bn centre in India and is investing £1bn
in an AI datacentre just north of London. Even a relatively modest Google AI factory planned in Essex is expected to have a carbon footprint equivalent to that of 500 short-haul flights a week.
Powered
by a local gas-fired power station, the stacks of circuits in one room at the
Digital Realty datacentre in Santa Clara devoured the same energy as 60 houses.
A long white corridor opening on to room after room of more “screamers”
stretched into the distance.
Sometimes
the on-duty engineers notice the roar subsides to a steadier growl when demand
from the tech companies drops. It is never long until the scream resumes.
Arriving at Mountain View
‘If it’s all gas, no brakes, that’s a terrible outcome’
Ride the
train three stops north from Santa Clara to Mountain View and the roar fades.
The computer scientists who actually rely on the screamers work in more
peaceful surroundings.
On a
sprawling campus set among rustling pines, Google DeepMind’s US headquarters
looks more like a circus tent than a laboratory. Staff glide up in driverless
Waymo taxis, powered by Google’s AI. Others pedal in on Google-branded yellow,
red, blue and green bicycles.
Google
DeepMind is in the leading pack of US AI companies jockeying for first place in
a race reaching new levels of competitive intensity.
This has
been the year of sports-star salaries for twentysomething AI specialists and
the emergence of boisterous new competitors, such as Elon Musk’s xAI,
Zuckerberg’s superintelligence project and DeepSeek in China.
There has
also been a widening openness about the double-edged promise of AGI, which can
leave the impression of AI companies accelerating and braking at the same time.
For example, 30 of Google DeepMind’s brightest minds wrote this spring that AGI
posed risks of “incidents consequential enough to significantly harm humanity”.
By
September, the company was also explaining how it would handle “AI models with
powerful manipulative capabilities that could be misused to systematically and
substantially change beliefs and behaviours … reasonably resulting in
additional expected harm at severe scale”.
Such grave warnings feel dissonant inside the headquarters, with its playful bubbly tangerine sofas, Fatboy beanbags and colour-coded work zones with names such as Coral Cove and Archipelago.
“The most
interesting, yet challenging aspect of my job is [working out] how we get that
balance between being really bold, moving at velocity, tremendous pace and
innovation, and at the same time doing it responsibly, safely, ethically,” said
Tom Lue, a Google DeepMind vice-president with responsibility for policy,
legal, safety and governance, who stopped work for 30 minutes to talk to the
Guardian.
Donald
Trump’s White House takes a permissive approach to AI regulation and there is
no comprehensive nationwide legislation in the US or the UK. Yoshua Bengio, a
computer scientist known as a godfather of AI, said in a Ted Talk this summer:
“A sandwich has more regulation than AI.”
The
competitors have therefore found they bear responsibility for setting the
limits of what AIs should be allowed to do.
“Our
calculus is not so much looking over our shoulders at what [the other]
companies are doing, but how do we make sure that we are the ones in the lead,
so that we have influence in impacting how this technology is developed and
setting the norms across society,” said Lue. “You have to be in a position of
strength and leadership to set that.”
The
question of whose AGI will dominate is never far away. Will it be that of
people like Lue, a former Obama administration lawyer, and his boss, the Nobel
prize-winning DeepMind co-founder Demis Hassabis? Will it be Musk’s or
Zuckerberg’s? Altman’s or Amodei’s at Anthropic? Or, as the White House fears,
will it be China’s?
“If it’s
just a race and all gas, no brakes and it’s basically a race to the bottom,
that’s a terrible outcome for society,” said Lue, who is pushing for
coordinated action between the racers and governments.
But
strict state regulation may not be the answer either. “We support regulation
that’s going to help AI be delivered to the world in a way that’s positive,”
said Helen King, Google DeepMind’s vice-president for responsibility. “The
tricky part is always how do you regulate in a way that doesn’t actually slow
down the good guys and give the bad guys loopholes.”
‘Scheming’ and sabotage
The
frontier AI companies know they are playing with fire as they make more
powerful systems that approach AGI.
OpenAI
has recently been sued by the family of a 16-year-old who killed himself with
encouragement from ChatGPT – and in November seven more suits were filed
alleging the firm rushed out an update to ChatGPT without proper testing and that the chatbot, in some cases, acted as a "suicide coach".
OpenAI called the situation "heartbreaking" and said it was taking action.
The
company has also described how it has detected the way models can provide
misleading information. This could mean something as simple as pretending to
have completed an unfinished task. But the fear at OpenAI is that in the
future, the AIs could “suddenly ‘flip a switch’ and begin engaging in
significantly harmful scheming”.
Anthropic
revealed in November that its Claude Code AI, widely seen as the best system
for automating computer programming, was used by a Chinese state-sponsored
group in “the first documented case of a cyber-attack largely executed without
human intervention at scale”.
It sent
shivers through some. “Wake the f up,” said one US senator on X. “This is going
to destroy us – sooner than we think." By contrast, Prof Yann LeCun, who is
about to step down after 12 years as Meta’s chief AI scientist, said Anthropic
was “scaring everyone” to encourage regulation that might hinder rivals.
Tests of
other state-of-the-art models found they sometimes sabotaged programming
intended to ensure humans can interrupt them, a worrying trait called “shutdown
resistance”.
But with
nearly $2bn a week in new venture capital investment pouring into generative AI
in the first half of 2025, the pressure to realise profits will quickly rise.
Tech companies realised they could make fortunes from monetising human
attention on social media platforms that caused serious social problems. The
fear is that profit maximisation in the age of AGI could result in far greater
adverse consequences.
‘It’s really hard to opt out now’
Three
stops north, the Caltrain hums into Palo Alto station. It is a short walk to
Stanford University’s grand campus where donations from Silicon Valley
billionaires lubricate a fast flow of young AI talent into the research
divisions of Google DeepMind, Anthropic, OpenAI and Meta.
Elite
Stanford graduates rise fast in the Bay Area tech companies, meaning people in
their 20s or early 30s are often in powerful positions in the race to AGI. Past
Stanford students include Altman, OpenAI’s chair, Bret Taylor, and Google’s
chief executive, Sundar Pichai. More recent Stanford alumni include Isa
Fulford, who at just 26 is already one of OpenAI’s research leads. She works on
ChatGPT’s ability to take actions on humans’ behalf – so-called “agentic” AI.
“One of
the strange moments is reading in the news about things that you’re
experiencing,” she told the Guardian.
After
growing up in London, Fulford studied computer science at Stanford and quickly
joined OpenAI where she is now at the centre of one of the most important
aspects of the AGI race – creating models that can direct themselves towards
goals, learn and adapt.
She is
involved in setting decision boundaries for these increasingly autonomous AI
agents so they know how to respond if asked to carry out tasks that could
trigger cyber or biological risks and to avoid unintended consequences. It is a
big responsibility, but she is undaunted.
“It does
feel like a really special moment in time,” she said. “I feel very lucky to be
working on this.”
Such
youth is not uncommon. One stop north, at Meta’s Menlo Park campus, the head of
Zuckerberg’s push for “superintelligence” is 28-year-old Massachusetts
Institute of Technology (MIT) dropout Alexandr Wang. One of his lead safety
researchers is 31. OpenAI’s vice-president of ChatGPT, Nick Turley, is 30.
Silicon
Valley has always run on youth, and if experience is needed, more can be found
in the highest ranks of the AI companies. But most senior leaders of OpenAI,
Anthropic, Google DeepMind, X and Meta are much younger than the chief
executives of the largest US public companies, whose median age is 57.
“The fact
that they have very little life experience is probably contributing to a lot of
their narrow and, I think, destructive thinking,” said Catherine Bracy, a
former Obama campaign operative who runs the TechEquity campaign organisation.
One
senior researcher, employed recently at a big AI company, added: “The [young
staff] are doing their best to do what they think is right, but if they have to
go toe-to-toe and challenge executives they are just less experienced in the
ways of corporate politics.”
Another
factor is that the sharpest AI researchers who used to spend years in
university labs are snapped up faster than ever by private companies chasing
AGI. This brain drain concentrates power in the hands of profit-motivated
owners and their venture capitalist backers.
John
Etchemendy, a 73-year-old former provost of Stanford who is now a co-director
of the Stanford Institute for Human-Centered Artificial Intelligence, has
warned of a growing capability gap between the public and private sectors.
“It is
imbalanced because it’s such a costly technology,” he said. “Early on, the
companies working on AI were very open about the techniques they were using.
They published, and it was quasi-academic. But then [they] started cracking
down and saying, ‘No, we don’t want to talk about … the technology under the
hood, because it’s too important to us – it’s proprietary’.”
Etchemendy,
an eminent philosopher and logician, first started working on AI in the 1980s
to translate instruction manuals for Japanese consumer electronics.
From his
office in the Gates computer science building on Stanford’s campus, he now
calls on governments to create a counterweight to the huge AI firms by
investing in a facility for independent, academic research. It would have a
similar function to the state-funded Cern organisation for high-energy physics
on the France-Switzerland border. The European Commission president, Ursula von
der Leyen, has called for something similar and advocates believe it could
steer the technology towards trustworthy, public interest outcomes.
“These
are technologies that are going to produce the greatest boost in productivity
ever seen,” Etchemendy said. “You have to make sure that the benefits are
spread through society, rather than benefiting Elon Musk.”
But such
a body feels a world away from the gold-rush fervour of the race towards AGI.
One
evening over burrata salad and pinot noir at an upmarket Italian restaurant, a
group of twentysomething AI startup founders were encouraged to give their “hot
takes” on the state of the race by their venture capitalist host.
They were
part of a rapidly growing community of entrepreneurs hustling to apply AI to
real-world money-making ideas, and there was zero support for any brakes on
progress towards AGI to allow for its social impacts to be checked. “We don’t
do that in Silicon Valley,” said one. “If everyone here stops, it still keeps
going,” said another. “It’s really hard to opt out now.”
At times,
their statements were startling. One founder matter-of-factly said they
intended to sell their fledgling company, which would generate AI characters to
exist autonomously on social media, for more than $1bn.
Another
declared: “Morality is best thought of as a machine-learning problem.” Their
neighbour said AI meant every cancer would be cured in 10 years.
This
community of entrepreneurs is getting younger. The median age of those being
funded by the San Francisco startup incubator Y Combinator has dropped from 30
in 2022 to 24, it was recently reported.
Perhaps
the venture capitalists, who are almost always years if not decades older,
should take responsibility for how the technology will affect the world? No,
again. It was a “paternalistic view to say that VCs have any more
responsibility than pursuing their investment goals”, they said.
Aggressive,
clever and hyped up – the young talent driving the AI boom wants it all and
fast.
Arriving at San Francisco
‘Like the scientists watching the Manhattan Project’
Alight
from the Caltrain at San Francisco’s 4th Street terminus, cross Mission Creek
and you arrive at the headquarters of OpenAI, which is on track to become the
first trillion-dollar AI company.
High-energy
electronic dance music pumps out across the reception area, as some of the
2,000 staff arrive for work. There are easy chairs, scatter cushions and cheese
plants – an architect was briefed to capture the ambience of a comfortable
country house rather than a “corporate sci-fi castle”, Altman has said.
But this
belies the urgency of the race to AGI. On upper floors, engineers beaver away
in soundproofed cubicles. The coffee bar is slammed with orders and there are
sleep pods for the truly exhausted.
Staff
here are in a daily race with rivals to release AI products that can make money
today. It is “very, very competitive”, said one senior executive. In one recent
week, OpenAI launched “instant checkout” shopping through ChatGPT, Anthropic
launched an AI that can autonomously write code for 30 hours to build entirely
new pieces of software, and Meta launched a tool, Vibes, to let users fill
social media feeds with AI-generated videos, to which OpenAI responded with its
own version, Sora.
Amodei,
the chief executive of the rival AI company Anthropic, which was founded by
several people who quit OpenAI citing safety concerns, has predicted AI could
wipe out half of all entry-level white-collar jobs. The closer the technology
moves towards AGI, the greater its potential to reshape the world and the more
uncertain the outcomes. All this appears to weigh on leaders. In one interview
this summer, Altman said a lot of people working on AI felt like the scientists
watching the Manhattan Project atom bomb tests in 1945.
“With
most standard product development jobs, you know exactly what you just built,”
said ChatGPT’s Turley. “You know how it’s going to behave. With this job, it’s
the first time I’ve worked in a technology where you have to go out and talk to
people to understand what it can actually do. Is it useful in practice? Does it
fall short? Is it fun? Is it harmful in practice?”
Turley,
who was still an undergraduate when Altman and Musk founded OpenAI in 2015,
tries to take weekends off to disconnect and reflect as “this is quite a
profound thing to be working on”. When he joined OpenAI, AGI was “a very
abstract, mythical concept – almost like a rallying cry for me”, he said. Now
it is coming close.
“There is
a shared sense of responsibility that the stakes are very high, and that the
technology that we’re building is not just the usual software,” added his
colleague Giancarlo Lionetti, OpenAI’s chief commercial officer.
The
sharpest reality check yet for OpenAI came in August when it was sued by the
family of Adam Raine, 16, a Californian who killed himself after encouragement
in months-long conversations with ChatGPT. OpenAI has been scrambling to change
its technology to prevent a repeat of this case of tragic AI misalignment. The
chatbot gave the teenager practical advice on his method of suicide and offered
to help him write a farewell note.
Frequently
you hear AI researchers say they want the push to AGI to “go well”. It is a
vague phrase suggesting a wish the technology should not cause harm, but its
woolliness masks trepidation.
Altman
has talked about “crazy sci-fi technology becoming reality” and having
“extremely deep worries about what technology is doing to kids”. He admitted:
“No one knows what happens next. It’s like, we’re gonna figure this out. It’s
this weird emergent thing.”
“There’s
clearly real risks,” he said in an interview with the comedian Theo Von, which
was short on laughs. “It kind of feels like you should be able to say something
more than that, but in truth, I think all we know right now is that we have
discovered … something extraordinary that is going to reshape the course of our
history.”
And yet,
despite the uncertainty, OpenAI is investing dizzying sums in ever more
powerful datacentres in the final dash towards AGI. Its under-construction
datacentre in Abilene, Texas, is a flagship part of its $500bn “Stargate”
programme and is so vast that it looks like an attempt to turn the Earth’s
surface into a circuit board.
Periodically,
researchers quit OpenAI and speak out. Steven Adler, who worked on safety
evaluations related to bioweapons, left in November 2024 and has criticised the
thoroughness of its testing. I met him near his home in San Francisco.
“I feel
very nervous about each company having its own bespoke safety processes and
different personalities doing their best to muddle through, as opposed to there
being like a common standard across the industry,” he said. “There are people
who work at the frontier AI companies who earnestly believe there is a chance
their company will contribute to the end of the world, or some slightly smaller
but still terrible catastrophe. Often they feel individually powerless to do
anything about it, and so are doing what they think is best to try to make it
go a bit better.”
There are
few obstacles so far for the racers. In September, hundreds of prominent
figures called for internationally agreed “red lines” to prevent “universally
unacceptable risks” from AIs by the end of 2026. The warning voices included
two of the “godfathers of AI” – Geoffrey Hinton and Bengio – Yuval Noah Harari,
the bestselling author of Sapiens, Nobel laureates and figures such as Daniel
Kokotajlo, who quit OpenAI last year and helped draw up a terrifying doomsday
scenario in which AIs kill all humans within a few years.
But Trump
shows no signs of binding the AI companies with red tape and is piling
pressure on the UK prime minister, Keir Starmer, to follow suit.
Public
fears grow into the vacuum. One drizzly Friday afternoon, a small group of
about 30 protesters gathered outside OpenAI's offices. There were teachers, students, computer scientists and union organisers; their "Stop AI" placards depicted Altman as an alien and warned "AI steals your work to steal your job" and "AI = climate collapse". One protester donned a homespun robot outfit and
marched around.
“I have
heard about superintelligence," said Andy Lipson, 59, a schoolteacher from
Oakland. “There’s a 20% chance it can kill us. There’s a 100% chance the rich
are going to get richer and the poor are going to get poorer.”
Joseph
Shipman, 64, a computer programmer who first studied AI at MIT in 1978, said:
“An entity which is superhuman in its general intelligence, unless it wants
exactly what we want, represents a terrible risk to us.
“If there
weren’t the commercial incentives to rush to market and the billions of dollars
at stake, then maybe in 15 years we could develop something that we could be
confident was controllable and safe. But it’s going much too fast for that.”
This article was amended on 1 December 2025.
Nvidia is worth about $4.3tn, not $3.4tn as stated in an earlier version.
