The Techno-Futuristic Philosophy Behind Elon Musk’s Mania
From the White House to Mars, the tech billionaire has his sights set on the long term.
By Matthew Purdy
May 29, 2025, Updated 8:52 a.m. ET
https://www.nytimes.com/2025/05/29/business/elon-musk-longtermism-effective-altruism-doge.html
As Elon Musk
prepared to make a less than triumphant exit from Washington, he told the Fox
News host Jesse Watters earlier this month that his rampage through the
bureaucracy had made “significant progress” in cutting waste and fraud. But
there was no hiding that the man whose rockets can gracefully return to earth
standing tall on their launchpads had made a bit of a crash landing.
His
projected cut of $2 trillion from the federal budget had shrunk on paper, at
that point, to $165 billion. Tesla stock took a nosedive along with his
personal wealth and popularity. And, as he rewrote Silicon Valley’s mantra into
“move fast, break things and get out of town,” no one was urging him to stay.
Prompted by
Watters, Musk shifted the interview seamlessly from eradicating waste to
another obsession: the looming eradication of life on earth. “The sun is
gradually expanding, so we do at some point need to be a multiplanetary
civilization because earth will be incinerated,” Musk said.
“I’m hearing
this for the first time,” Watters said, bemused.
“We have
several hundred million years, so don’t hold your breath,” Musk assured him.
Say what you
will about Elon Musk, the man thinks ahead.
Over the
past couple of decades, Musk has devoted himself to three grand engineering
projects, all with the long-term mission of sustaining humanity far into the
future. The goal of his rocket company SpaceX is to establish a city on Mars.
Tesla is accelerating the transition to sustainable energy, autonomous vehicles
and humanoid robots. Neuralink aims to eventually wire artificial intelligence
into human brains so people can keep pace with machines.
“The guy is
our Einstein,” Jamie Dimon, the JPMorgan Chase chief executive, said two days
after President Trump’s inauguration.
That was
before Musk etched a new image for himself — a hyped-up man all in black with
dark sunglasses, strutting before a roaring crowd, punching the air with a
chain saw, declaring it “the chain saw for bureaucracy.”
Musk
embraced Trump after a MAGA conversion, motivated by his fear that regulation
was choking innovation and his vow to “destroy the woke mind virus” after one
of his children underwent a gender transition. But Musk split with the
president on tariffs and the budget bill, and the billionaire buddies formally
fissured on Wednesday night. Musk took to X to thank the president for allowing
him to serve and indicated that he was permanently moving on.
Musk’s
tumultuous four-month adventure shifted his gaze from the long term to the
ultimate short-term arena — politics. The wild ride through the halls of
official power perhaps fed his taste for drama, celebrity and the adolescent
heroism of comic books and sci-fi.
But there is
another strand that runs strong in Musk, a techno-futuristic philosophy that
might help explain how the man who has fancied himself Batman on a mission to
save humanity could also play the dark jokester — the world’s richest man who
gleefully proclaimed the demise of aid for the world’s poorest with a callous
quip about how he and his DOGE troops had “spent the weekend feeding U.S.A.I.D.
into the wood chipper.”
That
philosophy emerged from the world Musk now returns to full time, a world of
engineers and utilitarian thinkers and tech billionaires, who seem to have
designs on everything — past, present and, especially, future.
Existential Threats, Technological Solutions
In 2022,
Musk reposted a link to a 2003 paper by Nick Bostrom, a philosopher who was
then at Oxford, with the line, “Likely the most important paper ever written.”
In the
paper, titled “Astronomical Waste: The Opportunity Cost of Delayed
Technological Development,” Bostrom took a stab at calculating the potential
lives lost by delaying the development of technology needed to survive “in the
accessible region of the universe” for millions of years. “The potential for
over 10 trillion potential human beings is lost for every second of
postponement of colonization of our supercluster,” he wrote.
Such pie-in-the-sky calculations urging the colonization of space might seem like unusual
territory for a philosophy professor — even one with advanced degrees in
physics and computational neuroscience. But Bostrom is among a group of
philosophers and technologists that promotes a strain of thinking clunkily
labeled longtermism. It’s a worldview that aligns with — and supports — Musk’s
futurist, sometimes fantastical, vision.
Longtermism
is deeply entwined with effective altruism, a more widely known movement.
Effective altruism, which developed from ideas put forth by the philosopher
Peter Singer in the early 1970s, argues that well-off people and societies are
morally obligated to combat poverty, even far from home. It encourages a
strict, utilitarian process for calculating how philanthropy can do the
greatest good for the greatest number of people. Insecticide-treated bed nets
that protect against mosquito-borne malaria in remote regions on the other side
of the world, for example, are far more “effective” when it comes to saving lives
than donations to a local food bank.
The
longtermists radically changed the equation by asserting that we have a similar
moral obligation to the well-being of our brethren yet to come, those living thousands
or even millions of years in the future. Of course, there are potentially many,
many, many more future people than there are current ones, particularly when
you throw in the possibility of nonhuman sentient beings, which some
longtermists do.
So, simply
by the numbers, the case can be made that ensuring the existence of future
human civilization by preparing for species-ending risks like a massive
asteroid strike or global nuclear annihilation outweighs addressing poverty or
starvation for a few hundred million current people.
For
longtermists, the most pressing threats are often existential, and technology
is almost always the cure.
This is
Musk’s sweet spot. The focus of so much of his technology — rocketry, humanoid
robotics, even his tunneling company — is intended to converge on making “human
consciousness” multiplanetary, an urgent mission he complains is frustrated by
rules and regulators. He calls colonizing Mars “life insurance of life
collectively.” Like an insurance salesman, he has his pitch down.
“For the
first time in the four-and-a-half-billion-year history of earth, it is possible
to extend consciousness beyond our home planet,” he said on Joe Rogan’s podcast
in February.
“We have to
see this as a race against time,” he said. “Can we make Mars self-sufficient
before civilization has some sort of future fork in the road where there is
either like a nuclear war or something, or we get hit by a meteor or simply
civilization might just die with a whimper in adult diapers instead of with a
bang?”
Musk took
his first step toward Mars in 2001 when he donated $5,000 to the Mars Society,
which was started by Robert Zubrin, an aerospace engineer who had written the
book “The Case for Mars: The Plan to Settle the Red Planet and Why We Must.”
The next year, Musk started SpaceX.
Coincidentally
or not, this was around the same time Bostrom published a paper titled
“Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards.”
It looked at a range of threats: a meteor strike; global disease; runaway
artificial intelligence; even the admittedly slim possibility that a future
society created a simulated reality that we were now unknowingly living in and
which could be turned off. Musk has occasionally invoked this “simulation
theory,” which hearkens back to “The Matrix,” one of his favorite movies.
“The balance
of evidence is such that it would appear unreasonable not to assign a
substantial probability to the hypothesis that an existential disaster will do
us in,” Bostrom wrote, adding later in the paper, “With technology, we have
some chance, although the greatest risks now turn out to be those generated by
technology itself.”
Whether or
not Musk read the paper, he has echoed Bostrom and other proponents of
longtermism, including the philosopher William MacAskill. MacAskill became
something of a celebrity intellectual among technologists and financiers, to
whom he preached an “earning-to-give” approach to philanthropy. Sam
Bankman-Fried, the now disgraced crypto magnate, was one of his biggest
acolytes. Musk touted MacAskill’s 2022 book, “What We Owe the Future,” saying
on X — the social media network that he owns — that the explication of
longtermist thinking is “a close match to my philosophy.”
Not
surprisingly, Musk, who did not respond to a request for comment, does not
explicitly identify himself with any movement defined by anyone else — or
didn’t before falling in with MAGA. Bostrom takes no credit for Musk’s long
view, saying in an email, “My impression is that Musk is not a follower of any
one particular school of thought but is rather inclined to do his own thinking
and to reach his own conclusions.”
But it is
unmistakable how frequently Musk warns of existential threats, usually arrived
at via a “fork in the road.”
In 2023, he
said artificial intelligence becoming “far smarter than the smartest human” was
“one of the existential risks that we face, and it’s potentially the most
pressing one.”
In February,
referring to “the woke mind virus,” he said in a post on X that “the biggest
existential danger to humanity is having it programmed into the A.I.” He
claimed that some A.I. platforms (although not the one he owns) will answer
that misgendering someone is worse than thermonuclear war. “The existential
problem with that extrapolation is that a super powerful A.I. could decide that
the only 100 percent certain way to stop misgendering is to kill all humans,”
he said.
In 2022, he
posted that “population collapse due to low birthrates is a much bigger risk to
civilization than global warming.”
Earlier this
year, wielding his celebrity and money unsuccessfully to swing a Wisconsin
Supreme Court election, he told a crowd of supporters: “I feel like this is one
of those things that may not seem that it’s going to affect the entire destiny
of humanity, but I think it will.”
And last
August, in a conversation with Trump on X, he said he was endorsing him for
president because “I think we are at the fork in the road of destiny of
civilization.”
There
appears to be a tautology to Musk’s longtermism: If Musk is battling a threat,
it is, by definition, existential.
“Musk is a
hero in the Homeric sense, in his mind and in his action,” said Zubrin, of the
Mars Society. “He is someone who is striving to do great deeds to earn eternal
glory. He has done some. Just as in Homer, this sort of motivation also has a
pathological side.”
For Musk,
countering existential risks gives him broad license. Fathering 13 children by
several women is Musk combating the societal risk of falling birthrates
worldwide. “If you don’t make new humans, there’s no humanity,” he said in a
live interview last fall with Peter Diamandis, an entrepreneur and a Musk
associate. “I do have a lot of kids, and I encourage others to have lots of
kids.”
In a
long-running legal battle between Musk and Tesla shareholders over his
projected $55.8 billion compensation package, an email emerged that he wrote to
a company lawyer in 2017, explaining: “The added comp is just so I can put
as much as possible toward minimizing existential risk by putting the money
toward Mars if I am successful in leading Tesla to be one of the world’s most
valuable companies. This is kinda crazy, but it is true.”
From that
perspective, one can see why it makes sense for Musk to feed tens of billions of dollars’ worth of government programs for the global poor into the “wood chipper” while,
two months later, $5.9 billion in government contracts was fed into Musk’s
space company.
Like many
donors to Trump, Musk has gotten a return on his investment in the election.
Trump promoted a Mars mission in his inaugural address. A close Musk associate
is now the head of NASA. Before the election, Musk complained that because of
the profusion of regulations, it would “eventually become illegal to do very
large projects, and we won’t be able to get to Mars.” Now many of Musk’s
governmental annoyances are melting away.
Critics of
longtermism say it appeals to wealthy tech moguls precisely because it adds a
sheen of morality to their masters-of-the-universe projects. They also say that
the moguls’ ultimate goal is a utopian civilization of humans, biological and
robotic, all A.I. enhanced.
Mollie
Gleiberman, an anthropologist at the University of Antwerp who has studied the
rise of effective altruism, highlights a paradox of the futuristic tech moguls:
Some of the same people warning of the dangers of superintelligent A.I. are
also developing A.I., like Musk himself. It’s another tautology — technological
risk necessitates a technological response. “The vivid articulation of a fear
conjures the thing to be feared into existence,” she wrote in a 2023 paper.
Take
humanoid robots, for example. During an interview with Senator Ted Cruz,
Republican of Texas, earlier this year, Musk was asked how real the prospect
was of killer robots annihilating humanity. “Twenty percent likely,” he shot
back.
But given
his belief that robots will unleash never-before-seen levels of productivity,
Musk reassured Cruz that, barring human annihilation, it is “80 percent likely
we will have extreme prosperity for all.”
At a Tesla
promotional event last year captured on video, Musk’s robots were on the move,
serving drinks, posing for pictures and dancing. “It can do anything you want,
so it can be a teacher, babysit your kids, walk your dog, mow your lawn, get
the groceries, just be your friend, serve drinks,” Musk told guests. “I think
this will be the biggest product ever of any kind.”
Max Tegmark,
an M.I.T. physicist and A.I. researcher, said he bonded with Musk a decade ago
over a shared belief that “A.I. was going to hit us like a tsunami.” Tegmark
said Musk distinguishes himself among business leaders by his devotion to not
just thinking long term, but acting on it. “He makes money now and spends it on
making the long-term future good,” Tegmark said.
Regardless
of whether his ultimate goal is reached, the ambition has yielded breakthrough
creations along the way for the here and now — reusable rockets, electric cars,
a satellite-based internet service, a human-computer link aiding people with
neurological damage.
Tegmark is
president of the Future of Life Institute, which is devoted to the safety of
technology, bioengineering and nuclear weapons. Musk, an adviser to the group,
donated about $10 million to the institute, which used some of the money to
help fund a now-shuttered center at Oxford founded by Bostrom called the Future of Humanity Institute.
“People can
quibble about his methods and politics,” Tegmark said. But he said Musk’s focus
on the long-term future and protecting against threats has remained
unchanged.
Committed longtermists aren’t convinced that Musk has it right,
though. “Longtermism isn’t about ignoring present-day suffering in favor of
speculative futures,” MacAskill, the philosopher, wrote in an email, adding
that the best way to safeguard the future is by “maintaining the international
cooperation needed to address global risks — not dismantling the very
institutions that make such cooperation possible.”
Singer, the
retired Princeton professor whose 1972 essay “Famine, Affluence and Morality”
was the initial spur for the effective altruism movement, is similarly
skeptical. “If you were altruistic at all, you would have paid more attention
to the impact that you are having on hundreds of millions of people by the
cutback in U.S.A.I.D., or the freeze in U.S.A.I.D., plus the many other things
that are happening as well,” he said.
Musk is now
headed back to the future. Days ago, SpaceX launched another test flight of its
supersize Starship. The spacecraft spun out of control and wound up as a debris
field in the Indian Ocean. But Musk claims he still plans to shoot for Mars
next year.
Even Zubrin,
the Mars Society president, thinks Musk’s Mars colonization plan is “nuts.” He
wants the United States to pursue Mars for the challenge and for science. And
he worries that Musk’s short-term dive into politics has hurt the long-term
goal, since the Mars mission might now be seen as a Musk project and become
prone to political turbulence.
In February,
still deep in the turbulence of Washington, Musk regaled Rogan, the podcast
host, with tales of waste from the federal budget. Like the U.S. Agency for
International Development, the care and feeding of migrants was a particular
DOGE target. It’s plausible a tough, careful audit could find money poorly
spent, but Musk also came to a more ominous and more predictable conclusion: He
had found a new existential threat.
“We’ve got
civilizational suicidal empathy going on,” he told Rogan. “The fundamental
weakness of Western civilization is empathy. The empathy exploit. They are
exploiting a bug in Western civilization, which is the empathy response. And I
think empathy is good, but you need to think it through and not just be
programmed like a robot.”