Forget ideology, liberal democracy’s newest threats come from technology and bioscience
John Naughton
A groundbreaking book by historian Yuval Harari claims that artificial intelligence and genetic enhancements will usher in a world of inequality and powerful elites. How real is the threat?
Sunday 28 August 2016 05.15 BST
The BBC Reith
Lectures in 1967 were given by Edmund Leach, a Cambridge social
anthropologist. “Men have become like gods,” Leach began. “Isn’t
it about time that we understood our divinity? Science offers us
total mastery over our environment and over our destiny, yet instead
of rejoicing we feel deeply afraid.”
That was nearly half
a century ago, and yet Leach’s opening lines could easily apply to
today. He was speaking before the internet had been built and long
before the human genome had been decoded, and so his claim about men
becoming “like gods” seems relatively modest compared with the
capabilities that molecular biology and computing have subsequently
bestowed upon us. Our science-based culture is the most powerful in
history, and it is ceaselessly researching, exploring, developing and
growing. But in recent times it seems to have also become plagued
with existential angst as the implications of human ingenuity begin
to be (dimly) glimpsed.
The title that Leach
chose for his Reith Lectures – A Runaway World – captures our
zeitgeist too. At any rate, we are also increasingly fretful about a
world that seems to be running out of control, largely (but not
solely) because of information technology and what the life sciences
are making possible. But we seek consolation in the thought that “it
was always thus”: people felt alarmed about steam in George Eliot’s
time and got worked up about electricity, the telegraph and the
telephone as they arrived on the scene. The reassuring implication is
that we weathered those technological storms, and so we will weather
this one too. Humankind will muddle through.
But in the last five
years or so even that cautious, pragmatic optimism has begun to
erode. There are several reasons for this loss of confidence. One is
the sheer vertiginous pace of technological change. Another is that
the new forces now loose in our society – particularly information
technology and the life sciences – are potentially more
far-reaching in their implications than steam or electricity ever
were. And, thirdly, we have begun to see startling advances in these
fields that have forced us to recalibrate our expectations.
A classic example is
the field of artificial intelligence (AI), defined as the quest to
enable machines to do things that would require intelligence if
performed by a human. For as long as most of us can remember, AI in
that sense was always 20 years away from the date of prediction.
Maybe it still is. But in the last few years we have seen that the
combination of machine learning, powerful algorithms, vast processing
power and so-called “Big Data” can enable machines to do very
impressive things – real-time language translation, for example, or
driving cars safely through complex urban environments – that
seemed implausible even a decade ago.
And this, in turn,
has led to a renewal of excited speculation about the possibility –
and the existential risks – of the “intelligence explosion”
that would be caused by inventing a machine that was capable of
recursive self-improvement. This possibility was first raised in 1965
by the British cryptographer IJ Good, who famously wrote: “The
first ultraintelligent machine is the last invention that man need
ever make, provided that the machine is docile enough to tell us how
to keep it under control.” Fifty years later, we find contemporary
thinkers like Nick Bostrom and Murray Shanahan taking the idea
seriously.
There’s a sense,
therefore, that we are approaching another “end of history”
moment – but with a difference. In his famous 1989 article, the
political scientist Francis Fukuyama argued that the collapse of the
Soviet empire meant the end of the great ideological battle between
east and west and the “universalisation of western liberal
democracy as the final form of human government”. This was a bold,
but not implausible, claim at the time. What Fukuyama could not have
known is that a new challenge to liberal democracy would eventually
materialise, and that its primary roots would lie not in ideology but
in bioscience and information technology.
For that, in a
nutshell, is the central argument of Yuval Noah Harari’s new book,
Homo Deus: A Brief History of Tomorrow. In a way, it’s a logical
extension of his previous book, Sapiens: A Brief History of
Humankind, which chronicled the entire span of human history, from
the evolution of Homo sapiens up to the political and technological
revolutions of the 21st century, and deservedly became a world
bestseller.
Most writers on the
implications of new technology focus too much on the technology and
too little on society’s role in shaping it. That’s partly because
those who are interested in these things are (like the engineers who
create the stuff) determinists: they believe that technology drives
history. And, at heart, Harari is a determinist too. “In the early
21st century,” he writes in a striking passage, “the train of
progress is again pulling out of the station – and this will
probably be the last train ever to leave the station called Homo
sapiens. Those who miss this train will never get a second chance. In
order to get a seat on it, you need to understand 21st century
technology, and in particular the powers of biotechnology and
computer algorithms.”
He continues: “
These powers are far more potent than steam and the telegraph, and
they will not be used mainly for the production of food, textiles,
vehicles and weapons. The main products of the 21st century will be
bodies, brains and minds, and the gap between those who know how to
engineer bodies and brains and those who do not will be wider than
the gap between Dickens’s Britain and the Mahdi’s Sudan. Indeed,
it will be bigger than the gap between Sapiens and Neanderthals. In
the 21st century, those who ride the train of progress will acquire
divine abilities of creation and destruction, while those left behind
will face extinction.”
This looks like
determinism on steroids. What saves it from ridicule is that Harari
sets the scientific and technological story within an historically
informed analysis of how liberal democracy evolved. And he provides a
plausible account of how the defining features of the liberal
democratic order might indeed be upended by the astonishing knowledge
and tools that we have produced in the last half-century. So while
one might, in the end, disagree with his conclusions, one can at
least see how he reached them.
In a way, it’s a
story about the evolution and nature of modernity. For most of human
history, Harari argues, humans believed in a cosmic order. Their
world was ruled by omnipotent gods who exercised their power in
capricious and incomprehensible ways. The best one could do was to
try to placate these terrifying powers and obey (and pay taxes to)
the priesthoods who claimed to be the anointed intermediaries between
mere humans and gods. It may have been a dog’s life but at least
you knew where you stood, and in that sense belief in a
transcendental order gave meaning to human lives.
But then came
science. Harari argues that the history of modernity is best told as
a struggle between science and religion. In theory, both were
interested in truth – but in different kinds of truth. Religion was
primarily interested in order, whereas science, as it evolved, was
primarily interested in power – the power that comes from
understanding why and how things happen, and enables us to cure
diseases, fight wars and produce food, among other things.
In the end, in some
parts of the world at least, science triumphed: belief in a
transcendental order was relegated to the sidelines – or even to
the dustbin of history. As science progressed, we did indeed start to
acquire powers that in pre-modern times were supposed to be possessed
only by gods (Edmund Leach’s point). But if God was dead, as
Nietzsche famously said, where would humans find meaning? “The
modern world,” writes Harari, “promised us unprecedented power –
and the promise has been kept. Now what about the price? In exchange
for power, the modern deal expects us to give up on meaning. How did
humans handle this chilling demand? ... How did morality, beauty and
even compassion survive in a world devoid of gods, of heaven or hell?”
The answer, he
argues, was in a new kind of religion: humanism – a belief system
that “sanctifies the life, happiness and power of Homo sapiens”.
So the deal that defined modern society was a covenant between
humanism and science in which the latter provided the means for
achieving the ends specified by the former.
And our looming
existential crisis, as Harari sees it, comes from the fact that this
covenant is destined to fall apart in this century. For one of the
inescapable implications of bioscience and information technology (he
argues) is that they will undermine and ultimately destroy the
foundations on which humanism is built. And since liberal democracy
is constructed on the worship of humanist goals (“life, liberty and
the pursuit of happiness” by citizens who are “created equal”,
as the American founders put it), then our new powers are going to
tear liberal democracy apart.
How come? Well,
modern society is organised round a combination of individualism,
human rights, democracy and the free market. And each of these
foundations is being eaten away by 21st-century science and
technology. The life sciences are undermining the individualism so
celebrated by the humanist tradition, with research suggesting that
“the free individual is just a fictional tale concocted by an
assembly of biochemical algorithms”. Similarly with the idea that
we have free will. People may have freedom to choose between
alternatives but the range of possibilities is determined elsewhere.
And that range is increasingly determined by external algorithms as
the “surveillance capitalism” practised by Google, Amazon and co
becomes ubiquitous – to the point where internet companies will
eventually know what your desires are before you do. And so on.
Here Harari ventures
into the kind of dystopian territory that Aldous Huxley would
recognise. He sees three broad directions.
1. Humans will lose
their economic and military usefulness, and the economic system will
stop attaching much value to them.
2. The system will
still find value in humans collectively but not in unique
individuals.
3. The system will, however, find value in some unique individuals, “but these will be a new race of upgraded superhumans rather than the mass of the population”.
By “system”, he means the new kind of society that will evolve as bioscience and information technology progress at their current breakneck pace. As before, this society will be based on a deal between religion and science but this time humanism will be displaced by what Harari calls “dataism” – a belief that the universe consists of data flows, and the value of any entity or phenomenon is determined by its contribution to data processing.
Personally, I’m
not convinced by his dataism idea, because the technocratic ideology underpinning our current obsession with “Big Data” will eventually collapse under the weight of its own absurdity. But in two
other areas, Harari is exceedingly perceptive. The first is that our
confident belief that we cannot be superseded by machines – because
we have consciousness and they cannot have it – may be naive. Not because machine consciousness will become possible, but because, for Harari’s dystopia to arrive, consciousness is not required. All that is needed are machines that are super-intelligent: intelligence is necessary; consciousness is an optional extra which in most cases would simply be a nuisance, and its absence is therefore no showstopper for AI development.
The second is his reading of the potential of bioscience, which I’m sure is accurate. Even the Economist magazine recently ran a cover story
entitled: “Cheating death: the science that can extend your
lifespan.” But the exciting new possibilities offered by genetic
technology will be expensive and available only to elites. So the
long century in which medicine had a “levelling up” effect on
human populations, bringing good healthcare within the reach of most
people, has come to an end. Even today, rich people live longer and
healthier lives. In a couple of decades, that gap will widen into a
chasm.
Homo Deus is a
remarkable book, full of insights and thoughtful reinterpretations of
what we thought we knew about ourselves and our history. In some
cases it seems (to me) to be naive about the potential of information
technology. But what’s really valuable about it is the way it
grounds speculation about sci-tech in the context of how liberal
democracy evolved.
One measure of
Harari’s achievement is that one has to look a long way back – to
1934, in fact, the year when Lewis Mumford’s Technics and
Civilization was published – for a book with comparable ambition
and scope. Not bad going for a young historian.