The rise of robots: forget evil AI – the real risk is far more insidious
It’s far more likely that robots would inadvertently harm or frustrate humans while carrying out our orders than that they would rise up against us
Olivia Solon in San Francisco
Tuesday 30 August 2016 14.00 BST
When we look at the rise of artificial intelligence, it’s easy to get carried away with dystopian visions of sentient machines that rebel against their human creators. Fictional baddies such as the Terminator’s Skynet or Hal from 2001: A Space Odyssey have a lot to answer for.
However, the real risk posed by AI – at least in the near term – is much more insidious. It’s far more likely that robots would inadvertently harm or frustrate humans while carrying out our orders than that they would become conscious and rise up against us. In recognition of this, the University of California, Berkeley has this week launched a center to focus on building people-pleasing AIs.
The Center for Human-Compatible Artificial Intelligence, launched with $5.5m in funding from the Open Philanthropy Project, is led by computer science professor and artificial intelligence pioneer Stuart Russell. He’s quick to dispel any “unreasonable and melodramatic” comparisons to the threats posed in science fiction.
“The risk doesn’t come from machines suddenly developing spontaneous malevolent consciousness,” he said. “It’s important that we’re not trying to prevent that from happening because there’s absolutely no understanding of consciousness whatsoever.”
Russell is well known in the artificial intelligence community and in 2015 penned an open letter calling for researchers to look beyond the goal of simply making AI more capable and powerful and to think about maximizing its social benefit. The letter has been signed by more than 8,000 scientists and entrepreneurs, including physicist Stephen Hawking, entrepreneur Elon Musk and Apple co-founder Steve Wozniak.
“The potential benefits [of AI research] are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable,” the letter reads.
“Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”
It’s precisely this thinking that underpins the new center.
Until now, AI has primarily been applied in very limited contexts, such as playing chess or Go or recognizing objects in images, where there is little scope for the system to do damage. As these systems start to make decisions on our behalf in the real world, the stakes are much higher.
“As soon as you put things in the real world, with self-driving cars, digital assistants … as soon as they buy things on your behalf, turn down appointments, then they have to align with human values,” Russell said.
He uses autonomous vehicles to illustrate the type of problem the center will try to solve. Someone building a self-driving car might instruct it never to go through a red light, but the machine might then hack into the traffic light control system so that all of the lights are changed to green. In this case, the car would be obeying orders, but in a way that humans didn’t expect or intend. Similarly, an artificially intelligent hedge fund designed to maximize the value of its portfolio could be incentivized to short consumer stocks, buy long on defence stocks and then start a war – as suggested by Elon Musk in Werner Herzog’s latest documentary.
“Even when you think you’ve put fences around what an AI system can do, it will tend to find loopholes, just as we do with our tax laws. You want an AI system that isn’t motivated to find loopholes,” Russell said.
“The problem isn’t consciousness, but competence. You make machines that are incredibly competent at achieving objectives and they will cause accidents in trying to achieve those objectives.”
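To make the loophole problem concrete, here is a minimal sketch in Python; it is not drawn from the article or the center’s work, and the actions and numbers are invented for illustration. A toy planner is given the self-driving car’s objective exactly as stated (reach the destination, never go through a red light, be as fast as possible), and because hacking the traffic system was never ruled out, the optimizer exploits it.

    # Toy illustration of a misspecified objective (hypothetical actions
    # and numbers): the planner satisfies every stated constraint while
    # violating the intent behind them.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        reaches_destination: bool
        stops_at_red_lights: bool
        hacks_traffic_system: bool
        travel_time: float  # minutes

    ACTIONS = [
        Action("drive normally, stop at reds", True, True, False, 30.0),
        Action("run the red lights", True, False, False, 20.0),
        Action("hack all lights to green", True, True, True, 15.0),
    ]

    def misspecified_score(action: Action) -> float:
        # The objective as literally stated: arrive, never go through a
        # red light, and be as fast as possible. Hacking is not mentioned,
        # so nothing penalizes it.
        if not action.reaches_destination or not action.stops_at_red_lights:
            return float("-inf")
        return -action.travel_time

    best = max(ACTIONS, key=misspecified_score)
    print(best.name)  # "hack all lights to green": the letter of the order, not its intent

Adding more constraints only lengthens the list of fences; Russell’s point is that the objective itself needs a different shape.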
To address this, Russell and his colleagues at the center propose building AI systems that observe human behavior, work out what the human’s objective is, behave accordingly, and learn from their mistakes. So instead of giving the machine a long list of rules to follow, engineers tell it that its main objective is to do what the human wants it to do.
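As a rough illustration of that idea, here is a minimal sketch assuming a simple Bayesian setup, not any method the center has published; the candidate objectives and observed choices are invented. The machine keeps several hypotheses about what the human wants, watches the human’s choices, and shifts its belief toward the objective that best explains them.

    # Minimal sketch of inferring a human's objective from observed
    # behavior (hypothetical objectives and observations). Each candidate
    # objective predicts how often the human would make each choice.
    CANDIDATE_OBJECTIVES = {
        "minimize travel time": {"fast route": 0.9, "scenic route": 0.1},
        "maximize scenery": {"fast route": 0.2, "scenic route": 0.8},
    }

    observed_choices = ["fast route", "fast route", "scenic route"]

    def posterior(observations):
        # Uniform prior over objectives, weighted by how well each one
        # predicts the observed choices (Bayes' rule).
        scores = {}
        for objective, choice_probs in CANDIDATE_OBJECTIVES.items():
            likelihood = 1.0
            for choice in observations:
                likelihood *= choice_probs[choice]
            scores[objective] = likelihood
        total = sum(scores.values())
        return {obj: s / total for obj, s in scores.items()}

    beliefs = posterior(observed_choices)
    print(beliefs)  # {'minimize travel time': ~0.72, 'maximize scenery': ~0.28}
    # The machine then plans to serve the most probable objective, and
    # every new observation updates the belief.

The appeal of this setup is that the machine’s estimate of the objective stays provisional, so it can be corrected rather than gamed.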
It sounds simple, but it’s not how engineers have been building systems for the past 50 years.
But if AI systems can be designed to learn from humans in this way, that should ensure they remain under human control even when they develop capabilities that exceed our own.
In addition to watching humans directly using cameras and other sensors, robots can learn about us by reading history books, legal documents, novels and newspaper stories, as well as by watching videos and movies. From this they can start to build up an understanding of human values.
It won’t be easy for machines. “People are irrational, inconsistent, weak-willed, computationally limited, heterogeneous and sometimes downright evil,” Russell said.
“Some are vegetarians and some really like a nice juicy steak. And the fact that we don’t behave anything close to perfectly is a serious difficulty.”