‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead
For half a century, Geoffrey Hinton nurtured the
technology at the heart of chatbots like ChatGPT. Now he worries it will cause
serious harm.
Dr. Geoffrey Hinton is leaving Google so that he can
freely share his concern that artificial intelligence could cause the world
serious harm.
By Cade Metz
Cade Metz reported this story in Toronto.
May 1, 2023
https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html
Geoffrey
Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of
his graduate students at the University of Toronto created technology that
became the intellectual foundation for the A.I. systems that the tech
industry’s biggest companies believe are a key to their future.
On Monday,
however, he officially joined a growing chorus of critics who say those
companies are racing toward danger with their aggressive campaign to create
products based on generative artificial intelligence, the technology that
powers popular chatbots like ChatGPT.
Dr. Hinton
said he has quit his job at Google, where he worked for more than a decade
and became one of the most respected voices in the field, so he can freely
speak out about the risks of A.I. A part of him, he said, now regrets his
life’s work.
“I console
myself with the normal excuse: If I hadn’t done it, somebody else would have,”
Dr. Hinton said during a lengthy interview last week in the dining room of his
home in Toronto, a short walk from where he and his students made their
breakthrough.
Dr.
Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment
for the technology industry at perhaps its most important inflection point in
decades. Industry leaders believe the new A.I. systems could be as important as
the introduction of the web browser in the early 1990s and could lead to
breakthroughs in areas ranging from drug research to education.
But gnawing
at many industry insiders is a fear that they are releasing something dangerous
into the wild. Generative A.I. can already be a tool for misinformation. Soon,
it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers
say, it could be a risk to humanity.
“It is hard
to see how you can prevent the bad actors from using it for bad things,” Dr.
Hinton said.
A brave new
world. A new crop of chatbots powered by artificial intelligence has ignited a
scramble to determine whether the technology could upend the economics of the
internet, turning today’s powerhouses into has-beens and creating the
industry’s next giants. Here are the bots to know:
ChatGPT.
ChatGPT, the artificial intelligence language model from a research lab,
OpenAI, has been making headlines since November for its ability to respond to
complex questions, write poetry, generate code, plan vacations and translate
languages. GPT-4, the latest version introduced in mid-March, can even respond
to images (and ace the Uniform Bar Exam).
Bing. Two
months after ChatGPT’s debut, Microsoft, OpenAI’s primary investor and partner,
added a similar chatbot, capable of having open-ended text conversations on
virtually any topic, to its Bing internet search engine. But it was the bot’s
occasionally inaccurate, misleading and weird responses that drew much of the
attention after its release.
Bard.
Google’s chatbot, called Bard, was released in March to a limited number of
users in the United States and Britain. Originally conceived as a creative tool
designed to draft emails and poems, it can generate ideas, write blog posts and
answer questions with facts or opinions.
Ernie. The
search giant Baidu unveiled China’s first major rival to ChatGPT in March. The
debut of Ernie, short for Enhanced Representation through Knowledge
Integration, turned out to be a flop after a promised “live” demonstration of
the bot was revealed to have been recorded.
After the
San Francisco start-up OpenAI released a new version of ChatGPT in March, more
than 1,000 technology leaders and researchers signed an open letter calling for
a six-month moratorium on the development of new systems because A.I.
technologies pose “profound risks to society and humanity.”
Several
days later, 19 current and former leaders of the Association for the
Advancement of Artificial Intelligence, a 40-year-old academic society,
released their own letter warning of the risks of A.I. That group included Eric
Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s
technology across a wide range of products, including its Bing search engine.
Dr. Hinton,
often called “the Godfather of A.I.,” did not sign either of those letters and
said he did not want to publicly criticize Google or other companies until he
had quit his job. He notified the company last month that he was resigning, and
on Thursday, he talked by phone with Sundar Pichai, the chief executive of
Google’s parent company, Alphabet. He declined to publicly discuss the details
of his conversation with Mr. Pichai.
Google’s
chief scientist, Jeff Dean, said in a statement: “We remain committed to a
responsible approach to A.I. We’re continually learning to understand emerging
risks while also innovating boldly.”
Dr. Hinton,
a 75-year-old British expatriate, is a lifelong academic whose career was
driven by his personal convictions about the development and use of A.I. In
1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced
an idea called a neural network. A neural network is a mathematical system that
learns skills by analyzing data. At the time, few researchers believed in the
idea. But it became his life’s work.
In the
1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon
University, but left the university for Canada because he said he was reluctant
to take Pentagon funding. At the time, most A.I. research in the United States
was funded by the Defense Department. Dr. Hinton is deeply opposed to the use
of artificial intelligence on the battlefield — what he calls “robot soldiers.”
In 2012,
Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex
Krizhevsky, built a neural network that could analyze thousands of photos and
teach itself to identify common objects, such as flowers, dogs and cars.
Google
spent $44 million to acquire a company started by Dr. Hinton and his two
students. And their system led to the creation of increasingly powerful
technologies, including new chatbots like ChatGPT and Google Bard. Mr.
Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and
two other longtime collaborators received the Turing Award, often called “the
Nobel Prize of computing,” for their work on neural networks.
Around the
same time, Google, OpenAI and other companies began building neural networks
that learned from huge amounts of digital text. Dr. Hinton thought it was a
powerful way for machines to understand and generate language, but one that
was still inferior to the way humans handled language.
Then, last
year, as Google and OpenAI built systems using much larger amounts of data, his
view changed. He still believed the systems were inferior to the human brain in
some ways, but he thought they were eclipsing human intelligence in others.
“Maybe what is going on in these systems,” he said, “is actually a lot better
than what is going on in the brain.”
As
companies improve their A.I. systems, he believes, they become increasingly
dangerous. “Look at how it was five years ago and how it is now,” he said of
A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
Until last year,
he said, Google acted as a “proper steward” for the technology, careful not to
release something that might cause harm. But now that Microsoft has augmented
its Bing search engine with a chatbot — challenging Google’s core business —
Google is racing to deploy the same kind of technology. The tech giants are
locked in a competition that might be impossible to stop, Dr. Hinton said.
His
immediate concern is that the internet will be flooded with false photos,
videos and text, and the average person will “not be able to know what is true
anymore.”
He is also
worried that A.I. technologies will in time upend the job market. Today,
chatbots like ChatGPT tend to complement human workers, but they could replace
paralegals, personal assistants, translators and others who handle rote tasks.
“It takes away the drudge work,” he said. “It might take away more than that.”
Down the
road, he is worried that future versions of the technology pose a threat to
humanity because they often learn unexpected behavior from the vast amounts of
data they analyze. This becomes an issue, he said, as individuals and companies
allow A.I. systems not only to generate their own computer code but also to
run that code on their own. And he fears a day when truly autonomous weapons —
those killer robots — become reality.
“The idea
that this stuff could actually get smarter than people — a few people believed
that,” he said. “But most people thought it was way off. And I thought it was
way off. I thought it was 30 to 50 years or even longer away. Obviously, I no
longer think that.”
Many other
experts, including many of his students and colleagues, say this threat is
hypothetical. But Dr. Hinton believes that the race between Google and
Microsoft and others will escalate into a global race that will not stop
without some sort of global regulation.
But that
may be impossible, he said. Unlike with nuclear weapons, he said, there is no
way of knowing whether companies or countries are working on the technology in
secret. The best hope is for the world’s leading scientists to collaborate on
ways of controlling the technology. “I don’t think they should scale this up
more until they have understood whether they can control it,” he said.
Dr. Hinton
said that when people used to ask him how he could work on technology that was
potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S.
effort to build the atomic bomb: “When you see something that is technically
sweet, you go ahead and do it.”
He does not
say that anymore.
Cade Metz
is a technology reporter and the author of “Genius Makers: The Mavericks Who
Brought A.I. to Google, Facebook, and The World.” He covers artificial
intelligence, driverless cars, robotics, virtual reality and other emerging
areas. @cademetz