Risks of artificial intelligence
In 2023, Hinton expressed concerns about the rapid progress of AI. He had previously
believed that artificial general intelligence (AGI) was "30 to 50 years or
even longer away." However, in a March 2023 interview with CBS, he
said that "general-purpose AI" might be fewer than 20 years away and could
bring about changes "comparable in scale with the industrial revolution or
electricity."
In an interview with The New York Times published on 1 May 2023, Hinton announced his
resignation from Google so he could "talk about the dangers of AI without
considering how this impacts Google." He noted that "a part of him
now regrets his life's work".
In early May 2023, Hinton said in an interview with the BBC that AI might soon surpass
the information capacity of the human brain. He described some of the risks
posed by AI chatbots as "quite scary". Hinton explained that
chatbots can learn independently and share knowledge, so that whenever one copy
acquires new information, it is automatically disseminated to the entire group,
allowing AI chatbots to accumulate knowledge far beyond the capacity of any
individual.[109] In 2025, he said "My greatest fear is that, in the long
run, it'll turn out that these kind of digital beings we're creating are just a
better form of intelligence than people. […] We'd no longer be needed. […] If
you want to know what it's like not to be the apex intelligence, ask a chicken."
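The knowledge-sharing mechanism described above amounts, in machine-learning terms, to identical digital copies of a model pooling what they learn by exchanging weight updates. The following Python sketch is an illustrative assumption for exposition (the function and variable names are invented), not a description of any particular system:

    import numpy as np

    def train_step(weights, batch):
        # Placeholder learning rule: nudge one copy's weights
        # toward the mean of its own private batch of data.
        gradient = np.mean(batch, axis=0) - weights
        return weights + 0.1 * gradient

    copies = [np.zeros(4) for _ in range(3)]               # three identical copies
    local_data = [np.random.randn(8, 4) for _ in copies]   # each sees different data

    for step in range(100):
        # Each copy learns independently from its own experience...
        copies = [train_step(w, d) for w, d in zip(copies, local_data)]
        # ...then the copies synchronise by averaging their weights, so
        # whatever one copy has learned is disseminated to the whole group.
        shared = sum(copies) / len(copies)
        copies = [shared.copy() for _ in copies]

Because the copies are digital and share an identical architecture, a single weight exchange transfers everything each copy has learned, which is the contrast Hinton draws with the far slower bandwidth of human communication.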
Existential risk from AGI
Hinton has expressed concerns about the possibility of an AI takeover, stating that
"it's not inconceivable" that AI could "wipe out humanity".
Hinton said in 2023 that AI systems capable of intelligent agency would be
useful for military or economic purposes. He worries that generally intelligent
AI systems could "create sub-goals" that are unaligned with their
programmers' interests. He says that AI systems may become power-seeking or
prevent themselves from being shut off, not because programmers intended them
to, but because those sub-goals are useful for achieving later goals.[109] In
particular, Hinton says "we have to think hard about how to control"
AI systems capable of self-improvement.
Catastrophic misuse
Hinton has also raised concerns about deliberate misuse of AI by malicious actors, stating
that "it is hard to see how you can prevent the bad actors from using [AI]
for bad things." In 2017, Hinton called for an international ban on lethal
autonomous weapons. In a 2025 interview, Hinton cited the use of AI by bad actors to create lethal viruses as one of the greatest existential threats posed in the short term: "It just requires one crazy guy with a grudge ... you can now create new viruses relatively cheaply using AI. And you don't need to be a very skilled molecular biologist to do it."
Economic impacts
Hinton was previously optimistic about the economic effects of AI, noting in 2018: "The phrase 'artificial general intelligence' carries with it the
implication that this sort of single robot is suddenly going to be smarter than
you. I don't think it's going to be that. I think more and more of the routine
things we do are going to be replaced by AI systems."[116] Hinton had also
argued that AGI would not make humans redundant: "[AI in the future is]
going to know a lot about what you're probably going to want to do... But it's
not going to replace you."
In 2023, however, Hinton became "worried that AI technologies will in time upend
the job market" and take away more than just "drudge work". He
said in 2024 that the British government would have to establish a universal
basic income to deal with the impact of AI on inequality. In Hinton's view, AI
will boost productivity and generate more wealth. But unless the government
intervenes, it will only make the rich richer and hurt the people who might
lose their jobs. "That's going to be very bad for society," he said.
At Christmas 2024, he had become somewhat more pessimistic, saying there was a "10 to 20 per cent chance" that AI would cause human extinction within the next three decades (he had previously suggested a 10 per cent chance, without a timescale). He expressed surprise at the speed with which AI was advancing, and said that most experts expected AI, probably within the next 20 years, to become "smarter than people ... a scary thought. ... So just
leaving it to the profit motive of large companies is not going to be
sufficient to make sure they develop it safely. The only thing that can force
those big companies to do more research on safety is government
regulation." Another "godfather of AI", Yann LeCun, disagreed,
saying AI "could actually save humanity from extinction".
