Artificial intelligence could lead to extinction, experts warn
Published 12 hours ago
By Chris Vallance, Technology reporter
https://www.bbc.com/news/uk-65746524
Artificial intelligence could lead to the extinction of humanity, experts - including the heads of OpenAI and Google DeepMind - have warned.
Dozens of experts have supported a statement published on the webpage of the Centre for AI Safety.
"Mitigating
the risk of extinction from AI should be a global priority alongside other
societal-scale risks such as pandemics and nuclear war" it reads.
But others say the fears are overblown.
Sam Altman, chief executive of ChatGPT-maker OpenAI, Demis Hassabis, chief executive of Google DeepMind, and Dario Amodei of Anthropic have all supported the statement.
The Centre for AI Safety website suggests a number of possible disaster scenarios:
- AIs could be weaponised - for example, drug-discovery tools could be used to build chemical weapons
- AI-generated misinformation could destabilise society and "undermine collective decision-making"
- The power of AI could become increasingly concentrated in fewer and fewer hands, enabling "regimes to enforce narrow values through pervasive surveillance and oppressive censorship"
- Enfeeblement, where humans become dependent on AI "similar to the scenario portrayed in the film Wall-E"
Dr Geoffrey Hinton, who issued an earlier warning about risks from super-intelligent AI, has also supported the Centre for AI Safety's call.
Yoshua Bengio, professor of computer science at the University of Montreal, also signed.
Dr Hinton, Prof Bengio and NYU Professor Yann LeCun are often described as the "godfathers of AI" for their groundbreaking work in the field - for which they jointly won the 2018 Turing Award, which recognises outstanding contributions in computer science.
But Prof LeCun, who also works at Meta, has said these apocalyptic warnings are overblown, tweeting that "the most common reaction by AI researchers to these prophecies of doom is face palming".
'Fracturing reality'
Many other experts similarly believe that fears of AI wiping out humanity are unrealistic, and a distraction from issues that are already a problem, such as bias in existing systems.
Arvind Narayanan, a computer scientist at Princeton University, has previously told the BBC that sci-fi-like disaster scenarios are unrealistic: "Current AI is nowhere near capable enough for these risks to materialise. As a result, it's distracted attention away from the near-term harms of AI".
Elizabeth Renieris, a senior research associate at Oxford's Institute for Ethics in AI, told BBC News she was more worried about risks closer to the present.
"Advancements
in AI will magnify the scale of automated decision-making that is biased,
discriminatory, exclusionary or otherwise unfair while also being inscrutable
and incontestable," she said. They would "drive an exponential
increase in the volume and spread of misinformation, thereby fracturing reality
and eroding the public trust, and drive further inequality, particularly for
those who remain on the wrong side of the digital divide".
Many AI tools essentially "free ride" on the "whole of human experience to date", Ms Renieris said. Many are trained on human-created content - text, art and music they can then imitate - and their creators "have effectively transferred tremendous wealth and power from the public sphere to a small handful of private entities".
But Centre for AI Safety director Dan Hendrycks told BBC News future risks and present concerns "shouldn't be viewed antagonistically".
"Addressing
some of the issues today can be useful for addressing many of the later risks
tomorrow," he said.
Superintelligence efforts
Media coverage of the supposed "existential" threat from AI has snowballed since March 2023, when experts, including Tesla boss Elon Musk, signed an open letter urging a halt to the development of the next generation of AI technology.
That letter asked if we should "develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us".
In contrast, the new campaign has a very short statement, designed to "open up discussion".
The statement compares the risk to that posed by nuclear war. In a blog post, OpenAI recently suggested superintelligence might be regulated in a similar way to nuclear energy: "We are likely to eventually need something like an IAEA [International Atomic Energy Agency] for superintelligence efforts," the firm wrote.
'Be reassured'
Both Sam Altman and Google chief executive Sundar Pichai are among technology leaders to have discussed AI regulation recently with the prime minister.
Speaking to reporters about the latest warning over AI risk, Rishi Sunak stressed the benefits to the economy and society.
"You've
seen that recently it was helping paralysed people to walk, discovering new
antibiotics, but we need to make sure this is done in a way that is safe and
secure," he said.
"Now
that's why I met last week with CEOs of major AI companies to discuss what are
the guardrails that we need to put in place, what's the type of regulation that
should be put in place to keep us safe.
"People
will be concerned by the reports that AI poses existential risks, like
pandemics or nuclear wars.
"I
want them to be reassured that the government is looking very carefully at
this."
He had discussed the issue recently with other leaders at the G7 summit of leading industrialised nations, Mr Sunak said, and would raise it again in the US soon.
The G7 has recently created a working group on AI.