Former OpenAI safety researcher brands pace of AI development ‘terrifying’
Steven Adler expresses concern industry taking ‘very risky gamble’ and raises doubts about future of humanity
Dan Milmo
Global technology editor
Tue 28 Jan 2025 20.06 CET
A former safety researcher at OpenAI says he is “pretty terrified” about the pace of development in artificial intelligence, warning the industry is taking a “very risky gamble” on the technology.
Steven Adler expressed concerns about companies seeking to rapidly develop artificial general intelligence (AGI), a theoretical term referring to systems that match or exceed humans at any intellectual task.
Adler, who left OpenAI in November, said in a series of posts on X that he’d had a “wild ride” at the US company and would miss “many parts of it”.
However, he said the technology was developing so quickly that it raised doubts about the future of humanity.
“I’m pretty terrified by the pace of AI development these days,” he said. “When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: will humanity even make it to that point?”
Some experts, such as the Nobel prize winner Geoffrey Hinton, fear that powerful AI systems could evade human control with potentially catastrophic consequences. Others, such as Meta’s chief AI scientist, Yann LeCun, have played down the existential threat, saying AI “could actually save humanity from extinction”.
According to Adler’s LinkedIn profile, he led safety-related research for “first-time product launches” and “more speculative long-term AI systems” in a four-year career at OpenAI.
Referring to the development of AGI, OpenAI’s core goal, Adler added: “An AGI race is a very risky gamble, with huge downside.” Adler said no research lab had a solution to AI alignment – the process of ensuring that systems adhere to a set of human values – and that the industry might be moving too fast to find one.
“The faster we race, the less likely that anyone finds one in time.”
Adler’s X posts came as China’s DeepSeek, which is also seeking to develop AGI, rattled the US tech industry by unveiling a model that rivalled OpenAI’s technology despite apparently being developed with fewer resources.
Warning that the industry appeared to be “stuck in a really bad equilibrium”, Adler said “real safety regs” were needed.
“Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, maybe disastrously.”
Adler and OpenAI have been contacted for comment.