What Exactly Are the Dangers Posed by A.I.?

A recent letter calling for a moratorium on A.I. development blends real threats with speculation. But concern is growing among experts.

By Cade Metz

Cade Metz writes about artificial intelligence and other emerging technologies.

May 1, 2023

https://www.nytimes.com/2023/05/01/technology/ai-problems-danger-chatgpt.html

In late March, more than 1,000 technology leaders, researchers and other pundits working in and around artificial intelligence signed an open letter warning that A.I. technologies present “profound risks to society and humanity.”


The group, which included Elon Musk, Tesla’s chief executive and the owner of Twitter, urged A.I. labs to halt development of their most powerful systems for six months so that they could better understand the dangers behind the technology.


“Powerful A.I. systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.


The letter, which now has over 27,000 signatures, was brief. Its language was broad. And some of the names behind the letter seemed to have a conflicted relationship with A.I. Mr. Musk, for example, is building his own A.I. start-up, and he is one of the primary donors to the organization that wrote the letter.

But the letter represented a growing concern among A.I. experts that the latest systems, most notably GPT-4, the technology introduced by the San Francisco start-up OpenAI, could cause harm to society. They believe future systems will be even more dangerous.

Some of the risks have already arrived. Others will not arrive for months or years. Still others are purely hypothetical.

“Our ability to understand what could go wrong with very powerful A.I. systems is very weak,” said Yoshua Bengio, a professor and A.I. researcher at the University of Montreal. “So we need to be very careful.”


Why Are They Worried?


Dr. Bengio is perhaps the most important person to have signed the letter.


Working with two other academics — Geoffrey Hinton, until recently a researcher at Google, and Yann LeCun, now chief A.I. scientist at Meta, the owner of Facebook — Dr. Bengio spent the past four decades developing the technology that drives systems like GPT-4. In 2018, the researchers received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.

A neural network is a mathematical system that learns skills by analyzing data. About five years ago, companies like Google, Microsoft and OpenAI began building neural networks that learned from huge amounts of digital text. These networks are known as large language models, or L.L.M.s.

A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today’s powerhouses into has-beens and creating the industry’s next giants. Here are the bots to know:


ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).


Bing. Two months after ChatGPT’s debut, Microsoft, OpenAI’s primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot’s occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.


Bard. Google’s chatbot, called Bard, was released in March to a limited number of users in the United States and Britain. Originally conceived as a creative tool designed to draft emails and poems, it can generate ideas, write blog posts and answer questions with facts or opinions.


Ernie. The search giant Baidu unveiled China’s first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised “live” demonstration of the bot was revealed to have been recorded.


By pinpointing patterns in that text, L.L.M.s learn to generate text on their own, including blog posts, poems and computer programs. They can even carry on a conversation.
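
The idea of learning patterns from text can be illustrated with a toy sketch. This is not how GPT-4 works — real L.L.M.s are neural networks with billions of parameters — but the underlying principle of predicting a plausible next word from patterns observed in training text is the same. The corpus string and function names below are illustrative assumptions, using only the Python standard library:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which words were seen following which in the training text."""
    words = text.split()
    table = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def generate(table, start, length=8, seed=0):
    """Generate text by repeatedly sampling a word that followed the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break  # no observed continuation; stop generating
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
table = train_bigrams(corpus)
print(generate(table, "the"))
```

Every word the sketch emits was seen following the previous word in training — which also hints at why such systems can string together fluent text that is not grounded in any fact.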


This technology can help computer programmers, writers and other workers generate ideas and do things more quickly. But Dr. Bengio and other experts also warned that L.L.M.s can learn unwanted and unexpected behaviors.


These systems can generate untruthful, biased and otherwise toxic information. Systems like GPT-4 get facts wrong and make up information, a phenomenon called “hallucination.”


Companies are working on these problems. But experts like Dr. Bengio worry that as researchers make these systems more powerful, they will introduce new risks.


Because these systems deliver information with what seems like complete confidence, it can be a struggle to separate truth from fiction when using them. Experts are concerned that people will rely on these systems for medical advice, emotional support and the raw information they use to make decisions.


“There is no guarantee that these systems will be correct on any task you give them,” said Subbarao Kambhampati, a professor of computer science at Arizona State University.


Experts are also worried that people will misuse these systems to spread disinformation. Because they can converse in humanlike ways, they can be surprisingly persuasive.


“We now have systems that can interact with us through natural language, and we can’t distinguish the real from the fake,” Dr. Bengio said.


Experts are worried that the new A.I. could be job killers. Right now, technologies like GPT-4 tend to complement human workers. But OpenAI acknowledges that they could replace some workers, including people who moderate content on the internet.


They cannot yet duplicate the work of lawyers, accountants or doctors. But they could replace paralegals, personal assistants and translators.


A paper written by OpenAI researchers estimated that 80 percent of the U.S. work force could have at least 10 percent of their work tasks affected by L.L.M.s and that 19 percent of workers might see at least 50 percent of their tasks impacted.


“There is an indication that rote jobs will go away,” said Oren Etzioni, the founding chief executive of the Allen Institute for AI, a research lab in Seattle.


Long-Term Risk: Loss of Control

Some people who signed the letter also believe artificial intelligence could slip outside our control or destroy humanity. But many experts say that’s wildly overblown.


The letter was written by a group from the Future of Life Institute, an organization dedicated to exploring existential risks to humanity. They warn that because A.I. systems often learn unexpected behavior from the vast amounts of data they analyze, they could pose serious, unexpected problems.


They worry that as companies plug L.L.M.s into other internet services, these systems could gain unanticipated powers because they could write their own computer code. They say developers will create new risks if they allow powerful A.I. systems to run their own code.

“If you look at a straightforward extrapolation of where we are now to three years from now, things are pretty weird,” said Anthony Aguirre, a theoretical cosmologist and physicist at the University of California, Santa Cruz, and a co-founder of the Future of Life Institute.

“If you take a less probable scenario — where things really take off, where there is no real governance, where these systems turn out to be more powerful than we thought they would be — then things get really, really crazy,” he said.


Dr. Etzioni said talk of existential risk was hypothetical. But he said other risks — most notably disinformation — were no longer speculation.

“Now we have some real problems,” he said. “They are bona fide. They require some responsible reaction. They may require regulation and legislation.”

Cade Metz is a technology reporter and the author of “Genius Makers: The Mavericks Who Brought A.I. to Google, Facebook, and The World.” He covers artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas. @cademetz