Wired:

Six months ago this week, many prominent AI researchers, engineers, and entrepreneurs signed an open letter calling for a six-month pause on development of AI systems more capable than OpenAI’s latest GPT-4 language generator. It argued that AI is advancing so quickly and unpredictably that it could eliminate countless jobs, flood us with disinformation, and—as a wave of panicky headlines reported—destroy humanity. Whoops!

As you may have noticed, the letter did not result in a pause in AI development, or even a slowdown to a more measured pace. Companies have instead accelerated their efforts to build more advanced AI.

Elon Musk, one of the most prominent signatories, didn’t wait long to ignore his own call for a slowdown. In July he announced xAI, a new company he said would seek to go beyond existing AI and compete with OpenAI, Google, and Microsoft. And many Google employees who also signed the open letter have stuck with their company as it prepares to release an AI model called Gemini, which boasts broader capabilities than OpenAI’s GPT-4.

WIRED reached out to more than a dozen signatories of the letter to ask what effect they think it had and whether their alarm about AI has deepened or faded in the past six months. None who responded seemed to have expected AI research to really grind to a halt.

“I never thought that companies were voluntarily going to pause,” says Max Tegmark, an astrophysicist at MIT who leads the Future of Life Institute, the organization behind the letter. Some might argue that admission makes the whole project look cynical, but Tegmark says his main goal was not to pause AI but to legitimize conversation about the dangers of the technology, up to and including the fact that it might turn on humanity. The result “exceeded my expectations,” he says.

The responses to my follow-up also show the huge diversity of concerns experts have about AI—and that many signers aren’t actually obsessed with existential risk.

Lars Kotthoff, an associate professor at the University of Wyoming, says he wouldn’t sign the same letter today because many who called for a pause are still working to advance AI. “I’m open to signing letters that go in a similar direction, but not exactly like this one,” Kotthoff says. He adds that what concerns him most today is the prospect of a “societal backlash against AI developments, which might precipitate another AI winter” by quashing research funding and making people spurn AI products and tools.

Other signers told me they would gladly sign again, but their big worries seem to involve near-term problems, such as disinformation and job losses, rather than Terminator scenarios.

“In the age of the internet and Trump, I can more easily see how AI can lead to destruction of human civilization by distorting information and corrupting knowledge,” says Richard Kiehl, a professor working on microelectronics at Arizona State University.

“Are we going to get Skynet that’s going to hack into all these military servers and launch nukes all over the planet? I really don’t think so,” says Stephen Mander, a PhD student working on AI at Lancaster University in the UK. He does see widespread job displacement looming, however, and calls it an “existential risk” to social stability. But he also worries that the letter may have spurred more people to experiment with AI and acknowledges that he didn’t act on the letter’s call to slow down. “Having signed the letter, what have I done for the last year or so? I’ve been doing AI research,” he says.

Despite the letter’s failure to trigger a widespread pause, it did help propel the idea that AI could snuff out humanity into a mainstream topic of discussion. It was followed by a public statement signed by the leaders of OpenAI and Google’s DeepMind AI division that compared the existential risk posed by AI to that of nuclear weapons and pandemics. Next month, the British government will host an international “AI safety” conference, where leaders from numerous countries will discuss possible harms AI could cause, including existential threats.


MuskWire TLDR:

Six months ago, many prominent AI researchers, engineers, and entrepreneurs signed an open letter calling for a six-month pause on the development of AI systems more capable than OpenAI’s GPT-4 language generator. The letter argued that AI’s rapid and unpredictable progress could lead to job losses, disinformation, and even the destruction of humanity. The letter did not result in a slowdown, however; companies have instead accelerated their efforts to build more advanced AI.

Elon Musk, one of the signatories, announced the creation of xAI, a new company that aims to compete with OpenAI, Google, and Microsoft. Many Google employees who also signed the letter have stayed with the company as it prepares to release an AI model called Gemini, which boasts broader capabilities than OpenAI’s GPT-4.

WIRED reached out to more than a dozen signatories to gauge the impact of the letter and whether their concerns about AI have changed in the past six months. None of the respondents expected AI research to actually grind to a halt. Max Tegmark, the astrophysicist who leads the Future of Life Institute, said his main goal was not to pause AI but to legitimize conversation about its dangers. The responses also revealed that many signatories are not solely focused on existential risks but are also concerned about near-term problems like disinformation and job losses.

Despite the letter’s failure to slow down AI development, it did bring the idea of AI posing a threat to humanity into the mainstream. It was followed by a public statement signed by leaders from OpenAI and Google’s DeepMind AI division, comparing the existential risk of AI to nuclear weapons and pandemics. Next month, the British government will host an international “AI safety” conference to discuss potential harms AI could cause, including existential threats.