Hundreds of prominent artificial intelligence (AI) researchers and technology figures, including Tesla CEO Elon Musk, have signed an open letter calling on AI labs to pause the training of giant AI systems, ringing alarm bells over the "profound risks" these systems pose to society and humanity.
According to the letter, published by the nonprofit Future of Life Institute, AI labs are currently locked in an “out-of-control race” to develop and deploy machine learning systems “that no one — not even their creators — can understand, predict, or reliably control.”
"AI systems with human-competitive intelligence can pose
profound risks to society and humanity," said the open letter.
"Powerful
AI systems should be developed only once we are confident that their effects
will be positive and their risks will be manageable," it continued.
The letter argues that developers should take whatever time researchers
need to establish that such powerful systems are safe before deploying
them.
Among the signatories of the letter are author Yuval Noah
Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn,
politician Andrew Yang, and several well-known AI researchers and CEOs,
including Stuart Russell, Yoshua Bengio, Gary Marcus, and Emad
Mostaque.
The letter was mainly prompted by the release of GPT-4 from the
San Francisco firm OpenAI.
The company says its latest model is
much more powerful than the previous version, which was used to power
ChatGPT, a bot capable of generating tracts of text from the briefest of
prompts.
“Therefore, we call on all AI labs to immediately pause
for at least 6 months the training of AI systems more powerful than GPT-4,”
says the letter. “This pause should be public and verifiable, and include
all key actors. If such a pause cannot be enacted quickly, governments
should step in and institute a moratorium.”
Musk was an early
investor in OpenAI and spent years on its board; his car firm Tesla
develops AI systems to help power its self-driving technology, among other
applications.
The letter, hosted by the Musk-funded Future of Life Institute,
was signed by prominent critics as well as competitors of OpenAI, such as
Stability AI chief Emad Mostaque.
The letter quoted from a blog post
written by OpenAI co-founder Sam Altman, who suggested that "at some point, it
may be important to get independent review before starting to train future
systems".
"We agree. That point is now," the authors of the open
letter wrote.
They called for governments to step in and impose a
moratorium if companies failed to agree.
The six months, the letter says, should be
used to develop safety protocols and AI governance systems, and to refocus
research on making AI systems more accurate, safe, "trustworthy and
loyal".
The letter did not detail the specific dangers posed by
GPT-4.