A group of artificial intelligence experts and technology industry executives has called for a six-month break in training powerful artificial intelligence systems, arguing they pose a potential threat to humanity.
In an open letter, they allege that labs working on this technology are in “an out-of-control race to develop and implement increasingly powerful digital minds that no one, not even their creators, can reliably understand, predict or control”.
The statement was signed by more than 1,000 people, including entrepreneur Elon Musk, Apple co-founder Steve Wozniak and Stability AI CEO Emad Mostaque, as well as researchers from DeepMind.
In the letter, they ask that companies that develop this type of program “immediately pause, for at least six months, the training of artificial intelligence systems more powerful than GPT-4”.
GPT-4 is the most advanced version of ChatGPT, one of the most powerful artificial intelligence systems in the world, developed by the company OpenAI.
Both GPT-4 and ChatGPT are forms of generative artificial intelligence; that is, they use algorithms and predictive text to create new content based on prompts.
“This pause must be public and verifiable, and include all key players. If this pause cannot be implemented quickly, governments must step in and institute a suspension,” it adds.
Issued by the non-profit Future of Life Institute, which counts Elon Musk among its outside advisers, the statement warns that these systems could pose “profound risks to society and humanity.”
The institute argues that powerful artificial intelligence systems can generate misinformation and replace jobs with automation.
‘300 million jobs’ may disappear
A recent report by investment bank Goldman Sachs says that artificial intelligence could replace the equivalent of 300 million full-time jobs.
This technology could replace a quarter of work tasks in the US and Europe, the report adds, but it could also create new jobs that did not exist until now and lead to increased productivity.
Experts interviewed by the BBC say that for now it is very difficult to predict the effect that this technology will have on the job market.
Should we create non-human minds?
The letter signed by the experts asks the following question: “Should we create non-human minds that can eventually surpass us, be smarter, make us obsolete and replace us?”
In a recent blog post cited in the letter, OpenAI, the company behind GPT-4 (one of the language systems used by ChatGPT), also warned of the technology’s potential risks.
“A misaligned superintelligence could do serious harm to the world; an autocratic regime with a decisive superintelligence lead could do that too,” the company wrote in a blog post.
OpenAI has not publicly commented on the letter.
Elon Musk co-founded OpenAI, although he resigned from the organization’s board of directors a few years ago and posted critical messages on Twitter about the company’s direction.
Autonomous driving features developed by his automaker, Tesla, like most other similar systems, use artificial intelligence technology.
Recently, several proposals for technology regulation have been tabled in the US, UK, and European Union. However, the UK has ruled out creating a regulatory body dedicated to artificial intelligence.