Stopping AI development is not enough. Everything has to stop

In a previous article we reported that more than 1,100 AI experts, industry leaders and researchers signed an open letter calling on developers to stop training models more powerful than OpenAI's GPT-4 for at least six months.

Among those who refrained from signing was Eliezer Yudkowsky, a US decision theorist who leads research at the Machine Intelligence Research Institute (MIRI). He has been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field.

"This 6-month moratorium would be better than no moratorium," Yudkowsky writes in an opinion piece for Time magazine.

"I refrained from signing because I believe the letter underestimates the seriousness of the situation and asks too little to solve the problem."

Yudkowsky ups the ante by stating, "If someone builds a too-powerful AI under current conditions, I expect that every single member of the human species and all biological life on Earth will die shortly thereafter."

Here is an excerpt from his article:

The key issue is not "human-competitive" intelligence (as the open letter puts it); it is what happens once AI becomes smarter than human intelligence. The key thresholds there may not be obvious, and we certainly cannot calculate in advance what will happen when. But right now it seems entirely possible that a research lab could cross critical lines without noticing.

It's not that we can't, in principle, survive creating something much smarter than ourselves. It's that doing so would require precision, preparation and new scientific insights, and probably not building AI systems composed of giant, inscrutable arrays of fractional numbers.

It took more than 60 years from when the concept of Artificial Intelligence was first proposed and studied to reach today's capabilities. Solving the safety of superhuman intelligence – not perfect safety, safety in the sense of "not literally killing everyone" – could reasonably take at least half that long. The problem with attempting this with superhuman intelligence is that if you get it wrong on the first try, you don't get to learn from your mistake, because you will be dead. Humanity will not learn from its mistakes and try again, as with other challenges we have overcome in our history, because we will all be gone.

Trying to get everything right on the first critical try is an extraordinary ask in science and engineering. We are not prepared. We are not on track to be prepared in any reasonable amount of time. There is no plan. Progress in AI capabilities is running far, far ahead of progress in AI alignment, or even in understanding what the hell is going on inside these systems. If we actually do this, we will all die.


Written by giorgos

