OpenAI CEO Sam Altman: We fear the dangers of artificial intelligence

Sam Altman, CEO of OpenAI, the company that developed the AI app ChatGPT, has warned that the technology poses real risks as it reshapes society.

Altman, 37, stressed that regulators and society should be careful with the technology to guard against potential negative consequences for humanity. "We have to be careful here," Altman said on ABC News on Thursday, adding: "I think people should be happy that we're scared."


"I am particularly concerned that these models could be used for large-scale disinformation," Altman said. "Now that they're getting better at writing code, they could be used for offensive cyberattacks."

However, despite the risks, he said, they could be "the greatest technology that humanity has yet developed".

The warning came as OpenAI released its latest artificial intelligence model, GPT-4, less than four months after the initial release of ChatGPT, which became the fastest-growing consumer app in history.

In the interview, the AI engineer said that while the new version isn't "perfect," it scored in the 90th percentile on the US bar exam and achieved a near-perfect score on the high school SAT math test. It can also write computer code in most programming languages.

Fears about consumer-facing AI center on machines replacing humans. But Altman argued that artificial intelligence only works under human guidance.

"It waits for someone to give it instructions," he said. "It's a tool that is very much under human control." But he added that he is concerned about which humans will be in control of those instructions.

"There will be people who won't set some of the safety limits that we set. Society, I think, has a limited amount of time to figure out how to react to that, how to regulate it, and how to manage it all."

Tesla CEO Elon Musk, one of the first investors in OpenAI when it was still a non-profit, has repeatedly warned that AI is more dangerous than nuclear weapons.

Musk expressed concern that Microsoft, which hosts ChatGPT on its Bing search engine, had disbanded its AI ethics oversight team. "There is no regulatory oversight of AI, which is a *major* problem. I've been calling for AI safety regulation for over a decade!" Musk said in a tweet in December.

This week, Musk also tweeted: “What is left for us humans to do?”

On Thursday, Altman acknowledged that the latest version uses deductive reasoning rather than memorization, a process that can lead to strange answers.

"What I try to warn people about the most is what we call the 'hallucinations problem,'" Altman said. "The model will confidently state entirely fabricated things as if they were facts."

"The right way to think about the models we create is as a reasoning engine, not a fact database."

Although the technology could act as a database of facts, he said, "what we want them to do is something closer to the ability to reason, rather than to memorize."

What you get out directly depends on what you put in.

iGuRu.gr, The Best Technology Site in Greece

OpenAI, Sam Altman, ChatGPT

Written by giorgos

George still wonders what he's doing here ...
