Sam Altman, CEO of OpenAI, the company that developed the AI app ChatGPT, has warned that the technology poses real risks as it reshapes society.
Altman, 37, stressed that regulators and society should be careful with the technology to guard against potential negative consequences for humanity. "We have to be careful here," Altman said on ABC News on Thursday, adding: "I think people should be happy that we're scared."
"I am particularly concerned that these models could be used for large-scale disinformation," Altman said. "Now that they are getting better at writing computer code, they could be used for aggressive cyber attacks."
However, despite the risks, he said, they could be "the greatest technology that humanity has yet developed".
The warning came as OpenAI released GPT-4, the latest version of its artificial intelligence model, less than four months after the public launch of ChatGPT, which became the fastest-growing consumer app in history.
In the interview, Altman said that while the new version isn't "perfect," it scored in the 90th percentile on the US bar exam and achieved a near-perfect score on the high school SAT math test. It can also write computer code in most programming languages.
Fears around consumer-facing AI center on machines replacing humans. But Altman pointed out that AI works only under human guidance.
"It waits for someone to give it instructions," he said. "It's a tool that is very much under human control." Still, he said he is concerned about which humans will be in control of those instructions.
"There will be other people who don't put some of the safety limits that we put on. Society, I think, has a limited amount of time to figure out how to react to that, how to regulate it, how to handle it."
Tesla CEO Elon Musk, one of the first investors in OpenAI when it was still a non-profit, has repeatedly warned that AI is more dangerous than nuclear weapons.
Musk expressed concern that Microsoft, which has integrated ChatGPT's technology into its Bing search engine, had disbanded its AI ethics oversight team. “There is no regulatory oversight of AI, which is a *major* problem. I have been calling for AI safety regulation for over a decade!” Musk said in a tweet in December.
This week, Musk also tweeted: “What is left for us humans to do?”
On Thursday, Altman acknowledged that the latest version uses deductive reasoning rather than memorization, a process that can lead to strange answers.
"What I try to caution people about the most is what we call the 'hallucinations problem,'" Altman said. “The model will confidently report entirely fabricated things as if they were facts.”
"The right way to think of the models we create is as a reasoning engine, not as a database of facts."
Although the technology could act as a database of facts, he said, "what we want them to do is something closer to the ability to reason, rather than memorization."
What you get out directly depends on what you put in.