"Since the release of GPT-4 in March 2023, OpenAI has been working on GPT-5..." states the Wall Street Journal. But “OpenAI's new artificial intelligence project has been delayed and has huge costs. It's unclear when — or if — it will work.”
"There may not be enough data in the world to make it smart enough."
OpenAI's closest partner and biggest investor, Microsoft, had expected to see the new model around mid-2024, people familiar with the matter said. OpenAI has run at least two large training rounds, each involving months of processing massive amounts of data, with the goal of making Orion, the project's internal codename, smarter.
Each time, new problems emerged and the software fell short of the results the researchers were hoping for. It is worth noting that each training run costs about half a billion dollars in computing alone.
The $157 billion valuation investors gave OpenAI in October is largely based on CEO Sam Altman's prediction that GPT-5 will represent a "significant leap forward" in all kinds of subjects and tasks…
According to the article, OpenAI wants to use its new model to generate high-quality synthetic training data. But OpenAI researchers "concluded that they needed different, high-quality data," as "the public Internet didn't have enough."
OpenAI's solution was to create data from scratch. It is hiring people to write fresh software code or solve math problems that Orion can learn from. It has also hired experts in theoretical physics.
The workers, some of whom are software engineers and mathematicians, also share explanations of their work with Orion.
Having people explain their thinking deepens the value of the newly generated data. Besides being new “language” for the LLM to absorb, it is also a map of how the model can solve similar problems in the future…
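To make the idea concrete, here is a minimal sketch of what one such explanation-annotated record could look like; the field names and the example itself are illustrative assumptions, not a format described in the article.

```python
# A minimal sketch of an explanation-annotated training record.
# The structure and field names are illustrative assumptions, not OpenAI's format.
training_example = {
    "problem": "Write a function that returns the n-th Fibonacci number.",
    "solution": (
        "def fib(n):\n"
        "    a, b = 0, 1\n"
        "    for _ in range(n):\n"
        "        a, b = b, a + b\n"
        "    return a\n"
    ),
    "explanation": (
        "Keep two running values, a and b, holding consecutive Fibonacci numbers. "
        "Each loop iteration advances the pair one step, so after n iterations a "
        "holds fib(n). This avoids the exponential cost of the naive recursion."
    ),
}
```

The explanation field is what carries the "map": it records not just the answer but the reasoning a model could imitate on similar problems.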
But the process is painfully slow.
GPT-4 was trained on about 13 trillion tokens. A thousand people writing 5,000 words a day would take months to generate a billion tokens.
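As a rough sanity check on that claim, here is a back-of-the-envelope calculation; the tokens-per-word ratio is an assumption (roughly 1.3 tokens per English word), not a figure from the article.

```python
# Back-of-the-envelope: how long would 1,000 human writers need to produce
# one billion training tokens? The tokens-per-word ratio is an assumption.
writers = 1_000                # people writing full time
words_per_day = 5_000          # words per person per day
tokens_per_word = 1.3          # rough average for English text (assumed)
target_tokens = 1_000_000_000  # one billion tokens

daily_tokens = writers * words_per_day * tokens_per_word  # ~6.5 million tokens/day
days_needed = target_tokens / daily_tokens                # ~154 days, i.e. ~5 months

print(f"{daily_tokens:,.0f} tokens/day -> {days_needed:.0f} days (~{days_needed / 30:.1f} months)")
# GPT-4's ~13 trillion tokens would take this team roughly 13,000 times as long.
```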
OpenAI's already difficult task has been complicated by internal turmoil and near-constant attempts by rivals to poach its top researchers by offering them millions of dollars…
More than two dozen key executives, researchers and longtime employees have left OpenAI this year, including co-founder and chief scientist Ilya Sutskever and chief technology officer Mira Murati. Last Thursday, Alec Radford, a prodigious researcher who served as lead author on several of OpenAI's scientific papers, announced his departure after about eight years at the company…
OpenAI isn't the only company concerned that progress has hit a wall. Across the industry, a debate is raging about whether the improvement of AI is starting to slow.
Sutskever, who recently co-founded a new AI company called Safe Superintelligence, or SSI, told attendees at a recent AI conference that the era of ever-bigger training data is over.
"Data doesn't grow because we only have one Internet," he told a crowd of researchers, policy experts and scientists. "Data is the fossil fuel of artificial intelligence".
And that fuel is starting to run out.
Your answer is not just wrong, it is myopic. Wrong, because "the world is starving" is not true in the sense that humanity would need to halt technological development, computing, space exploration and so on in order to survive. And myopic, because you fail to see that technological progress is what improves quality of life, and that right now, as we speak, we are living through a revolution with AI. The AI you belittle will help eliminate the hunger you speak of, discover new vaccines, treatments and medical methods, reduce accidents on the roads and elsewhere, and improve and speed up justice. It is not a video game; it changes our lives, our quality of life, our health, our wealth, our life expectancy.
There are children starving in the world, and you are handing over money that would be enough for everyone to live on. Reptiles.