Sam Altman says the haters of GPT-5 got it all wrong
OpenAI's August launch of its GPT-5 language model was a disaster. There were glitches throughout: during the livestream, the model produced charts with obviously wrong numbers. In a Reddit AMA with OpenAI employees, users complained that the new model was less friendly and demanded the company bring back the previous version. Most damningly, critics said GPT-5 fell short of the stratospheric expectations OpenAI had been stoking for years. GPT-5, which had been promised as a game changer, really just played the game better. But it was still the same game.
Doomsayers seized the moment to declare the end of the AI boom. Some even predicted an AI winter. Gary Marcus, the field's full-time bubble popper, called GPT-5 "the most hyped AI system of all time." "It was supposed to deliver two things, AGI and PhD-level cognition, and it delivered neither," he said. What's more, he argued, the seemingly pale new model proved that OpenAI's ticket to AGI, which it had hoped to punch by piling on data and chips to make its systems smarter, could no longer be punched. For once, Marcus's views were echoed by a significant portion of the AI community. In the days following the launch, GPT-5 looked like AI's version of New Coke.
Sam Altman isn't having it. A month after the launch, he strides into a conference room at the company's headquarters in San Francisco's Mission Bay neighborhood, eager to explain to me and my colleague Kylie Robison why GPT-5 is everything he'd hoped for, and why his epic quest for AGI is still on track. "The vibes were kind of bad at launch," he admits. "But now they're great." Yes, great. It's true that the criticism has quieted. In fact, the company's recent release of a mind-bending tool for generating a dramatic feed of AI video slop has diverted the narrative away from GPT-5's disappointing debut. Altman's message is that the naysayers are on the wrong side of history. He insists that the journey to AGI is still on course.
Numbers Game
Critics may see GPT-5 as marking the end of the AI summer, but Altman and his team argue that it delivers AI technology as an essential tutor, a go-to source of information, and, in particular, an advanced collaborator for scientists and coders. Altman claims that users are starting to see it that way. "GPT-5 is the first time people are saying, 'Holy shit, it's doing this important physics.' Or a biologist says, 'Wow, it really helped me figure this out,'" he says. "Nothing like that happened with any pre-GPT-5 model. It's the beginning of AI actually helping to accelerate the pace of new scientific discovery."
So why the rocky initial reception? Altman and his team offer various reasons. One, they say, is that since GPT-4 hit the streets, the company has shipped interim versions that were themselves transformative, especially the advanced reasoning modes it added. That made the jump from 4 to 5 feel smaller than it was. "We just did a lot along the way," says Altman. Greg Brockman, OpenAI's president, agrees: "I'm not shocked that many people had that kind of [underwhelmed] reaction, because we had shown our hand."
OpenAI also says that since GPT-5 is optimized for specialized uses such as science and programming, everyday users may take some time to appreciate its virtues. "Most people are not physics researchers," Altman notes. As Mark Chen, OpenAI's head of research, explains it, unless you're a mathematician, you don't care that GPT-5 can place in the top five in a math olympiad, whereas last year's system ranked around 200th.
As for the charge that GPT-5 shows scaling no longer works, OpenAI says that rests on a misunderstanding. Unlike previous models, GPT-5 did not get its major advances from a much larger dataset and more compute. The new model draws its gains from reinforcement learning, a technique that relies on feedback from expert humans. Brockman says OpenAI had developed its models to the point where they could generate their own data to feed the reinforcement learning cycle. "When the model is dumb, the only thing you want to do is train a bigger version of it," he says. "When the model is smart, you want to sample from it. You want to train on its own data."