"We Definitely Messed Up": Altman's Rare Live Reflection Admits GPT-5 Took a Wrong Turn, Coding Will No Longer Be Paramount

Deep News · 01-27

Key points from the Altman interview:

- A paradigm shift in software development: Altman believes the demand for engineers will not decrease but will instead increase substantially. However, the focus of their work will shift from the low-level task of "writing and debugging code" to the higher-level one of "getting the system to achieve its goals," greatly diminishing the importance of the act of writing code itself.
- The dawn of the personalized software era: As AI capabilities improve, the next few years will see a surge in software tailored for individuals or very small groups. Everyone will be able to continuously customize their own tools at extremely low cost.
- Models will evolve faster than humans: Altman predicts that future models will learn new skills faster than humans, mastering unfamiliar environments and complex tools after "just one explanation" or even by "self-teaching."
- Self-correction of OpenAI's strategy: Altman admitted that during the development of the GPT-5 series, an overemphasis on reasoning and programming capabilities led to a neglect of general abilities such as writing. Moving forward, OpenAI will recalibrate and commit to building a well-rounded, "general-purpose" model.
- A shift in AI safety focus towards "resilience": Facing growing risks such as biosecurity, Altman advocates moving the safety strategy from simple "prohibition and blocking" towards enhancing system "resilience," using AI's own technological advances to build safety infrastructure akin to fire codes.
- Redefining scarce resources: In a world of abundance where AI drastically reduces the cost of creation and production, software products themselves will no longer be scarce. Human "attention" and "original, good ideas" will become the most scarce and valuable resources in commercial competition.

OpenAI CEO Sam Altman admitted in a recent live discussion that the company deviated from its intended path during the development of the GPT-5 series models, becoming overly focused on programming and reasoning capabilities at the expense of other abilities. He also predicted that as AI reshapes how software is built, the traditional task of "writing code" will become far less important, yet demand for engineering roles will actually increase significantly.

During this live conversation with AI industry practitioners, Altman said that OpenAI "definitely messed up" on the GPT-5 series models, leading to a noticeable imbalance in the model's capabilities. He made clear that OpenAI will return to the development path of a "truly high-quality general-purpose model," rapidly addressing shortcomings in other capabilities while continuing to advance programming intelligence.

Altman also expressed concern about the biosecurity risks that AI might trigger. He said he feels "very nervous" about potential AI safety issues in 2026, with biosecurity being the biggest concern. He argued that the strategy must shift from a blocking approach of "preventing everything from happening" towards a resilience-based safety model that enhances overall risk tolerance.

OpenAI Admits Model "Imbalance," Will Return to General-Purpose Route

Altman candidly admitted that during the development of the GPT-5 series models, OpenAI intentionally focused most of its effort on intelligence, reasoning, and programming capabilities. However, "sometimes when you focus on one thing, you inevitably neglect others." As a result, writing performance was less stable than in GPT-4.5.

He emphasized that, from a long-term perspective, the future mainstream will undoubtedly be truly high-quality general-purpose models. "When you want a model to generate a complete application for you, you not only need it to write the code correctly, but you also hope it possesses a clear, organized, and expressive personality when interacting with you."

Altman stated that OpenAI is confident about making multiple capabilities very strong simultaneously within a single model, adding that "this moment in time is particularly critical." The company will continue to advance programming intelligence while rapidly addressing shortcomings in other areas. He revealed that OpenAI is internally using a special version of the GPT-5.2 model, with scientists reporting that "the scientific progress enabled by these models is no longer at a negligible level."

Software Development Paradigm Shift, Engineer Demand to Increase, Not Decrease

On the question of whether AI will reduce the demand for software engineers, Altman gave a counterintuitive answer: the number of people working as engineers is likely to "increase substantially" in the future.

He explained that AI lets engineers capture more of the value of their work by getting computers to achieve the intended functionality directly. Engineers will spend far less time typing and debugging code, shifting their effort towards "getting the system to accomplish tasks for you." Software tailored for a single person or a very small group will emerge in huge numbers, with everyone continuously customizing their own tools.

Altman believes the demand for software engineering roles will not diminish; it will only grow, "and on a much larger scale than today, with a larger portion of global GDP being created through this method."

He also predicted that within the next few years, models will learn new skills faster than humans. When a model encounters a completely unfamiliar environment, tool, or technology for the first time, it will be able to explore and use it reliably after just one explanation, or even without any explanation. "And honestly, that moment doesn't feel very far away."

Cost is No Longer the Sole Consideration, Speed Becomes a New Dimension

Discussing model economics, Altman pointed out that model development has entered a new phase. "The focus is no longer just on how to drive down costs. Increasingly, people are starting to demand faster output speeds and are even willing to pay a higher price for speed."

Historically, OpenAI has performed well in reducing model costs, with a very clear downward trend in the cost curve from the earliest preview versions to now. But the key change now is that besides cost, the previously less-emphasized dimension of "speed" is becoming equally important.

"In some scenarios, people are actually willing to pay a higher price for faster output, even if it's significantly more expensive, as long as they get the result in one percent of the original time." Altman said the challenge OpenAI now faces is not simply reducing costs, but finding a reasonable balance between the two objectives of cost and speed.

He stated that if the market indeed requires further cost reduction, OpenAI is confident it can drive model costs very low, making "running Agents at scale" truly economically viable.

Biosecurity Emerges as the Biggest Worry for 2026

On safety issues, Altman revealed a specific time-related concern. He said he is "very nervous" about potential AI problems in 2026, with biosecurity being his primary worry.

"These models are already quite strong in the biological domain, and our current main strategy is basically to rely on restricting access, adding various classifiers, and trying to prevent people from using models for harmful purposes. But frankly, I don't think this 'blocking' approach can last much longer."

Altman believes that AI safety, especially biosecurity, must shift from preventing everything from happening towards enhancing overall risk tolerance, i.e., "resilience-based" safety. He compared this to humanity's history with fire: initially trying to prohibit its use, later realizing that was unworkable, and instead building fire codes, fire-resistant materials, and urban infrastructure, ultimately making fire controllable and usable.

"AI will certainly pose many real risks, but it will also become part of the solution to these problems. It is both the problem itself and part of the solution." Altman stated that if a significant, serious AI incident were to occur this year, the most likely area would be biosecurity.

In the field of education, Altman holds a conservative view. He said that before understanding the long-term effects of technology on adolescents, there is simply no need to introduce AI at the kindergarten level. "I've always thought there shouldn't be computers in kindergarten at all." He believes that at this stage, it's most important for young children to learn and communicate through real objects and real people, not screens.

