AI Needs Self-Improvement Capabilities: Growing View in AI Circles That Current Training Methods Are Hitting a Wall

Deep News · 09:52

A small but growing group of AI developers from companies like OpenAI and Alphabet (GOOG) argue that current technological approaches cannot achieve breakthroughs in fields like biology and medicine, nor can they eliminate basic errors. This perspective is raising industry-wide doubts about the direction of billions in investments.

Discussions at last week's NeurIPS conference in San Diego highlighted this concern. Researchers emphasized that developers must build AI capable of acquiring new abilities after deployment, a "continuous learning" capacity akin to human cognition that no current system has achieved.

These doubts contrast with optimistic projections from AI leaders. Anthropic CEO Dario Amodei recently claimed that scaling existing training techniques could achieve artificial general intelligence (AGI), while OpenAI CEO Sam Altman predicted self-improving AI within two years. If the skeptics are correct, however, the billions OpenAI and Anthropic have invested in reinforcement learning could be at risk.

Despite these technical limitations, current AI performance in writing, design, commerce, and data analysis continues to drive revenue growth. OpenAI expects annual revenue to triple to $13 billion, with Anthropic projecting a tenfold increase to $4 billion.

**Core Debate: Can AI Learn Like Humans?**

Amazon AI head David Luan stated unequivocally: "I guarantee today’s model training methods won’t last." Multiple NeurIPS attendees echoed this, suggesting human-like AI may require entirely new paradigms.

OpenAI co-founder Ilya Sutskever noted that today's most advanced training methods fail at generalization, that is, handling tasks beyond the domains a model was pre-trained on. In medicine, continuous learning could let a system such as ChatGPT identify novel tumors absent from the medical literature, much as a human radiologist extrapolates from a single case.

In his NeurIPS keynote, the University of Alberta's Richard Sutton, widely regarded as the father of reinforcement learning, advocated for AI that learns from its own experience rather than from curated expert datasets, warning that reliance on human knowledge ultimately stunts progress.
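Sutton's distinction is concrete: in reinforcement learning, an agent improves from its own trial and error rather than from labeled expert data. As a rough illustration only (the toy environment, reward scheme, and hyperparameters below are invented for this sketch, not drawn from his keynote), here is a minimal tabular Q-learning loop in Python:

```python
# Minimal sketch of learning from experience: tabular Q-learning on a toy
# 1-D chain. The agent starts knowing nothing and improves purely from
# the transitions it experiences; no curated dataset is involved.
import random

N_STATES = 6          # states 0..5; reaching state 5 ends the episode
ACTIONS = (-1, +1)    # move left or right along the chain
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

# Q-table: estimated return for each (state, action) pair, learned online.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(state):
    """Pick the best-known action, breaking ties randomly."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

def step(state, action):
    """Toy dynamics: move along the chain; reward 1 only at the far end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        action = random.choice(ACTIONS) if random.random() < EPS else greedy(state)
        nxt, reward, done = step(state, action)
        # Temporal-difference update: learn from the experienced transition.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the learned policy points right, toward the reward.
print({s: greedy(s) for s in range(N_STATES - 1)})
```

Everything the agent ends up knowing here was acquired through interaction, which is the property Sutton argues scaled-up systems will need.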

**Technical Attempts and Roadblocks**

Notable NeurIPS papers explored possible solutions. MIT and OpenAI researchers proposed "adaptive language models" that assimilate real-world information, for instance by having ChatGPT reformat unfamiliar medical-journal articles into question-and-answer pairs and then training on them. Some believe such continuous updating is essential if AI is to make scientific breakthroughs by applying new information the way human researchers do.
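The paper's exact method is not detailed in this report, but the recipe it describes, turning unseen documents into question-and-answer pairs and fine-tuning on them, can be sketched roughly as follows. The function `ask_model` is a placeholder for whatever LLM inference endpoint is available, and the prompt and JSONL record format are illustrative assumptions, not the researchers' implementation:

```python
# Rough sketch of the "reformat new documents into training data" recipe.
# All names here are illustrative placeholders.
import json

def ask_model(prompt: str) -> str:
    """Placeholder for a call to an LLM inference endpoint."""
    raise NotImplementedError("wire this to your model provider")

def article_to_qa_pairs(article_text: str, n_pairs: int = 5) -> list[dict]:
    """Ask the model to restate an unfamiliar article as Q&A pairs."""
    prompt = (
        f"Read the following article and write {n_pairs} question/answer "
        "pairs capturing its key facts, as a JSON list of objects with "
        f"'question' and 'answer' fields.\n\nARTICLE:\n{article_text}"
    )
    return json.loads(ask_model(prompt))

def build_finetune_file(articles: list[str], path: str) -> None:
    """Accumulate Q&A pairs into a supervised fine-tuning dataset (JSONL)."""
    with open(path, "w") as f:
        for text in articles:
            for pair in article_to_qa_pairs(text):
                record = {
                    "messages": [
                        {"role": "user", "content": pair["question"]},
                        {"role": "assistant", "content": pair["answer"]},
                    ]
                }
                f.write(json.dumps(record) + "\n")

# The resulting file would then feed a fine-tuning job, so the updated
# model can answer questions about material it was never pretrained on.
```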

However, persistent errors in basic tasks have slowed enterprise adoption of AI agents, which often malfunction without extensive provider oversight.

**Commercial Implications: Growth vs. Investment Risks**

If skeptics like Luan and Sutskever are right, the billions earmarked for upcoming investments in reinforcement learning, including payments to data firms like Scale AI, could be misallocated. Scale maintains that its reinforcement learning products would remain vital even for continuously learning AI.

Nevertheless, AI developers continue generating substantial revenue without such breakthroughs. OpenAI and Anthropic, which had negligible income three years ago, now earn significant revenue from chatbots and model sales. AI startups such as the coding assistant Cursor collectively anticipate more than $3 billion in annual sales.

**Industry Competition: Alphabet Gains Ground**

Researchers also analyzed the intensifying AI race among tech giants. Alphabet has surpassed rivals on certain metrics, prompting OpenAI’s Altman to prepare for "economic headwinds."

During a Q&A with Alphabet's AI team, attendees asked about pretraining improvements, a focus area for OpenAI this year. Alphabet VP Vahab Mirrokni cited better data composition and improved management of its custom tensor processing units (TPUs), which reduces the impact of hardware failures.

OpenAI leadership recently claimed similar pretraining advancements, developing a new model codenamed "Garlic" to compete with Alphabet in coming months.

