TMTPost
Ilya Sutskever Says AI's 'Scaling Era' Is Ending; Research, Not Bigger Models, Will Drive the Next Wave

Ilya Sutskever, co-founder of Safe Superintelligence Inc. and former chief scientist at OpenAI, says the AI industry is hitting a turning point.

In a new interview released Tuesday, he argued that the "scaling era" — the strategy of driving AI progress by massively increasing compute, data and model size — is coming to an end.

"These models somehow just generalize dramatically worse than people,"   Sutskever said. "It's a very fundamental thing."

The remarks are striking coming from the scientist who helped popularize scale-driven breakthroughs inside OpenAI. But Sutskever now says the industry won't reach artificial general intelligence by simply stacking more GPUs. Instead, he's pushing for a return to foundational research and new architectural ideas.

Sutskever: The Compute Race Is Losing Steam

From 2020 to 2025, AI companies raced to train ever-larger models, a strategy that delivered stunning gains in language and image generation. But Sutskever says diminishing returns are now showing up clearly.

"At some point, pre-training will run out of data,"   he said. "The data is very clearly finite."

He argues the next advances won't come from scale, but from smarter research — a shift that mirrors SSI's own approach. "We are squarely an age-of-research company," Sutskever said.

Why Models Still Struggle in the Real World

A major focus of the interview was the widening gap between benchmark performance and real-world reliability. Sutskever pointed to coding assistants that fix one bug only to introduce another — behavior no skilled human would exhibit.

He cited two core issues:

• Over-optimized reinforcement learning, which can make models too narrow and "single-minded."

• Pre-training on uncontrolled internet data, which doesn't translate cleanly into robust behavior in real-world settings.

Sutskever compared today's AI systems to a student who memorizes every past competition problem but fails to develop intuition or judgment.

"Pre-training gives quantity,"   he said. "Human talent gives generality."

The Missing Ingredient: Emotion as a Value System

One of Sutskever's more provocative claims is that human-like generalization depends on emotional value functions — internal signals that guide decision-making. He pointed to neurological studies showing that damage to emotional centers can impair basic judgment even when reasoning remains intact.

"Emotion is not noise,"   he said. "It's a value function."

Today's AI systems rely on reward models and optimization objectives, but lack any inherent, stable value system. Sutskever argues that this absence contributes to brittleness and poor adaptability.
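
For readers unfamiliar with the reinforcement-learning term Sutskever is borrowing, a value function is an agent's internal estimate of long-term outcomes, as opposed to the raw, immediate reward signal it is trained on. The toy sketch below is purely illustrative and not from the interview; the chain environment, constants and names (STATES, GAMMA, ALPHA) are hypothetical, chosen only to show the distinction between an external reward and a learned internal value estimate.

```python
# Illustrative only: a reward signal is immediate and external, while a value
# function is the agent's own learned estimate of long-term outcome per state.
import random

STATES = range(5)   # tiny chain environment: state 4 is the goal
GAMMA = 0.9         # discount factor for future reward
ALPHA = 0.1         # learning rate

def reward(state: int) -> float:
    """External reward: only reaching the goal state pays off immediately."""
    return 1.0 if state == 4 else 0.0

# The value function V(s) is internal and learned: it summarizes how good a
# state is in the long run, which is what actually guides decisions.
V = {s: 0.0 for s in STATES}

for _ in range(2000):
    s = random.choice(list(STATES[:-1]))
    # Noisy transition: usually a step toward the goal, sometimes a step back.
    s_next = min(s + 1, 4) if random.random() < 0.8 else max(s - 1, 0)
    # TD(0) update: pull V(s) toward immediate reward plus discounted V(s').
    V[s] += ALPHA * (reward(s_next) + GAMMA * V[s_next] - V[s])

print({s: round(v, 2) for s, v in V.items()})  # estimates rise toward the goal
```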

SSI's Plan: Build AGI Slowly, Safely, and Without Hype

After departing OpenAI in 2024, Sutskever launched Safe Superintelligence Inc., arguing that superintelligent systems should be developed through controlled, iterative deployment — not splashy consumer releases.

SSI's goal is to build AI that can learn continuously, adapt to new environments, and maintain alignment with human values.

The next few years, he said, will determine whether research-driven companies can outpace Big Tech's compute-heavy strategies.

A 5-to-20-Year Timeline for a New Kind of Intelligence

Sutskever believes a new class of AI — one that learns as efficiently as humans and has embedded value systems — could emerge within 5 to 20 years.

Such systems, he said, wouldn't be static "trained models" but continuously learning entities capable of growth over time.

If achieved, the impact could be transformative across industries, reshaping productivity, creativity and autonomous decision-making.

Why Investors Should Pay Attention

Sutskever's pivot away from scaling marks a potential inflection point for the industry:

• The compute arms race may be approaching its limit.

• Research labs, not GPU-rich giants, could drive the next breakthrough.

• New focus areas — value systems, continual learning, cognitive architectures — may define the next wave of AI innovation.

• Capital may shift from pure infrastructure to deep-tech research.
