TubeReads

Terence Tao – Kepler, Newton, and the true nature of mathematical discovery

Can artificial intelligence replicate the genius of Kepler's discovery of planetary motion—or only his twenty years of trial and error? Terence Tao, one of the world's leading mathematicians, argues that AI has driven the cost of idea generation to nearly zero, much like the internet did for communication. Yet where Kepler needed Tycho Brahe's dataset to verify his theories, modern science faces a new bottleneck: we can now generate thousands of hypotheses per day, but lack the infrastructure to validate them at scale. The question is no longer whether AI can do mathematics, but whether we can redesign science itself to handle the abundance.

Video length: 1:23:44 · Published Mar 20, 2026 · Video language: English
6–7 min read · 12,672 spoken words, summarized in 1,254 words (10x)

1

Key Takeaways

1. AI excels at breadth, trying thousands of approaches simultaneously, but struggles with depth—building cumulative progress from partial insights the way human mathematicians do.

2. The bottleneck in science has shifted from idea generation to verification and evaluation; peer review systems are already overwhelmed by AI-generated submissions.

3. Within a decade, AI will automate much of what math students and researchers do today, but mathematics will evolve to focus on fundamentally different problems—just as it did when computers replaced human calculators.

4. Hybrid human-AI collaboration will dominate mathematics for the foreseeable future, with humans providing depth and strategic direction while AI provides computational breadth and pattern recognition.

5. Over-optimization risks destroying serendipity—the unplanned interactions and accidental discoveries that have historically driven scientific progress.

In Brief

AI has made hypothesis generation cheap and abundant, but the real challenge—and opportunity—is building new scientific infrastructure to validate, refine, and communicate ideas at a scale humans never imagined.


2

Kepler as a High-Temperature LLM

Kepler's discovery process mirrors modern AI: trying random relationships until one fits the data.

Kepler spent years proposing theories about planetary motion—from Platonic solids nested between planetary orbits to musical harmonies explaining cosmic order. Most were wrong. His famous third law, relating orbital period to distance from the Sun, appeared almost as an aside in a book about astrology and planetary harmonics. Yet this empirical regularity, verified against Tycho Brahe's unprecedented dataset, became the foundation for Newton's inverse-square law of gravity a century later.
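The regularity itself is easy to state and check: with the orbital period T measured in years and the semi-major axis a in astronomical units, T² is very nearly equal to a³ for every planet. A minimal numerical check (the planetary values below are standard published figures, not data from the talk):

```python
# Kepler's third law: T^2 ≈ a^3 when T is in years and a in astronomical units.
# Planet data: (semi-major axis in AU, orbital period in years) -- standard values.
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.537, 29.457),
}

for name, (a, T) in planets.items():
    ratio = T**2 / a**3  # close to 1 for every planet
    print(f"{name:8s} T^2/a^3 = {ratio:.3f}")
```

Six rows, one empirical pattern: exactly the kind of low-dimensional regularity Kepler extracted from Brahe's data, and the kind an AI system hunting across many datasets might surface in bulk.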

The analogy to modern AI is striking. Like a large language model sampling at high temperature, Kepler generated countless hypotheses—some geometrical, some mystical—and tested them against data. Success required two ingredients: a massive, high-quality dataset (Brahe's observations, ten times more precise than any predecessor) and a verification mechanism. The process was inefficient but transformative: one correct empirical pattern, extracted from decades of failed attempts, catalyzed genuine scientific progress.
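The "high temperature" metaphor has a precise meaning in language models: dividing the logits by a temperature before the softmax flattens the output distribution, so rare (and usually wrong) continuations get sampled more often. A minimal sketch of temperature-scaled sampling, with made-up logit values for illustration:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample an index from softmax(logits / temperature)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return i
    return len(probs) - 1

# Made-up logits: one strong candidate, several weak ones.
logits = [4.0, 1.0, 0.5, 0.0]
rng = random.Random(0)
print("T=0.2:", [sample_with_temperature(logits, 0.2, rng) for _ in range(10)])  # concentrates on index 0
print("T=5.0:", [sample_with_temperature(logits, 5.0, rng) for _ in range(10)])  # spreads across alternatives
```

As the temperature approaches zero, sampling collapses to the single safest answer; at high temperature the sampler behaves like Kepler, frequently emitting improbable hypotheses—most wrong, occasionally transformative.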

This raises a provocative question about AI's future role in science. If we can deploy millions of AI systems to hunt for empirical regularities across every domain—each one trying random relationships for the equivalent of twenty subjective years—we might discover thousands of "Kepler's third laws" waiting to be explained. The bottleneck would shift from discovery to interpretation: identifying which patterns matter and constructing the theories that unify them.


3

The Shifting Bottlenecks of Science

🔬 Classical Science: Scientists generated hypotheses carefully, then collected data to test them. The prestige and difficulty lay in the "eureka" moment of idea generation.

📊 Data-Driven Science: Modern science starts with massive datasets and extracts patterns through statistical analysis. Kepler had six data points; we now routinely work with millions.

🤖 AI-Augmented Science: AI has driven the cost of hypothesis generation to near-zero, but human reviewers are overwhelmed. The challenge is building infrastructure to validate ideas at scale.

🔍 The Verification Crisis: Journals report being flooded with AI-generated submissions. We can generate thousands of theories per day but lack systems to separate signal from noise efficiently.

4

Why AI Struggles with Partial Progress

Current AI cannot build cumulative understanding from intermediate insights the way humans do.

Tao describes AI as "jumping robots" that can leap higher than humans but cannot grab a handhold, stay there, and pull others up to continue climbing. They excel at one-shot attempts but fail at the iterative, collaborative refinement that characterizes human mathematical practice. Each new session starts from scratch, with no retained skill or cumulative progress from prior failures.


5

The Erdős Problems: A Case Study

AI has solved roughly 50 previously unsolved problems, but the success rate remains only 1–2%.

Problems Solved by AI: ~50 out of 1,100+. Erdős problems solved with AI assistance over recent months, after decades without solutions.

Success Rate on Individual Problems: 1–2%. The rate when AI tools are systematically applied to any given problem, not just the publicized successes.

Literature Coverage: near-zero. Most AI-solved problems had essentially no prior published attempts—low-hanging fruit with minimal human attention.

Current Status: plateau after initial surge. Pure AI solutions have paused; remaining progress requires hybrid human-AI collaboration.

6

What AI Changes—and What It Doesn't

AI transforms auxiliary tasks but hasn't yet accelerated the core creative work.

My papers now have a lot more code, a lot more pictures, because it's so easy to generate these things now. Some plot which would have taken me hours to do, now I can do in minutes. But in the past, I just wouldn't have put the plot in my paper in the first place. I would just talk about it in words. So it's hard to measure what 2x means.

Terence Tao


7

Intelligence vs. Cleverness

AI demonstrates cleverness but lacks the adaptive, cumulative reasoning that defines intelligence.

CLEVERNESS
Trial and Error at Scale
AI can jump repeatedly, trying thousands of approaches and succeeding when one works. It excels at breadth—exploring vast solution spaces simultaneously. But it cannot stay on a handhold, adapt its strategy based on what it learned, or build incrementally from partial progress. Each attempt is independent.
INTELLIGENCE
Cumulative, Adaptive Reasoning
Human mathematicians engage in collaborative refinement: proposing an idea, testing it, modifying when it fails, mapping out what doesn't work, and gradually converging on a solution. This requires retaining insights across attempts, adapting strategies, and building shared understanding that persists beyond individual sessions.

8

The Hidden Cost of Optimization

Over-optimizing destroys serendipity, the unplanned interactions that historically drove breakthroughs.

Modern society has become remarkably efficient at scheduling and optimization, but Tao warns we may be optimizing ourselves into intellectual stagnation. During COVID-19, academia maintained its meeting schedules through remote work, yet lost the casual hallway conversations and accidental discoveries that emerged from physical proximity. When Tao worked at the library searching for journal articles, he would stumble upon adjacent papers that sparked new ideas—serendipity now eliminated by targeted digital searches.

The danger extends to AI-driven research. While AI can instantly retrieve exactly the paper you need, it cannot replicate the experience of browsing adjacent ideas or encountering unexpected connections. Tao spent a year at the Institute for Advanced Study with no distractions, producing papers efficiently for weeks—then running out of inspiration. "You actually do need a certain level of distraction in your life," he reflects. "It adds enough randomness and high temperature."

This tension between efficiency and creativity may be fundamental. If we build AI systems that perfectly optimize research trajectories, eliminating «wasted» time on tangential explorations, we risk losing the accidents that produce revolutionary insights. The question is not just whether AI can replicate human mathematical ability, but whether it can replicate the productive randomness that makes discovery possible.


9

The Future: Depth, Breadth, and New Scientific Paradigms

Science must be redesigned to leverage AI's breadth while retaining human depth.

1. Near Term (Current): AI handles auxiliary tasks—code generation, literature searches, numerical computation, formatting—making researchers more productive at secondary tasks while core creative work remains human-driven.

2. Medium Term (This Decade): Hybrid human-AI collaboration dominates. AI maps out broad problem spaces at moderate competence; humans identify islands of difficulty and provide strategic depth. Mathematics shifts focus to fundamentally different questions.

3. Verification Infrastructure: New systems emerge to evaluate and validate the flood of AI-generated hypotheses. Automated conjecture-making, ablation analysis of Lean proofs, and large-scale benchmarking replace traditional peer review.

4. Long Term (Beyond 2030s): Most mathematical tasks students perform today are automated, but mathematics evolves rather than disappears—just as it did when computers replaced human calculators and numerical analysts in the 20th century.


10

People

Terence Tao
Mathematician
guest
Johannes Kepler
Astronomer (historical)
mentioned
Tycho Brahe
Astronomer (historical)
mentioned
Isaac Newton
Physicist and Mathematician (historical)
mentioned
Nicolaus Copernicus
Astronomer (historical)
mentioned
Charles Darwin
Naturalist (historical)
mentioned
Carl Friedrich Gauss
Mathematician (historical)
mentioned

Glossary

Lean: A formal proof assistant that verifies mathematical proofs by checking them against axioms of logic, making proofs machine-readable and verifiable.

Erdős problems: A collection of over 1,100 unsolved mathematical problems posed by Paul Erdős, ranging from accessible puzzles to extremely difficult conjectures.

Reinforcement learning: A machine learning technique where an AI learns by trial and error, receiving rewards or penalties based on outcomes, optimizing for measurable success.

ZFC: Zermelo-Fraenkel set theory with the Axiom of Choice—the standard foundation of modern mathematics, listing the basic axioms from which all mathematical proofs can be derived.

Notice: This is an AI-generated summary of a YouTube video, provided for educational and reference purposes. It does not constitute investment, financial, or legal advice. Always verify information against the original sources before making decisions. TubeReads is not affiliated with the content creator.