Terence Tao – Kepler, Newton, and the true nature of mathematical discovery
Can artificial intelligence replicate the genius of Kepler's discovery of planetary motion—or only his twenty years of trial and error? Terence Tao, one of the world's leading mathematicians, argues that AI has driven the cost of idea generation to nearly zero, much like the internet did for communication. Yet where Kepler needed Tycho Brahe's dataset to verify his theories, modern science faces a new bottleneck: we can now generate thousands of hypotheses per day, but lack the infrastructure to validate them at scale. The question is no longer whether AI can do mathematics, but whether we can redesign science itself to handle the abundance.
Key Takeaways
AI excels at breadth, trying thousands of approaches simultaneously, but struggles with depth—building cumulative progress from partial insights the way human mathematicians do.
The bottleneck in science has shifted from idea generation to verification and evaluation; peer review systems are already overwhelmed by AI-generated submissions.
Within a decade, AI will automate much of what math students and researchers do today, but mathematics will evolve to focus on fundamentally different problems—just as it did when computers replaced human calculators.
Hybrid human-AI collaboration will dominate mathematics for the foreseeable future, with humans providing depth and strategic direction while AI provides computational breadth and pattern recognition.
Over-optimization risks destroying serendipity—the unplanned interactions and accidental discoveries that have historically driven scientific progress.
In Brief
AI has made hypothesis generation cheap and abundant, but the real challenge—and opportunity—is building new scientific infrastructure to validate, refine, and communicate ideas at a scale humans never imagined.
Kepler as a High-Temperature LLM
Kepler's discovery process mirrors modern AI: trying random relationships until one fits the data.
Kepler spent years proposing theories about planetary motion—from Platonic solids nested between planetary orbits to musical harmonies explaining cosmic order. Most were wrong. His famous third law, relating orbital period to distance from the Sun, appeared almost as an aside in a book about astrology and planetary harmonics. Yet this empirical regularity, verified against Tycho Brahe's unprecedented dataset, became the foundation for Newton's inverse-square law of gravity a century later.
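For reference, and as standard background rather than anything quoted in the talk, the third law and the classical circular-orbit step from it to an inverse-square force read:

```latex
% Kepler's third law, and the textbook step to an inverse-square force
% (circular-orbit approximation; standard background, not from the talk).
T^2 \propto a^3
\qquad\Longrightarrow\qquad
F \;=\; \frac{m v^2}{r} \;=\; \frac{4\pi^2 m r}{T^2} \;\propto\; \frac{m}{r^2},
\qquad \text{using } v = \frac{2\pi r}{T},\; T^2 \propto r^3 .
```

Here T is the orbital period, a (or r) the orbital radius, m the planet's mass, and F the centripetal force the Sun must supply.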
The analogy to modern AI is striking. Like a large language model sampling at high temperature, Kepler generated countless hypotheses—some geometrical, some mystical—and tested them against data. Success required two ingredients: a massive, high-quality dataset (Brahe's observations, ten times more precise than any predecessor) and a verification mechanism. The process was inefficient but transformative: one correct empirical pattern, extracted from decades of failed attempts, catalyzed genuine scientific progress.
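As a toy illustration of this "sample hypotheses, verify against data" loop, here is a minimal Python sketch (not from the talk; the planet values are rounded standard figures): it proposes random exponents k for a power law T ≈ a^k and keeps whichever one the data verifies best, settling near k ≈ 1.5, i.e. T² ∝ a³.

```python
import math
import random

# Semi-major axes (AU) and orbital periods (years) for the six planets
# Kepler knew; rounded standard values, included only for illustration.
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.537, 29.457),
}

def fit_error(k: float) -> float:
    """Mean squared error, in log space, of the hypothesis T = a**k."""
    return sum((math.log(T) - k * math.log(a)) ** 2
               for a, T in planets.values()) / len(planets)

# "High-temperature" proposal step: draw many random exponents, then let the
# dataset act as the verifier by keeping whichever proposal fits best.
random.seed(0)
proposals = (random.uniform(0.5, 3.0) for _ in range(10_000))
best_k = min(proposals, key=fit_error)
print(f"best exponent: {best_k:.3f}")  # approximately 1.5, i.e. T^2 ~ a^3
```

The point is not the fit itself but the division of labor: cheap, high-variance proposal generation on one side, and a precise dataset with an error metric doing the verification on the other.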
This raises a provocative question about AI's future role in science. If we can deploy millions of AI systems to hunt for empirical regularities across every domain—each one trying random relationships for the equivalent of twenty subjective years—we might discover thousands of “Kepler's third laws” waiting to be explained. The bottleneck would shift from discovery to interpretation: identifying which patterns matter and constructing the theories that unify them.
The Shifting Bottlenecks of Science
Why AI Struggles with Partial Progress
Current AI cannot build cumulative understanding from intermediate insights the way humans do.
Tao describes AI as “jumping robots” that can leap higher than humans but cannot grab a handhold, stay there, and pull others up to continue climbing. They excel at one-shot attempts but fail at the iterative, collaborative refinement that characterizes human mathematical practice. Each new session starts from scratch, with no retained skill or cumulative progress from prior failures.
The Erdős Problems: A Case Study
AI has solved 50 previously unsolved problems, but the success rate remains only 1-2%.
What AI Changes—and What It Doesn't
AI transforms auxiliary tasks but hasn't yet accelerated the core creative work.
“My papers now have a lot more code, a lot more pictures, because it's so easy to generate these things now. Some plot which would have taken me hours to do, now I can do in minutes. But in the past, I just wouldn't have put the plot in my paper in the first place. I would just talk about it in words. So it's hard to measure what 2x means.”
Intelligence vs. Cleverness
AI demonstrates cleverness but lacks the adaptive, cumulative reasoning that defines intelligence.
The Hidden Cost of Optimization
Over-optimizing destroys serendipity, the unplanned interactions that historically drove breakthroughs.
Modern society has become remarkably efficient at scheduling and optimization, but Tao warns we may be optimizing ourselves into intellectual stagnation. During COVID-19, academia maintained its meeting schedules through remote work, yet lost the casual hallway conversations and accidental discoveries that emerged from physical proximity. When Tao worked at the library searching for journal articles, he would stumble upon adjacent papers that sparked new ideas—serendipity now eliminated by targeted digital searches.
The danger extends to AI-driven research. While AI can instantly retrieve exactly the paper you need, it cannot replicate the experience of browsing adjacent ideas or encountering unexpected connections. Tao spent a year at the Institute for Advanced Study with no distractions, producing papers efficiently for weeks—then running out of inspiration. “You actually do need a certain level of distraction in your life,” he reflects. “It adds enough randomness and high temperature.”
This tension between efficiency and creativity may be fundamental. If we build AI systems that perfectly optimize research trajectories, eliminating «wasted» time on tangential explorations, we risk losing the accidents that produce revolutionary insights. The question is not just whether AI can replicate human mathematical ability, but whether it can replicate the productive randomness that makes discovery possible.
The Future: Depth, Breadth, and New Scientific Paradigms
Science must be redesigned to leverage AI's breadth while retaining human depth.
Near Term (Current): AI handles auxiliary tasks—code generation, literature searches, numerical computation, formatting—making researchers more productive at secondary tasks while core creative work remains human-driven.
Medium Term (This Decade): Hybrid human-AI collaboration dominates. AI maps out broad problem spaces at moderate competence; humans identify islands of difficulty and provide strategic depth. Mathematics shifts focus to fundamentally different questions.
Verification Infrastructure: New systems emerge to evaluate and validate the flood of AI-generated hypotheses. Automated conjecture-making, ablation analysis of Lean proofs, and large-scale benchmarking replace traditional peer review; a minimal sketch of such a machine-checkable artifact appears after this list.
Long Term (Beyond 2030s): Most mathematical tasks students perform today are automated, but mathematics evolves rather than disappears—just as it did when computers replaced human calculators and numerical analysts in the 20th century.
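To make the idea of a machine-checkable proof concrete, here is a minimal Lean 4 sketch using only the core library (the theorem and its name are invented for illustration and are not from the talk):

```lean
-- A tiny machine-checkable statement and proof (Lean 4, core library only).
-- Verification infrastructure could check, ablate, and benchmark artifacts
-- like this at scale without a human referee re-reading the argument.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```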
Disclaimer: This is an AI-generated summary of a YouTube video, prepared for educational and reference purposes. It does not constitute investment, financial, or legal advice. Always verify information against primary sources before making decisions. TubeReads is not affiliated with the content author.