
Terence Tao: Hardest Problems in Mathematics, Physics & the Future of AI

Rank #1 | Mathematics | Watch on YouTube

Best mix of raw intellectual depth and long-term usefulness across math, physics, and AI.

Curated Summary

A concise editorial summary of the episode’s core ideas.

Thesis

Terence Tao presents mathematics as the study of models at the boundary between tractable and impossible, where progress often comes from identifying the right abstraction, ruling out tempting but doomed approaches, and finding deep connections across fields. He uses Kakeya, Navier-Stokes, prime number theory, Ricci flow, and formal proof systems to show that the hardest problems are less about raw computation than about discovering the right language for randomness, structure, and scale.

Why It Matters

For a technical reader, the episode is a compact map of how frontier math actually works: isolate the true obstruction, simplify aggressively, classify failure modes, and use cross-domain analogies to import methods. Tao also gives a realistic view of AI in mathematics: currently strongest as a proof assistant, search tool, and coding aid, but not yet capable of the strategic "sense of smell" needed to choose fruitful directions on major open problems.

Best For

This episode is best for mathematicians, theoretical computer scientists, physicists, and technically inclined researchers who want a realistic picture of how deep problem solving works. It is especially valuable if you care about PDEs, number theory, proof assistants, or AI-for-math, and want durable heuristics rather than biography or trivia.

Extended Reading

A longer, section-by-section synthesis of the full episode.

Boundary Problems

Tao begins by distinguishing "hard" from "interesting" problems. Mathematics contains arbitrarily difficult or even undecidable questions, but the most fertile problems are near the frontier where existing techniques do most of the work and then fail on a stubborn remaining gap. That framing sets the tone for the whole discussion: progress often comes not from attacking impossibility head-on, but from locating the precise obstruction.

His example is the Kakeya problem, which starts as a geometric puzzle about turning a needle around in the smallest possible area. In two dimensions, Besicovitch showed this can be done in arbitrarily small area. In three dimensions, the relevant question becomes quantitative: for a very thin tube of thickness delta, how small can the volume be while still allowing all directions? The conjectured answer is that the minimum volume decreases only very slowly with delta, roughly logarithmically, and Tao notes that this was eventually proved.
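In symbols, the quantitative claim reads as follows (a schematic transcription of the bound described above; the notation and implied constant are standard conventions, not spelled out in the episode):

```latex
% T_delta: a set containing a unit line segment in every direction,
% with each segment thickened to a tube of width delta, in R^3.
% The conjectured (and, per the episode, eventually proved) bound:
% the volume decays only logarithmically as the thickness shrinks.
\operatorname{vol}(T_\delta) \;\gtrsim\; \frac{1}{\log(1/\delta)}
\qquad \text{as } \delta \to 0 .
```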

What makes Kakeya important is not the puzzle itself but its unexpected reach. Tube-packing geometry turns out to control phenomena in PDE, harmonic analysis, number theory, and wave propagation. The key link is the wave packet picture: localized waves travel through space-time along thin tubes. If tubes in many directions could be packed too efficiently, one could force pathological wave concentration. The problem therefore becomes a proxy for understanding when dispersive systems remain controlled and when they can focus into singular behavior.

Singularities and Navier-Stokes

The Navier-Stokes regularity problem enters through this lens of concentration. The equations model incompressible fluids, and the question is whether smooth initial data can ever evolve into a singularity where velocity becomes unbounded in finite time. Tao emphasizes the mathematical difference between "never seen in practice" and "proved impossible." Mathematicians care about the full 100%, not overwhelming plausibility.

His explanation of the difficulty is energetic and scale-based. Viscosity dissipates energy and tends to calm fluids, but nonlinear transport can move energy across scales. In ordinary turbulence, large eddies transfer energy to several smaller eddies, dispersing it enough for viscosity to eventually dominate. The dangerous scenario is a concentrated cascade: energy gets funneled repeatedly into one smaller blob, then another, accelerating fast enough that dissipation cannot catch up. This creates a self-similar route to finite-time blowup.
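The finite-time nature of such a cascade reduces to a geometric series: if each transfer to the next smaller scale completes faster than the last, infinitely many stages fit inside a finite time horizon. A toy illustration (the numbers are hypothetical, chosen only to exhibit the mechanism):

```python
# Suppose stage k of the cascade (moving energy one scale down)
# takes time T * r**k for some acceleration ratio r < 1.
# The total time for all stages is bounded by the geometric
# series T / (1 - r), so infinitely many stages -- hence blowup --
# can occur in finite time.
T, r = 1.0, 0.5
stage_times = [T * r ** k for k in range(60)]
elapsed = sum(stage_times)  # partial sums approach T / (1 - r) = 2.0
```

The same arithmetic is why a cascade that merely keeps pace (r = 1) never blows up: the stage times sum to infinity, and viscosity wins.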

Tao uses Maxwell's demon as an analogy for the obstruction. Many systems statistically "should" behave benignly, but a rare coordinated conspiracy could defeat that intuition. Likewise with digits of pi: they appear patternless, but current techniques do not prove the strongest randomness statements one expects. For Navier-Stokes, the issue is not that blowup looks likely, but that current tools cannot exclude a very special adversarial configuration.

Averaged Equations and the Logic of Obstructions

Tao's 2016 blowup result does not solve the actual Navier-Stokes problem; instead it studies an averaged modification of the equation. He explains this as an obstruction result: by altering the interactions while preserving enough of the structure, he engineered a system that does blow up. That shows any proof of global regularity for real Navier-Stokes must use features absent from the averaged model.

This reflects a deeper strategy in hard mathematics: learning what cannot work is as valuable as learning what can. Hard problems attract many plausible methods, and years can be wasted on approaches doomed in principle. Counterexamples in nearby models prune the search space. Tao stresses that mathematics advances not just by finding the winning technique, but by ruling out losing ones.

A central concept here is supercriticality. In PDE, different terms compete: for Navier-Stokes, viscosity is linear and regularizing, while transport is nonlinear and destabilizing. In supercritical equations, the bad nonlinear terms grow stronger relative to the good terms at smaller scales. This is why small-scale behavior is decisive and why fine detail matters far more than in systems like celestial mechanics, where large-scale bulk descriptions suffice. Tao presents supercriticality as one of the key dividing lines between equations that are tractable and those prone to wild behavior.
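Supercriticality can be made concrete with the equation's scaling symmetry (standard notation; this computation is a textbook fact, not quoted from the episode). If u solves Navier-Stokes, so does the rescaled field, and the conserved energy shrinks under zoom-in:

```latex
% Scaling symmetry of incompressible Navier-Stokes:
u_\lambda(x, t) = \lambda\, u(\lambda x, \lambda^2 t), \qquad
p_\lambda(x, t) = \lambda^2\, p(\lambda x, \lambda^2 t).
% Zooming in to fine scales (\lambda \to \infty) in R^3:
E(u_\lambda) = \int_{\mathbb{R}^3} |u_\lambda|^2 \, dx
             = \lambda^{-1} E(u).
% The energy bound thus gives ever weaker control at small scales:
% the hallmark of a supercritical equation.
```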

Liquid Computers and Blowup as Computation

One of the most striking parts of the conversation is Tao's "liquid computer" idea for potential Navier-Stokes blowup. In trying to construct blowup for the averaged equation, he found that naive energy transfer to smaller scales fails in three dimensions because energy spreads across many scales at once and becomes vulnerable to viscosity. To force blowup, he needed delayed transfer: energy should move to the next scale, wait until the previous scale is emptied, then continue.

To implement that delay, he built a complicated nonlinear mechanism analogous to an electronic circuit with gates, clocks, and staged activation. He credits discussions with his engineer wife for helping shape that viewpoint. The system behaves like a mathematical Rube Goldberg machine that controls when energy is allowed to pass. From this, he extrapolates a possible roadmap for real Navier-Stokes: if fluid configurations can realize logic gates and computation, then one could imagine a self-replicating fluid machine that creates a smaller copy of itself, transfers energy into it, powers down, and repeats at ever smaller scales until blowup.

The idea sounds fantastical, but Tao's point is methodological rather than literal. Similar phenomena are known in cellular automata like Conway's Game of Life, where simple local rules support gliders, logic gates, Turing-complete computation, and self-replicators. This offers precedent for computation emerging from a simple dynamical law. He is careful to say that real fluids are much messier than digital cellular automata, and that all the needed fluid logic components remain speculative. Still, it is a serious conceptual bridge between singularity formation and computation.
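Conway's Game of Life makes the "computation from simple local rules" precedent concrete. The sketch below (the standard rules; it illustrates the analogy rather than anything fluid-specific) evolves a glider and confirms that after four generations the pattern reappears shifted one cell diagonally:

```python
from collections import Counter

def step(live):
    """One Game of Life generation on a set of live (row, col) cells."""
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # A cell is alive next generation with exactly 3 live neighbors,
    # or with 2 live neighbors if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After 4 generations the glider has translated one cell down-right.
```

The glider is the simplest of the self-propagating structures mentioned above; gliders colliding in the right ways are what implement logic gates and, ultimately, Turing-complete computation in the Life universe.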

A memorable line is "the equation has a certain scaling symmetry". That scaling symmetry is what makes iterative self-reproduction relevant: once one stage is built, smaller and faster copies can in principle continue indefinitely.

Structure, Randomness, and What Can Be Proved

Another major theme is Tao's recurring dichotomy between structure and randomness. Many mathematical theorems work by showing that an object is either genuinely random-like or close to some structured model; both cases are then analyzable. He describes inverse theorems as tools for certifying structure: if a function behaves "almost" additively, there is often a nearby exactly structured object explaining that behavior.

Szemerédi's theorem serves as the canonical example. Every set of integers with positive density contains arbitrarily long arithmetic progressions, and this is true both for highly structured sets like the odd numbers and for random dense subsets. The theorem succeeds because progressions are robust under either explanation. That robustness underlies Tao's later work on primes.
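The "either structured or random" point can be checked by brute force on small examples. The sketch below (an illustrative search, not a method from the episode) finds arithmetic progressions in both a structured dense set and a random one:

```python
import random

def has_progression(s, k):
    """Brute-force check: does set s contain a k-term arithmetic progression?"""
    if not s:
        return False
    elems = sorted(s)
    top = max(elems)
    for a in elems:
        # Common differences small enough for k terms to fit below the max.
        for d in range(1, (top - a) // (k - 1) + 1):
            if all(a + i * d in s for i in range(k)):
                return True
    return False

odds = set(range(1, 200, 2))                      # highly structured dense set
random.seed(0)
dense_random = {n for n in range(200) if random.random() < 0.5}  # random dense set
```

Both kinds of set contain progressions, for different reasons: the odds by explicit structure, the random set because roughly 100 points in a range of 200 far exceed the size of the largest progression-free subsets.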

This framework also explains why some conjectures are much harder than others. Twin primes are fragile: one can remove a very sparse, carefully chosen set of primes and destroy all twin pairs while preserving most aggregate statistics. Arithmetic progressions are robust: even after deleting 99% of the primes, long progressions can still remain. The proof technology is therefore much stronger for the latter. Tao's synthesis is that randomness alone is rarely enough; what matters is whether the target pattern survives both structured and random regimes.
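The fragility of twin primes is easy to demonstrate numerically. The sketch below (an illustration of the deletion argument, not code from the episode) removes one prime from each twin pair below 10,000: all twin pairs vanish, yet well over 80% of the primes survive, and asymptotically the deleted set is even sparser:

```python
def primes_upto(n):
    """Sieve of Eratosthenes: all primes up to and including n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [i for i, ok in enumerate(sieve) if ok]

ps = primes_upto(10_000)
pset = set(ps)
twins = [p for p in ps if p + 2 in pset]   # lower member of each twin pair
survivors = pset - set(twins)              # delete one prime per twin pair
# No twin pair survives, but aggregate statistics barely change.
```

Long arithmetic progressions cannot be killed this cheaply: destroying them requires deleting a positive fraction of the primes, which is exactly the robustness the proof technology exploits.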

Infinity, Models, and Compressing Reality

The discussion then broadens into philosophy of mathematics and physics. Tao describes science as an interaction among reality, observations, and models. Mathematics operates inside models: assuming axioms, what follows? Physics proposes and tests models against data. Neither can proceed alone; each corrects and sharpens the other.

On infinity, Tao offers a pragmatic view: it is an idealization of quantities too large or too small to bound explicitly. Infinite reasoning often simplifies the mathematics, but it can also mislead unless handled carefully. Rearranging infinite sums, taking limits, and interchanging operations all require disciplined analysis. He notes a historical trend toward "finitizing" infinite arguments: once an infinite theorem is proved, later work often extracts explicit quantitative bounds.
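Riemann's rearrangement theorem is the classic cautionary example behind the warning about rearranging infinite sums. The sketch below (a standard illustration, chosen by the editor rather than taken from the episode) sums the alternating harmonic series in two orders and gets two different limits:

```python
import math

def alt_harmonic(n_terms):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ...; converges to ln 2."""
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_blocks):
    """The same terms rearranged: two positive terms, then one negative.
    By Riemann's theorem this converges to a different value, (3/2) ln 2."""
    total, pos, neg = 0.0, 1, 2
    for _ in range(n_blocks):
        total += 1 / pos + 1 / (pos + 2) - 1 / neg
        pos += 4
        neg += 2
    return total
```

The series is only conditionally convergent, so its value depends on the order of summation; this is precisely the kind of pitfall that disciplined analysis of limits is designed to catch.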

He also frames physical theories as forms of compression. A good theory replaces petabytes of observations with a short mathematical description plus a few parameters. In this sense, the success of mathematical physics is partly the success of data compression. That naturally leads to universality: many complex microscopic systems produce simple macroscopic laws that depend on only a small number of parameters. The central limit theorem is the clean model of this phenomenon; Gaussian behavior emerges from many weakly dependent inputs. But Tao also stresses where universality fails, as in systemic correlation during the 2008 financial crisis.
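The central limit theorem's universality is easy to see empirically. The sketch below (a standard demonstration, with parameters chosen by the editor) standardizes sums of fair coin flips, which are nothing like Gaussian individually, and recovers a distribution with mean 0 and standard deviation 1:

```python
import math
import random
import statistics

random.seed(1)
n, trials = 1200, 2000
# Each sample is the standardized sum of n fair coin flips (+1 or -1).
# The CLT says these samples are approximately N(0, 1), regardless of
# the microscopic details of the individual flips.
samples = []
for _ in range(trials):
    s = sum(random.choice((-1, 1)) for _ in range(n))
    samples.append(s / math.sqrt(n))

mean = statistics.fmean(samples)
std = statistics.stdev(samples)
```

Only two macroscopic parameters (mean and variance) survive; everything else about the coin-flip model is compressed away. The 2008 failure mode corresponds to the inputs not being weakly dependent, which voids the theorem's hypotheses.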

A strong line here is "the universe is compressible at all". The fact that broad regularities exist, and that simple mathematical structures capture them, is itself one of the deepest mysteries.

Fox, Hedgehog, and Mathematical Taste

Tao reflects at length on style. He contrasts hedgehogs, who know one area extremely deeply, with foxes, who range across many areas and transport ideas between them. He identifies mostly as a fox: he likes analogies, narratives, and "arbitrage" between fields. For him, one of the main engines of progress is finding that two previously separate subjects share a common form.

This self-description also explains his approach to problem solving. He recommends "cheating strategically": simplify aggressively, turn off most difficulties, solve toy versions, then add complications back one at a time. Instead of confronting a hard problem in its full strength, isolate one obstruction and learn from it. He compares this to action-movie choreography, where the hero survives by effectively fighting one opponent at a time.

His account of proof aesthetics is equally revealing. John Conway's notion of "extreme proofs" made a deep impression on him: among all proofs of a theorem, one can ask for the shortest, most elementary, most elegant, or most conceptual. Tao treats proof writing as craftsmanship, much like good coding. Correctness is necessary, but influential mathematics should also be readable, motivating, and reusable.

Formal Proof, Lean, and New Workflows

Tao is unusually enthusiastic about formal proof systems, especially Lean. He explains Lean as both a programming language and a proof language that produces certificates of correctness. Compared with pen-and-paper mathematics, writing in Lean is like explaining a proof to an extremely pedantic colleague. Every type and inference must be justified, though tooling increasingly automates the routine parts.
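A tiny example conveys the flavor of that pedantry. The snippet below is a sketch written for this summary, not a proof from the episode; it assumes Lean 4 with Mathlib, whose library lemmas `sq_nonneg` and `add_nonneg` supply the checked inference steps:

```lean
import Mathlib

-- Even a one-line fact must be assembled from certified steps:
-- each square is nonnegative, and nonnegatives sum to a nonnegative.
theorem sum_sq_nonneg (a b : ℤ) : 0 ≤ a ^ 2 + b ^ 2 := by
  have ha : 0 ≤ a ^ 2 := sq_nonneg a
  have hb : 0 ≤ b ^ 2 := sq_nonneg b
  exact add_nonneg ha hb
```

Nothing here is taken on trust: if any step were wrong, the file simply would not compile, which is what makes the resulting certificate of correctness valuable.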

He estimates that formalization currently costs roughly ten times the labor of ordinary exposition, but he thinks that ratio is falling rapidly due to better infrastructure and AI-assisted autocomplete. He sees a likely phase transition once formalization becomes cheaper than traditional writing for some tasks. At that point, mathematical publishing and refereeing may change substantially: correctness checking could be delegated to the proof assistant, leaving humans to evaluate significance and context.

Tao also highlights a subtler advantage: modular collaboration. In Lean, every object carries its provenance, so proofs can be read nonlinearly and split into atomic tasks. That enables trustless large-scale collaboration in a way ordinary mathematical prose does not. He contrasts this with earlier crowdsourced "Polymath" efforts, where human moderation was a bottleneck.

His equational theories project is the strongest example. The team generated roughly 22 million algebra implication problems and sought for each either a proof or a counterexample. With about 50 contributors, nearly all cases have now been settled. The project illustrates how formal methods can turn parts of mathematics into distributed, verifiable infrastructure rather than isolated artisanal arguments.
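The shape of each atomic task is simple to illustrate: given two equational laws, either derive one from the other or exhibit a finite counterexample. The sketch below (an editor's toy in Python; the actual project works with Lean and a different catalogue of laws) refutes "commutativity implies associativity" by brute-force search over two-element magmas:

```python
from itertools import product

def is_comm(op, n):
    """Does the operation table satisfy x*y = y*x?"""
    return all(op[a][b] == op[b][a] for a in range(n) for b in range(n))

def is_assoc(op, n):
    """Does the operation table satisfy (x*y)*z = x*(y*z)?"""
    return all(op[op[a][b]][c] == op[a][op[b][c]]
               for a in range(n) for b in range(n) for c in range(n))

# Enumerate every binary operation on {0, 1} and look for a magma
# that is commutative but not associative: a counterexample showing
# the implication fails in general.
n = 2
counterexample = None
for values in product(range(n), repeat=n * n):
    op = [list(values[i * n:(i + 1) * n]) for i in range(n)]
    if is_comm(op, n) and not is_assoc(op, n):
        counterexample = op
        break
```

Each of the project's millions of implication questions resolves the same way, into either a short machine-checked proof or a concrete finite table like this one, which is what makes the work so divisible.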

AI as Assistant, Collaborator, and Mathematical Smell

Tao's view of AI is optimistic but technically grounded. He thinks current systems are already useful for coding, literature search, and lightweight formalization support, but their main weakness is unreliability in subtle ways. AI-generated proofs often look polished while containing errors no human would make. In programming terms, they are "odorless": they lack the obvious bad style that would warn a reader where to inspect.

He expects AI to help first with routine but labor-intensive mathematical tasks: trying standard methods, checking many adjacent cases, searching libraries like Mathlib, or drafting parts of an argument. The key missing ability is what he calls mathematical "smell": the capacity to sense when a strategy is promising, overcomplicated, or fatally wrong before pushing too far. Humans often know that a decomposition has made a problem harder rather than easier even if they cannot yet prove it. Tao sees that judgment as central to serious research.

Still, he predicts substantial progress. He has predicted AI-assisted mathematical papers by 2026 and says that this has effectively begun in limited form. More ambitiously, he thinks AI within this decade could plausibly generate a genuinely interesting conjecture linking areas that humans had treated as unrelated. That would be a more profound intellectual contribution than merely extending existing proof search.

Primes, Collatz, and the Remaining Frontier

On famous open problems, Tao is cautious. He sees twin primes as a problem where more partial progress is likely, but a genuine breakthrough may require overcoming the "parity barrier," a fundamental obstruction in sieve methods. His explanation is elegant: one conspiracy is hard to rule out, but multiple conspiracies can sometimes be made incompatible. That is why bounded gaps between primes became tractable before twin primes themselves.

He treats the Riemann hypothesis as more remote. It expresses an extremely strong form of randomness in the multiplicative structure of primes, akin to square-root cancellation in probability, but current techniques are too blunt. Any proof or disproof would likely need ideas from "left field," not a direct extension of known machinery.

The Collatz conjecture occupies a different niche: simple to state, computationally irresistible, and resistant to all standard methods. Tao's own result shows that almost all starting values eventually become much smaller, using probabilistic ideas that capture the average downward drift. But that still leaves open the possibility of a rare exceptional orbit, and such exceptional behavior may require qualitatively new ideas, perhaps closer to computation or cellular automata than to standard number theory.
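The computational irresistibility is easy to feel firsthand. The sketch below (the standard iteration; the drift heuristic in the comment paraphrases the probabilistic intuition described above) follows orbits down to 1:

```python
def collatz_steps(n):
    """Iterate n -> 3n+1 (n odd) or n -> n/2 (n even) until reaching 1;
    return the number of steps taken."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Heuristic behind the average downward drift: each odd step multiplies
# by about 3 but is followed, on average, by two halvings, so the
# typical factor per odd step is about 3/4 < 1. Tao's result makes a
# version of this rigorous for almost all starting values; the open
# problem is the possible rare orbit that evades the average behavior.
```

Even small seeds can wander far before collapsing: 27 climbs past 9000 before descending, which is exactly the kind of erratic excursion that defeats naive monotonicity arguments.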

Across these examples, Tao returns to the same message: the hardest problems are not merely those we have not solved, but those where we do not yet know what kind of explanation would count as a solution.

Mathematics as Human Infrastructure

The conversation closes on culture rather than technique. Tao discusses awards, Perelman, Andrew Wiles, education, and the social structure of mathematics. He is skeptical of hero narratives even while acknowledging that famous individuals inspire newcomers. Major theorems usually rest on decades or centuries of collective work, and modern mathematics increasingly exceeds any single mind.

He also emphasizes that people think mathematically in different ways. Some are visual, some symbolic, some puzzle-driven. Education often fails because it presents a narrow style as universal. New tools, from online communities to Lean projects, may lower the barrier by creating more entry points into real mathematics, including for amateurs and students.

The deepest optimism in the episode is not about any particular theorem. It is about mathematics as a cumulative, collaborative system that keeps expanding its own methods. Problems once seen as forbidding can become routine when the right language, infrastructure, or abstraction finally arrives. Tao clearly sees formal proof and AI not as replacements for mathematicians, but as possible next layers in that long civilizational build.