What happens when we hand the keys of mathematical truth to a computer program? Does math become stronger—or does it lose something deeply human in the process?
Welcome to FreeAstroScience, where we break down complex scientific ideas into language that feels like a conversation between friends. Whether you're a curious student, a lifelong learner, or someone who simply loves asking why, you belong here. Today, we're exploring one of the most fascinating debates unfolding right now in the world of mathematics: the push to verify all of math using a computer language called Lean—and the voices raising thoughtful concerns about what we might sacrifice along the way.
Grab your coffee. Settle in. This one's worth reading to the very end.
📑 Table of Contents
- From Euclid to Errors: Why Math Needed Better Foundations
- The Calculus Crisis: When Intuition Hit a Wall
- Bourbaki: The Secret Society That Reshaped Mathematics
- What Is Lean, and Why Should You Care?
- The Beauty and the Cost of Machine-Verified Proofs
- Could One Tool Make Math Too Uniform?
- Rigor vs. Creativity: Where Do We Draw the Line?
From Euclid to Errors: Why Math Needed Better Foundations
Let's go back—way back. In ancient Greece, Euclid showed us something beautiful: start with a small list of basic principles (called axioms), apply logical reasoning, and you can reveal entirely new mathematical truths.
That idea still sits at the core of mathematics today.
But here's the catch. Those early proofs sometimes hid unstated assumptions. They leaned on misleading intuitions. Small gaps went unnoticed. You couldn't say with absolute confidence that every conclusion was airtight.
For a long time, that was fine. Math kept growing. New ideas kept flowing. Nobody tripped over those hidden cracks—until they did.
When the Cracks Started Showing
Over the past few centuries, mathematicians worked hard to close those gaps. By the early 1900s, they'd settled on the axioms they wanted to use. They introduced new logical systems and standards to formalize their arguments—to guarantee that every single step in a proof, no matter how long or tangled, led to an undeniable conclusion.
This wasn't just about tidying up. The push to formalize actually helped mathematicians discover surprising connections between different branches of math. It opened doors nobody expected. As David Bressoud of Macalester College put it: "You often don't know what you don't know. And to be humble about that."
The Calculus Crisis: When Intuition Hit a Wall
Here's a story that still surprises people.
In the late 1600s, Isaac Newton and Gottfried Leibniz independently developed calculus—one of the most powerful tools in all of science. Calculus lets us understand how things change: the curve of a planet's orbit, the flow of a river, the spread of heat through metal.
But Newton and Leibniz defined key concepts—like infinity and infinitely small quantities (infinitesimals)—in vague, geometric terms. Used carelessly, their formulas could produce nonsense. Even division by zero.
For about 150 years, nobody worried much. Scientists learned by instinct when calculus worked and when it didn't. Then, in the early 1800s, mathematicians started bumping into strange objects—infinite sums, jagged curves—that defied their gut feelings about what was possible.
The Fix That Changed Everything
Brilliant minds like Niels Abel, Augustin-Louis Cauchy, and Karl Weierstrass realized they needed to go back to basics. What is a function? What does "continuous" really mean? What actually happens when you add up infinitely many things?
Their answers gave us analysis—now one of the most important fields in all of mathematics. Analysis helps us understand fluid flow, electrical currents, and even the chemical makeup of distant stars.
And the formalization of calculus also gave birth to set theory, which established the rules underpinning all of modern math.
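Weierstrass's answer to the continuity question is a good example of the new precision. In the definition still taught in every analysis course today, a function f is continuous at a point a exactly when:

```latex
\forall \varepsilon > 0,\ \exists \delta > 0 \text{ such that }
|x - a| < \delta \implies |f(x) - f(a)| < \varepsilon
```

No appeal to pictures or motion—just quantifiers over ordinary numbers, with every step open to checking.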
But not everyone cheered. The physicist Oliver Heaviside grumbled in 1899: "It is not easy to get up any enthusiasm after it has been artificially cooled by the wet blankets of the rigorists."
The great David Hilbert captured this perfectly in 1905: "The edifice of science is not raised like a dwelling, in which the foundations are first firmly laid." Scientists should first find "comfortable spaces to wander around and only subsequently… support and fortify them." He called this "the right and healthy path of development."
Bourbaki: The Secret Society That Reshaped Mathematics
Now we arrive at one of the most fascinating chapters in the history of math.
In the mid-1930s, France was in trouble, mathematically speaking. World War I had wiped out a generation of promising students and researchers. Universities taught math in fragmented, outdated ways.
So a group of young mathematicians formed a secret society. They called themselves Bourbaki. Their original goal was modest: write a better textbook for French students. But ambition, as it tends to do, grew. Today, Bourbaki has produced over 40 volumes covering virtually every area of mathematics.
The Style That Conquered a Field
Bourbaki's true legacy isn't what they wrote—it's how they wrote it. They stripped away concrete examples and calculations. They presented each proof as a clean chain of logical deductions, without reference to the underlying intuition.
"Very formal. Very austere," said historian Leo Corry of Tel Aviv University.
Their influence spread like wildfire. Patrick Massot of Paris-Saclay University noted: "The most successful parts have become so much part of the common mathematical knowledge and style that it's hard to think of what it was before."
And there's real power in abstraction. As Massot argues, "you are forced to understand what really makes things work, and what is just noise."
The Unintended Cost of Elegance
But Bourbaki's style wasn't a natural fit for every kind of mathematics. Highly visual, concrete fields like combinatorics (the science of counting) and graph theory (the science of networks) got pushed to the margins. They were sidelined at top institutions in the U.S. and Europe for decades—only thriving in places like Hungary, where Bourbaki's grip was weaker.
Béla Bollobás of the University of Memphis recalled bluntly: "Graph theory [was] the slum of topology. That atmosphere changed only recently. Very recently."
Even in fields that did thrive under Bourbaki—algebra, topology, analysis—some mathematicians worry the influence was too strong. Proofs started looking the same. Thinking started looking the same.
"There is a sense that it decreased the cultural diversity of mathematics," said Aravind Asok of the University of Southern California.
Before Bourbaki, there were many different flavors of algebraic geometry—French, Italian, and more. The Italian school, less rigorous but geometrically rich, faded quickly as Bourbaki's style spread. Math became more reliable, sure. But "by cutting off other paths of possible progress, Bourbaki introduced a new problem," Asok explained.
What Is Lean, and Why Should You Care?
Here's where our story reaches the present day—and it's electric.
Since the 1960s, researchers have been developing proof assistants: computer programs that check every line of a mathematical proof. You write each step in a language the computer understands, and the program verifies the logic. Miss a single step—even something as basic as showing that 1 + 1 equals 2—and the program won't accept your proof.
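To get a feel for what that looks like, here is a complete Lean 4 proof of that very fact (the theorem name is ours, chosen for illustration):

```lean
-- Even 1 + 1 = 2 must be explicitly stated and checked.
-- `rfl` asks Lean's kernel to verify the equation by
-- computing both sides down to the same value.
theorem one_plus_one : 1 + 1 = 2 := rfl
```

If the statement were wrong—say, `1 + 1 = 3`—Lean would refuse to accept the proof.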
Today, mathematicians are betting big on one proof assistant in particular: Lean. And the numbers are staggering.
Alex Kontorovich of Rutgers University calls Lean "revolutionary"—and his analogy is hard to forget: "Imagine if every time you wanted to build a spaceship, every single engineer has to understand every single component — from mining the iron ore, to smelting it, designing it. Now with these formal systems, for the first time, you can do mathematics and just buy somebody's part off the shelf."
That's the big promise. Lean breaks a proof into pieces that can be verified independently and reused in other proofs. It's modular. It's powerful. And it might just be the most ambitious formalization project in the history of mathematics.
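A rough sketch of that modularity: in Lean's community library, Mathlib, a previously verified result can be invoked by name, much like an off-the-shelf part. For example, the commutativity of addition:

```lean
import Mathlib.Tactic  -- Lean's community-maintained math library

-- Reusing an "off-the-shelf part": the commutativity of addition
-- is already proved in the library as `Nat.add_comm`, so a new
-- proof can simply cite it instead of rebuilding it from scratch.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

That one line stands on the shoulders of everything already verified below it—exactly the spaceship-parts picture Kontorovich describes.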
The Beauty and the Cost of Machine-Verified Proofs
A Real-World Success Story
In 2019, mathematician Peter Scholze wrote a proof by hand for a theorem central to a new theory he was building. The proof was so complicated that even Scholze himself struggled to tell if it was correct.
So in late 2020, a team led by Johan Commelin and Adam Topaz set out to formalize it in Lean. Months later, they confirmed the proof was correct—giving mathematicians much greater confidence in Scholze's work. Even better, the process uncovered a cleaner version of the proof and sharpened some of Scholze's original ideas.
Kevin Buzzard of Imperial College London, one of Lean's biggest advocates, put it beautifully: "It forces you to think about mathematics in the right way. It forces you to become an artist."
Buzzard has spent the past year working to formalize a proof of Fermat's Last Theorem in Lean—a result whose proof is famously long and tangled.
But Not Everyone's Convinced
Some mathematicians worry that the time and energy poured into Lean might be better spent elsewhere. Formalizing a single proof can take months, sometimes more than a year.
And here's a candid admission from Asok: "There are many, many errors in the literature," he said, "but my experience is that mathematics is remarkably robust." In other words, there's little danger that the whole edifice of math will come crashing down because of undetected mistakes.
Could One Tool Make Math Too Uniform?
This is where the conversation gets really interesting—and a little uncomfortable.
Within Lean, there's little room for diversity of method, concept, or ideology.
Every new proof in Lean can only reference definitions and theorems that have already been verified and stored in its library. Changing a definition creates a domino effect: proofs built on the old definition may break.
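A toy sketch of that domino effect (the definition and names here are hypothetical, invented for illustration):

```lean
-- Hypothetical library definition of evenness.
def isEven (n : Nat) : Prop := ∃ k, n = 2 * k

-- A downstream proof that depends on the exact shape of `isEven`:
-- it supplies the witness k = 2 directly.
theorem four_is_even : isEven 4 := ⟨2, rfl⟩

-- If the library later redefined `isEven n` as, say, `n % 2 = 0`,
-- this proof term would no longer typecheck and would need to be
-- repaired—along with everything built on top of it.
```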
Stephanie Dick of Simon Fraser University captures this tension well: Mathematics is "not a fixed and finite and formalized enterprise. It's a moving, shifting beast, and the terms of it are changing all the time."
The Wikipedia Problem
A dedicated group of Lean users decides which definitions enter the library and how they're coded—a process Simon DeDeo of Carnegie Mellon University compared to Wikipedia.
Emily Riehl of Johns Hopkins University calls it "really a community trying to do the best thing for the community." But she's honest about the downside: "There is one decision that will be made in the end, [when] in many cases, there's not one correct decision. Some people will be happy, and other people will be unhappy."
History Repeating?
Just as Bourbaki's style fit some subjects better than others, Lean is a better match for number theory and algebraic geometry than for graph theory or category theory. And past technologies—even choices of notation—have subtly shifted how mathematicians think. Lean might push researchers toward problems that are easier to formalize, even if they don't realize it. "I've now found so many cases where something like that is happening," she said—"where it actually shifts the focus and the intuition and the understanding away from the mathematical problem domain toward the behavior of this system."
Riehl's practical suggestion? Mathematicians should try using more than one proof assistant, rather than putting all their eggs in Lean's basket. She's not sure how realistic that is, though, given the enormous effort formalization demands.
Rigor vs. Creativity: Where Do We Draw the Line?
Here's the tension at the heart of this whole story.
Big advances in mathematics require bold ideas. These often come from experimentation and intuition—from exploring new mathematical worlds, even when you can't prove every step along the way. That kind of math tends to contain mistakes at first. But it also tends to produce breakthroughs.
Akshay Venkatesh of the Institute for Advanced Study raises a thought that should give us pause. "There's no question that the concept of proof is a fundamental part of mathematics," he said. But "where I think there should be a question is whether it should be the defining feature of mathematics."
That's a big idea. Let it sit for a moment.
What We've Lost Along the Way
To Descartes in the 1600s, rigor meant being able to hold an idea in your head and intuitively understand why it's true. Older proofs had "a kind of grittiness," Venkatesh observed. They weren't formally elegant. But they were "something a human being can easily understand."
Modern proofs? More elegant, yes. But also "more slippery, harder for the mind to get a grip on."
What History Teaches Us
The record shows that math has a tendency to self-correct. Bourbaki's dominance eventually softened. Analysis found a compromise between strict definitions and Heaviside's freewheeling calculations. The fields that were sidelined—graph theory, combinatorics—found their way back.
What's clear is this: the future of formalization will be one "we haven't imagined yet."
As Asok put it: "I don't want any mathematics where one mode dominates. There is a cultural history to mathematics that deserves to be preserved."
📐 A Quick Look: Historical Waves of Formalization
- Ancient Greece: Euclid builds geometry from a short list of axioms and pure logical deduction.
- Early 1800s: Abel, Cauchy, and Weierstrass rebuild calculus on rigorous definitions, creating analysis.
- Early 1900s: Mathematicians settle on shared axioms and logical systems; set theory underpins modern math.
- Mid-1930s onward: Bourbaki recasts mathematics in a formal, austere, example-free style.
- 1960s to today: Proof assistants emerge; Lean lets computers verify every step of a proof.
Conclusion: The Sleep of Reason Breeds Monsters
We've traveled from Euclid's axioms to Lean's 260,000 verified theorems—a journey spanning more than two thousand years. Along the way, we've seen that the relationship between rigor and creativity in mathematics is not a battle with a winner. It's a dance. Push too hard toward formality, and you risk losing the wild, intuitive spark that drives discovery. Lean too far into intuition, and the foundations start to crack.
What Lean offers is extraordinary: a world where proofs are broken into modular, verifiable pieces, where AI and humans collaborate, where the accuracy of mathematics can be checked by machines. But we should remember Hilbert's wisdom—that science grows best when explorers wander freely first, and builders reinforce the structure only when the walls start to wobble.
The best math, like the best science, needs both the dreamer and the engineer.
At FreeAstroScience.com, we believe in explaining complex scientific ideas in language everyone can grasp. We believe in keeping your mind active, your curiosity sharp, and your questions coming—because the sleep of reason breeds monsters. Come back often. There's always more to learn, more to wonder about, and more to discover together.
You're never alone in that curiosity. We're right here with you.
📚 References & Sources
- Quanta Magazine — "In Math, Rigor Is Vital. But Are Digitized Proofs Taking It Too Far?" (Part 1)
- Quanta Magazine — "In Math, Rigor Is Vital. But Are Digitized Proofs Taking It Too Far?" (Part 2)
- Quanta Magazine — "In Math, Rigor Is Vital. But Are Digitized Proofs Taking It Too Far?" (Part 3)
Published March 25, 2026 · Written for FreeAstroScience.com by Gerd Dani
