
Can AI Become a Mathematician? The Race to Teach Machines How to Discover Theorems

Introducing our weekly AI Research Unit deepdives, where we give you a two-minute summary of the AI research making waves right now.

Large language models can write poems, spin up code, even mimic Shakespeare — but ask them to solve a novel math problem and they stumble.

That’s a problem. Because math isn’t just another skill — it’s the gateway to abstract reasoning, formal proof, and the kind of symbolic thinking that underlies science, logic, and invention.

If we can teach AI to truly understand and do math — not just mimic it — we’re a step closer to building systems that can reason, hypothesize, and maybe even discover truths we’ve never seen before.

This week, we explore a paper that tries to do just that, laying out a bold vision of AI as a mathematician-in-training. Yoshua Bengio, one of the field's most influential researchers, is among the authors, which tells you from the word go that this is worth paying attention to.

You can find the paper here - https://arxiv.org/html/2403.04571v1


When you think of AI, maybe image generation or chatbots come to mind—but what about solving math problems that have stumped humans for centuries?

A bold new vision is emerging: an AI mathematician that doesn't just solve equations, but invents new branches of mathematics.

Here’s how researchers are thinking about making that happen…

The Core Challenge: From Instinct to Intellect

Most AI today is great at intuitive tasks—like recognizing images or predicting the next word in a sentence. These are "System 1" tasks: fast, pattern-based, and instinctual.

But math? That’s "System 2": slow, logical, and deeply reasoned. Current AI hits a wall here.

To go further, AI must move from instinct to insight.

How Would an AI Mathematician Work?

The research lays out a roadmap with 7 key concepts:

1. Compression is the Goal
Math is about compressing a huge number of facts into a small set of powerful theorems. Teaching AI to find these "compression points" can help it build its own mathematical toolkit.
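
To make that concrete, here is a toy sketch of our own (not from the paper) that treats a theorem as a compressor: one general rule is far cheaper to store than the thousand individual facts it explains.

```python
# Toy illustration (ours, not from the paper): a good theorem is a compressor.
# Storing 1000 arithmetic facts verbatim costs far more characters than
# storing one rule that generates every one of them.

facts = [f"{n} + 0 = {n}" for n in range(1000)]     # many individual facts
rule = "for every natural number n: n + 0 = n"      # one theorem covering them all

verbatim_cost = sum(len(f) for f in facts)          # characters to store every fact
theorem_cost = len(rule)                            # characters to store the rule

print(f"store all the facts: {verbatim_cost} characters")
print(f"store the one rule:  {theorem_cost} characters")
```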

2. AI That Makes Conjectures
Rather than waiting to be told what to prove, AI can propose its own conjectures, using systems like Generative Flow Networks that sample many diverse candidates rather than optimizing for a single best answer.
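
A trained Generative Flow Network learns to generate objects roughly in proportion to a reward signal. The toy sketch below fakes that end state with hand-written conjectures and made-up "interestingness" scores, just to show the sampling behaviour; it is not the paper's system.

```python
import random

# Hedged toy sketch: a trained Generative Flow Network samples objects with
# probability roughly proportional to their reward. Here we fake that end
# state by weighting hand-written candidate conjectures with invented scores.

candidates = {
    "every even number > 2 is a sum of two primes": 9.0,  # famous, surprising
    "n + 0 = n for every natural number n": 1.0,           # true but trivial
    "there are infinitely many twin primes": 8.0,          # open, interesting
    "1 + 1 = 3": 0.1,                                      # easy to refute
}

def sample_conjecture() -> str:
    """Sample a conjecture with probability proportional to its score,
    which is the behaviour a trained GFlowNet sampler approximates."""
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

# High-scoring conjectures come up most often, but low-scoring ones still
# appear occasionally: the sampler keeps exploring instead of collapsing
# onto a single "best" answer.
print(sample_conjecture())
```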

3. Learning to Prove Like Humans
Reinforcement learning could help AI break down complex problems into smaller pieces, just as human mathematicians use lemmas and subgoals in their proofs. Progress is already being made on this front: see this AI prover for an example - https://arxiv.org/abs/2502.00212.

This matters because current models are notoriously weak at multi-step mathematical reasoning and struggle to improve at it, which is why it has become one of the next big challenges for researchers.
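
For a feel of what "lemmas and subgoals" means in a machine-checkable setting, here is a minimal Lean 4 sketch of our own (plain Lean, no extra libraries, and not the prover from the linked paper): a small fact is proved once as a lemma, then reused twice in a bigger theorem.

```lean
-- Minimal sketch of proof decomposition: prove a small subgoal once as a
-- lemma, then reuse it in a larger theorem instead of redoing the work inline.
-- The names are ours; this is plain Lean 4 with no extra libraries.

theorem my_add_zero (n : Nat) : n + 0 = n := rfl

theorem add_zero_twice (a b : Nat) : (a + 0) + (b + 0) = a + b := by
  rw [my_add_zero a, my_add_zero b]
```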

4. Be Curious, Not Just Smart
AI can prioritize working on surprising or uncertain ideas—this is called active learning, and it’s how the best humans find big breakthroughs.
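
As a rough sketch of that prioritisation (our own toy, not the paper's method), imagine ranking open conjectures by how much an ensemble of models disagrees about them, then tackling the most disputed one first:

```python
import statistics

# Hedged toy sketch of "be curious": rank candidate problems by how much a
# (fake) ensemble of models disagrees about them, then work on the most
# uncertain one first. Candidates and scores are invented placeholders.

# Each candidate maps to a few "probability this is provable" scores, as if
# they came from different models in an ensemble.
ensemble_scores = {
    "conjecture A": [0.92, 0.94, 0.91],   # everyone agrees: little to learn
    "conjecture B": [0.15, 0.85, 0.50],   # strong disagreement: very informative
    "conjecture C": [0.05, 0.08, 0.06],   # everyone agrees it's hopeless
}

def uncertainty(scores: list[float]) -> float:
    """Use the spread of the ensemble's predictions as a simple curiosity signal."""
    return statistics.stdev(scores)

next_problem = max(ensemble_scores, key=lambda c: uncertainty(ensemble_scores[c]))
print(f"work on: {next_problem}")   # the most disputed conjecture wins
```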

5. Baby Steps First
Like kids learning math, AI should start with easy concepts and gradually build up to higher abstractions. This curriculum learning approach mirrors real education.
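
In code, the idea is as simple as ordering tasks by difficulty and only moving on once the current level is mastered. The tasks, scores and threshold below are invented placeholders, not anything from the paper:

```python
# Toy sketch of curriculum learning: train on the easiest problems first and
# only unlock harder ones once the learner clears a mastery threshold.
# Tasks, difficulties, scores and the threshold are all invented.

curriculum = [
    ("single-digit addition", 1),
    ("multi-digit addition", 2),
    ("linear equations", 3),
    ("induction proofs", 4),
]

def train_and_evaluate(task: str) -> float:
    """Placeholder for a real training loop; returns a made-up success rate."""
    return 0.9

MASTERY_THRESHOLD = 0.8

for task, difficulty in sorted(curriculum, key=lambda item: item[1]):
    score = train_and_evaluate(task)
    print(f"level {difficulty} ({task}): success rate {score:.2f}")
    if score < MASTERY_THRESHOLD:
        print("not mastered yet; stay at this level before moving on")
        break
```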

6. Naming Helps Thinking
By inventing reusable terms and definitions, AI can write neater proofs—and even teach us new ways to see old problems.
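
Here is a tiny Lean 4 example of our own showing the payoff: once a concept has a reusable name, the statement and the proof that use it get shorter.

```lean
-- Minimal sketch of "naming helps thinking": once a concept has a reusable
-- definition, statements about it read cleanly and proofs can reuse it.
-- The names are ours; plain Lean 4, no extra libraries.

def IsEven (n : Nat) : Prop := ∃ k, n = k + k

theorem double_is_even (n : Nat) : IsEven (n + n) := by
  unfold IsEven
  exact ⟨n, rfl⟩
```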

7. Beyond Math: Scientific Discovery
These methods could spill over into physics, biology, or any field where uncovering elegant patterns and principles is the endgame.

Why It Matters

This isn’t just about math. Teaching AI to reason symbolically, explore ideas independently, and build knowledge in layers could be the key to unlocking truly intelligent systems.

Imagine:

  • AI proposing a new geometry,

  • Discovering the next breakthrough in physics,

  • Or explaining why something works—not just that it works.

What’s Next?

We’re still early in this journey. Key questions remain:

  • Can AI truly understand abstraction? A basic transformer model is trained to predict likely text rather than to track layered formal structure, so it struggles to work out whether anything is actually true. It can tell you that “1+1=3” is unlikely, but not that it is untrue.

  • How do we ensure its ideas are useful, not just novel? Drug discovery AIs can churn out thousands of new molecules per day, but most of them will be useless or even harmful.

  • What does "creativity" mean for a machine? Making outputs no one has seen before? Or developing concepts no one has thought of before?

But one thing is clear: the search for an AI mathematician is really the search for AI that can think more like us—and perhaps one day, beyond us.

Math might be humanity’s most abstract art—and AI is getting ready to pick up the brush.

Let us know if you'd like a deeper dive into AI research or want to follow this story as it unfolds.

Build the Future of AI With Us!

Join our community of innovators shaping the next era of open-source intelligence.

Follow us on X for updates, breakthroughs, and dev drops.

This isn’t just open-source — it’s open potential.