This startup wants to change how mathematicians do math
Axiom Math, a startup based in Palo Alto, California, has released a free new AI tool for mathematicians, designed to discover mathematical patterns that could unlock solutions to long-standing problems.
The tool, called Axplorer, is a redesign of an existing one called PatternBoost that François Charton, now a research scientist at Axiom, co-developed in 2024 when he was at Meta. PatternBoost ran on a supercomputer; Axplorer runs on a Mac Pro.
The aim is to put the power of PatternBoost, which was used to crack a hard math puzzle known as the Turán four-cycles problem, in the hands of anyone who can install Axplorer on their own computer.
Last year, the US Defense Advanced Research Projects Agency set up a new initiative called expMath—short for Exponentiating Mathematics—to encourage mathematicians to develop and use AI tools. Axiom sees itself as part of that drive.
Breakthroughs in math have enormous knock-on effects across technology, says Charton. In particular, new math is crucial for advances in computer science, from building next-generation AI to improving internet security.
Most of the successes with AI tools have involved finding solutions to existing problems. But finding solutions is not all that mathematicians do, says Axiom Math founder and CEO Carina Hong. Math is exploratory and experimental, she says.
MIT Technology Review met with Charton and Hong last week for an exclusive video chat about their new tool and how AI in general could change mathematics.
Math by chatbot
In the last few months, a number of mathematicians have used LLMs, such as OpenAI’s GPT-5, to find solutions to unsolved problems, especially ones set by the 20th-century mathematician Paul Erdős, who left behind hundreds of puzzles when he died.
But Charton is dismissive of those successes. “There are tons of problems that are open because nobody looked at them, and it’s easy to find a few gems you can solve,” he says. He’s set his sights on tougher challenges—“the big problems that have been very, very well studied and famous people have worked on them.” Last year, Axiom Math used another of its tools, called AxiomProver, to find solutions to four such problems in mathematics.
The Turán four-cycles problem that PatternBoost cracked is another big problem, says Charton. (The problem is an important one in graph theory, a branch of math that’s used to analyze complex networks such as social media connections, supply chains, and search engine rankings. Imagine a page covered in dots. The puzzle involves figuring out how to draw lines between as many of the dots as possible without any four of the dots forming a closed loop.)
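The article doesn’t spell out the math, but the forbidden pattern is what graph theorists call a 4-cycle, and checking for one is simple: a graph contains a 4-cycle exactly when some pair of dots shares two or more common neighbors. A minimal sketch of that check (the function name and example graphs are mine, not Axiom’s):

```python
from itertools import combinations

def has_four_cycle(n, edges):
    """Return True if the graph on vertices 0..n-1 contains a 4-cycle.

    A 4-cycle exists exactly when two vertices share at least two
    common neighbors, so we test every pair of vertices.
    """
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return any(len(adj[u] & adj[v]) >= 2
               for u, v in combinations(range(n), 2))

# A square (four dots joined in a loop) is itself a 4-cycle.
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
# A pentagon has loops, but none through exactly four dots.
pentagon = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
```

The Turán-type question is then: given n dots, how many lines can you draw before `has_four_cycle` is forced to return True?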
“LLMs are extremely good if what you want to do is derivative of something that has already been done,” says Charton. “This is not surprising—LLMs are pretrained on all the data that there is. But you could say that LLMs are conservative. They try to reuse things that exist.”
However, there are lots of problems in math that require new ideas, insights that nobody has ever had. Sometimes those insights come from spotting patterns that hadn’t been spotted before. Such discoveries can open up whole new branches of mathematics.
PatternBoost was designed to help mathematicians find new patterns. Give the tool an example and it generates others like it. You select the ones that seem interesting and feed them back in. The tool then generates more like those, and so on.
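The article doesn’t publish PatternBoost’s internals, but the loop it describes — generate candidates, keep the interesting ones, regenerate from those — can be caricatured in a few lines. In this sketch (all names are mine) a random mutator stands in for the learned generative model, and a numeric score stands in for the mathematician’s judgment of which examples look interesting:

```python
import random

def generate_select_loop(seed, mutate, score, rounds=60, pool=30, keep=5):
    """Caricature of a PatternBoost-style loop: mutate members of the
    current pool, keep the highest-scoring examples, and repeat."""
    best = [seed]
    for _ in range(rounds):
        candidates = [mutate(random.choice(best)) for _ in range(pool)]
        # Elitist selection: the kept pool never gets worse.
        best = sorted(best + candidates, key=score, reverse=True)[:keep]
    return best[0]

# Toy demo: evolve a 12-bit string toward all ones by flipping one bit
# at a time and keeping the strings with the most ones.
def flip_random_bit(bits):
    i = random.randrange(len(bits))
    return bits[:i] + (1 - bits[i],) + bits[i + 1:]

random.seed(0)
result = generate_select_loop(seed=(0,) * 12,
                              mutate=flip_random_bit,
                              score=sum)
```

In the real tool, the mathematician’s selections retrain the model, so each generation is biased toward the kinds of structure a human found promising rather than toward a fixed numeric score.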
It’s a similar idea to Google DeepMind’s AlphaEvolve, a system that uses an LLM to come up with novel solutions to a problem. AlphaEvolve keeps the best suggestions and asks the LLM to improve on them.
Special access
Researchers have already used both AlphaEvolve and PatternBoost to discover new solutions to long-standing math problems. The trouble is that those tools run on large clusters of GPUs and are not available to most mathematicians.
Mathematicians are excited about AlphaEvolve, says Charton. “But it’s closed—you need to have access to it. You have to go and ask the DeepMind guy to type in your problem for you.”
And when Charton solved the Turán problem with PatternBoost, he was still at Meta. “I had literally thousands, sometimes tens of thousands, of machines I could run it on,” he says. “It ran for three weeks. It was embarrassing brute force.”
Axplorer is far faster and far more efficient, according to the team at Axiom Math. Charton says it took Axplorer just 2.5 hours to match PatternBoost’s Turán result. And it runs on a single machine.
Geordie Williamson, a mathematician at the University of Sydney, who worked on PatternBoost with Charton, has not yet tried Axplorer. But he is curious to see what mathematicians do with it. (Williamson still occasionally collaborates with Charton on academic projects but says he is not otherwise connected to Axiom Math.)
Williamson says Axiom Math has made several improvements to PatternBoost that (in theory) make Axplorer applicable to a wider range of mathematical problems. “It remains to be seen how significant these improvements are,” he says.
“We are in a strange time at the moment, where lots of companies have tools that they’d like us to use,” Williamson adds. “I would say mathematicians are somewhat overwhelmed by the possibilities. It is unclear to me what impact having another such tool will have.”
Hong admits that there are a lot of AI tools being pitched at mathematicians right now. Some also require mathematicians to train their own neural networks. That’s a turnoff, says Hong, who is a mathematician herself. Instead, Axplorer will walk you through what you want to do step by step, she says.
The code for Axplorer is open source and available via GitHub. Hong hopes that students and researchers will use the tool to generate sample solutions and counterexamples to problems they’re working on, speeding up mathematical discovery.
Williamson welcomes new tools and says he uses LLMs a lot. But he doesn’t think mathematicians should throw out the whiteboards just yet. “In my biased opinion, PatternBoost is a lovely idea, but it is certainly not a panacea,” he says. “I’d love us not to forget more down-to-earth approaches.”