Nous Research, the San Francisco-based artificial intelligence startup, launched on Tuesday an open-source mathematical reasoning system called Nomos 1 that achieved near-elite human performance on this year's William Lowell Putnam Mathematical Competition, one of the most prestigious and notoriously difficult undergraduate math contests in the world.
The Putnam is known for its difficulty: while a perfect score is 120, this year's top score was 90, and the median was just 2. Nomos 1, by contrast, scored 87 points, a result that would have ranked second out of 3,988 participants in the 2024 competition, according to the company.
The release marks an inflection point in the rapidly accelerating race to build AI systems capable of sophisticated mathematical reasoning. Unlike the massive, compute-intensive models deployed by major technology companies, Nomos 1 achieves its results with a comparatively compact architecture: 30 billion parameters with roughly 3 billion active at any given time, using a mixture-of-experts design based on Alibaba's Qwen3 model.
"This score would rank #2/3988 in 2024 and marks our first step with Hillclimb AI towards creating a SOTA AI mathematician," Nous Research announced on social media Tuesday.
The same base model scored 24 points without Nous Research's specialized training
Perhaps most striking is the gap between Nomos 1 and its base model. When Nous Research ran the same Qwen3-30B-A3B-Thinking-2507 model through an identical testing harness, it scored just 24 out of 120, a result that underscores the critical importance of post-training optimization and specialized reasoning techniques over raw model scale.
"Nomos 1 achieved an 87/120 with 8 perfect scores," the company stated, noting that the performance difference "is largely due to post-training and data quality rather than the harness."
The results were verified through blind grading by a human expert who had previously finished in the top 200 on the Putnam. Nous Research provided the anonymized submissions to the grader, then published the full set of de-anonymized data and the runbooks used to generate them on GitHub.
Why the Putnam competition is considered the ultimate test of mathematical reasoning
The William Lowell Putnam Mathematical Competition is an annual mathematics competition for undergraduate students enrolled at institutions of higher learning in the United States and Canada. It is widely considered to be the most prestigious university-level mathematics competition in the world.
The notoriously brutal exam is more of a mathematical sporting event than an academic test. It consists of two 3-hour sessions separated by a 2-hour break. There are a total of 12 questions to be solved, 6 per session. Each question is worth 10 points, for a total of 120 points.
Putnam questions are not the kind that appear in regular exams or textbooks. They are more like puzzles than calculations, often requiring students to find different ways to represent problems before a solution can unfold.
Last year, nearly 4,000 students across the continent took the Putnam. Sixty-one percent scored three points or fewer, according to the Mathematical Association of America, which organizes the competition. The top score was 90 out of 120.
Many Putnam Fellows have gone on to become distinguished researchers in mathematics and other fields, including three Fields Medalists (John Milnor, David Mumford, and Daniel Quillen) and two Nobel laureates in physics (Richard Feynman and Kenneth Wilson).
Inside the two-phase reasoning system that powers Nomos 1's mathematical breakthroughs
Nomos 1 is a specialization of Qwen's Qwen3-30B-A3B-Thinking model, optimized for mathematical problem-solving and proof-writing in natural language. The system was developed in collaboration with Hillclimb AI.
What distinguishes Nomos 1 from simple model inference is its sophisticated reasoning harness, an open-source framework that orchestrates how the model approaches and solves problems. The harness operates in two distinct phases within a three-hour time limit, mirroring the actual Putnam competition structure.
In the solving phase, parallel workers simultaneously tackle problems using a priority-based system. Each worker picks a problem, generates a submission, then scores its own work on a scale of 1 to 7. Problems with the fewest perfect scores receive priority, ensuring the system focuses its compute on the hardest challenges. This process continues until either all problems have achieved a target number of self-critiqued perfect scores or time runs out.
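In pseudocode terms, that scheduling loop might look something like the sketch below. This is a minimal single-threaded illustration of the priority scheme as described, not Nous Research's actual harness: the target count, the time budget, and the `generate_submission` stand-in (which replaces a real model call) are all assumptions.

```python
import random
import time

PERFECT = 7          # top of the 1-to-7 self-assessment scale
TARGET_PERFECT = 2   # assumed target count of perfect self-scores per problem
TIME_LIMIT = 1.0     # seconds here; a stand-in for the 3-hour budget

def generate_submission(problem):
    # Stand-in for a model call: returns (submission text, self-score 1..7).
    return f"attempt at {problem}", random.randint(1, PERFECT)

def solve_phase(problems):
    submissions = {p: [] for p in problems}
    deadline = time.monotonic() + TIME_LIMIT
    while time.monotonic() < deadline:
        # Count perfect self-scores accumulated so far for each problem.
        perfect_counts = {
            p: sum(1 for _, score in subs if score == PERFECT)
            for p, subs in submissions.items()
        }
        if all(c >= TARGET_PERFECT for c in perfect_counts.values()):
            break  # every problem has enough self-critiqued perfect scores
        # Priority: attempt the problem with the fewest perfect scores.
        problem = min(problems, key=lambda p: perfect_counts[p])
        submissions[problem].append(generate_submission(problem))
    return submissions
```

In the real system the workers run in parallel and the self-assessment is itself a model judgment; the loop above only captures the "least-solved first" prioritization.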
The finalization phase begins 15 minutes before the time limit (or at 50% for shorter runs) and employs a two-stage selection process. First, a consolidation step groups submissions by conclusion and attempts to identify the correct group, which is, importantly, not necessarily the majority group. Then, a pairwise tournament using single elimination determines the final submission for each problem.
"Our open source reasoning system consists of a solving phase, where workers attempt a least-solved problem and self-assess, followed by a finalization phase, which consolidates submissions to choose a final submission for each problem," Nous Research explained.
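The two-stage selection could be sketched as follows. This is an illustrative skeleton under stated assumptions, not the published harness: `conclusion_of`, `judge_group`, and `judge_pair` are hypothetical callbacks standing in for model-based judgments (in Nomos 1 the group-selection judge need not pick the majority group).

```python
from collections import defaultdict

def consolidate(submissions, conclusion_of, judge_group):
    """Group submissions by their final conclusion, then let a judge pick
    the group believed correct (not necessarily the largest group)."""
    groups = defaultdict(list)
    for sub in submissions:
        groups[conclusion_of(sub)].append(sub)
    return groups[judge_group(groups)]

def tournament(candidates, judge_pair):
    """Single-elimination pairwise tournament over the chosen group."""
    pool = list(candidates)
    while len(pool) > 1:
        next_round = []
        for i in range(0, len(pool) - 1, 2):
            next_round.append(judge_pair(pool[i], pool[i + 1]))
        if len(pool) % 2:
            next_round.append(pool[-1])  # odd entrant gets a bye
        pool = next_round
    return pool[0]
```

With model-backed judges plugged in, `tournament(consolidate(...), ...)` would yield the final submission for one problem.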
How Nomos 1 compares to mathematical AI systems from DeepSeek, Google, and OpenAI
The Nomos 1 results arrive amid a flurry of advances in mathematical reasoning AI. DeepSeek's model, DeepSeekMath-V2, scored 118 out of 120 points on questions from the 2024 William Lowell Putnam Mathematical Competition, beating the top human score of 90. The model also performed at the level of gold-medal winners in the International Mathematical Olympiad.
This year, Google's advanced Gemini model operated end-to-end in natural language, producing rigorous mathematical proofs directly from the official problem descriptions, all within the 4.5-hour competition time limit. Google achieved this year's result using an advanced version of Gemini Deep Think.
What makes Nomos 1's achievement notable is not raw performance (it trails DeepSeek's 118/120) but rather its accessibility and efficiency. At 30 billion parameters with only 3 billion active, the model can run on consumer-grade hardware, a stark contrast to the massive compute clusters required by frontier models from OpenAI and Google.
Hermes 4.3 arrived just six days earlier, trained on a decentralized blockchain network
The Nomos 1 announcement follows closely on the heels of Nous Research's December 3 release of Hermes 4.3, a general-purpose language model that marked another significant milestone for the company.
Hermes 4.3, based on ByteDance's Seed-OSS-36B-Base model, is the first production model that Nous Research trained entirely on its Psyche network, a distributed training infrastructure that uses a novel optimizer called DisTrO to coordinate training across nodes spread throughout data centers over the open internet, secured by consensus on the Solana blockchain.
The company trained Hermes 4.3 both through traditional centralized methods and on the Psyche network, specifically to verify that distributed training could match or exceed centralized performance for production workloads. The Psyche-trained version outperformed the centralized version across a series of downstream tasks, the company reported.
"The training run proved stable throughout, averaging 144k tokens/second spread across 24 Psyche nodes," Nous Research stated. "Using DisTrO's overlapped collective strategy, the entirety of the P2P communications were hidden by the training time, effectively achieving equal throughput to traditional, centralized training."
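The general idea behind "hiding" communication, overlapping the network exchange with the next chunk of compute so that wall-clock time is dominated by compute alone, can be illustrated with a toy sketch. This is not DisTrO (which operates on compressed optimizer state in a far more sophisticated way); the timing constants and both functions are stand-ins.

```python
import threading
import time

COMM_TIME = 0.05     # simulated time to exchange a step's deltas over P2P
COMPUTE_TIME = 0.10  # simulated time for one training step's compute

def send_deltas(step):
    time.sleep(COMM_TIME)  # stand-in for the P2P collective exchange

def compute_step(step):
    time.sleep(COMPUTE_TIME)  # stand-in for a forward/backward pass

def train(steps):
    comm = None
    start = time.monotonic()
    for step in range(steps):
        if comm is not None:
            comm.join()  # only blocks if the last exchange hasn't finished
        # Launch this step's exchange in the background...
        comm = threading.Thread(target=send_deltas, args=(step,))
        comm.start()
        compute_step(step)  # ...while the step's compute proceeds
    comm.join()
    return time.monotonic() - start
```

Because `COMM_TIME < COMPUTE_TIME`, each exchange finishes while compute is still running, so total wall time stays close to `steps * COMPUTE_TIME` rather than the serial `steps * (COMM_TIME + COMPUTE_TIME)`, which is what "equal throughput to centralized training" amounts to.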
Hermes 4.3 also achieved state-of-the-art results on RefusalBench, a new benchmark that measures a model's willingness to be helpful across a variety of scenarios commonly restricted by other models. The model answered 74.60% of RefusalBench questions in non-reasoning mode, surpassing its predecessor Hermes 4 70B (59.50%) and outperforming closed models including Grok 4 (51.30%) and Gemini 2.5 Pro (24.23%).
Small models with good training are closing the gap with trillion-parameter giants
Together, the two releases in a single week signal Nous Research's strategic bet: that smaller, more efficient models with sophisticated post-training techniques and reasoning harnesses can compete with, and in some cases outperform, the massive models developed by better-funded rivals.
For enterprise decision-makers, the implications are significant. Mathematical reasoning capabilities have applications far beyond academic competitions: they are essential for formal verification, theorem proving, scientific modeling, cryptographic analysis, and any domain requiring rigorous logical deduction.
The open-source nature of both releases (Nomos 1 is available under the Apache 2.0 license on Hugging Face, with the full reasoning harness on GitHub) means that organizations can deploy these capabilities on their own infrastructure without relying on API calls to major cloud providers.
"For the first time, anyone can run or access a state-of-the-art AI mathematician," one observer noted on social media. "This lowers the barrier to serious math research, proof verification, modeling complex systems, advanced reasoning work."
The key contributors to Nomos 1 include Roger Jin, who led the training; Jeffrey Quesnelle and Dakota Mahan, who built the infrastructure; Chen Guang, who advised; and Ryan Teknium and Jeffrey Quesnelle, who provided leadership. The model was developed with contributions from Hillclimb AI and a team of math experts including Samuel Kim, Miron Yurkevich, and others.
The race to build AI mathematicians is accelerating faster than anyone predicted
The 86th Putnam Competition took place on Saturday, December 6, 2025, just three days before Nous Research released Nomos 1. The timing underscores how rapidly the field is moving: companies are now releasing mathematical AI systems capable of near-elite human performance within days of the competitions they are designed to solve.
Competition in mathematical AI has intensified dramatically in recent months. In July, an advanced version of Google DeepMind's Gemini model and an experimental reasoning model from OpenAI both achieved gold status at the IMO 2025. DeepSeek's new model matched their performance, solving 5 out of 6 problems.
But the resource requirements for these frontier systems remain prohibitive for most organizations. OpenAI's o1-pro is estimated at over 1.8 trillion parameters; Google's Gemini 2.5 Pro likely exceeds 400 billion. Nomos 1, by contrast, achieves competitive results with a fraction of that footprint.
The gap between massive frontier models and efficient open-source alternatives is narrowing. And for organizations that need mathematical reasoning capabilities without the budget for hyperscale compute, that gap may have just closed enough to matter.
As one observer put it on social media: "This marks a significant jump for AI math models that are small enough to run on your laptop."
A laptop that can now outperform nearly 4,000 of the continent's best undergraduate mathematicians.

