Recent math benchmarks for large language models (LLMs) such as MathArena indicate that state-of-the-art reasoning models perform impressively on mathematical competitions like AIME, with the leading model, o3-mini, scoring comparably to top human competitors. However, these benchmarks evaluate models solely on final numerical answers, neglecting the rigorous reasoning and proof generation essential for real-world mathematical tasks. To address this, we introduce the first comprehensive evaluation of full-solution reasoning for challenging mathematical problems. Using expert human annotators, we evaluated several state-of-the-art reasoning models on the six problems from the 2025 USAMO within hours of their release. Our results reveal that all tested models struggled significantly, scoring less than 5% on average. Through detailed analysis of reasoning traces, we identify the most common failure modes and find several unwanted artifacts arising from the optimization strategies employed during model training. Overall, our results suggest that current LLMs are inadequate for rigorous mathematical reasoning tasks, highlighting the need for substantial improvements in reasoning and proof generation capabilities.
“Notably, o3-mini, despite being one of the best reasoning models, frequently skipped essential proof steps by labeling them as ‘trivial’, even when their validity was crucial.”
every time I read these posters it’s like those Everyman characters in Discworld who say some utter lunatic shit and follow it up with “it’s just [logical/natural/obvious/…]”
Read the paper, it’s not simply predicting the next token. For instance, when writing a rhyming couplet, it first plans what the rhyming word will be, and then fills in the rest of the line.
The researchers were surprised by this too, they expected it to be the other way around.
Oh, sorry, I got so absorbed into reading the riveting material about features predicting state name tokens to predict state capital tokens I missed that we were quibbling over the word “next”. Alright they can predict tokens out of order, too. Very impressive I guess.
looks inside
it’s predicting the next token
Stands to reason
stop prompting LLMs and go read some books, it’ll do you a world of good