Recent math benchmarks for large language models (LLMs) such as MathArena indicate that state-of-the-art reasoning models achieve impressive performance on mathematical competitions like AIME, with the leading model, o3-mini, achieving scores comparable to top human competitors. However, these benchmarks evaluate models solely based on final numerical answers, neglecting rigorous reasoning and proof generation, which are essential for real-world mathematical tasks. To address this, we introduce the first comprehensive evaluation of full-solution reasoning for challenging mathematical problems. Using expert human annotators, we evaluated several state-of-the-art reasoning models on the six problems from the 2025 USAMO within hours of their release. Our results reveal that all tested models struggled significantly, achieving less than 5% on average. Through detailed analysis of reasoning traces, we identify the most common failure modes and find several unwanted artifacts arising from the optimization strategies employed during model training. Overall, our results suggest that current LLMs are inadequate for rigorous mathematical reasoning tasks, highlighting the need for substantial improvements in reasoning and proof generation capabilities.
“Notably, o3-mini, despite being one of the best reasoning models, frequently skipped essential proof steps by labeling them as “trivial”, even when their validity was crucial.”
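A back-of-the-envelope reading of the “less than 5% on average” figure, assuming the standard USAMO rubric of six problems worth 7 points each (42 points total; the point scale is an assumption, not stated in the excerpt above):

\[
0.05 \times 42 = 2.1
\]

so on that assumption the average model earned fewer than roughly 2.1 of the 42 available points.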
You didn’t link to the study; you linked to the PR release for the study. This is the study.
Note that the paper hasn’t been published anywhere other than on Anthropic’s online journal. Also, what the paper is doing is essentially a tea leaf reading. They take a look at the swill of tokens, point at some clusters, and say, “there’s a dog!” or “that’s a bird!” or “bitcoin is going up this year!”. It’s all rubbish dawg
To be fair, the typesetting of the papers is quite pleasant and the pictures are nice.
they gotta make up for all those scary cave-wall pictures somehow
Fair enough, you’re the only person with a reasonable argument, as nobody else can seem to do anything other than name calling.
Linking to the actual papers and pointing out they haven’t been published to a third party journal is far more productive than whatever anti-scientific bullshit the other commenters are doing.
We should be people of science, not reactionaries.
you got banned before I got to you, but holy fuck are you intolerable
which we should do by parroting press releases and cherry picking which papers count as science, of course
but heaven forbid anyone is rude when they rightly tell you to go fuck yourself
So, how does any of this relate to wanting to go back to an imagined status quo ante? (yes, I refuse to use reactionary in any other way than to describe political movements. Conservatives do not can fruits).
nah I think it just sits weirdly with people (I can see what you mean but also why it would strike someone as frustrating)
This isn’t debate club or men of science hour, this is a forum for making fun of idiocy around technology. If you don’t like that you can leave (or post a few more times for us to laugh at before you’re banned).
As to the particular paper that got linked, we’ve seen people hyping LLMs misrepresent their research as much more exciting than it actually is (all the research advertising deceptive LLMs for example) many many times already, so most of us weren’t going to waste time to track down the actual paper (and not just the marketing release) to pick apart the methods. You could say (raises sunglasses) our priors on it being bullshit were too strong.
lmao fuck off
your argument would be immensely helped if you posted science instead of corporate marketing brochures
It’s an anti-fun version of listening to dark side of the moon while watching the wizard of oz.