Assuming the answer given is indeed correct, it only suggests to me that someone previously solved it and the LLM was trained on it. LLMs can’t actually do math.
ETA: to clarify, if this were a model trained specifically for mathematics rather than a general-purpose LLM, I wouldn't hold this opinion, but the article suggests it's just an LLM.
The vast majority of new discoveries in math come from combining existing ideas in novel ways. Where LLMs shine is in having a huge training set covering all kinds of algorithms, formulas, proofs, and theorems. They are able to make associations that no human could. That’s the value: once the association is made that two particular theorems work together, a human can take over and prove the result.
The issue I have with this idea is that the LLM doesn’t have an understanding of the algorithms, formulas, proofs, or theorems. It “knows” they exist, but not what they actually mean. It’s like having a toddler pick out associations on a word board and hoping they happen to relate to each other.
Yes, there is more to an LLM, and I can’t really think of a good analogy. My point is that the associations being made aren’t based on mathematical knowledge; they’re based on “do these things relate to each other in my training data?”
LLMs specifically are incapable of “new thoughts.”
It doesn’t need to have an understanding of algorithms any more than evolution needs an understanding of how a brain works to make one. LLMs are a tool used by humans, who give them direction and supply the understanding. LLMs are absolutely capable of new thoughts in a very real sense: when an LLM puts different ideas together in a novel way, that is a genuinely new thought that can be picked up by a human with full understanding of it.
You have no idea what you’re talking about and are making all of this up.
You realize there are many, many cases where an LLM just abysmally fails at basic math questions, right? The obvious ones that get news attention are corrected rather quickly, but there are plenty of times when they plainly fail at basic math-as-word-problem situations.
If they can’t “understand” basic math reliably, why would they be able to “understand” complex concepts any more reliably?
There are thousands of cases of humans failing at simple math; why would they be able to understand complex math problems?
Yet, again and again, they do solve complex math problems…
This argument makes absolutely no sense. These models aren’t humans, and there is no basis for comparing them as if they were.
So the program that constantly lies, makes up nonsense, and gaslights everyone into believing it’s all facts has solved a math problem, because it “seems to have used a totally new method for problems of this kind”.
Sounds like it’s spewing bullshit again.
Remember, everyone: it’s glue that makes your pizza so tasty. All hail AI Jesus.
It appears that you have no clue how mathematics or LLMs work. It doesn’t matter how you arrive at a solution: once you have a candidate, you can formally verify it. Sounds like you need to stop writing cringe comments on subjects you clearly have no business opining on.
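To make the candidate-then-verify point concrete, here’s a minimal sketch in Python. The identity being checked (the sum-of-cubes closed form) is my own illustrative stand-in for whatever formula an LLM might propose, not something from the article; the verification itself uses nothing but brute force and sympy:

```python
# Minimal sketch of the "generate a candidate, then verify it" workflow.
# The candidate identity (sum of cubes) is an illustrative stand-in for
# whatever an LLM might propose; it is not taken from the article.

import sympy as sp

n, k = sp.symbols("n k", positive=True, integer=True)

# Candidate closed form, as an LLM might hand it to us:
candidate = (n * (n + 1) / 2) ** 2

# Step 1: cheap sanity check by brute force on small cases.
for i in range(1, 50):
    assert sum(j**3 for j in range(1, i + 1)) == candidate.subs(n, i)

# Step 2: independent symbolic verification; no trust in the LLM required.
exact = sp.summation(k**3, (k, 1, n))
assert sp.simplify(exact - candidate) == 0

print("candidate verified:", sp.factor(candidate))
```

The verification step is ordinary math and code, with zero trust placed in the model, which is exactly why how the candidate was generated doesn’t matter.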
Hail AI Jesus
This is the human equivalent of https://en.wikipedia.org/wiki/Ophiocordyceps_unilateralis
You are the #1 mouth-breathing window licker. Good luck!
I’d be so insulted by that if I had a shred of respect for you.