• KairuByte@lemmy.dbzer0.com · 15 days ago

    Assuming the answer given is indeed correct, it only suggests to me that someone previously solved it and the LLM was trained on it. LLMs can’t actually do math.

    ETA: to clarify, if this were a model trained specifically for mathematics rather than an LLM, I wouldn’t have this opinion, but the article suggests it’s just an LLM.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 15 days ago

      The vast majority of new discoveries in math come from combining existing ideas in novel ways. Where LLMs shine is in having a huge training set covering all kinds of algorithms, formulas, proofs, and theorems, which lets them make associations no single human could. That’s the value: once the association is made that two particular theorems work together, a human can take over and prove the result.

      • KairuByte@lemmy.dbzer0.com · 15 days ago

        The issue I have with this idea is that the LLM doesn’t have an understanding of the algorithms, formulas, proofs, or theorems. It “knows” they exist, but not what they actually mean. It’s like having a toddler pick out associations on a word board and hoping they happen to relate to each other.

        Yes, there is more to an LLM than that, and I can’t really think of a better analogy. My point is that the associations being made aren’t grounded in mathematical knowledge; they’re based on “do these things relate to each other in my training data” (see the toy sketch below).

        LLMs specifically are incapable of “new thoughts.”
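
        A toy sketch of what I mean, purely illustrative and nothing like a real transformer: a “model” that associates words by nothing more than co-occurrence counts in its training text, with no notion of what any word means.

        ```python
        # Toy co-occurrence "model": associations come from counting which
        # words appear next to each other in training text, nothing more.
        from collections import Counter, defaultdict

        training_text = (
            "the fundamental theorem of calculus relates derivatives and integrals "
            "the mean value theorem relates derivatives and averages "
            "stokes theorem relates integrals and boundaries"
        ).split()

        follows = defaultdict(Counter)
        for a, b in zip(training_text, training_text[1:]):
            follows[a][b] += 1  # memorize neighbours exactly as seen

        # "Association" is just the most frequent neighbour; nothing here knows
        # what a theorem, a derivative, or an integral actually is.
        print(follows["theorem"].most_common(2))   # [('relates', 2), ('of', 1)]
        print(follows["relates"].most_common(2))   # [('derivatives', 2), ('integrals', 1)]
        ```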

        • ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 14 days ago

          It doesn’t need an understanding of algorithms any more than evolution needs an understanding of how a brain works to produce one. LLMs are a tool used by humans, who supply the direction and the understanding. And LLMs are absolutely capable of new thoughts in a very real sense: when an LLM puts different ideas together in a novel way, that is a genuinely new thought, which can then be picked up by a human who fully understands it.

        • Zexks@lemmy.world · 14 days ago

          You have no idea what you’re talking about and are making all of this up.

          • KairuByte@lemmy.dbzer0.com · 14 days ago

            You realize there are many, many cases where an LLM just abysmally fails at basic math questions, right? The obvious ones that get news attention are corrected rather quickly, but there are plenty of times when they plainly fail at basic math-as-word-problem situations.

            If they can’t “understand” basic math reliably, why would they be able to “understand” complex concepts any more reliably?
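
            To make this concrete, here’s a deliberately crude caricature (a real LLM is not a lookup table and generalizes far better, but the failure pattern I’m describing is analogous): a “model” that only memorizes answers it has seen looks like it can do arithmetic right up until it hits an unseen problem.

            ```python
            # Crude caricature: a "model" that has memorized a few sums from
            # training. It looks correct on seen problems and is confidently
            # wrong on unseen ones, because nothing in it performs addition.
            training_pairs = {(2, 2): 4, (3, 5): 8, (10, 10): 20}

            def memorizing_model(a, b):
                # Return the memorized answer if this exact question appeared
                # in training; otherwise emit a plausible-looking guess drawn
                # from the training answers.
                return training_pairs.get((a, b), max(training_pairs.values()))

            print(memorizing_model(3, 5))  # 8  -> seen in training, looks like math
            print(memorizing_model(7, 6))  # 20 -> unseen, confidently wrong
            ```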

            • m532@lemmy.ml · 14 days ago

              There are thousands of cases of humans failing at simple math, so why would humans be able to understand complex math problems?

              Yet, again and again, they do solve complex math problems…

  • Entertainmeonly@lemmy.blahaj.zone · 15 days ago

    So, the program that constantly lies, makes up nonsense, and gaslights everyone into thinking it’s stating facts has solved a math problem, and it “seems to have used a totally new method for problems of this kind.”

    Sounds like it’s spewing bullshit again.

    Remember everyone, it’s glue that makes your pizza so tasty. All hail AI Jesus.