• xiaohongshu [none/use name]@hexbear.net · 7 hours ago

      I think you have a fundamental misunderstanding of how neural network based LLMs work.

      Let’s say you give it the prompt “tell me if capitalism is a good or a bad system”. In a very simplistic sense, what it does is pull up the words and sentences associated with “capitalism” and “good”, as well as “capitalism” and “bad”, from the entire internet’s worth of data it has been trained on, and from there it spews out seemingly coherent sentences and paragraphs about why capitalism is good or bad.
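
      To make the “word association” picture concrete, here’s a deliberately toy sketch (my own illustration, not how any production model is implemented): a bigram model that just counts which word follows which in its training text and then samples from those counts. Real LLMs predict subword tokens with billions of learned weights rather than raw counts, but the core operation is still next-token prediction over statistical associations.

      ```python
      import random
      from collections import defaultdict

      # Toy "language model": count which word follows which in the training text,
      # then generate by repeatedly sampling a likely next word. This is only an
      # illustration of statistical word association, not a transformer.
      training_text = (
          "capitalism is good because markets are efficient . "
          "capitalism is bad because workers are exploited . "
          "markets are efficient because prices carry information ."
      )

      counts = defaultdict(lambda: defaultdict(int))
      words = training_text.split()
      for current_word, next_word in zip(words, words[1:]):
          counts[current_word][next_word] += 1

      def generate(seed: str, length: int = 12) -> str:
          out = [seed]
          for _ in range(length):
              followers = counts.get(out[-1])
              if not followers:
                  break
              choices, weights = zip(*followers.items())
              out.append(random.choices(choices, weights=weights)[0])
          return " ".join(out)

      print(generate("capitalism"))
      ```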

      It does not have the capacity to reason about or evaluate whether capitalism as an economic system is good or bad. These LLMs are instead very powerful statistical models that can reproduce coherent human language based on word associations.

      What is groundbreaking about the transformer architecture in natural language processing is that it allows the network to retain the association memory for far longer than previous architectures like LSTM, seq2seq etc. could. Those would start spewing out garbled text after a few sentences because their architectures do not let memory be properly retained over long spans (the vanishing gradient problem). Transformer-based models solved that problem and enabled the reproduction of entire paragraphs and even essays of seemingly coherent, human-like writing because of their strong memory retention capability. Impressive as it is, it does not understand grammatical structures or rules. Train it on a bunch of broken English texts, and it will spew out broken English.
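
      For reference, the piece of the transformer that does this is the attention operation: every token scores every other token directly instead of passing information through a recurrent chain step by step. A minimal NumPy sketch of a single self-attention head (random vectors standing in for learned embeddings, no training involved) looks something like this:

      ```python
      import numpy as np

      def self_attention(Q, K, V):
          """One attention head: each position attends to every other position
          directly, rather than passing state along a chain like an LSTM does."""
          d_k = Q.shape[-1]
          scores = Q @ K.T / np.sqrt(d_k)                 # pairwise token relevance
          scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
          weights = np.exp(scores)
          weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
          return weights @ V                              # weighted mix of values

      rng = np.random.default_rng(0)
      tokens = rng.normal(size=(5, 8))   # 5 tokens, 8-dimensional stand-in embeddings
      print(self_attention(tokens, tokens, tokens).shape)   # (5, 8)
      ```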

      In other words, the output you’re getting from LLMs (“capitalism good or bad?”) is simply the word associations it has been trained on, from input collected across the entire internet, not actual thinking coming from its own internal mental framework or a real-world model that could actually comprehend causality and reasoning.

      The famous case of Google AI telling people to put glue on their pizza is a good example of this. It can be traced back to a Reddit joke post. The LLM itself doesn’t understand anything; it simply reproduces what it has been trained on. Garbage in, garbage out.

      No amount of “neurosymbolic AI” is going to solve the fundamental issue of LLMs not being able to understand causality. The “chain of thought” process lets researchers tweak the model better by understanding the specific path by which the model arrives at its answer, but it is not remotely comparable to a human going through their thought process.
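
      For clarity, “chain of thought” usually just means prompting the model to emit intermediate steps before its answer, roughly like the sketch below (call_llm is a hypothetical stand-in for whatever API serves the model). Those “steps” are generated tokens like everything else, not a trace of internal reasoning.

      ```python
      # call_llm() is a hypothetical stand-in for an actual model API.
      def chain_of_thought_prompt(question: str) -> str:
          return (
              f"Question: {question}\n"
              "Let's think step by step, then give the final answer on the last line."
          )

      # response = call_llm(chain_of_thought_prompt("Is 1247 divisible by 3?"))
      # The "steps" in the response are sampled the same way as the final answer.
      print(chain_of_thought_prompt("Is 1247 divisible by 3?"))
      ```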

      • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 4 hours ago

        I understand how LLMs work perfectly fine. What you don’t seem to understand is that neurosymbolic AI is a combination of an LLM for parsing and categorizing inputs with a symbolic logic engine for doing the reasoning. If you had bothered to actually read the paper I linked, you wouldn’t have wasted your time writing this comment.
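
        As a rough sketch of that division of labour (my own illustration, not the architecture from the paper): a hypothetical llm_extract_facts() stands in for the LLM parsing and categorizing the input, and a trivial forward-chaining rule engine stands in for the symbolic logic side doing the reasoning.

        ```python
        # Illustrative only: llm_extract_facts() is a hypothetical stand-in for an LLM
        # call that turns free text into structured facts; the reasoning is then done
        # by an ordinary forward-chaining rule engine over (subject, predicate, object).

        def llm_extract_facts(text: str) -> set:
            # Pretend the LLM parsed the sentence below into this fact.
            return {("socrates", "is_a", "human")}

        RULES = [
            # if ?x is_a human, then ?x is_a mortal
            (("?x", "is_a", "human"), ("?x", "is_a", "mortal")),
        ]

        def forward_chain(facts: set) -> set:
            derived = set(facts)
            changed = True
            while changed:
                changed = False
                for (_, cond_p, cond_o), (_, concl_p, concl_o) in RULES:
                    for subj, pred, obj in list(derived):
                        if pred == cond_p and obj == cond_o:      # condition matches
                            new_fact = (subj, concl_p, concl_o)   # bind ?x to subj
                            if new_fact not in derived:
                                derived.add(new_fact)
                                changed = True
            return derived

        print(forward_chain(llm_extract_facts("Socrates is a human.")))
        # {('socrates', 'is_a', 'human'), ('socrates', 'is_a', 'mortal')}
        ```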

    • piggy [they/them]@hexbear.net · 8 hours ago

      Neurosymbolic AI is overhyped. It’s just bolting LLMs onto symbolic AI and pretending that it’s a “brand new thing” (it’s not, it’s actually how most LLMs practically work today and have for a long time; GPT-3 itself is neurosymbolic). The advocates of this approach pretend that the “reasoning” comes from the symbolic AI side, also known as classical AI, which still suffers from the same exact problems it did in the 1970s when the first AI winter happened, because we do not have an algorithm capable of representing a theory of mind, nor do we have a realistic theory of mind to begin with.

      Not only that, but all of the integration points between classical techniques and statistical techniques present extreme challenges, because in practice the symbolic portion essentially has to trust the output of the statistical portion, since it has only a limited ability to validate it.

      Yeah, you can teach ChatGPT to correctly count the r’s in strawberry with a neurosymbolic approach, but general models won’t be able to reasonably discover even the most basic concepts, such as volume displacement, by themselves.
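
      The strawberry fix looks roughly like the sketch below (names are mine, purely illustrative): route the request to a deterministic tool instead of letting the model guess. The classify_intent() router would itself be a statistical model in practice, which is exactly where the trust problem creeps back in, because the symbolic side has to believe whatever the statistical side says the request was.

      ```python
      # Purely illustrative: classify_intent() stands in for a learned router,
      # which in a real system can misroute or mis-parse the request.

      def count_letter(word: str, letter: str) -> int:
          return word.lower().count(letter.lower())

      def classify_intent(prompt: str) -> str:
          return "count_letters" if "how many" in prompt.lower() else "chat"

      def answer(prompt: str) -> str:
          if classify_intent(prompt) == "count_letters":
              # Crude hard-coded extraction for the sketch; a real system would ask
              # the LLM to pull out the word and the letter, and has to trust that parse.
              return str(count_letter("strawberry", "r"))
          return "(fall back to the plain LLM)"

      print(answer("How many r's are in strawberry?"))   # 3
      ```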

      You’re essentially back at the same problem: either you lean on the symbolic aspects and limit yourself to advanced ELIZA-like functionality that just uses a classifier, or you throw yourself on the mercy of the statistical model and pray you have enough symbolic safeguards.

      Either way it’s not reasoning; at best it’s programming, if that. That’s actually the practical reason the neurosymbolic space is getting attention: the problem has effectively been how to control inputs and outputs, not only for reliability and accuracy but for censorship and control. This is still a Garbage In, Garbage Out process.

      FYI, most of the big names in the “neurosymbolic AI as the next big thing” space hitched their wagon to Kahneman’s Thinking, Fast and Slow, which is effectively made-up bullshit like Freudianism but lamer, and has essentially been squad-wiped by the replication crisis.

      Don’t get me wrong, DeepSeek and Duobau are steps in the right direction. They’re less proprietary, less wasteful, and broadly more useful, but they aren’t a breakthrough in anything but capitalist hoarding of technological capacity.

      The reason AI is not useful in most circumstances is the underlying problems of the real world, and you can’t algorithm your way out of people problems.

      • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 8 hours ago

        I don’t think it’s overhyped at all. It’s taking two technologies that are good at solving specific types of problems and using them together in a useful way. The problems that symbolic AI systems ran into in the 70s are precisely the ones that deep neural networks address. You’re right there are challenges, but there’s absolutely no reason to think they’re insurmountable.

        I’d argue that using symbolic logic to come up with solutions is very much what reasoning actually is. Meanwhile, the problem of classifying inputs is the same one that humans have as well. Somehow you have to take data from the senses and make sense of it. If you’re claiming this is a garbage in, garbage out process, then the same would apply to human reasoning as well.

        The models can create internal representations of the real world through reinforcement learning in the exact same way that humans do. We build up our internal world model through our interaction with the environment, and the same process is already being applied in robotics today.
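
        As a minimal illustration of what I mean by learning from interaction (a toy example of mine, not a claim about scale): tabular Q-learning on a one-dimensional corridor, where the agent only ever sees states, actions, and rewards from the environment, and the learned value table is its internal representation of how that environment works.

        ```python
        import random

        # Minimal tabular Q-learning on a one-dimensional corridor. The agent only
        # observes (state, action, reward) from interacting with the environment;
        # the learned Q-table encodes what it has figured out about that environment.

        N_STATES, GOAL = 6, 5
        ACTIONS = [+1, -1]                       # step right / step left
        Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
        alpha, gamma, epsilon = 0.1, 0.9, 0.2    # learning rate, discount, exploration

        for episode in range(200):
            s = 0
            while s != GOAL:
                if random.random() < epsilon:
                    a = random.choice(ACTIONS)                     # explore
                else:
                    a = max(ACTIONS, key=lambda act: Q[(s, act)])  # exploit
                s_next = min(max(s + a, 0), N_STATES - 1)
                reward = 1.0 if s_next == GOAL else 0.0
                best_next = max(Q[(s_next, act)] for act in ACTIONS)
                Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])  # TD update
                s = s_next

        # After training, the greedy policy in every non-goal state should be +1 (step right).
        print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)])
        ```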

        I expect that future AI systems will be combinations of different types of algorithms all working together and solving different challenges. Combining deep learning with symbolic logic is an important step here.

        • piggy [they/them]@hexbear.net · 8 hours ago

          The problems that symbolic AI systems ran into in the 70s are precisely the ones that deep neural networks address.

          Not in any meaningful way. A statistical model cannot address the Frame problem. Statistical models themselves exacerbate the problems of connectionist approaches. I think AI researchers aren’t being honest with the causality here. We are simply fooling ourselves and willfully misinterpreting statistical correlation as causality.

          You’re right there are challenges, but there’s absolutely no reason to think they’re insurmountable.

          Let me repeat myself for clarity. We do not have a valid general theory of mind. That means we do not have a valid explanation of the process of thinking itself. That is an insurmountable problem that isn’t going to be fixed by technology itself, because technology cannot explain things; technology is constructed processes. We can use technology to attempt to build a theory of mind, but we’re building the plane while we’re flying it here.

          I’d argue that using symbolic logic to come up with solutions is very much what reasoning actually is.

          Because you are a human doing it, you are not a machine that has been programmed. That is the difference. There is no algorithm that gives you correct reasoning every time. In fact using pure reasoning often leads to lulzy and practically incorrect ideas.

          Somehow you have to take data from the senses and make sense of it. If you’re claiming this is a garbage in, garbage out process, then the same would apply to human reasoning as well.

          It does. Ben Shapiro is a perfect example. Any debate guy is. They’re really good at reasoning and not much else. Like read the Curtis Yarvin interview in the NYT. You’ll see he’s really good at reasoning, so good that he accidentally makes some good points and owns the NYT at times. But more often than not the reasoning ends up in a horrifying place that isn’t actually novel or unique, simply a rehash of previous horrifying things in new wrappers.

          The models can create internal representations of the real world through reinforcement learning in the exact same way that humans do. We build up our internal world model through our interaction with the environment, and the same process is already being applied in robotics today.

          This is a really Western-brained idea of how our biology works, because as complex systems we work on inscrutable ranges. For example, let’s take some abstract “features” of the human experience and understand how they apply to robots:

          • Strength. We cannot build a robot that can get stronger over time. Humans can do this, but we would never build a robot to do this. We see this as inefficient and difficult. This is a unique biological aspect of the human experience that allows us to reason about the physical world.

          • Pain. We would not build a robot that experiences pain in the same way as humans. You can classify pain inputs. But why would you build a machine that can “understand” pain, where pain interrupts its processes? This is again another unique aspect of human biology that allows us to reason about the physical world.

          • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 4 hours ago

            Not in any meaningful way. A statistical model cannot address the Frame problem. Statistical models themselves exacerbate the problems of connectionist approaches. I think AI researchers aren’t being honest with the causality here. We are simply fooling ourselves and willfully misinterpreting statistical correlation as causality.

            The frame problem is addressed by creating a model of the environment the system interacts with. That’s what provides the context for reasoning and for deciding what information is relevant and what isn’t. Embodiment is one obvious way to build such a model, where a robot or even a virtual agent interacts with the environment and encodes the rules of the environment within its topology.

            Let me repeat myself for clarity. We do not have a valid general theory of mind.

            This is not necessary for making an AI that can reason about the environment, make decisions, and explain itself. Furthermore, not having a theory of mind does not even prevent us from creating minds. One example of this could be using evolutionary algorithms to evolve agents that have similar reasoning capabilities to our own. Another would be to copy the structure of animal brains to a high degree of fidelity.
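
            To be concrete about the evolutionary-algorithm example (a toy sketch of mine, not a claim that this produces minds): evolve bit-strings toward an arbitrary target purely through selection and mutation. Evolving agents with human-like reasoning would need a vastly richer genome and environment, but the search procedure is the same in principle.

            ```python
            import random

            # Toy genetic algorithm: selection plus mutation drives a population of
            # bit-strings toward a fixed target. Only the search procedure matters here.
            TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

            def fitness(genome):
                return sum(g == t for g, t in zip(genome, TARGET))

            def mutate(genome, rate=0.1):
                return [1 - g if random.random() < rate else g for g in genome]

            population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
            for generation in range(100):
                population.sort(key=fitness, reverse=True)
                if fitness(population[0]) == len(TARGET):
                    break
                survivors = population[:10]   # selection of the fittest third
                population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

            population.sort(key=fitness, reverse=True)
            print(generation, population[0], fitness(population[0]))
            ```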

            Because you are a human doing it, you are not a machine that has been programmed.

            You are programmed in the sense that the structure of your brain is a product of the information encoded in your DNA, in the same way that a neural network is a product of the algorithms used to build it. However, the learning that both your brain and the network do is encoded in weights and connections through reinforcement. That learning is not programmed in either case.

            This is a really Western-brained idea of how our biology works, because as complex systems we work on inscrutable ranges.

            🙄

            Strength. We cannot build a robot that can get stronger over time. Humans can do this, but we would never build a robot to do this. We see this as inefficient and difficult. This is a unique biological aspect of the human experience that allows us to reason about the physical world.

            You’re showing an utter lack of imagination here. Of course we could build a robot that could get stronger. There’s nothing uniquely biological about this example.

            Pain. We would not build a robot that experiences pain in the same way as humans. You can classify pain inputs. But why would you build a machine that can “understand” pain, where pain interrupts its processes? This is again another unique aspect of human biology that allows us to reason about the physical world.

            Maybe try thinking about why organisms evolved pain in the first place and what advantage it provides.