• ඞmir@lemmy.ml · 18 days ago

      That’s specifically LLMs. Image recognition like in OP’s example has nothing to do with language processing. Then there’s generative AI, which needs some kind of mapping between prompts and model weights, but that’s also a completely different type of “AI”.
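
      (A rough sketch of that distinction: the snippet below loads one model from each family, assuming torch, torchvision, transformers and diffusers are installed; the checkpoint names are just common public defaults picked for illustration.)

      ```python
      # Three different model families that all get marketed as "AI".
      import torch
      from torchvision.models import resnet18, ResNet18_Weights
      from transformers import pipeline
      from diffusers import StableDiffusionPipeline

      # 1) Image recognition: a convolutional classifier, no language processing involved.
      classifier = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
      with torch.no_grad():
          logits = classifier(torch.rand(1, 3, 224, 224))  # dummy image tensor
      print(logits.argmax(dim=1))  # predicted ImageNet class index

      # 2) LLM: autoregressive text-in, text-out generation.
      llm = pipeline("text-generation", model="gpt2")
      print(llm("The Chinese room argument says", max_new_tokens=20)[0]["generated_text"])

      # 3) Generative image model: the text prompt conditions the diffusion model's
      #    denoising (roughly the prompt-to-weights mapping mentioned above).
      sd = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
      sd("a cat reading a philosophy book").images[0].save("cat.png")
      ```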

      That doesn’t mean any of these “AI” products can think, but don’t conflate LLMs with AI as a whole.

      • wischi@programming.dev · 18 days ago

        Your brain is also “just a Chinese room”. It’s just physics, chemistry and biology. There is no magic inside your brain. If a “Chinese room” is fast enough and can fool everyone into “believing” that it’s fluent in Chinese, then the room speaks Chinese.

        • Kogasa@programming.dev · 17 days ago

          This fails to engage with the thought experiment. The question isn’t whether “the room is fluent in Chinese.” It’s whether the machine learning model is actually comparable to the person in the room, executing program instructions to turn input into output without ever understanding anything about the input or the output.

          • wischi@programming.dev · edited · 16 days ago

            The same is true for your brain. Show me the neurons that are fluent in Chinese. Of course the LLM is just executing code. And if we ever have AGI it will also just be “executing code”, but so does your brain. It’s not exactly code (and maybe AGI will run on analog computers, so not exactly code either), but the laws of physics dictate what your brain does. The laws of physics don’t understand Chinese, the atoms and molecules don’t understand Chinese. “Understanding Chinese” is an emergent property.

            Think about it this way: assume every person you know (except you) is just some form of Chinese room. First of all, you couldn’t prove that, and second, it wouldn’t matter at all.

            • Kogasa@programming.dev · 16 days ago

              We aren’t trying to establish that neurons are conscious. The thought experiment presupposes that there is a consciousness, something capable of understanding, in the room. But there is no understanding because of the circumstances of the room. This demonstrates that the appearance of understanding cannot confirm the presence of understanding. The thought experiment can’t be formulated without a prior concept of what it means for a human consciousness to understand something, so I’m not sure it makes sense to say a human mind “is a Chinese room.” Anyway, the fact that a human mind can understand anything is established by completely different lines of thought.

    • BlueMagma@sh.itjust.works · 18 days ago

      How can you know the system has no cognitive capability? We haven’t solved that problem for our own minds; we have no definition of what consciousness is. For all we know, we might be multimodal LLMs ourselves.

    • MindTraveller@lemmy.ca · edited · 18 days ago

      Language processing is a cognitive capability. You’re just saying it’s not AI because it isn’t as smart as HAL 9000 or Cortana. You’re getting your understanding of computer science from movies and video games.