• dustyData@lemmy.world · 6 days ago

    The human brain is not a computer. It was a fun simile to make in the 80s, when computers rose in popularity. It stuck in popular culture, but time and time again neuroscientists and psychologists have found that it is a poor metaphor. The more we learn about the brain, the less it looks like a computer. Pattern recognition is barely a tiny fraction of what the human brain does, not even its most important function, and computers suck at it. No computer is anywhere close to doing what a human brain can do, in many different ways.

    • barsoap@lemm.ee · edited · 5 days ago

      It stuck in popular culture, but time and time again neuroscientists and psychologists have found that it is a poor metaphor.

      Notably, neither of those two disciplines is computer science. Silicon computers are Turing complete: given enough time and scratch space, they can compute everything that is computable. The brain cannot be more powerful than that; otherwise you'd break causality itself. God can't add 1 and 1 and get 3, and neither can God sort a list in fewer than O(n log n) comparisons. Both being Turing complete also means that they can emulate each other. It's not a metaphor, it's an equivalence: computer scientists have trouble telling computers and humans apart just as topologists can't distinguish between donuts and coffee mugs.
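      To make the emulation claim concrete: any ordinary program can simulate a Turing machine step by step. A minimal sketch in Python (the machine encoding and the unary-increment example are my own illustration, not anything from this thread):

```python
# Minimal Turing machine simulator: an ordinary program emulating the
# canonical model of computation, which is the sense in which silicon
# computers and Turing machines can stand in for each other.

def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """rules maps (state, symbol) -> (new_state, write_symbol, move)."""
    tape = dict(enumerate(tape))          # sparse tape, cells by index
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Example machine: increment a unary number by scanning right over the
# 1s and writing one more 1 at the first blank.
rules = {
    ("start", "1"): ("start", "1", "R"),  # skip existing 1s
    ("start", "_"): ("halt", "1", "R"),   # append a 1, then halt
}

print(run_tm(rules, "111"))  # -> 1111
```

The point of the toy is only that simulation runs in both directions in principle; nothing here says anything about speed.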

      Architecturally, sure, there's a massive difference in hardware. Not carbon vs. silicon, but the fact that our brains are nowhere close to being von Neumann machines. That doesn't change anything about brains being computers, though.

      There are, big picture, two obstacles to AGI: first, figuring out how the brain does what it does, and we know that current AI approaches aren't sufficient; second, once that is understood, creating hardware that is even a fraction as fast and efficient at executing it as the brain is.

      Neither of those two involves the question "is it even possible". Of course it is. It's quantum computing you should rather be sceptical about: it's still up in the air whether asymptotic speedups over classical hardware are even physically possible (quantum states might get fuzzier the more data you throw into a qubit, the universe might have a computational upper limit per unit volume, or such).

      • dustyData@lemmy.world · 5 days ago

        Notably, computer science is not neurology. Neither is equipped to meddle in the other's field. If brains were just very fast and powerful computers, then neuroscientists should be able to work on computers, and engineers on brains. But they are not equivalent. Consciousness, intelligence, memory, world modeling, motor control and input consolidation are way more complex than just faster computing. And Turing completeness is irrelevant. The brain is not a Turing machine. It does not process tokens one at a time. Turing completeness is a technology term; it shares with Turing machines the name alone, as Turing's philosophical argument was not meant to be a test or guarantee of anything. Complete misuse of the concept.

        • barsoap@lemm.ee · edited · 5 days ago

          If brains were just very fast and powerful computers, then neuroscientists should be able to work on computers, and engineers on brains.

          Does not follow. Different architectures require different specialisations. One is research into something nature presents us with, the other (at least the engineering part) is creating something. Completely different fields. And btw the analytical tools neuroscientists have are not exactly stellar; that's why they can't understand microprocessors either (the paper is tongue in cheek, but also serious).

          But they are not equivalent.

          They are. If you doubt that, you do not understand computation. You can read up on Turing equivalence yourself.

          Consciousness, intelligence, memory, world modeling, motor control and input consolidation are way more complex than just faster computing.

          What the fuck does "fast" have to do with "complex"? Also, the mechanisms themselves probably aren't terribly complex; the complexity arises from how the different parts mesh together into a synergistic whole. And I already addressed the distinction between "make things run" and "make them run fast": a dog-slow AGI is still an AGI.

          The brain is not a Turing machine. It does not process tokens one at a time.

          And microprocessors aren't Turing machines either. A thing does not need to be a Turing machine to be Turing complete.

          Turing completeness is a technology term

          "Mathematical" would be accurate.

          it shares with Turing machines the name alone,

          Nope, the Turing machine is one example of a Turing complete system. That's more than "sharing a name".

          Turing’s philosophical argument was not meant to be a test or guarantee of anything. Complete misuse of the concept.

          You're probably thinking of the Turing test. That doesn't have anything to do with Turing machines, Turing equivalence, or Turing completeness, yes. Indeed, getting the Turing test involved and confused with the other three is probably why you wrote a whole paragraph of pure nonsense.

          • TeryVeneno@lemmy.ml · 5 days ago

            Yo if you’re ever in the mood, I’d love to talk more about the subject with you. You might be the only person I’ve ever seen to actually talk about this topic the way I understand it.

            • TeryVeneno@lemmy.ml · 5 days ago

              No, this guy actually understands what he's talking about. He may not be articulating it the best, but his argument is not false. What he's essentially saying is that, based on what we understand now, the brain must in some sense be a machine that can do computations.

              The only reason this must be the case is logic: unless new physics arises, it has to be. So it's not that the brain is a computer like the ones we have now; it's that all things that process and handle information systematically must do computation. What that looks like, and what each unit does, is what we don't yet get.

            • barsoap@lemm.ee · 5 days ago

              Elon, judging from his twitter takes, understands this stuff even less than you do.

      • bigpEE@lemmy.world · edited · 5 days ago

        Re: quantum computing, we know quantum advantage is real for certain classes of problems, both theoretically (e.g. via Grover's algorithm) and experimentally for toy problems like boson sampling. It looks like we're past the threshold where we can do error correction, so now it's a question of scaling. I've never heard anyone discuss a per-volume limit on computation as applying to QC. We're down to engineering problems, not physics, same as in your brain-vs-computer case.
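        For what it's worth, Grover's quadratic speedup is easy to check numerically by simulating the state vector classically; a pure-Python sketch (problem size and target index are arbitrary illustrations):

```python
import math

# Classical simulation of Grover's search over N = 2^n basis states.
# Each iteration applies the oracle (flip the target's sign) and the
# diffusion operator (reflect all amplitudes about their mean).

def grover(n_qubits, target):
    N = 2 ** n_qubits
    amp = [1 / math.sqrt(N)] * N                 # uniform superposition
    iterations = round(math.pi / 4 * math.sqrt(N))
    for _ in range(iterations):
        amp[target] = -amp[target]               # oracle query
        mean = sum(amp) / N
        amp = [2 * mean - a for a in amp]        # reflection about mean
    return amp[target] ** 2                      # prob. of measuring target

p = grover(6, target=42)   # 64 states, only ~6 oracle queries needed
print(f"{p:.3f}")          # close to 1.0
```

After roughly (π/4)·√N oracle queries the target is found with high probability, whereas an unstructured classical search needs about N/2 lookups on average; that's the quadratic advantage in miniature.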

        • barsoap@lemm.ee · 5 days ago

          From all I know, none of the systems people have built come even close to testing the speedup. Will error correction get harder and harder the larger the system is, the more you ask it to compute? It might not be the case, but quantum uncertainty is a thing, so it's not baseless naysaying either.

          Let me put on my tinfoil hat: quantum physicists aren't keen to talk about the possibility that the whole thing could be a dead end, because that's not how you get to do cool quantum experiments on VC money. It's not like they aren't doing valuable research; it's just that it might be a giant money sink for the VCs, which of course is also a net positive. Trying to break the limit might be the only way to test it, and that in turn might actually narrow things down in physics, which is itching for experiments that can break the models: we know they're subtly wrong, just not how, and data is needed to narrow things down.

          • bigpEE@lemmy.world · 4 days ago

            We've already done boson sampling that's classically intractable; Google published it a few years ago. So yes, quantum supremacy has already been proven. It's a useless toy problem, but one a classical computer just can't do.

            Yes, error correction will get harder the more we scale, but we're pretty sure we've reached the point where we win by throwing more qubits at it. Again, it's now about engineering the scaling. No mean feat, and it'll take a long time, but it's not like this is all speculation or fraud. The theory is sound.
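            The "win by throwing more qubits at it" intuition has a simple classical analogue, the repetition code: below a threshold error rate, adding redundancy suppresses logical errors rapidly. A toy sketch in Python (purely classical majority voting standing in for real quantum error correction; the error rate and code sizes are illustrative):

```python
import random

# Classical repetition-code toy model of the error-correction threshold:
# encode one bit as `copies` identical copies, flip each independently
# with probability p, decode by majority vote. For p < 1/2, the logical
# error rate drops quickly as the code grows.

def logical_error_rate(p, copies, trials=20_000, seed=1):
    rng = random.Random(seed)                  # fixed seed: reproducible
    errors = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(copies))
        if flips > copies // 2:                # majority corrupted
            errors += 1
    return errors / trials

for copies in (1, 3, 5, 9):
    print(copies, logical_error_rate(0.1, copies))
```

At p = 0.1 the single-copy failure rate is ~10%, three copies bring it near 3%, and nine copies push it below 0.1%: each extra layer of redundancy costs linearly but pays off superlinearly, which is the shape of the scaling argument above.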

    • Akrenion@slrpnk.net · 6 days ago

      Some scientists are connecting I/O to brain tissue. These experiments show stunning learning capabilities, but their ethics are rightly questioned.

      • Cethin@lemmy.zip · 6 days ago

        I don’t get how the ethics of that are questionable. It’s not like they’re taking brains out of people and using them. It’s just cells that are not the same as a human brain. It’s like taking skin cells and using those for something. The brain is not just random neurons. It isn’t something special and magical.

        • Akrenion@slrpnk.net · 6 days ago

          We haven't yet figured out what it means to be conscious. I agree that a person can willingly give permission to be experimented on and even replicated. However, there is probably a line where we create something conscious for the sake of a few months' worth of calculations.

          There wouldn't be this many sci-fi books about cloning gone wrong if we already knew all it entails. This is basically the Matrix for those brainoids. We are not on the scale of whole-brain reproduction, but there is a reason the ethics section on the cerebral organoid wiki page links to further concerns in the neuro world.

          • Cethin@lemmy.zip · 6 days ago

            Sure, we don’t know what makes us sapient or conscious. It isn’t a handful of neurons on a tray though. They’re significantly less conscious than your computer is.

            • Akrenion@slrpnk.net · 6 days ago

              Maybe I was unclear: I think ethics always play a role in research. That does not mean I want this to stop; I just think we need regulations. Brain-computer interfaces and large brainoids are more than a handful of neurons on a tray. I wouldn't call them human, but we all know how fast science can move.

      • dustyData@lemmy.world · 5 days ago

        Reading about those studies is pretty interesting. Usually the neurons do most of the heavy lifting, adapting to the I/O chip's input and output. It's almost an admission that we don't yet fully understand what we are dealing with when we try to interface it with our rudimentary tech.