Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

  • UlyssesT [he/him]@hexbear.net
    1 year ago

    Crude reductionist beliefs such as humans being nothing more than “meat computers” and/or “stochastic parrots” have certainly contributed to the belief that a sufficiently elaborate LLM treat printer would be at least as valid a person as actual living people.

    • daisy@hexbear.net

      This is verging on a religious debate, but assuming that there’s no “spiritual” component to human intelligence and consciousness like a non-localized soul, what else can we be but ultra-complex “meat computers”?

      • oktherebuddy@hexbear.net

        yeah this is knee-jerk anti-technology shite from people here, because we live in a society organized along lines where the creation of AI would lead to our oppression instead of our liberation. of course making a computer sentient is possible; to believe otherwise is to engage in magical (chauvinistic?) thinking about what constitutes consciousness.

        When I watched Blade Runner 2049, I thought it was a bit weird that the human police captain tells Officer K (a replicant) that she’s different from him because she has a soul, since sci-fi settings are usually pretty secular. Turns out this was prophetic: people are more than willing to get all spiritual if it helps them invent reasons to differentiate themselves from the Other.

        • CannotSleep420@lemmygrad.ml

          One doesn’t need to assert the existence of an immaterial soul to point out that the mechanisms that lead to consciousness are different enough from the mechanisms that make computers work that the former can’t just be reduced to an ultra complex form of the latter.

          • oktherebuddy@hexbear.net

            There isn’t a materialist theory of consciousness that doesn’t look something like an ultra-complex computer. We’re talking as if an alternative explanation exists, but it really does not.

            • CannotSleep420@lemmygrad.ml

              In what way does consciousness resemble an ultra-complex computer? Nobody has consciousness fully figured out, of course, but I would at least expect there to be some relevant parallel between computer hardware and brain hardware if this were the case.

              • drhead [he/him]@hexbear.net

                What stops me from doing the same thing that neurons do with a sufficiently sized hunk of silicon? Assuming that some amount of abstraction is fine.

                If the answer is “nothing”, then that demonstrates the point. If you can build an artificial brain, that does all of the things a brain does, then there is nothing special about our brains.

                • Egon [they/them]@hexbear.net

                  But can you actually build an artificial brain with a hunk of silicon? We don’t know enough about brains or consciousness to do that, so the point is kinda moot

              • oktherebuddy@hexbear.net

                When people say computer here they mean computation as computer scientists conceive of it. Abstract mathematical operations that can be modeled by boolean circuits or Turing machines, and embodied in physical processes. Computers in the sense you’re talking about (computer hardware) are one method of embodying these operations.
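[Editor’s note: the computation-as-abstraction point above can be made concrete with a toy sketch. This is my own illustration, not part of the original comment: every boolean function can be wired from a single NAND primitive, and the resulting circuit is indifferent to whether it is embodied in silicon, relays, or plain Python.]

```python
# Toy illustration of the "boolean circuit" model of computation:
# every boolean function reduces to one primitive, NAND.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# Derived gates, each built only from NAND:
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

# A one-bit full adder -- a tiny but complete computer component,
# equally embodiable in silicon, relays, or this interpreter.
def full_adder(a, b, carry_in):
    s = xor_(xor_(a, b), carry_in)
    carry_out = or_(and_(a, b), and_(carry_in, xor_(a, b)))
    return s, carry_out

print(full_adder(True, True, False))  # (False, True): 1 + 1 = binary 10
```

The point of the sketch is only that the abstract operations are separable from any particular physical embodiment.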

                • CannotSleep420@lemmygrad.ml

                  I probably should have worded my last reply differently, because modeling the human brain with boolean circuits and Turing machines is mainly what I have an issue with. While I’m not particularly knowledgeable on the brain side of things, I can see the resemblance between neurons and logic gates. However, my contention is that the material constraints of how those processes are embodied are going to have a significant effect on how the system works (not to say that you were erasing this effect entirely).

                  I want to say more on the topic, but now that my mind is on it I want to put some time and effort into explaining my thoughts in its own post. I’ll @ you in a reply if/when I make the post.

                  • Saeculum [he/him, comrade/them]@hexbear.net

                    However, my contention is that the material constraints of how those processes are embodied are going to have a significant effect on how the system works

                    Sure, but that’s no basis to think that a group of logic gates could not eventually be made to emulate a neuron. The neuron has a finite number of things it can do because of the same material constraints, and while one would probably end up larger than the other, increasing the physical distances between the thinking parts, that would surely only limit the speed of an emulated thought rather than its substance?
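[Editor’s note: as a hedged illustration of the gates-emulating-a-neuron idea, here is my own toy sketch of a leaky integrate-and-fire model, the standard simplified neuron from computational neuroscience. The parameters are made up for demonstration, not biologically calibrated.]

```python
# A leaky integrate-and-fire neuron: integrate weighted input spikes,
# leak potential over time, fire when a threshold is crossed.
# Parameters are illustrative only.

def simulate_lif(inputs, threshold=1.0, leak=0.9, weight=0.3):
    """Return a spike train (1 = fire) for a binary input spike train."""
    potential = 0.0
    spikes = []
    for x in inputs:  # x is 1 if an input spike arrives this timestep, else 0
        potential = potential * leak + weight * x
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# Sustained input drives the potential over threshold every few steps:
print(simulate_lif([1] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Whether such simplified models capture everything relevant about real neurons is exactly the open question being debated above; the sketch only shows that the input/output behavior of an idealized neuron is straightforwardly computable.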

                • silent_water [she/her]@hexbear.net

                  it still remains to be proven that consciousness can be emulated on a Turing machine. that’s a huge open problem. you can assume it’s true, but your results are contingent on that assumption.

            • WideningGyro [any]@hexbear.net

              I zoned out on the consciousness debate around 2015, so forgive me if this stuff is now considered outdated, but as I recall those materialist theories of consciousness all run into the hard problem, right? I might be biased in one direction, but I feel like the fact that computational models can’t account for lived experience is a pretty good argument against them. Wouldn’t it just be more accurate to say that we’re missing a good theory of consciousness, at all?

        • VILenin [he/him]@hexbear.netOPM

          Nobody ever mentioned a “soul” in this conversation until you brought it up to use as an accusation.

          “Computers aren’t sentient” is not a religious belief no matter how hard you try to smear it as such.

          • oktherebuddy@hexbear.net

            It isn’t “Computers aren’t sentient”, nobody thinks computers are sentient except some weirdos. “Computers can’t be sentient”, which is what is under discussion, is a much stronger claim.

            • VILenin [he/him]@hexbear.netOPM

              The claim is that “computers can be sentient”. That is a strong claim and requires equally strong evidence. I’ve found the arguments in support of it lackluster and reductionist for reasons I’ve outlined in other comments. In fact, I find the idea that if we compute hard enough we get sentience borders on a religious belief in extra-physical properties being bestowed upon physical objects once they pass a certain threshold.

              There are people who argue that everything is conscious, even rocks, because everything is ultimately a mechanical process. The base argument is the same, but I have a feeling that most people here would suddenly disagree with them for some reason. Is it “creationism” to find such a hypothesis absurd, or is it vulgar materialism to think it’s correct? You seem to take offense at being called “reductionist” despite engaging in a textbook case of reductionism.

              This doesn’t mean you’re wrong, or that the rock-consciousness people are wrong, it’s just an observation. Any meaningful debate about sentience right now is going to be philosophical. If you want to be scientific the answer is “I don’t know”. I don’t pretend to equate philosophy with science.

              • oktherebuddy@hexbear.net

                Consciousness isn’t an extra-physical property. That’s the belief.

                I don’t take offense to being called reductionist, I take offense to reductionism being said pejoratively. Like how creationists say it. It’s obvious to me that going deeper, understanding the mechanisms behind things, makes them richer.

                The thing that makes your argument tricky is we do have evidence now. Computers are unambiguously exhibiting behaviors that resemble behaviors of conscious beings. I don’t think that makes them conscious at this time, any more than animals who exhibit interesting behavior, but it shows that this mechanism has legs. If you think LLMs are as good as AI is ever going to get that’s just really blinkered.

                • VILenin [he/him]@hexbear.netOPM

                  I think that AI will get better but its “base” will remain the same. Going deeper to understand the mechanisms is different from just going “it’s a mechanism”, which I see a lot of people doing. I think computers can very easily replicate human behaviors and emulate emotions.

                  Obviously creating something sentient is possible since brains evolved. And if we don’t kill ourselves I think it’s very possible that we’ll get there. But I think it will be very different to what we think of as a “computer” and the only similarities they might share could be being electrically powered.

                  At the end of the road we’ll just get to arguing about philosophical zombies and the discussion usually wraps up there.

                  I’d be very happy if it turned out that I’m completely wrong.

                  • oktherebuddy@hexbear.net

                    Okay I think we pretty much agree. I have been thinking about what the next “category” of thing is that might function as a substrate of consciousness. I do think that the software techniques people have come up with in AI research, run on “computers” though they may be, are different enough from what we ordinarily think of as computers (CPU, GPU, fast short-term memory, slow long-term memory, etc.) to be a distinct ontological category. And new hardware is being built to specifically accelerate the sort of operations used in those software techniques. I would accept these things being called something other than a computer, even though they could be simulated on a Turing machine or with boolean circuits, because as you’ve said that is of limited use - similar to saying that everything is a mechanistic physical process.

          • oktherebuddy@hexbear.net

            wow we can’t speculate about things that could exist, only things that do exist. this was written on a communist website btw

          • Saeculum [he/him, comrade/them]@hexbear.net

            By that way of reasoning, the replicants aren’t people because they are characters written by the author, same as any other.

            They are as much fiction as sentient machines are science fiction.

            • usernamesaredifficul [he/him]@hexbear.net

              ok sure, my point was that the authors aren’t making a point about the nature of machines informed by the limits of machines, and aren’t qualified to do so

              saying AI is people because of Data from Star Trek is like saying there are aliens because you saw a Vulcan on TV, in terms of relevance

              • Saeculum [he/him, comrade/them]@hexbear.net

                That’s fair, though the idea that AI is people because of Data from Star Trek isn’t inherently absurd. If a machine existed that demonstrated all the same capabilities and external phenomena as Data in real life, I would want it treated as a person.

                The authors might be delusional about the capabilities of their machine in particular, but in different physical circumstances to what’s most likely happening here, they wouldn’t be wrong.

                • DamarcusArt@lemmygrad.ml

                  Sorry to respond to this several-day-old comment, but I think there were quite a few episodes where Data’s personhood was directly called into question. It’s a tangential point, but I think it’s likely that even if we had a robotic Brent Spiner running around, people might still not be 100% convinced that he was truly sapient, and might consider it an incredibly complex Mechanical Turk-style trick. It really is hard to tell for sure, even if we did have a “living” AI to examine.

      • Yurt_Owl@hexbear.net

        Why is the concept of a spirit relevant? Computers and living beings share practically nothing in common

        • oktherebuddy@hexbear.net

          You speak very confidently about two things whose boundaries have shifted dramatically within the past few decades. I would also like to ask if you actually understand microbiology and how it works, or have even seen a video of ATP synthase in action.

          • VILenin [he/him]@hexbear.netOPM

            Love to see the “umm ackshually scientists keep changing their minds” card on hexbear dot net. Yes neuroscience could suddenly shift to entirely support your belief, but that’s not exactly a stellar argument. I’d love to know how ATP has literally anything to do with proving computational consciousness other than that ATP kind of sort of resembles a mechanical thing (because it is mechanical).

            Sentience as a physical property does not have to stem from the same processes. Everything in the universe is “mechanical”, so making that observation is meaningless. Everything is a “mechanism”, so everything has that in common. Reducing everything down to its very base definition instead of taking into account what kind of mechanism it is, is literally the very definition of reductionism. You have to look at the wider process that derives from the sum of its mechanical parts, because that’s where differences arise. Of course if you strip everything down to its foundation it’s going to be the same. Are a door and a movie camera the same thing because they both consist of parts that move?

            • oktherebuddy@hexbear.net

              I have no idea what you are trying to say. I think you agree consciousness must have a mechanistic/material base, and is some kind of emergent phenomenon, so we probably agree on whatever point you’re trying to make. Except I guess you think that even though it’s an emergent phenomenon of some mechanistic base, that mechanistic base can’t be non-biological. Which is weird.

              • VILenin [he/him]@hexbear.netOPM

                My argument has nothing to do with the fact that computers aren’t biological. I’m saying that the only blueprints for consciousness we have right now are brains. And decidedly not computers, which I have no reason to believe will become sentient if you extrapolate it for some reason. I don’t think the difference between computers and brains is biological, it’s just a difference. If you replicated an entire brain I think it would be sentient even though it wouldn’t be strictly “biological”. I guess you could call that a computer, but then you’re veering into semantics. I’m referring to computers strictly in the way that they are currently built.

                I think there’s a mechanistic road to sentience, but we know vanishingly little about it. But I think we know more than enough to conclude that computers, as they operate today, will struggle to be anything more than a crude analogy. My point is that artificial sentience needs to be more than just “a mechanism”, because literally everything in the universe is a mechanism. It needs to be a certain kind of mechanism that we don’t understand yet.

        • daisy@hexbear.net

          Let’s assume for the moment that there’s no such thing as a spirit/soul/ghost/etc. in human beings and other animals, and that everything that makes me “me” is inside my body. If this is the case, computers and living brains do have something fundamental in common. They are both made of matter that obeys the laws of physics. As far as we know, there’s no such thing as “living” quarks and electrons that are distinct from “non-living” quarks and electrons.

            • daisy@hexbear.net

              I’m having a hard time understanding your reasoning and perspective on this. My interpretation of your comments is that you believe biological intelligence is a special phenomenon that cannot be understood by the scientific method. If I’m in error, I’d welcome a correction.

              • VILenin [he/him]@hexbear.netOPM

                Biological intelligence is currently not understood. This has nothing to do with distinguishing between “living” and “non-living” matter. Brains and suitcases are also both made of matter. It’s a meaningless observation.

                The question is what causes sentience. Arguing that brains are computers because they’re both made of matter is a non-sequitur. We don’t even know what mechanism causes sentience so there’s no point in even beginning to make comparisons to a separate mechanism. It plays into a trend of equating the current most popular technology to the brain. There was no basis for it then, and there’s no basis for it now.

                Nobody here is arguing about what the brain is made of.

          • silent_water [she/her]@hexbear.net

            this argument fails because you’ve presupposed that the fundamental model of computation maps neatly onto the emergent processes conducted by brains. that we only have a single model for information processing right now does not mean that only one exists. this is an unsolved problem - you can suppose it’s true but that doesn’t mean the rest of your argument follows. the supposition requires proof.

      • UlyssesT [he/him]@hexbear.net

        Please stop doing the heavy lifting for LLM tech companies by implying that any rejection of the “AI” labeling of their products is faith healing, crystal touching, and New Age thinking.

        It is possible, and much more likely, that organic brains can be fully understood eventually but that imitating a performatively loud portion of what those organic brains seem to do with LLMs is not the same thing as a linear replication of the entire process.

      • silent_water [she/her]@hexbear.net

        saying meat computers implies that the computation model fits. it’s an ontological assumption that requires evidence. this trend of assuming every complex processes is computation blinds us. are chemical processes computation? sometimes and sometimes not! you can’t assume that they are and expect to get very far. processing information isn’t adequate evidence for the claim.

    • CannotSleep420@lemmygrad.ml

      stochastic parrots

      I could have sworn that the whole point of that paper was to point out that LLMs aren’t actually intelligent, not that human intelligence is basically an LLM.
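[Editor’s note: the parrot metaphor is fairly literal. My own toy sketch below, drastically smaller than any LLM, shows a bigram model that only ever re-emits statistical patterns from its training text, with no model of meaning underneath.]

```python
# A "stochastic parrot" in miniature: a bigram model that samples
# each next word from the words observed to follow the current one.
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words that follow it in the text."""
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def parrot(model, start, length=8, seed=0):
    """Emit up to `length` words by repeatedly sampling a follower."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the word never had a follower in training
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(parrot(model, "the"))
```

Every word pair the sketch emits appeared verbatim in its training data, which is the sense in which such a system is a parrot; whether that characterization still fits at LLM scale is the debate above.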

      • dat_math [they/them]@hexbear.net

        I could have sworn that the whole point of that paper was to point out that LLMs aren’t actually intelligent, not that human intelligence is basically an LLM.

        big same