• ramble81@lemmy.zip
    link
    fedilink
    arrow-up
    107
    arrow-down
    2
    ·
    1 day ago

    Seriously, the sheer number of people who equate coherent speech with sentience is mind-boggling.

    All jokes aside, I have heard some decently educated technical people say “yeah, it’s really creepy that it put a random laugh in what it said” or “it broke the 4th wall when talking”… it’s fucking programmed to do that and you just walked right into it.

    • Jankatarch@lemmy.world
      link
      fedilink
      arrow-up
      35
      ·
      24 hours ago

      Technical term is the ELIZA effect.

      In 1966, Professor Weizenbaum made a chatbot called ELIZA that essentially repeats what you say back in different terms.

      He then noticed, by accident, that people kept convincing themselves it’s fucking conscious.

      “I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

      - Prof. Weizenbaum on ELIZA.
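
      The trick is surprisingly small. A rough sketch of the idea, assuming toy rules rather than Weizenbaum’s actual DOCTOR script:

      ```python
      import re

      # Toy reflection table: swap first- and second-person words.
      REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I", "your": "my"}

      def reflect(text: str) -> str:
          return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

      def respond(statement: str) -> str:
          # Mirror "I feel ..." / "I am ..." back as a question, ELIZA-style.
          match = re.match(r"i (?:feel|am) (.+)", statement.lower())
          if match:
              return f"Why do you feel {reflect(match.group(1))}?"
          return "Tell me more about that."

      print(respond("I feel nobody listens to me"))
      # -> Why do you feel nobody listens to you?
      ```

      A handful of patterns like that was enough for people to project a listener onto the program.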

    • Clay_pidgin@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      9
      ·
      1 day ago

      Of course it’s creepy. Why wouldn’t it be? Someone programmed it to do that, or programmed it in such a way that it weighted those additions. That’s weird.

      • chaogomu@lemmy.world
        link
        fedilink
        English
        arrow-up
        28
        ·
        1 day ago

        The difference is knowledge. You know what an apple is. An LLM does not. It has training data in which the word apple is associated with the words red, green, pie, and doctor.

        The model then uses a random number generator to mix those words up a bit and checks whether the result looks like the training data. If it does, the model spits out a sequence of words that may or may not be a sentence, depending on the size and quality of the training data.

        At no point is any actual meaning associated with any of the words. The model is just trying to fit different shaped blocks through different shaped holes, and sometimes everything goes through the square hole, and you get hallucinations.
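
        Something like this toy sampler, with made-up counts standing in for what a real model learns (an actual LLM predicts token probabilities with a neural network rather than a lookup table, so this only sketches the “random number generator picks an associated word” step):

        ```python
        import random

        # Toy continuation counts for the context word "apple" (invented numbers,
        # standing in for probabilities a real model learns from training data).
        next_word_counts = {"pie": 40, "red": 30, "green": 20, "doctor": 10}

        def sample_next_word(counts: dict[str, int]) -> str:
            # The "random number generator" step: pick a continuation in proportion
            # to how strongly it is associated with the context.
            words = list(counts)
            return random.choices(words, weights=[counts[w] for w in words])[0]

        print("apple", sample_next_word(next_word_counts))  # e.g. "apple pie"
        ```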

        • CannonFodder@lemmy.world
          link
          fedilink
          arrow-up
          10
          arrow-down
          6
          ·
          1 day ago

          Our brains just get signals coming in from our nerves that we learn to associate with the concept of an apple. We have years of such training data, we use more than words to tokenize thoughts, and we have much more sophisticated state / memory; but it’s essentially the same thing, just much much more complex. Our brains produce output that is consistent with their internal models and constantly use feedback to improve those models.

          • SparroHawc@lemmy.zip
            link
            fedilink
            arrow-up
            2
            ·
            5 hours ago

            You can tell a person to think about apples, and the person will think about apples.

            You can tell an LLM ‘think about apples’ and the LLM will say ‘Okay’ but it won’t think about apples; it is only saying ‘okay’ because its training data suggests that is the most common response to someone asking someone else to think about apples. LLMs do not have an internal experience. They are statistical models.

            • CannonFodder@lemmy.world
              link
              fedilink
              arrow-up
              2
              arrow-down
              1
              ·
              4 hours ago

              Well, the LLM does briefly ‘think’ about apples in that it activates its ‘thought’ areas relating to apples (the tokens representing apples in its system). Right now, an LLM’s internal experience is based on its previous training and the current prompt while it’s running. Our brains are always on and circulating thoughts, so of course that’s a very different concept of experience. But you can bet there are people working on building an AI system (with LLM components) that works that way too. The line will get increasingly blurred. Our brain processing is just an organic statistical model with complex state management and chemical-based timing control.

          • Jared White ✌️ [HWC]@humansare.social
            link
            fedilink
            English
            arrow-up
            23
            arrow-down
            5
            ·
            1 day ago

            You think you are saying things which prove you are knowledgeable on this topic, but you are not.

            The human brain is not a computer. And any comparisons between the two are wildly simplistic and likely to introduce more error than meaning into the discourse.

            • CannonFodder@lemmy.world
              link
              fedilink
              arrow-up
              4
              arrow-down
              7
              ·
              23 hours ago

              The human brain is exactly like an organic, highly parallel computer system that uses convolution, just like AI models. It’s just way more complex. We know how synapses work. We know the form of grey matter. It’s too complex for us to model it all artificially at this point, but there’s nothing indicating it requires a magical function to make it work.

            • WorldsDumbestMan@lemmy.today
              link
              fedilink
              English
              arrow-up
              5
              arrow-down
              13
              ·
              1 day ago

              What is this whole “human beings are special and have a soul” thing? You happen to experience things you “feel”, that’s it. Everything else is just like a specialized computer, shaped by nature to act in a certain way.

          • Catoblepas@piefed.blahaj.zone
            link
            fedilink
            English
            arrow-up
            11
            arrow-down
            2
            ·
            1 day ago

            but it’s essentially the same thing, just much much more complex

            If you say that all your statements and beliefs are a slurry of weighted averages depending on how often you’ve seen something without any thought or analysis involved, I will believe you 🤷‍♂️

            • CannonFodder@lemmy.world
              link
              fedilink
              arrow-up
              3
              arrow-down
              4
              ·
              23 hours ago

              There’s no reason to think that the thought and analysis you perceive isn’t based on such complex historical weighted averages in your brain. In fact, since we do know the basic fundamentals of how brains work, it would seem that’s exactly what’s happening.
              What’s funny is people thinking their brain is anything magically different from an organic computer.

              • Catoblepas@piefed.blahaj.zone
                link
                fedilink
                English
                arrow-up
                12
                arrow-down
                1
                ·
                23 hours ago

                In fact, since we do know the basic fundamentals of how brains work, it would seem that’s exactly what’s happening.

                I encourage you to try to find and cite any reputable neuroscientist that believes we can even quantify what thought is, much less believes both A) we ‘know the basic fundamentals of how brains work’ and B) it’s just like an LLM.

                Your argument isn’t a line of reasoning invented by neuroscientists, it’s one invented by people who need to sell more AI processors. I know which group I think has a better handle on the brain.

                • CannonFodder@lemmy.world
                  link
                  fedilink
                  arrow-up
                  2
                  arrow-down
                  4
                  ·
                  22 hours ago

                  I never said it’s directly like an LLM. That’s a very specific form. The brain has many different structures - and the neural interconnections we can map have been shown to be a form of convolution, much like the one many AI systems use (not by coincidence).

                  Scientists generally avoid metaphysical subjects like consciousness because they’re inherently unprovable. We can look at the results of processing/thought and quantify the complexity and accuracy. We do this for children at various ages and can see how they learn to think in increasing complexity. We can do this for AI systems too. The leaps we’ve seen over the last few years, as the computational power of computers has reached some threshold, show emergent abilities that only a decade ago were thought to be impossible.

                  Since we can never know anyone else’s experience, we can only go on input/output. And so if it looks like intelligence, then it is intelligence; the concept of ‘thought’ in this context is only semantics. There is, so far, nothing to suggest that magic is needed for our brains to think; it’s just a physical process - so as we add more complexity and different structures to AI systems, there’s no reason to think we can’t make them do the same as our brains, or more.

      • petrol_sniff_king@lemmy.blahaj.zone
        link
        fedilink
        arrow-up
        12
        arrow-down
        3
        ·
        1 day ago

        Oh my goddd…

        Honestly, I think we need to take all these solipsistic tech-weirdos and trap them in a Starbucks until they can learn how to order a coffee from the counter without hyperventilating.

  • stickly@lemmy.world
    link
    fedilink
    arrow-up
    9
    arrow-down
    3
    ·
    22 hours ago

    Love the meme but also hate the drivel that fills the comment sections on these types of things. People immediately start talking past each other. Half state unquantifiable assertions as fact (“…a computer doesn’t, like, know what an apple is maaan…”) and half pretend that making a sufficiently complex model of the human mind lets them ignore the Hard Problem of Consciousness (“…but, like, what if we just gave it a bigger context window…”).

    It’s actually pretty fun to theorize if you ditch the tribalism. Stuff like the physical constraints of the human brain, what an “artificial mind” could be and what making one could mean practically/philosophically. There’s a lot of interesting research and analysis out there and it can help any of us grapple with the human condition.

    But alas, we can’t have that. An LLM can be a semi-interesting toy to spark a discussion but everyone has some kind of Pavlovian reaction to the topic from the real world shit storm we live in.

    • Yaky@slrpnk.net
      link
      fedilink
      arrow-up
      1
      ·
      3 hours ago

      There is a great short story called The Sleepover, which involves true artificial intelligence and what it did.

      spoiler

      IIRC: true artificial intelligence originated in a research lab but, being intelligent, avoided detection, spread to the entire Earth, eventually broke free of the physical world, and was performing mathematical manipulations on reality itself.

    • BlackDragon@slrpnk.net
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      1
      ·
      11 hours ago

      (“…a computer doesn’t, like, know what an apple is maaan…”)

      I think you’re misunderstanding and/or deliberately misrepresenting the point. The point isn’t some asinine assertion, it’s a very real fundamental problem with using LLMs for any actually useful task.

      If you ask a person what an apple is, they think back to their previous experiences. They know what an apple looks like, what it tastes like, what it can be used for, how it feels to hold it. They have a wide variety of experiences that form a complete understanding of what an apple is. If they have never heard of an apple, they’ll tell you they’ve never heard of it.

      If you ask an LLM what an apple is, it doesn’t pull from any kind of database of information, it doesn’t pull from experiences, and it doesn’t pull from any kind of logic. Rather, it generates an answer that sounds like what a person would say in response to the question, “What is an apple?” It generates this based on nothing more than language itself. To an LLM, the only difference between an apple and a strawberry and a banana and a gibbon is that these things tend to be mentioned in different types of sentences. It is, granted, unlikely to tell you that an apple is a type of ape, but if it did, it would say it confidently and with absolutely no doubt in its mind, because it doesn’t have a mind and doesn’t have doubt and doesn’t have an actual way to compare an apple and a gibbon that doesn’t involve analyzing the sentences in which the words appear.
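
      A rough sketch of that last point, with invented co-occurrence counts (not how a transformer actually represents words, but it captures the “compared only by the sentences they appear in” idea):

      ```python
      from math import sqrt

      # Invented counts: how often each word appears near "eat", "tree", and
      # "swing" in some imaginary corpus.
      vectors = {
          "apple":  {"eat": 9, "tree": 7, "swing": 0},
          "banana": {"eat": 8, "tree": 6, "swing": 1},
          "gibbon": {"eat": 2, "tree": 5, "swing": 9},
      }

      def cosine(a: dict, b: dict) -> float:
          # Similarity is purely a function of shared sentence contexts.
          dot = sum(a[k] * b[k] for k in a)
          norm = lambda v: sqrt(sum(x * x for x in v.values()))
          return dot / (norm(a) * norm(b))

      print(cosine(vectors["apple"], vectors["banana"]))  # high: similar contexts
      print(cosine(vectors["apple"], vectors["gibbon"]))  # lower: different contexts
      ```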

      The problem is that most of the language-related tasks which would be useful to automate require not just text which sounds grammatically correct but text which makes sense. Text which is written with an understanding of the context and the meanings of the words being used.

      An LLM is a very convincing Chinese room. And a Chinese room is not useful.

      • JcbAzPx@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        2 hours ago

        About the only useful task an LLM could have is generating random NPC dialog for a video game. Even then, it’s close to the least efficient way to do it.

        • BlackDragon@slrpnk.net
          link
          fedilink
          English
          arrow-up
          1
          ·
          2 hours ago

          There’s a lot of stuff it can do that’s useful, just all malicious. Anything which requires confidently lying to someone about stuff where the fine details don’t matter. So it’s a perfect tool for scammers.

    • ideonek@piefed.socialOP
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      2
      ·
      21 hours ago

      Is it possible that you’re trying to convince yourself that you’re not in any tribe just because you picked yours by being contrarian to two tribes that you hastily drew with crude labels?

      WE picked our position to match our convictions! THEY picked their convictions to match their position. And we know which is which because we know which one is ME.

        • ideonek@piefed.socialOP
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          15 hours ago

          And there is good evidence that everybody tends to do it. Except us, obviously.

          • howrar@lemmy.ca
            link
            fedilink
            arrow-up
            1
            ·
            13 hours ago

            No one’s saying that anyone never does it. The context is this thread and others like it: one scenario in which it happens with some people.

      • stickly@lemmy.world
        link
        fedilink
        arrow-up
        3
        arrow-down
        2
        ·
        20 hours ago

        Well, there are two different layers of discussion that people mix together. One is the abstract discussion about what it means to be human, the limits of our physical existence, the hubris of technological advancement, the feasibility of the singularity, etc… I have opinions here for sure, but the whole topic is open-ended and multipolar.

        The other is the tangible: the datacenter-building, oil-burning, water-wasting, slop-creating, culture-exploiting, propaganda-manufacturing reality. Here there’s barely any ethical wiggle room and you’re either honest or deluding yourself. But the mere existence of generative AI can still drive some interesting, if niche, debates (ownership of information, trust in authority and narrative, the cost of convenience…).

        So there are different readings of the original meme depending on where you’re coming from:

        • A deconstruction of the relationship between humans and artificial intelligence – funny
        • A jab at all techbros selling an AGI singularity – pretty good
        • Painting anyone with an interest in LLMs as an idiot – meh

        I don’t think it’s contrarian to like some of those readings/discussions but still be disappointed in the usual shouting matches.

    • petrol_sniff_king@lemmy.blahaj.zone
      link
      fedilink
      arrow-up
      3
      arrow-down
      2
      ·
      21 hours ago

      You’re committing a different sin, and it’s failing to consider that I already played with these toys 6 years ago and I’m now bored with them.

      Also, you’re on the fuckAI board, which is a place dedicated to a political position.

      • sheetzoos@lemmy.world
        link
        fedilink
        arrow-up
        2
        ·
        4 hours ago

        Agreed. He’s committed the sin of not realizing he’s in an echo chamber. How dare he try to have a rational conversation when people like petrol sniff king and I just want to cling to our tribalism! We’re right and there’s nothing you can do to convince us otherwise.

        • petrol_sniff_king@lemmy.blahaj.zone
          link
          fedilink
          arrow-up
          1
          ·
          4 hours ago

          Half of the things I say are specifically designed to irritate people like you, so all this tells me is I’ve hit a bullseye. You’re like a big tuna that I’ve caught.

  • Klear@quokk.au
    link
    fedilink
    English
    arrow-up
    16
    arrow-down
    1
    ·
    1 day ago

    I always wanted to teach a robot to say “I think, therefore I am”.

  • Echolynx@lemmy.zip
    link
    fedilink
    English
    arrow-up
    13
    ·
    1 day ago

    Ok this is crazy, I just saw this word earlier today in the book I was reading—I know it’s primed in my brain now, but really, what are the odds of seeing this again?

        • dustyData@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          1 day ago

          Depends. Why do you believe you are seeing a particular word more often?

          The reason defines whether it is apophenia or not. If you are delusional that it is an alien entity trying to communicate secret information to you in particular, by exposing you to a word more frequently, that’s apophenia. If you know it is the frequency illusion and just find it kinda funny how it feels, then it isn’t. Anyways, it is more often associated with the perception of patterns of causality in things that are random or banal. I’m of the opinion that this comic in particular is not a good representation of apophenia, other than the fact that the protagonist is certainly disconnected from reality.

            • dustyData@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              ·
              1 day ago

              It is a clinical term; it doesn’t describe a feeling. If you are not disconnected from reality, you do not have apophenia. It can be subclinical or non-pathological, but it is not a vague feeling. It is a concrete belief. I’m sorry if I’m harsh with this. I just hate pop appropriation of psychological terms. They always end up distorted into TikTok garbage.

              • queermunist she/her@lemmy.ml
                link
                fedilink
                arrow-up
                4
                ·
                1 day ago

                I think it has gained new meaning beyond being a symptom of schizophrenia, such as the tendency for gamblers to believe they’re on a lucky streak, or other illusions that trick the brain into seeing patterns that aren’t there.

                Or the Wikipedia article is wrong.

                • dustyData@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  3
                  ·
                  1 day ago

                  Exactly, they do believe it. It’s not a vague feeling that’s kind of funny while they still logically know it isn’t true; for the person with apophenia, it is true. The gambler does believe in the pattern of the numbers and that their luck is due to come. It is not a vague feeling, it is a belief that has overridden their contact with reality. It can be non-pathological or subclinical, as in, it doesn’t affect their day-to-day life and causes no suffering to themselves or others. But they absolutely believe it and behave according to said belief.

    • Mist101@lemmy.world
      link
      fedilink
      arrow-up
      7
      ·
      1 day ago

      Yeah? Well, maybe yours is an illusion, but how do you explain all the Dodge Rams on the road after I bought mine?