• Chozo@fedia.io · 1 day ago

      Read about how LLMs actually work before you read articles written by people who don’t understand LLMs. The author of this piece is making arguments that imply LLMs have cognition. “Lying” requires intent, and LLMs have no intentions; they only have instructions. The author would have you believe that these LLMs are faulty or unreliable, when in actuality they’re working exactly as they’ve been designed to.

      • WanderingThoughts@europe.pub · 1 day ago

        as they’ve been designed to

        Well, “designed” is maybe too strong a term. It’s more like stumbling on something that works and expanding from there. It’s all still built on the foundations of the nonsense generator that was GPT-2.

        • FaceDeer@fedia.io · 1 day ago

          Given how dramatically LLMs have improved over the past couple of years I think it’s pretty clear at this point that AI trainers do know something of what they’re doing and aren’t just randomly stumbling around.

          • WanderingThoughts@europe.pub · 1 day ago

            A lot of the improvement came from finding ways to make it bigger and more efficient. That approach is running into its inherent limits, so the real work on other models has only just started.

            • Natanael@infosec.pub · 1 day ago

              And from reinforcement learning (specifically, making it repeat tasks where the answer can be computer-checked).
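              The “computer-checked” loop described above can be sketched in a few lines. Everything here (the function names, the toy addition task, the noisy sampler) is a hypothetical illustration of the idea, not any lab’s actual training pipeline:

```python
# Hypothetical sketch of reinforcement learning with verifiable rewards:
# sample candidate answers, score each one with an automatic verifier,
# and collect (answer, reward) pairs as the training signal.
import random

def verifier(question: tuple, answer: int) -> float:
    """Reward 1.0 if the answer passes the programmatic check, else 0.0."""
    a, b = question
    return 1.0 if answer == a + b else 0.0

def sample_answers(question: tuple, n: int = 5) -> list:
    """Stand-in for model sampling: noisy guesses around the true sum."""
    a, b = question
    return [a + b + random.choice([-1, 0, 0, 1]) for _ in range(n)]

def score_rollouts(question: tuple) -> list:
    """One batch of rollouts: (answer, reward) pairs for the trainer."""
    return [(ans, verifier(question, ans)) for ans in sample_answers(question)]

rollouts = score_rollouts((2, 3))
print(rollouts)  # guesses of 5 earn reward 1.0, everything else 0.0
```

              In real systems the verifier is something like a unit-test runner, a math checker, or a compiler rather than toy addition, but the shape of the loop is the same.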

      • thedruid@lemmy.world · 1 day ago

        So working as designed means presenting false info?

        Look, no one is ascribing intelligence or intent to the machine. The issue is that the machines aren’t very good and are being marketed as awesome. They aren’t.

        • Chozo@fedia.io · 1 day ago

          So working as designed means presenting false info?

          Yes. It was told to conduct a task. It did so. What part of that seems unintentional to you?

          • thedruid@lemmy.world · edited · 1 day ago

            That’s not completing a task. That’s faking a result for appearance.

            Is that what you’re advocating for?

            If I ask an LLM to tell me the difference between Aeolian mode and Dorian mode in the field of music, and it gives me the wrong info, then no, it’s not working as intended.

            See, I chose that example because I know the answer. The LLM didn’t. But it gave me an answer. An incorrect one.

            I want you to understand this. You’re fighting the wrong battle. The LLMs do make mistakes. Frequently. So frequently that any human who made the same amount of mistakes wouldn’t keep their job.

            But the investment, the belief in AI, is so ingrained for some of us who so want a bright and technically advanced future that you are now making excuses for it. I get it. I’m not insulting you. We are humans. We do that. There are subjects I am sure you could point at where I do this as well.

            But AI? No. It’s just wrong so often. It’s not its fault. Who knew that when we tried to jump ahead in the tech timeline, we should have actually invented guardrail tech first?

            Instead we let the cart go before the horse, AGAIN, because we are dumb creatures, and now people are trying to force things that don’t work correctly to somehow be shown to be correct.

            I know. A mouthful. But honestly, AI is poorly designed, poorly executed, and poorly used.

            It is hastening the end of man, because those who have been singing its praises are too invested to admit it.

            It simply ain’t ready.

            Edit: changed “would” to “wouldn’t”

            • Chozo@fedia.io · 1 day ago

              That’s not completing a task.

              That’s faking a result for appearance.

              That was the task.

              • thedruid@lemmy.world · 1 day ago

                No, the task was to tell me the difference between the two modes.

                It provided incorrect information and passed it off as accurate. It didn’t complete the task.

                You know that, though. You’re just too invested to admit it. So I will withdraw. Enjoy your day.

                  • thedruid@lemmy.world · 1 day ago

                    No. It gave the wrong answer, therefore it didn’t complete the task. It gave the wrong answer. Task incomplete.

                    That’s literally how a task works.

    • catloaf@lemm.ee · 1 day ago

      I’ve read the article. If there is any dishonesty, it is on the part of the model creator or LLM operator.

    • gravitas_deficiency@sh.itjust.works · 1 day ago

      You need to understand that lemmy has a lot of users that actually understand neural networks and the nuanced mechanics of machine learning FAR better than the average layperson.

      • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 @pawb.social · edited · 1 day ago

        It’s just semantics in this case. Catloaf’s argument is entirely centered around the definition of the word “lie,” and while I agree with that, most people will understand the intent behind the usage in the context it is being used in. AI does not tell the truth. AI is not necessarily accurate. AI “lies.”

        • snooggums@lemmy.world · 1 day ago

          AI returns incorrect results.

          In this case semantics matter, because using terms like hallucinations, lies, honesty, and all the other anthropomorphic bullshit is designed to make people think neural networks are far more advanced than they actually are.

          • FaceDeer@fedia.io · 1 day ago

            It’s not “anthropomorphic bullshit”, it’s technical jargon that you’re not understanding because you’re applying the wrong context to the definitions. AI researchers use terms like “hallucination” to mean specific AI behaviours, they use it in their scientific papers all the time.

          • thedruid@lemmy.world · 1 day ago

            No. It’s to make people who don’t understand LLMs be cautious in placing their trust in them. To communicate that clearly, language that is understandable to people who don’t understand LLMs needs to be used.

            I can’t believe this is the supposed high level of discourse on Lemmy.

            • FreedomAdvocate@lemmy.net.au · edited · 1 day ago

              I can’t believe this is the supposed high level of discourse on Lemmy

              Lemmy users and AI have a lot of things in common, like being confidently incorrect and making things up to further their point. AI at least agrees and apologises when you point out that it’s wrong, it doesn’t double down and cry to the mods to get you banned.

              • thedruid@lemmy.world · 1 day ago

                I know. It would be a lot better world if AI apologists could just admit they are wrong.

                But nah. They’re better than others.

      • FreedomAdvocate@lemmy.net.au · 1 day ago

        As someone on Lemmy, I have to disagree. A lot of people claim they do and pretend they do, but they generally don’t. They’re like AI, tbh. Confidently incorrect a lot of the time.

        • TheGrandNagus@lemmy.world · 1 day ago

          People frequently act like Lemmy users are different to Reddit users, but that really isn’t the case. People act the same here as they did/do there.

      • venusaur@lemmy.world · 1 day ago

        And A LOT of people who don’t and blindly hate AI because of posts like this.

      • thedruid@lemmy.world · 1 day ago

        That’s a huge, arrogant and quite insulting statement. You’re making assumptions based on stereotypes.

          • thedruid@lemmy.world · 17 hours ago

            No. You’re mad at someone who isn’t buying that AIs are anything but a cool parlor trick that isn’t ready for prime time.

            Because that’s all I’m saying. They are wrong more often than right. They do not complete tasks given to them, and they really are garbage.

            Now this is all regarding the publicly available AIs. Whatever new secret voodoo someone or the military has, I can’t speak to.

            • gravitas_deficiency@sh.itjust.works · 17 hours ago

              Uh, just to be clear, I think “AI” and LLMs/codegen/imagegen/vidgen in particular are absolute cancer, and are often snake oil bullshit, as well as being meaningfully societally harmful in a lot of ways.

          • thedruid@lemmy.world · 1 day ago

            You’re just as bad.

            Let’s focus on a spell-check issue.

            That’s why we have Trump.