• irmoz@lemmy.world · 2 days ago

    It’s not an “idea about how it works”. It is how it works.

    Can you not just admit you learned something here? Or do you just have to argue with everything to try and appear right?

    What’s wrong with, “oh, I didn’t know that. How interesting!”

    • can@sh.itjust.works (mod) · 2 days ago

      I think it was the part about how the training data gets poisoned that was the interesting idea.

      It is also the reality we’re living in, however.

      • irmoz@lemmy.world · 2 days ago

        Based on their behaviour, I’m not so sure. It seemed to me to be a way of saying “that’s maybe not true, but it’s fun to think about”. At least, that’s how I’d use the phrase “that’s an interesting idea”. If I just found it interesting, I’d say “how interesting!”

        But yes, it is indeed fascinating how LLMs work.

      • irmoz@lemmy.world · 2 days ago

        Let’s rewind to before that desperate (and likely spontaneous) accusation, and I’ll give you another chance to reply in a normal manner.

        No deflection. Just admit you didn’t know LLMs scrape social media. That’s all. It’s okay; we don’t come into this world with all of its knowledge.

        • rainrain@sh.itjust.works (banned from community, OP) · 2 days ago

          I actually did have a vague idea in that general direction.

          But that’s rather beside my point. I mean, the AI definitely offered these answers. The answers are definitely gender biased. Offering that it’s merely an artifact of the LLM technology is definitely a terrible excuse for that.

          And given that LLMs are well known to be tweaked to align better with the philosophical styles of the hour, doubly so.

          • irmoz@lemmy.world · 2 days ago

            So, once again, you double down. No, you obviously didn’t know that, and you clearly still don’t actually understand it, since you’re claiming it was engineered to push a narrative (what narrative, you won’t say, but I bet it rhymes with bliss gandry).

            Lastly, it’s not an “excuse”. It’s an explanation. Calling it an “excuse” is just another attempt to deflect the answer and avoid being wrong.

            • rainrain@sh.itjust.works (banned from community, OP) · 2 days ago

              Well, given that the bias is generally considered a bad thing, explanations absolving them of responsibility for the badness are generally called “excuses”.

              • irmoz@lemmy.world · 2 days ago

                It’s not absolving them of responsibility. They’re responsible for the data they train on - it doesn’t need to include social media.