• Telorand@reddthat.com · 3 months ago

      I mean, Alex Jones has more skin in the grift than most conspiracy theorists, so he’s not likely to do a 180 quickly, if at all. Also, it seems like he’s been drunk more often on the latest episodes, so maybe he’s having an existential crisis started by being fact-checked in real time by a robot.

      We can’t know what his internal state is, but I do agree that it does not seem to have slowed his pace at all on the surface.

  • Fizz@lemmy.nz · 3 months ago

    No, AI can’t, because no one believes a word they say. There are so many guardrails in place that talking to an AI chatbot feels like talking to corporate HR.

    • Letsdothis@lemmy.world · 3 months ago

      Yeah, I feel like trusting ai is going to lead people down dangerously convincing rabbit holes

  • nullboi@lemmy.world · 3 months ago

    A surprising number of the conspiracy theories I’ve heard in the past year or so involve AI in some way.

    Yesterday a friend and I were talking and he said the government was using AI to hack his brain.

    I don’t think a chat bot is going to help that situation.

  • CondensedPossum@lemmy.world · 3 months ago

    Pretty funny to posit that an LLM chatbot ought to talk us out of conspiratorial thinking while running on a corporate GPU farm absolutely BLASTING through electricity and through copyright and IP violations, because it’s legally convenient for the powerful. Please post more thought-provoking unreasonable propaganda.

    • Deceptichum@quokk.au · 3 months ago

      Huh that’s funny, because I run a local LLM even on my laptop.

      And fuck yes, I love IP violations. Makes me want to go pirate some media and draw fan art.

      Please post some more ignorant rage.

      • Womble@lemmy.world · 3 months ago

        It’s wild how some people’s blind hatred of gen AI has them thinking “corporate control of culture is good, actually.”

            • Deceptichum@quokk.au · 3 months ago

              Uh yes it does.

              I’ve let the corporations spend the time, money, and resources to train a model.

              They get zero benefit when I run it locally. I get all the benefit.

              • WereCat@lemmy.world · 3 months ago

                The point I’m trying to make, in my first response to CondensedPossum, is that you’re still running a corporate LLM, with its biases.

  • yesman@lemmy.world · 3 months ago

    If the AI wanted to talk me out of conspiracy theories, why don’t they just use the brain signals to control us into thinking that way? Did the microwaves from the circuits behind the walls all go out of service all of a sudden?

    This is just classic Silicon Valley trying to “innovate,” when their real plan is to muscle CIA and FBI work out to non-union contractors.

  • tal@lemmy.today · 3 months ago

    I guess this is all part of the social-science side of chatbots and something to keep an eye on, and folks have to start somewhere… but I kind of feel that the technology isn’t really at the point where teaching people in general with a chatbot is an ideal solution.

  • nyan@lemmy.cafe · 3 months ago

    AI is a conspiracy theory—companies are just hiring people in lower-income countries to impersonate machines!

    (/s, of course, but with just enough truth to it that there’s probably someone somewhere out there who thinks the above statement is plausible.)

  • Daemon Silverstein · 3 months ago

    Interestingly enough, there’s an AI experiment focused on (trying to) debunk conspiracy theories. The article was posted here on [email protected]

    Edit: the cover image of the “Can AI talk us out of conspiracy theory rabbit holes?” article misleadingly associates conspiracy theories with occult, pagan, and esoteric concepts, using symbols found in the esoteric field (such as the eyed hand, alchemical symbols for planets and stars, etc.). I’m a pagan myself. Religious intolerance is a thing that harms minority religions, and the article sadly helps to spread this intolerance.

    The occult, pagan, and esoteric have nothing to do with conspiracy theories; they’re belief systems, religions, spiritual practices and views. Religions such as Luciferianism and Wicca are often attacked by Christians (with moralist speech such as “you worship Satan, you worship demons, you’re evil, repent”; let’s not forget what the church did to “witches” some centuries ago). I’m not attacking Christianity here (I was a Christian once), but it’s a reality: pagan beliefs such as mine (I’m somewhat Luciferian and Thelemite in a syncretic way) are often attacked, and such an article does harm to pagan beliefs. Pagans don’t spread conspiracy theories.

  • Ilovethebomb@lemm.ee · 3 months ago

    This is the first time in a long while I’ve heard of a use case for AI that is genuinely useful.

    It’s a job very few people would want to do, and AI can do it as well as, if not better than, a human.

    I wish them luck.