• chaogomu@kbin.social · 6 months ago

    The problem is, you can’t trust ChatGPT to not lie to you.

    And since generative AI is now being used all over the place, you just can’t trust anything unless you know damn well that a human entered the info, and then that’s a coin flip.

    • Lmaydev@programming.dev · 6 months ago

      The newer ones search the internet and generate answers from the results rather than from their training data, and they provide sources.

      So that’s less of a worry now.

      Anyone who used ChatGPT as a source of facts rather than as a text generator was always using it wrong.

      • BakerBagel@midwest.social · 6 months ago

        Except people are using LLMs to generate web pages just to get clicks, which means LLMs are now training on information generated by other LLMs. It’s an ouroboros of fake information.

        • Lmaydev@programming.dev · edited · 6 months ago

          But again, if you use an LLM’s ability to understand and generate text through a search engine, that doesn’t matter.

          LLMs are not supposed to give factual answers; that’s not their purpose at all.

    • notapantsday@feddit.de · 6 months ago

      However, I find it much easier to check whether a given answer is correct than to have to find the answer myself.