I’m sure this is a common topic, but things are moving pretty fast these days.

With bots looking more human than ever, I’m wondering what’s going to happen once everyone starts using them to spam the platform. Lemmy, with its simple username/text layout, seems to offer the perfect ground for bots: verifying that someone is real means scrolling through all of their comments and reading them carefully, one by one.
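
For what it’s worth, here’s a rough sketch of pulling someone’s recent comment history in one request instead of paging through their profile by hand. It assumes the standard Lemmy v3 HTTP API (GET /api/v3/user); the instance and username below are just placeholders, and the exact response shape may vary between Lemmy versions.

```python
# Rough sketch: fetch a user's recent comments from a Lemmy instance so they
# can be skimmed in one place. Assumes the v3 API's GET /api/v3/user endpoint,
# which returns the person's recent comments alongside their profile.
import json
import urllib.parse
import urllib.request

INSTANCE = "https://lemmy.ml"  # placeholder instance
USERNAME = "some_user"         # placeholder account to review

query = urllib.parse.urlencode({"username": USERNAME, "sort": "New", "limit": 50})
url = f"{INSTANCE}/api/v3/user?{query}"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# Print the publish date and a one-line preview of each comment for manual review.
for view in data.get("comments", []):
    comment = view["comment"]
    preview = comment["content"].replace("\n", " ")[:120]
    print(f"{comment['published']}  {preview}")
```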

  • usrtrv@lemmy.ml · 1 year ago

    I think we’ll be in bad shape when you can’t trust any opinions about products, media, politics, etc. Sure, shills currently exist, so everything you read already needs skepticism. But at some point bots will be able to flood platforms with very high-quality posts, and these will of course be lies meant to push a product or an ideology. The truth will be noise.

    I do think this is inevitable, and the only real safeguard would be to move back to smaller social circles.

    • Muddybulldog@mylemmy.win · 1 year ago

      I’m of the mind that the truth already is noise and has been for a long, long time. AI isn’t introducing anything new; it’s just enabling faster creation of agenda-driven content. Most people already can’t identify the AI-generated content that’s been spewing forth for years now. Most people aren’t looking for quality content; they’re looking for bias-affirming content. The overall quality is irrelevant.

      • zer0 (OP) · 1 year ago

        The outcome is that people will ditch platforms like Lemmy and seek true information somewhere else.