• Signtist@bookwyr.me · 7 points · 2 hours ago

    This just in: computer that kinda looks like it can learn, but definitely can’t, failed to learn from its mistake.

  • stoy@lemmy.zip · 14 points · 6 hours ago

    AI is like a five-year-old (sorry for the insult, kids): it has loads of initiative and very few inhibitions to stop it from doing something stupid or dangerous.

    • KelvarCherry [They/Them]@piefed.blahaj.zone · 6 points · 3 hours ago

      This is why it is so bafflingly stupid that anyone is allowing a chatbot to do things. When that idea was first suggested, I and everyone else with a tragic faith in the intelligence of our society laughed it off as another absurd techbro idea, like all of the bullshit Musk and Zuckerberg spew on stage… but they did it ;-;

      They made chatbots, an inherently unpredictable aggregation of all their training data, and wired that output to actual commands. Anywhere else, letting untrusted strings drive command execution would be deemed a critical security breach on its own. In late 2021, the Log4Shell exploit was found to run arbitrary commands through maliciously crafted strings in a single Java logging library, Log4j, and it was a red alert across the entire cybersecurity world. That's about the highest severity a vulnerability can get, and for good reason!!!
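      To make the danger concrete, here's a minimal sketch (hypothetical, not taken from any real agent framework) of the difference between naively executing model output and treating it as untrusted input with a command allowlist:

      ```python
      import shlex
      import subprocess

      # Assumption: a chatbot "agent" hands us a string it wants to run.
      # The unsafe version would be subprocess.run(model_output, shell=True),
      # which executes whatever the model happened to emit. The bare-minimum
      # mitigation is to parse it without a shell and allowlist commands.

      ALLOWED_COMMANDS = {"ls", "cat", "echo"}  # everything else is rejected

      def run_model_output(model_output: str) -> str:
          """Execute a chatbot-suggested command only if it passes the allowlist."""
          parts = shlex.split(model_output)  # tokenize; never go through a shell
          if not parts or parts[0] not in ALLOWED_COMMANDS:
              return f"refused: {parts[0] if parts else '(empty)'}"
          result = subprocess.run(parts, capture_output=True, text=True)
          return result.stdout

      print(run_model_output("rm -rf /"))    # refused: rm
      print(run_model_output("echo hello"))  # hello
      ```

      Even this is only a floor, not a fix: an allowlisted command with attacker-chosen arguments can still do damage, which is exactly why linking generative output to real commands is such a reckless design.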

      Everyone who designs these chatbots should be sued to hell.

  • SeeMarkFly@lemmy.ml · 15 points · 8 hours ago

    Did you pay A.I. a living wage???

    Only a living wage can prevent data dumps.

    Upper management can’t even see it.