because that’s what we want. To open up classified documents to an AI controlled by Elon, of all people :/

  • Echo Dot@feddit.uk · 4 points · 18 hours ago

    This is going to be a new sabre rattling technique in the future.

    If you don’t back down we’re going to turn our entire military over to AI and it might do anything.

    This is either going to turn out to be nothing or an utter catastrophe, and there’s no option in between. I suspect most generals in the military are bright enough not to give it control over anything too dangerous, though.

  • FreddiesLantern@leminal.space · 5 points · 19 hours ago

    Soldier on the field: “Delta team in position! Send orders and coordinates!”

    Pentagon: “We receive you, Delta team. Stand by for transmission.”

    “Ok Grok, send them the orders and coordinates”

    Clanker: “Beep beep. Buy a blue check mark and/or Teslur”

    Soldier: “Instructions unclear sir!!!”

    Pentagon: “Please clarify how they are unclear soldier?”

    Soldier: “It’s AI generated porn of your mom sir?”

    Pentagon: “GODDAMNIT ELON!!!”

  • blarth · 5 points · 20 hours ago

    Holy shirt, I figured it out. THIS is the bad place!

    • Echo Dot@feddit.uk · 2 points · 18 hours ago

      War Thunder needs to add UFOs, so we all finally get to find out what’s going on at Area 51.

  • gravitas_deficiency@sh.itjust.works · 17 points · 2 days ago

    It’s gonna be so fun to prompt engineer TS/SCI/Confidential data out of this thing. I’m curious to see how long the first exploit is gonna take.

  • teft@piefed.social · 56 points · 2 days ago

    How long until these dipshits accidentally publish state secrets on twitter? Taking bets now.

  • toiletobserver@lemmy.world · 45 points · 2 days ago

    The only safe system is one that is physically disconnected from all other networks, then shielded, then given very limited access. Even then, humans are still the biggest risk. I’m sure they’re doing the exact opposite.

    • Skyrmir@lemmy.world · 5 points · 2 days ago

      By definition, LLMs need massive external input in order to improve, so they can’t really be disconnected. Top that off with them only being useful when you can interact with them from many, often remote, locations, and there’s just no way to really keep them secure. They need massive communication to accomplish anything useful, and there’s no real way to keep massive communication secure.

      • raccoon@sh.itjust.works · 1 point · 5 hours ago

        You don’t have to improve them out in the field. Just collect metrics on their behavior, train a central model on that data, then upgrade the local models on each unit when they’re brought in for maintenance. I’m simplifying, of course. And terrified.

        • Skyrmir@lemmy.world · 1 point · 5 hours ago

          That’s the thing: the local models aren’t going to have the processing power to really beat deterministic systems, so you’re going to need comms at some point to get any kind of edge. Otherwise dumb systems get the job done for pennies, while you’re having to produce high-end chips to handle local processing and back-end training just to significantly outperform a garage-door trip light. Yes, an exaggerated comparison, but the point holds.