• purplemonkeymad@programming.dev
      arrow-up
      2
      1 hour ago

      Nah, this is vibe ops. Anyone who thought a coding machine could do ops probably assumes anyone who codes can also do ops. It's going to make the same mistakes that have happened in DevOps.

      • Joelk111@lemmy.world
        arrow-up
        1
        3 minutes ago

        To be fair, I use LLMs quite a bit in my home lab setup. For one, it's a home lab, not exactly a prod setup for a company or whatever. Secondly, I obviously don't run commands without knowing what they're doing, verified against a source that isn't an LLM.

  • Bongles@lemmy.zip
    arrow-up
    27
    4 hours ago

    This keeps happening. I can understand using AI to help code; I don't understand giving Claude so much access to a system.

      • Earthman_Jim@lemmy.zip
        arrow-up
        3
        25 minutes ago

        That’s honestly the most frightening part of all of this to me. How many of these people at the very tippy top pushing this stuff are suffering from cyber psychosis? How many of them have given themselves the covert mission to give AI the keys to the world at all costs because they’re mentally ill from their own technomagic trick?

  • Seefra 1@lemmy.zip
    arrow-up
    13
    4 hours ago

    It seems that every few weeks some developer makes this same mistake, and a news story is published each time.

  • mudkip@lemdro.id
    arrow-up
    16
    5 hours ago

    I don’t feel an inkling of sympathy. Play stupid games, win stupid prizes.

  • kamen@lemmy.world
    arrow-up
    65
    9 hours ago

    You either have a backup or will have a backup next time.

    Something that is always online and can be wiped while you're working on it (by yourself or with AI, doesn't matter) shouldn't count as a backup.

    • ThomasWilliams@lemmy.world
      arrow-up
      18
      6 hours ago

      He did have a backup. This is why you use cloud storage.

      The operator had to contact Amazon Business support, which helped restore the data within about a day.

    • MIDItheKID@lemmy.world
      arrow-up
      26
      7 hours ago

      AI or not, I feel like everybody has had “the incident” at some point. After that, you obsessively keep backups.

      For me it was my entire "Junior Project" in college, which was a music album. My Windows install (Vista at the time; I know Vista was awful, but it was the only thing that would utilize all 8 GB of my RAM, because x64 XP wasn't really a thing) bombed out, and I was like "no biggie, I keep my OS on one drive and all of my projects on the other, I'll just reformat and reinstall Windows."

      Well… I had two identical 250gb drives and formatted the wrong one.

      Woof.

      I bought an unformat tool that was able to recover mostly everything, but I lost all of my folder structure and file names. It was just 000001.wav, 000002.wav, etc. I was able to re-record and rebuild, but man… never made that mistake again. Like I said, I now obsessively back up: stacks of drives, cloud storage, drives in different locations, etc.

      • SirEDCaLot@lemmy.today
        arrow-up
        5
        7 hours ago

        AI or not, I feel like everybody has had “the incident” at some point. After that, you obsessively keep backups.

        Yup!

        Also, totally unrelated helpful tip: triple-check your inputs and outputs when using dd to clone a drive. dd works great for cloning an old drive onto a new blank one. It is equally efficient at cloning a blank drive full of nothing but 0s over an old drive that had some 1s mixed in.
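        The check is cheap compared to the mistake. Here is a runnable sketch of the verify-before-trust habit, using image files as stand-ins for real devices (all file names here are made up):

```shell
# Stand-ins for real drives; with actual hardware you'd identify devices
# with `lsblk` first and triple-check which is if= (source) and of= (target).
printf 'two years of irreplaceable data' > old-drive.img
: > new-drive.img   # the blank "drive"

# if= is read, of= gets overwritten. Swap them and the old drive is gone.
dd if=old-drive.img of=new-drive.img bs=4M conv=fsync 2>/dev/null

# Read the clone back before trusting it.
cmp old-drive.img new-drive.img && echo "clone verified"
```

        With real disks the same `cmp` on the two block devices confirms the copy went the right way while both still exist.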

        • kamen@lemmy.world
          arrow-up
          3
          6 hours ago

          And that’s a great example where a GUI could be way better at showing you what’s what and preventing such errors.

          If you’re automating stuff, sure, scripting is the way to go, but for one-off stuff like this seeing more than text and maybe throwing in a confirmation dialogue can’t hurt - and the tool might still be using dd underneath.

      • kamen@lemmy.world
        arrow-up
        2
        6 hours ago

        TestDisk has saved my ass before. It’s great at recovering broken partitions. If it’s just a quick format done with no encryption involved, you have a very high chance of having your stuff back. That’s of course if you catch yourself after doing just the format.

        Other than that, yeah, I’ve also had my moments. Back in high school not only did I not have money for an external drive - I didn’t even have enough space on my primary one. One time a friend lent me an external drive to do a backup and do a clean reinstall - and I can’t remember the details, but something happened such that the external drive got borked - and said friend had important stuff that was only on that hard drive. Ironically enough it wasn’t even something taking much space - it was text documents that could’ve lived in an email attachment.

  • GaumBeist@lemmy.ml
    arrow-up
    31
    arrow-down
    1
    8 hours ago

    Is nobody going to point out that "Alexey Grigorev" changes to "Gregory" after two paragraphs?

    Slop journalism at its sloppiest. I wouldn't be surprised to find out that this story was entirely fabricated.

    • zarkanian@sh.itjust.works
      arrow-up
      1
      arrow-down
      1
      1 hour ago

      Naw, Alexey Grigorev is a real person, with a GitHub and everything, and he wrote a blog post about this very incident. The person writing the article just fucked up the name.

      I’m surprised that you jumped to that conclusion without doing a 5-minute web search.

    • Sundiata@lemmy.world
      arrow-up
      2
      arrow-down
      1
      2 hours ago

      Holy shit, you're right lol… good catch.

      Makes me want to get out more so I can have real interaction with real peop-

      sees people walking around with meta glasses

      me: “Hey hows it going?”

      person(GEMINI 35.84 INTERFACE): “Human is approaching you, facescan assumes awkward, potentially hostile, he isn’t tagged, there is no name above his head. do not speak with him”

      person: turns and walks away silently in a creepy puppet manner

      me: “What the actual fuck?”

      GEMINI 35.84: “Uploading unknown face into database to Stargate for analysis, no match, law enforcement has been called”

      News at 11: “A man has been incinerated by law enforcement in what officials are describing as a special unwanted persons removal operation”

      This shit could become real in a few decades. Funny and depressing as fuck.

  • jaykrown@lemmy.world
    arrow-up
    25
    arrow-down
    2
    9 hours ago

    The developer is to blame: using a cutting-edge tool irresponsibly. I have made mistakes using AI to help with coding as well, though never this bad. Blaming the AI would be like a roofer blaming the hammer after accidentally smashing their finger with it. You don't blame the hammer; you blame the roofer's negligence.

    • AA5B@lemmy.world
      arrow-up
      1
      3 hours ago

      The problem is that this is the way it's being pushed. This is how it's being sold. There are no guardrails.

      …… and that's the biggest problem. I'm frustrated as hell at the commits I've had to unwind because someone doesn't know how to check the changes before committing, then has the AI try to fix itself, again without checking the changes, then again. It's horrible.

      …… and I've seen it too. Trying to have it do only code reviews - the AI points out useful things, but then wants to commit a crapload of changes without going over them with me first.

      …… and people are playing with MCP agents, which are really great for letting the AI get data from systems and integrate with those systems, but with few to no guardrails. There's no review, the user doesn't necessarily follow what's changing, it just gets done. Sometimes badly, very badly.

      We're all focused on whether the AI works, and it does do a pretty good job with coding, but the tools don't keep the human in the loop, or humans don't know how to stay in the loop.

    • j2k4@aussie.zone
      arrow-up
      1
      1 hour ago

      Version control doesn’t do shit for your database. Snapshots/backups.

  • rizzothesmall@sh.itjust.works
    arrow-up
    12
    8 hours ago

    A developer having the ability to accidentally erase your production db is pretty careless.

    An AI agent having the ability to “accidentally” erase your production db is fucking stupid as all fuck.

    An AI agent having the ability to accidentally erase your production db and somehow also all the backup media? That requires a special course on complete dribbling fuckwittery.

    • Modern_medicine_isnt@lemmy.world
      arrow-up
      5
      arrow-down
      38
      10 hours ago

      Wrong answer. If you don't give them access, the alternative (ruling out not using AI, because leadership will never go for that) is to hire high school kids to take a task from a manager, ask the AI to do it, then do what the AI says repeatedly to iterate toward the solution. The problem with that alternative is that it's no better than giving the AI access, and it leaves you with no senior tech people. Instead, you give it access, but only give senior tech people access to the AI - ones who would know to tell the AI to keep a backup of the database, one designed so you can't delete it without multiple people signing off.

      Senior tech people aren't going to spend their time trying things an AI needs tried to find the solution. So if you don't give it access, they won't use it, and eventually they will all be gone. Then you are even further up shit creek than you are now.

      The answer, overall, is smarter people talking to the AI, and guardrails to stop a single point of failure. The latter is nothing new.

      • vithigar@lemmy.ca
        arrow-up
        21
        9 hours ago

        What is this insane rambling?

        The alternative is that the only thing with access to make changes in your production environment is the CI pipeline that deploys your production environment.

        Neither the AI, nor anything else on the developers machine, should have access to make production changes.
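        One cheap guardrail in that spirit is making the deploy entry point itself refuse to run anywhere but CI. A sketch, assuming the pipeline sets `CI=true` (as GitHub Actions and most CI systems do); the function name and messages are invented:

```shell
# Refuse to deploy unless we're running inside the CI pipeline.
deploy_prod() {
  if [ "${CI:-}" != "true" ]; then
    echo "refusing: production deploys only run from CI" >&2
    return 1
  fi
  echo "deploying to production"
}
```

        From a developer's laptop (or an AI agent's shell) the call fails fast; inside the pipeline it proceeds.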

      • Shanmugha@lemmy.world
        arrow-up
        14
        9 hours ago

        Nah. As a tech person, I am not going to give an LLM write access to anything in production, period.

      • MartianRecon@lemmus.org
        arrow-up
        20
        arrow-down
        1
        10 hours ago

        The answer is no AI. It's really simple. The costs of AI are not worth the output.

      • criss_cross@lemmy.world
        arrow-up
        2
        7 hours ago

        Do you go on an oncall rotation by chance? Because anyone that has to respond to night time pages would not be saying this lol.

      • Matty_r@programming.dev
        arrow-up
        5
        9 hours ago

        I’m in favour of hiring kids to figure out the solution through iteration and doing web searches etc. If they fuck up, then they learn and eventually become better at their job - maybe even becoming a Senior themselves eventually.

        I get what you're saying - seniors are more likely to use the tools effectively, but there are many cases of the AI not doing what it's told. It's not repeatably consistent like a bash script.

        People are better - always.

    • minorkeys@lemmy.world
      arrow-up
      3
      arrow-down
      19
      11 hours ago

      No risk, no reward. People are desperate for these tools to help them succeed.

  • SapphironZA@sh.itjust.works
    arrow-up
    117
    13 hours ago

    We used to say RAID is not a backup; it's redundancy.

    Snapshots are not a backup; they're a system restore point.

    Only something offsite, off-system, and only accessible with separate authentication details, is a backup.

    • Krudler@lemmy.world
      arrow-up
      3
      5 hours ago

      Circa 1997 I was making some innovative new games, employed by a dude who'd put millions of his own money into the company. He was completely unimpressed when I brought him 20 CDs in a sealed box to remove from the building and store off site. He thought I'd lost my damned mind and blew it off as the ravings of a stressed dev. I pointed out real threats to our IP, including hardware failures and even the building burning down. Two years of custom art and code, gone. "Unlikely. Relax."

      After I moved on, an ex-co-worker who's still a longtime friend told me a different division lost a huge amount of FMV over some whoops-I-destroyed-the-wrong-drive blunder. Twenty days to render on an 8- or 10-machine farm. Poof, no backups. In 1997, even with top-of-the-line gear, it took an insane investment to render quality 3D.

      The friggin' carelessness irks the shit out of me as I type, haha.

      • mic_check_one_two@lemmy.dbzer0.com
        arrow-up
        20
        13 hours ago

        AKA Schrödinger’s Backup. Until you have successfully restored from a backup, it is just an amorphous blob of data that may or may not be valid.

        I say this as someone who has had backups silently fail. For instance, just yesterday, I had a managed network switch generate an invalid config file for itself. I was making a change on the switch, and saved a backup of the existing settings before changing anything. That way I could easily reset the switch to default and push the old settings to it, if the changes I made broke things. And like an idiot, I didn’t think to validate the file (which is as simple as pushing the file back to the switch to see if it works) before I made any changes.

        Sure enough, the change I made broke something, so I performed a factory reset and went to upload that backup I had saved like 20 minutes prior… When I tried to restore settings after the factory reset, the switch couldn’t read the file that it had generated like 20 minutes earlier.

        So I was stuck manually restoring the switch’s settings, and what should have been a quick 2 minute “hold the reset button and push the settings file once it has rebooted” job turned into a 45 minute long game of “find the difference between these two photos” for every single page in the settings.
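        The fix for Schrödinger's Backup is to read the backup back immediately, before making the change. A generic shell sketch of that take / validate / test-restore loop (the file and directory names are invented):

```shell
# Make some "settings" to protect.
mkdir -p config && echo "vlan=42" > config/switch.conf

# 1. Take the backup.
tar -czf settings-backup.tar.gz config

# 2. Validate that it is readable at all.
tar -tzf settings-backup.tar.gz > /dev/null || { echo "backup unreadable" >&2; exit 1; }

# 3. Test-restore into a scratch directory and compare against the live copy.
mkdir -p restore-test
tar -xzf settings-backup.tar.gz -C restore-test
diff -r config restore-test/config && echo "restore verified"
```

        For a switch, step 2 is the "push the file back to the device" check from the comment above; the point is the same: verify while the original config still exists.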

    • tetris11@feddit.uk
      arrow-up
      25
      12 hours ago

      3-2-1 Backup Rule: three copies of your data, on two different types of storage media, with one copy offsite.
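      In practice 3-2-1 can be as mundane as a cron-able script. A minimal runnable sketch, with local directories standing in for the second medium and the offsite host (all names hypothetical):

```shell
# 3-2-1: three copies, two media, one offsite. Local directories stand in
# for a USB HDD and a remote host so the sketch runs anywhere.
mkdir -p live external-hdd offsite
echo "project data" > live/notes.txt

cp -R live/. external-hdd/   # copy 2: second local medium (e.g. USB HDD)
cp -R live/. offsite/        # copy 3: offsite, with separate credentials
                             # (in reality: rsync -a -e ssh live/ user@host:backup/)
```

      The offsite leg should use authentication details the primary system doesn't hold, per the comment above, so that wiping the live box can't also wipe the backup.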

    • SreudianFlip@sh.itjust.works
      arrow-up
      2
      10 hours ago

      Fuckin' yes:

      • download all assets locally
      • proper 3-2-1 of local machines
      • duty roster of other contributors with the same backups
      • automate, and have regular checks as part of production
      • also, sandbox the stochastic parrot
    • OrteilGenou@lemmy.world
      arrow-up
      3
      11 hours ago

      I remember the first time I saw a DR plan with three tiers of restore: 1 hour, 12 hours, or 72 hours. I knew that the 1-hour tier meant a simple redirect to a DB partition that was a real-time copy of the active DB, and the 12-hour tier meant that had failed, so it was a restore-point exercise that would mean some data loss, but less than an hour's worth, or something like that.

      I had never heard of 72 hours, so I raised a question in the meeting. 72 hours meant having physical tapes shipped to the data center, and I believe meant up to 12 (though it could have been 24) hours of data lost. I was impressed by this, because the idea of a job that ran daily or twice daily creating tape backups was completely new to me.

      This was in the early aughts. Not sure if tapes are still used…