• Ms. ArmoredThirteen@lemmy.ml · 37 points · 4 months ago

    I worked for the state once, and the number of times I had to put my foot down for security was appalling. We’re talking cases where getting web services updated to use basic password auth could take months, and I’d be pressured by management to ignore it because some asshat using the service didn’t want to update their 30-year-old batch file to deal with auth. Other people would regularly push things that could easily expose thousands of people’s identifying info, just to get management off their backs. There were a couple of projects I think I was specifically kept away from because they were “mission critical” and they didn’t want me slowing them down with trivial stuff like not leaking unencrypted databases…

    • BrianTheeBiscuiteer@lemmy.world · 14 points · 4 months ago

      Very stark contrast to a typical day at my job.

      “Looks like there’s a broken link on this page. No problem, we can get that fixed up in a day or two after we tackle the 32 vulnerabilities that cropped up since the last time we changed that page.”

    • jadero@programming.dev · 7 points · 4 months ago (edited)

      That is something I just don’t get. I’m a hobbyist turned pro turned hobbyist. The only people I ever offered my services to either wanted one of my very narrow specialties, where I actually was an expert, or literally could not afford a “real” programmer.

      I never found proper security to have any impact on my productivity. Even going back to my peak years in the first decade of this century, there was so much easily accessible information, so many good tutorials, and so many good products that even my prototypes incorporated the basics:

      • Encrypt the data at rest (sketch below)
      • Encrypt the data in transit
      • No shared accounts at any level of access
      • Full logging of access and activity
      • Before rollout, backup and recovery procedures had to be demonstrated effective and fully documented
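
      A minimal sketch of that first point, assuming the Python cryptography package (nothing above names a specific tool, so treat this as illustrative rather than what I actually shipped):

      ```python
      # Encrypting data at rest: Fernet provides authenticated symmetric
      # encryption with a deliberately hard-to-misuse API.
      from cryptography.fernet import Fernet

      key = Fernet.generate_key()  # in practice, load from a secrets store
      f = Fernet(key)

      ciphertext = f.encrypt(b"customer record")  # what gets written to disk
      assert f.decrypt(ciphertext) == b"customer record"
      ```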

      Edited to add:

      It’s like safety in the workplace. If it’s always an add-on, it will always be of limited effectiveness and will reduce productivity. If it’s built into the process from the ground up, it’s extremely effective, and those doing things unsafely will be the productivity drain.

      • CodeMonkey@programming.dev · 7 points · 4 months ago

        • Encrypt the data at rest
        • Encrypt the data in transit

        Did you remember to plan for a zero downtime encryption key rotation?
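
        Planning that usually means something like this sketch (Python’s cryptography package, my assumption; the point is that old and new keys stay live at once, so reads never fail while records are re-encrypted in the background):

        ```python
        from cryptography.fernet import Fernet, MultiFernet

        old_key, new_key = Fernet.generate_key(), Fernet.generate_key()

        # Encrypts with new_key, but still decrypts tokens made with old_key,
        # so rotation needs no downtime window.
        mf = MultiFernet([Fernet(new_key), Fernet(old_key)])

        legacy = Fernet(old_key).encrypt(b"legacy record")
        rotated = mf.rotate(legacy)  # re-encrypt under new_key, e.g. in a batch job
        assert mf.decrypt(rotated) == b"legacy record"
        ```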

        • No shared accounts at any level of access

        Did you know when account passwords expire? Have you thought about password rotation?
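
        The check itself is trivial to sketch; the pain is everything around it. (The 90-day window below is an assumed policy, not something from this thread.)

        ```python
        from datetime import datetime, timedelta, timezone

        MAX_PASSWORD_AGE = timedelta(days=90)  # assumed policy window

        def password_expired(last_changed: datetime) -> bool:
            # Compare against an aware UTC "now" to avoid timezone bugs.
            return datetime.now(timezone.utc) - last_changed > MAX_PASSWORD_AGE
        ```

        The hard part is operational: notifying users ahead of expiry and rotating service-account credentials without an outage.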

        • Full logging of access and activity.

        That sounds like a good practice until you have 20 (or even 2000) backend server requests per end user operation.
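
        One mitigation is a correlation id: tag every backend call with one id per end-user operation, so thousands of log lines collapse into a single queryable trace. A toy sketch (`backend_calls` is a hypothetical stand-in for the real requests):

        ```python
        import logging
        import uuid

        logging.basicConfig(level=logging.INFO)
        log = logging.getLogger("audit")

        def handle_user_operation(user_id: str, backend_calls: list[str]) -> None:
            op_id = uuid.uuid4().hex  # one id ties the whole fan-out together
            log.info("op=%s user=%s start", op_id, user_id)
            for name in backend_calls:  # hypothetical stand-in for real RPCs
                log.info("op=%s backend=%s", op_id, name)
            log.info("op=%s done", op_id)

        handle_user_operation("u123", ["auth", "inventory", "billing"])
        ```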

        All of those are taken from my experience.

        Security is like an invasive medical procedure: it is very painful in the short term but prevents dire complications in the long term.

        • jadero@programming.dev · 1 point · 4 months ago

          All excellent points. I never worked at those scales or under those conditions, nor should I have been permitted to. And I had enough self-awareness to keep myself away from anything like that.

          I guess when I read about this breach or that, the real damage seems to result from not having the basics covered. Whatever “basic” might mean at different scales of operation, the lack of encryption at rest seems to be the root of most public harm from stolen data, and it strikes me that if that can’t be managed at a particular scale, then operating at that scale should not be considered.

      • Miaou@jlai.lu · 4 points · 4 months ago

        Dependencies, scope creep, feature creep, off-by-one errors, misconfiguration, unclear or unenforced contracts/invariants… Most of those are trivial to solve at small scale, but the more moving parts you have, the more complex it becomes.

        • jadero@programming.dev · 3 points · 4 months ago

          Of course, but that just makes the case for security as a foundational principle even stronger.

          Mistakes happen. They always will. That’s not a reason to just leave security as the afterthought it so often is.

          None of the things I mentioned have anything to do with errors or scope creep, but everything to do with building on sound principles and practices always. As in, you know, always. In class, during bootcamps, during design meetings, when writing sample code, when writing reference implementations, during the construction of the prototype that, let’s face it, almost always goes into production. Always.

          • Miaou@jlai.lu · 1 point · 4 months ago

            Sure, and then the big client bankrolling your company needs the feature in production by next week.

            If you’re GAFAM, you can tell them to get screwed and that you need more time. But in my experience I’ve always been on the other side of the table, and sometimes you gotta change a setting in a production DB because the related GUI change wasn’t approved: the guy doing the review was sick, and the other reviewer wasn’t sure which shade of green to use somewhere on the page.

            I agree that security is not something you add on the side, but circumstances change and things are not always under your control. You say mistakes happen, but not everything I mentioned is caused by mistakes; sometimes the shortcut is completely intentional. Companies only care about security when it’s too late, at which point they’ll blame you for writing unsafe software. But if your company or your job is at stake, that’s often a risk you have to take.

            • jadero@programming.dev · 1 point · 4 months ago

              … if your company or your job is at stake, that’s often a risk you have to take

              Take all the risks you want. Just be sure that you’re the one actually taking the risk, not the people whose data you manage. I get really tired of people and companies who claim that it was a necessary risk when they’re not the ones paying for the bad outcomes.

              You take a risk by standing your ground, not by agreeing to something that puts me at risk.

  • lightnegative@lemmy.world · 25 up / 8 down · 4 months ago

    Why is it that security guys always think their issues are more important than any other issues?

    Like, well done you: you ran an automated tool over the codebase and it picked up some outdated dependencies.

    We can’t just update these dependencies, because the newer versions have breaking changes and we already have a backlog of 32767 issues to deal with.

    It’s not security debt, it’s just general technical debt.

    Why is the issue that is only exploitable in a contorted scenario where the user has broken out of a VM and gained root on the hypervisor more important than the issue preventing our largest customer from tripling their volume on our platform?

    Not to mention the joke that’s been made of the CVE system due to resume padding by the security industry…

    • Mischala@lemmy.nz · 6 points · 4 months ago (edited)

      Generally, a regular issue is much less likely to get you hacked.
      Security issues often come with legal liability, which is why a bad security department will act overly important and stomp around demanding changes be made right the fuck now.

      But I do get it: a good security team should be enabling their dev teams to solve issues in the least disruptive way possible, not just throwing work at them and barking orders.

      In some places I have worked, the security teams will find an issue and push PRs to fix it, explaining the security concern and requesting only a review and merge.

    • Soviet Pigeon@lemmygrad.ml · 3 points · 4 months ago

      It’s not security debt, it’s just general technical debt.

      I would also say that this is just technical debt. I fully understand that there are things like breaking changes. I remember clearly when we used asyncore in Python at work and it became deprecated. It was still possible to use it for a long time, but a change was needed. Such breaking changes cause work and are not nice, especially in a big piece of software.
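
      For anyone who missed that era: asyncore was deprecated in favor of asyncio and removed in Python 3.12, so code like an old asyncore echo server had to be rewritten roughly along these lines (a minimal sketch of the replacement pattern, not our actual code):

      ```python
      import asyncio

      async def handle(reader: asyncio.StreamReader,
                       writer: asyncio.StreamWriter) -> None:
          data = await reader.read(1024)  # echo back whatever arrives
          writer.write(data)
          await writer.drain()
          writer.close()
          await writer.wait_closed()

      async def main() -> None:
          server = await asyncio.start_server(handle, "127.0.0.1", 8888)
          async with server:
              await server.serve_forever()  # runs until interrupted

      asyncio.run(main())
      ```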

      On the other side, I am not happy if I buy software or hardware that probably has insecure dependencies. I understand the developers, I am one myself, and I know that many things are not under their control; I am not blaming them. But it is a no-go if something new is sold with a 10-year-old OpenSSH server, 15-year-old curl, or similarly ancient components.

      But I am not taking exotic vulnerabilities that seriously, the kind where you need a very specific constellation of circumstances for something to be hackable.