• Sailor Sega Saturn@awful.systems

    Ah yes, the typical workflow for LLM-generated changes:

    1. LLM produces nonsense at the behest of employee A.
    2. Employee B leaves a bunch of edits and suggestions to hammer it into something that’s sloppy but almost kind of makes sense. A soul-sucking, error-prone process that takes twice as long as just writing the dang code.
    3. Code submitted!
    4. Employee A gets promoted.

    Also, the fact that this isn’t integrated with tests shows how rushed the implementation was. Not even LLM optimists should want code changes that don’t compile or that break tests.
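
    Even a dumb pre-submission gate would catch most of this. A minimal sketch of one, where ./build.sh and ./run_tests.sh are placeholders standing in for whatever the repo actually uses:

        #!/usr/bin/env python3
        # Pre-submission gate for machine-generated changes: refuse anything
        # that does not build or that breaks the existing test suite.
        # ./build.sh and ./run_tests.sh are placeholders for the repo's real commands.
        import subprocess
        import sys

        def run(cmd: list[str]) -> bool:
            print(f"running: {' '.join(cmd)}")
            return subprocess.run(cmd).returncode == 0

        def gate() -> int:
            if not run(["./build.sh"]):        # must compile before anything else
                print("rejected: build failed")
                return 1
            if not run(["./run_tests.sh"]):    # must leave the existing tests passing
                print("rejected: tests failed")
                return 1
            print("ok: change can go to human review")
            return 0

        if __name__ == "__main__":
            sys.exit(gate())

    If that exits non-zero, the change never reaches a reviewer, and nobody has to burn a review cycle on code that doesn’t even build.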

    • wjs018@piefed.social

      I just looked at the first PR out of curiosity, and wow…

      “this isn’t integrated with tests”

      That’s the part that surprised me the most. It failed the existing automation. Even after being prompted to fix the failing tests, it proudly added a commit “fixing” it (it still didn’t pass… something that Copilot should really be able to check). Then the dev had to step in and explain why the test was failing and how to fix the code to make it pass. With this much handholding, all of this could have been done much faster and cleaner without any AI involvement at all.
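
      The check that’s missing isn’t exotic, either. GitHub exposes every commit’s check runs through its REST API, so an agent could confirm that CI actually went green before declaring the tests fixed. A rough sketch, assuming a token in GITHUB_TOKEN and generic owner/repo/sha arguments:

          #!/usr/bin/env python3
          # Report whether the CI check runs for a commit actually passed, using
          # GitHub's REST API: GET /repos/{owner}/{repo}/commits/{ref}/check-runs
          # Requires a GITHUB_TOKEN environment variable; owner/repo/sha are arguments.
          import os
          import sys
          import requests

          def checks_green(owner: str, repo: str, sha: str) -> bool:
              url = f"https://api.github.com/repos/{owner}/{repo}/commits/{sha}/check-runs"
              headers = {
                  "Accept": "application/vnd.github+json",
                  "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
              }
              runs = requests.get(url, headers=headers, timeout=30).json()["check_runs"]
              # Only completed runs are judged; anything still queued or running is ignored.
              failed = [r["name"] for r in runs
                        if r["status"] == "completed"
                        and r["conclusion"] not in ("success", "neutral", "skipped")]
              for name in failed:
                  print(f"still failing: {name}")
              return not failed

          if __name__ == "__main__":
              owner, repo, sha = sys.argv[1:4]
              sys.exit(0 if checks_green(owner, repo, sha) else 1)

      If that exits non-zero, the honest next step is another fix, not a commit message claiming the tests were fixed.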

      • zbyte64@awful.systems

        The point is to get open-source maintainers to further train their model, because they’ve already scraped all our code. I wonder if this will become a larger trend among corporate-owned open-source projects.