Four months ago, we asked, “Are LLMs making Stack Overflow irrelevant?” Data at the time suggested that the answer was likely “yes”:

  • Natanael@infosec.pub · 19 hours ago

    Well-trained humans are still more consistent, more predictable, and easier to teach.

    There’s no guarantee LLMs will get reliably better at everything. They still make some of the same mistakes today that they did when introduced, and nobody knows how to fix that yet.

    • FaceDeer@fedia.io · 19 hours ago

      You’re still setting a high standard here. What counts as a “well trained” human, and how many SO commenters count as that? Also, “easier to teach” is complicated: it takes decades for a human to become well trained, while an LLM can be trained in weeks, and an individual computer running the LLM is “trained” in minutes; it just needs to load the model into memory. Once you have an LLM, you can run as many instances of it as you want to spend money on.

      There’s no guarantee LLMs will get reliably better at everything

      Never said they would. I said they’re as bad as they’re ever going to be, which allows for the possibility that they don’t get any better.

      Even if they don’t, though, they’re still good enough to have killed Stack Overflow.

      They still make some of the same mistakes today that they did when introduced, and nobody knows how to fix that yet

      And humans also make mistakes. Do we know how to fix that yet?

      • Natanael@infosec.pub · 16 hours ago

        Getting humans to do their work reliably is a whole science, and many fields manage to achieve it.