• Hector_McG@programming.dev

LLMs produce code that looks reasonable but is functionally error-prone, in the same way that they produce answers that are grammatically correct and correctly spelled but factually wrong.
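    To make that concrete, here's a hypothetical sketch (not taken from any actual model output) of the kind of failure mode I mean: code that reads cleanly and passes a quick glance, but is subtly wrong.

    ```python
    # Hypothetical example of "looks reasonable, is wrong": the function
    # reads cleanly and handles the obvious case, but has a subtle bug.

    def median(values):
        """Return the median of a list of numbers."""
        ordered = sorted(values)
        mid = len(ordered) // 2
        # Bug: for even-length input this returns the upper-middle element
        # instead of averaging the two middle elements.
        return ordered[mid]

    print(median([1, 3, 5]))     # 3   -- correct
    print(median([1, 3, 5, 7]))  # 5   -- plausible, but the median is 4.0
    ```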

    As we all know, fixing bugs in someone else's code is generally harder than writing the code correctly in the first place, and that applies to an LLM's code output just as much as to a human's, if not more.