• Pelicanen@sopuli.xyz

    "maybe we should not be building our world around the premise that it is"

    I feel like this is a really important bit. If LLMs turn out to have unsolvable issues that limit the scope of their application, that's fine; every technology has limits, but we need to be aware of them. A fallible machine learning model is not dangerous in itself; AI-based grading, plagiarism checking, resume filtering, coding, etc. applied without skepticism is dangerous.

    LLMs probably do have genuinely good applications in tasks that could not be automated before, but we should be very careful about what we assume those tasks to be.