• bdonvrA · 6 months ago

    It’s not completely effective, but one thing to know about these kinds of models is that they have an incredibly hard time IGNORING parts of a prompt. Explicitly telling them *not* to do something is generally not the best idea; you usually get better results by describing only what you do want.
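
    As a rough illustration of that idea (the prompts below are made up, not from any particular model or API), rephrasing a negative instruction as a positive one keeps the unwanted topic out of the prompt entirely:

    ```python
    # Hypothetical prompts illustrating negative vs. positive phrasing.

    # Negative phrasing: the forbidden topic is still in the prompt,
    # so the model still attends to it ("don't think of an elephant").
    negative_prompt = (
        "Summarize this article. Do NOT mention the author's personal opinions."
    )

    # Positive phrasing: describe only the desired output.
    positive_prompt = (
        "Summarize this article, focusing strictly on its factual claims "
        "and the evidence it cites."
    )

    print(negative_prompt)
    print(positive_prompt)
    ```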