• bdonvrA
    27 days ago

    It’s not completely effective, but one thing to know about these kinds of models is that they have an incredibly hard time IGNORING parts of a prompt. Explicitly telling one not to do something is generally not the best idea: the thing you want avoided is now sitting right there in the context, which can make the model more likely to bring it up. Phrasing the instruction affirmatively (say what the model should do rather than what it shouldn’t) tends to work better.
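
    A minimal sketch of the pattern described above, with entirely hypothetical prompt text (no real assistant or API assumed):

    ```python
    # Illustrative only: two hypothetical system prompts for the same task.

    # Risky: the forbidden word now appears verbatim in the prompt,
    # which can make the model MORE likely to surface it.
    negative_prompt = "You are a cooking assistant. Do NOT mention peanuts."

    # Better: state the allowed space affirmatively instead of naming
    # the thing to avoid.
    positive_prompt = (
        "You are a cooking assistant. Recommend only recipes built from "
        "this allergen-safe ingredient list: rice, chicken, carrots, olive oil."
    )

    print(positive_prompt)
    ```

    The affirmative version never puts the avoided term into the context at all, so there is nothing for the model to fail to ignore.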