I currently use Emerhyst-20B, without much reason beyond it being the largest NSFW model I've been able to run. 7B models tend to repeat themselves after a while and don't have much reference material, and 13B models do the same to a lesser extent. I run it with koboldcpp, which seems to fit my needs.

So I'm just wondering, what models do you use? Are any models better at certain areas than others?

  • magn418@lemmynsfw.com

    I've been using LLaMA2-13B-Psyfighter2 lately. Tiefighter is another good choice. Before that I used Mythomax in the same size, but it's outdated by now.

    I use RoPE scaling to get 8k of context instead of the 4k that Llama2 natively supports. That works very well. And I just use Mirostat 2 whenever I haven't found better manual sampler settings.
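    In case it helps, here's a rough sketch of the same idea in llama-cpp-python rather than koboldcpp (the model filename, prompt format, scaling values and Mirostat parameters are just illustrative, not my exact settings):

    ```python
    # Hypothetical example: 8k context via linear RoPE scaling plus Mirostat 2 sampling.
    from llama_cpp import Llama

    llm = Llama(
        model_path="llama2-13b-psyfighter2.Q5_K_M.gguf",  # placeholder filename
        n_ctx=8192,           # target context: 8k instead of Llama2's native 4k
        rope_freq_scale=0.5,  # linear RoPE scaling factor: 4096 / 8192 = 0.5
    )

    out = llm(
        "### Instruction:\nContinue the story.\n\n### Response:\n",
        max_tokens=300,
        mirostat_mode=2,    # Mirostat 2 instead of hand-tuned temperature/top-p
        mirostat_tau=5.0,   # target "surprise" level
        mirostat_eta=0.1,   # learning rate
    )
    print(out["choices"][0]["text"])
    ```

    koboldcpp has the same knobs in its launcher, so in practice I don't write any code for this.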

    I don't run into repetition loops or anything like that very often. Some models do it, but usually when it happens and it isn't some insane model merge, it turns out I had a setting way off, selected the wrong (instruction) prompt format, or the character card included a complicated jailbreak that confused the model. I usually delete all the additional instructions like 'Only speak as the character', 'don't continue', never do this and that, since they can also confuse the LLM.

    I think a 13B model is fine for me. I've tried models in various sizes, but for erotic roleplay or storywriting I've come to the conclusion that it's really important that the model was fine-tuned on data like that. A larger model might be more intelligent, but if that material is missing from its dataset, it will always brush over the interesting roleplay parts, get the pacing wrong, or play the helpful assistant to some degree. You might be better off with a smaller model if it's tailored to the use case.