Hey, people of Perchance, and whoever developed this generator,

I know people keep saying, “The new model is better, just move on,” but I need to say something clearly and honestly: I loved the old model.

The old model was consistent.

If I described a character — like a guy in a blue jumper, red jeans, and purple hair — the old model actually gave me that. It might sound ridiculous, but at least I could trust it to follow the prompt. When I used things like double brackets ((like this)), the model respected my input.

And when I asked for 200 images, the results looked like the same character across the whole batch. It was amazing for making characters, building stories, and exploring different poses or angles. The style was consistent. That mattered to me. That was freedom.

Now with the new model, I try to recreate those characters I used to love and they just don’t look right anymore. The prompts don’t land. The consistency is gone. The faces change, the outfits get altered, and it often feels like the model is doing its own thing no matter what I ask.

I get that the new model might be more advanced technically — smoother lines, better faces, fewer mistakes. But better in one way doesn’t mean better for everyone. Especially not for those of us who care about creative control and character accuracy. Sometimes the older tool fits the job better.

That’s why I’m asking for one thing, and I know I’m not alone here:

Let us choose. Bring back the old model or give us the option to toggle between the old and the new. Keep both. Don’t just replace something people loved.

I’ve seen a lot of people online saying the same thing. People who make comics, visual novels, storyboards, or just love creating characters — we lost something when the old model was removed. The new one might look nice, but it doesn’t offer the same creative control.

This isn’t about resisting change. This is about preserving what worked and giving users a real choice. You made a powerful tool. Let us keep using it the way we loved.

Thanks for reading this. I say it with full respect. Please bring the old model back — or at least give us a way to use it again.

please

  • zmkart@lemmy.world · 3 hours ago

    I’m guessing it cost money to make these changes, BUT if I had a website that users absolutely loved, then changed it and “most” users hated those changes, I’d change it back in an instant… “the customer is always right” (D-Fens).

  • beautifulsoup@lemmy.world · 4 hours ago

    The way the devs blatantly replaced the old model overnight, like it was worthless or something, shows how little they care about their own community.

    • RudBo@lemmy.world · 4 hours ago

      tbh that’s both right and wrong. The dev had been working on the update for months, and we did get some hints along the way, just not much.

  • zmkart@lemmy.world · 3 hours ago

    Have to agree with you… I’d happily pay for the old version… I used to come on here every day to create… now I only come on to read the comments… alas, people very rarely (if ever) listen to the “customer”, they just plough on regardless, afraid of admitting to a mistake.

  • 𞋴𝛂𝛋𝛆@lemmy.world · 8 hours ago

    It likely isn’t the model itself. They probably updated the software running the model, most likely llama.cpp, and that software has had breaking changes. The biggest one is probably in how the sampling (softmax) settings were altered to add better repetition avoidance. That cuts down on repeated tokens and on some of the patterns that used to emerge over longer contexts.

    If you have access to the sampling settings, like temperature and the token-probability cutoffs (top-p/top-k), try changing them until you find something you like again. These settings often need to be altered drastically to get back to something you can tolerate, but the patterns will never be exactly the same.
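
    Concretely, here is a minimal sketch of what I mean, assuming you can reach the model through the llama-cpp-python bindings; the model path, prompt, and values are placeholders, not the generator’s actual config:

        # pip install llama-cpp-python
        from llama_cpp import Llama

        # Load the model once; the seed fixes the RNG for this session.
        llm = Llama(model_path="./model.gguf", seed=42)

        # Sweep the sampling ("softmax") knobs until the output feels right.
        out = llm(
            "a guy in a blue jumper, red jeans, and purple hair",
            max_tokens=128,
            temperature=0.7,     # lower = more deterministic
            top_p=0.9,           # probability-mass cutoff
            top_k=40,            # candidate-token cutoff
            repeat_penalty=1.1,  # the newer repetition-avoidance knob
        )
        print(out["choices"][0]["text"])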

    In my experience, the newer versions of llama.cpp are much more sensitive to the first few tokens. They also do not resume old conversations the same way because of how the state is cached, so try rebuilding your prompt from scratch every time. Alternatively, pin the starting seed so every run begins from the same state, instead of relying on whatever seed each generative query picks up.
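
    Same idea in code, again just a sketch with placeholder names: fix the seed up front and rebuild the context from scratch on every run instead of resuming a cached conversation.

        from llama_cpp import Llama

        PROMPT = "a guy in a blue jumper, red jeans, and purple hair"

        def generate_fresh(seed: int = 42) -> str:
            # A new object per run means nothing is carried over from
            # earlier queries, and the fixed seed keeps runs reproducible.
            llm = Llama(model_path="./model.gguf", seed=seed)
            out = llm(PROMPT, max_tokens=128, temperature=0.7)
            return out["choices"][0]["text"]

        # Two fresh runs with the same seed should land very close together.
        print(generate_fresh())
        print(generate_fresh())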