Hey people of Perchance and to whoever developed this generator,
I know people keep saying, “The new model is better, just move on,” but I need to say something clearly and honestly: I loved the old model.
The old model was consistent.
If I described a character — like a guy in a blue jumper, red jeans, and purple hair — the old model actually gave me that. It might sound ridiculous, but at least I could trust it to follow the prompt. When I used things like double brackets ((like this)), the model respected my input.
And when I asked for 200 images, the results looked like the same character across the whole batch. It was amazing for making characters, building stories, and exploring different poses or angles. The style was consistent. That mattered to me. That was freedom.
Now with the new model, I try to recreate those characters I used to love and they just don’t look right anymore. The prompts don’t land. The consistency is gone. The faces change, the outfits get altered, and it often feels like the model is doing its own thing no matter what I ask.
I get that the new model might be more advanced technically — smoother lines, better faces, fewer mistakes. But better in one way doesn’t mean better for everyone. Especially not for those of us who care about creative control and character accuracy. Sometimes the older tool fits the job better.
That’s why I’m asking for one thing, and I know I’m not alone here:
Let us choose. Bring back the old model or give us the option to toggle between the old and the new. Keep both. Don’t just replace something people loved.
I’ve seen a lot of people online saying the same thing. People who make comics, visual novels, storyboards, or just love creating characters — we lost something when the old model was removed. The new one might look nice, but it doesn’t offer the same creative control.
This isn’t about resisting change. This is about preserving what worked and giving users a real choice. You made a powerful tool. Let us keep using it the way we loved.
Thanks for reading this. I say it with full respect. Please bring the old model back — or at least give us a way to use it again.
please
It likely isn’t the model itself. They probably updated the software running the model (likely llama.cpp), and that software has had breaking changes between versions. The biggest change is probably in the sampling/softmax settings, which were altered to do better repetition avoidance. That suppresses certain tokens and some of the patterns that used to emerge over longer contexts.
If you have access to the sampling settings, like temperature and token probabilities, try changing them until you find something you like again. These settings often need to be drastically altered before the output is tolerable again, and even then the patterns will never be exactly the same.
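For anyone wondering what "softmax settings like temperature" actually do, here's a rough sketch in plain Python. This is not the generator's or llama.cpp's actual code, just an illustration of temperature scaling plus one common repetition-penalty scheme, so you can see why changing these settings changes which tokens get picked:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw model scores (logits) into probabilities.
    Lower temperature sharpens the distribution (top token dominates);
    higher temperature flattens it (more variety, less consistency)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def apply_repetition_penalty(logits, recent_token_ids, penalty=1.1):
    """One common penalty scheme: down-weight tokens seen recently.
    (Illustrative only; real samplers have several variants of this.)"""
    out = list(logits)
    for t in set(recent_token_ids):
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out

logits = [2.0, 1.0, 0.1]
normal = softmax_with_temperature(logits, 1.0)  # moderately peaked
sharp = softmax_with_temperature(logits, 0.5)   # top token dominates more
flat = softmax_with_temperature(logits, 2.0)    # spread out, more random
```

The point: the same model with a different temperature or penalty produces noticeably different output, which is why tweaking these can recover *some* of the old behaviour, but never an exact match.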
In my experience, the newer versions of llama.cpp are much more sensitive to the first few tokens. They also don’t resume old conversations in the same way because of how the context is cached. Try building everything from scratch each time. Alternatively, set a fixed starting seed so you get similar results on every run, instead of a fresh seed for each generation.
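To make the seed point concrete, here's a tiny stand-in sampler (hypothetical function, not the real pipeline) showing why a fixed seed makes runs repeatable while a per-query seed doesn't:

```python
import random

def fake_sample(prompt, seed=None):
    """Stand-in for a sampler: every 'random' choice comes from an RNG
    seeded up front, so the same prompt + seed repeats exactly.
    (Illustrative only; real pipelines seed their RNGs similarly.)"""
    rng = random.Random(seed)
    return [rng.randrange(50000) for _ in prompt.split()]

run1 = fake_sample("guy in a blue jumper, red jeans, purple hair", seed=42)
run2 = fake_sample("guy in a blue jumper, red jeans, purple hair", seed=42)
run3 = fake_sample("guy in a blue jumper, red jeans, purple hair", seed=7)
# run1 == run2 (same seed), while run3 will almost certainly differ
```

That's the "set the starting seed" advice in a nutshell: fix the seed once and reuse it, rather than letting each query roll a new one.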
Appreciate the technical insight — I think you’re half right, but still missing the core issue.
Yeah, I get that it might not just be the model itself — changes in things like llama.cpp, token handling, softmax behavior, and temperature tuning could totally affect how the model generates images or text. I’m not saying you’re wrong on that.
But even with tweaking — temperature, repetition penalties, seed control, all of that — what I’m saying is that the feel and functionality of the old model is still missing. Even with the same prompt and same seed, the new system doesn’t give me the same results in terms of styling, framing, and consistency across batches. It’s like asking for a toolbox and getting a magic wand — powerful, but unpredictable.
I’m not trying to get exact copies of old patterns — I just want the same level of control and stability I had before. I’ve already tried building from scratch, resetting seed behavior, prompt front-loading, etc. It still doesn’t replicate the experience the old model gave me.
So again — I’m not dismissing the technical updates. But for people like me who rely on visual consistency for characters across dozens of images, the user-facing behavior changed in a way that broke that workflow. That’s what I’m asking to have restored — whether through old model access or a toggle that emulates the old behavior.