Hey people of Perchance and to whoever developed this generator,
I know people keep saying, “The new model is better, just move on,” but I need to say something clearly and honestly: I loved the old model.
The old model was consistent.
If I described a character — like a guy in a blue jumper, red jeans, and purple hair — the old model actually gave me that. It might sound ridiculous, but at least I could trust it to follow the prompt. When I used things like double brackets ((like this)), the model respected my input.
And when I asked for 200 images, the results looked like the same character across the whole batch. It was amazing for making characters, building stories, and exploring different poses or angles. The style was consistent. That mattered to me. That was freedom.
Now with the new model, I try to recreate those characters I used to love and they just don’t look right anymore. The prompts don’t land. The consistency is gone. The faces change, the outfits get altered, and it often feels like the model is doing its own thing no matter what I ask.
I get that the new model might be more advanced technically — smoother lines, better faces, fewer mistakes. But better in one way doesn’t mean better for everyone. Especially not for those of us who care about creative control and character accuracy. Sometimes the older tool fits the job better.
That’s why I’m asking for one thing, and I know I’m not alone here:
Let us choose. Bring back the old model or give us the option to toggle between the old and the new. Keep both. Don’t just replace something people loved.
I’ve seen a lot of people online saying the same thing. People who make comics, visual novels, storyboards, or just love creating characters — we lost something when the old model was removed. The new one might look nice, but it doesn’t offer the same creative control.
This isn’t about resisting change. This is about preserving what worked and giving users a real choice. You made a powerful tool. Let us keep using it the way we loved.
Thanks for reading this. I say it with full respect. Please bring the old model back — or at least give us a way to use it again.
please
This is the very last image I created with the old model. Good-ole-classy Perchance
Nah, you are absolutely wrong. The new model is a hundred times better than the previous one. It just needs training on more data. Or they could replace it with SDXL. I wouldn’t mind paying for SDXL, but the previous SD1.5 was very bad. I would agree if you said some art styles were masterpieces, but we can get those art styles in this model too once the training finishes, so be patient.
The old model was consistent.
If I described a character — like a guy in a blue jumper, red jeans, and purple hair — the old model actually gave me that. It might sound ridiculous, but at least I could trust it to follow the prompt.
When I used things like double brackets ((like this)), the model respected my input.
Well, that was SD syntax, while the new model is Flux. It requires different prompting and doesn’t accept the same syntax, from what people have tested. Some have had success reinforcing desired aspects with more adjectives, or even repeating specific parts of the prompt.
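For example (just an illustration of the difference, not something I’ve verified word for word on this generator): where you might have written
a guy, ((purple hair)), blue jumper, red jeans
for the old SD model, a Flux-style prompt tends to work better as plain description, like
a guy with bright purple hair, wearing a blue jumper and red jeans, his hair is clearly purple
so instead of bracket weights, you repeat or rephrase the detail you care about.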
Now with the new model, I try to recreate those characters I used to love and they just don’t look right anymore. The prompts don’t land. The consistency is gone. The faces change, the outfits get altered, and it often feels like the model is doing its own thing no matter what I ask.
As I explained in another thread, you can use the seed system to preserve some details of the image while changing others: https://lemmy.world/post/30084425/17214873
With a fixed seed, notice that the pose and general details remain. One of them had glasses on while others were clean-shaven, but the prompt wasn’t very descriptive about the face.
If I keep the same seed, but change a detail in the prompt, it preserves a lot of what was there before:
a guy in a blue jumper, red jeans, and purple hair, he is wearing dark sunglasses (seed:::1067698885)
Even then, the result will try to be what you describe. You can be as detailed as you want with the face. On that thread I showed that you can still get similar faces if you describe them.
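Something like this (a made-up example along the same lines, I haven’t generated this exact one):
a guy in a blue jumper, red jeans, and purple hair, sharp jawline, short stubble, narrow green eyes (seed:::1067698885)
Keeping the seed and spelling out the facial details tends to keep the character recognisable from one generation to the next.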
Let us choose. Bring back the old model or give us the option to toggle between the old and the new. Keep both. Don’t just replace something people loved.
Keeping two models hosted at once would very likely involve additional costs. While it might be possible, it seems unlikely for that reason.
I’ve seen a lot of people online saying the same thing. People who make comics, visual novels, storyboards, or just love creating characters — we lost something when the old model was removed. The new one might look nice, but it doesn’t offer the same creative control.
On the Discord server, I’ve seen people create all of these. A lot of it is a matter of prompting. People on the Discord are very helpful and quite active at experimenting with styles, seeds, and prompts, and I’ve had a lot of help getting good results there.
With the new model, everyone started on the same footing. We don’t know the new best prompting practices yet, but people are experimenting, and many have managed to recreate images they made before.
Just a frustrated opinion. I agree that I would pay for the old version. This thing went from fantastic to absolutely awful.
I’m guessing it cost money to make these changes, BUT if I had a website that users absolutely loved, then changed it and “most” users hated those changes, I’d change it back in an instant… “the customer is always right” (D-Fens).
The way the devs blatantly replaced the old model overnight, like it was worthless or something, shows how little they care about their own community
tbh that’s both right and wrong. The dev had been working on the update for months, and we did get some hints, but not much.
Have to agree with you… I’d happily pay for the old version… I used to come on here every day to create… now I only come on to read the comments… alas, people very rarely (if ever) listen to the “customer”, they just plough on regardless, afraid of admitting a mistake.
It likely isn’t the model itself. They probably updated the software running the model, likely llama.cpp. There have been breaking changes in that software. The biggest change is likely in how the softmax settings were altered to add better repetition avoidance, which reduces the total tokens and some of the patterns that emerge over longer contexts.
If you have access to some of the softmax settings, like temperature and token probabilities, try changing these until you find something you like again. These settings often need to be drastically altered to get back to something you can tolerate, but the patterns will never be exactly the same.
In my experience, the newer versions of llama.cpp are much more sensitive to the first few tokens. They also do not resume old conversations in the same way, due to how things are cached. Try building things from scratch every time. Alternatively, set a fixed starting seed to get similar results every run, rather than relying on the seed of each individual generation query.
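If you’re running something like this yourself, here’s a rough sketch of the knobs I mean, using the llama-cpp-python bindings as an example (the model path and the exact values are just placeholders, your setup will differ):

from llama_cpp import Llama

# Fix the starting seed so runs are reproducible
llm = Llama(model_path="model.gguf", seed=1234, n_ctx=2048)

# Sampling settings: lower temperature is more deterministic,
# repeat_penalty discourages the looping patterns mentioned above
out = llm(
    "Once upon a time",
    max_tokens=256,
    temperature=0.6,
    top_p=0.9,
    top_k=40,
    repeat_penalty=1.15,
)
print(out["choices"][0]["text"])

Drastically changing temperature and the top_p/top_k cut-offs is usually what it takes to shift the output back toward something you can tolerate.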