

Yes… I tried some self-hosted models (via ComfyUI), but I'm missing some secret sauce. The results are far below perchance.org's T2I results regarding… well… the "overall feeling". Most of them are plain and simply bad at quality, composition, and consistency. Z-Image-Turbo was the only model that performed well (to a certain extent), but its training material seems to have been heavily filtered for "all good-looking and shiny things, and especially people". It's quite incapable of generating more down-to-earth or slightly ugly stuff. perchance.org seems less biased here, which results in a wider variety of "person types" you can achieve. If the devs would disclose what exactly they do to modify or enhance a model, that would be quite interesting.

A small peek behind the scenes provided by the devs would be awesome. I guess there is something to be learned :-)…