Where is the “negative prompt” in the ai-text-to-image-generator? I’ve just noticed it disappeared.
Just add your negative prompt after your positive prompt inside parentheses, like this: (negativePrompt:::ugly, blurry, bad anatomy).
Example:
a beautiful landscape(negativePrompt:::low quality, deformed)(guidanceScale:::11)(resolution:::512x768)
This helps keep all your prompt info in one place.
Technically, negatives should go in first, because negation is supposed to preload before the rest of the prompt. I'm working on advanced negation logic within my formula block at the moment. Gemini and a few others tipped me off that the archaic F fire trigger/reinitialization functions will come in handy for Klein; it's been a while since I leveraged them. Essentially, because Klein runs so fast, it makes data tabling sort of powerful as far as super distillation is concerned. It's hard to explain why reinitialization operations are of much help here, maybe because you can basically run the data into post and then re-fire it. Although, what makes Klein great for this also makes it, at the moment, the majority of tactical formula compartmentalization phasing: all formula formatting considered tactical, like that originally previewed and concepted with TF∅X formula formatting/&series, is basically extremely touchy, if not completely broken for the most part, and if it's just broken, it's because Klein has the heaviest load-specified sampling subroutines ever conceived. It basically has some low-level compartmentalization of its raw data tables, on top of already having specific prior Flux.1 sampling auto-blending over autonomous blending routines.
Devs switched to a new model. To save $$$ it does things differently to use fewer resources. Images are now completely hit or miss, so you have to generate 10x more images to get usable results. Is resource use ACTUALLY being reduced??? Hard to say when I have to generate SO many more images because this one ignores so MUCH prompt data. Seems like "cut off your nose to spite your face."
You mean yesterday?? My images are looking very weird now.
Flux Schnell and Dev both don't take separate negative prompts anyway, unlike SD.
We are now using Klein, which is similar to Schnell and built on top of Flux.1 networking ("Flux.2 modeling & super distillation models").
A post on this already exists! Please 🙏 look around at other posts before making new ones. Visit this link to find it and more information on Perchance T2I & prompting (Dev Notes, etc.): https://lemmy.world/post/43127973
Gone, because the negative prompt is not working.
That's true! Since nat lang text encoders are more complex, the negative of the stuff being encoded is rarely the opposite.
As in, the model itself (Chroma on Perchance) isn't trained to comprehend negative vectors. Though Lodestones (creator of Chroma) never specified, I assume not.
Negatives are an offshoot in training that sort of worked in CLIP-based models (SD1.5, SDXL (Pony / Illustrious)) and carried over into the nat lang model releases out of habit in the community.
It worked in CLIP models because the CLIP encoder is simple. Write 'ice cream' in CLIP and the text encoding vector will point in roughly the same direction no matter where 'ice cream' sits in the 75-token batch.
Compare that to the many different answers you can get from ChatGPT or Grok containing the word 'ice cream', and you can see how the 512-token batch encoding of the T5 in Chroma, or the Qwen encoder in Klein / Z Image, varies drastically depending on how the common words are arranged in the text.
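The contrast above can be sketched with a toy example. This is not the real CLIP, T5, or Qwen encoder, just random made-up embeddings: an order-insensitive pooled encoder gives the exact same vector for the same words regardless of their order, while even a minimal "contextual" encoder that mixes each token with its neighbors produces a different vector when the word order changes.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"a": 0, "bowl": 1, "of": 2, "ice": 3, "cream": 4, "melting": 5}
E = rng.normal(size=(len(vocab), 8))  # toy token-embedding table

def bag_encode(tokens):
    # CLIP-like toy: order-insensitive mean pooling ->
    # same words always give the same vector
    return E[[vocab[t] for t in tokens]].mean(axis=0)

def contextual_encode(tokens):
    # T5/Qwen-like toy: each token's vector is blended with the
    # running context, so word order changes the pooled result
    vecs = E[[vocab[t] for t in tokens]]
    mixed = vecs.copy()
    for i in range(1, len(vecs)):
        mixed[i] = 0.5 * vecs[i] + 0.5 * mixed[i - 1]
    return mixed.mean(axis=0)

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Reorder the same two words: the bag encoder is unchanged (cosine = 1),
# the contextual encoder drifts to a noticeably different direction.
print(cos(bag_encode(["ice", "cream"]), bag_encode(["cream", "ice"])))
print(cos(contextual_encode(["ice", "cream"]), contextual_encode(["cream", "ice"])))
```

The same intuition explains why a negative phrase that worked as a stable "direction to subtract" in CLIP stops being one number-for-number reusable vector once the encoder is context-sensitive.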
Quite sad… It's harder than ever to create something that doesn't look like either a loli or a bimbo, and both are often naked or nearly so…
deleted by creator





