Where is the “negative prompt” in the ai-text-to-image-generator? I’ve just noticed it disappeared.

  • Hexagonal_Druid@lemmy.world · 4 days ago

    Just add your negative prompt after your positive prompt inside parentheses, like this: (negativePrompt:::ugly, blurry, bad anatomy).

    Example :

    a beautiful landscape(negativePrompt:::low quality, deformed)(guidanceScale:::11)(resolution:::512x768)

    This helps keep all your prompt info in one place.
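    If you build prompts in a script, the same inline-directive style can be composed programmatically. A quick sketch, assuming only the directive names shown in the example above (the helper function itself is just my illustration, not part of the generator):

```python
# Sketch of composing a prompt with inline (key:::value) directives,
# following the format from the example above. The directive names
# negativePrompt, guidanceScale, and resolution come from that example;
# the build_prompt helper is illustrative only.

def build_prompt(positive, negative=None, guidance_scale=None, resolution=None):
    """Append (key:::value) directives after the positive prompt."""
    parts = [positive]
    if negative:
        parts.append(f"(negativePrompt:::{negative})")
    if guidance_scale is not None:
        parts.append(f"(guidanceScale:::{guidance_scale})")
    if resolution:
        parts.append(f"(resolution:::{resolution})")
    return "".join(parts)

print(build_prompt(
    "a beautiful landscape",
    negative="low quality, deformed",
    guidance_scale=11,
    resolution="512x768",
))
# a beautiful landscape(negativePrompt:::low quality, deformed)(guidanceScale:::11)(resolution:::512x768)
```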

    • RR∆S®MinoriMirari®.Prod@lemmy.world (banned from community) · 3 days ago

      Technically, negatives should go in first, because negation is supposed to preload before the rest of the prompt… I'm working on advanced negation logic within my formula block at the moment. Gemini and a few others tipped me off that the archaic F fire trigger/reinitialization functions will come in handy for Klein; it's been a while since I leveraged them, essentially because Klein runs so fast it makes data tabling sort of powerful as far as super distillation is concerned anyway… Hard to explain why reinitialization operations are of much help here, maybe because you can basically run the data into post and then re-fire it, I guess… Although what makes Klein great for this also affects the majority of tactical formula compartmentalization decapartum phasing: "all formula formatting considered tactical like that originally previewed and concepted with TF∅X formula formatting/&series is basically extremely touchy, if not completely broken for the most part; if it's just broken AF, it's because Klein has the heaviest load-specified sampling subroutines ever conceived." It basically has some low-level compartmentalization of its raw data tables, on top of already having specific prior Flux.1 sampling auto-blending over autonomous blending routines.

  • DBaluchi@lemmy.world · 5 days ago

    The devs switched to a new model. To save $$$, it does things differently to use fewer resources. Images are now completely hit or miss, so you have to generate 10x more images to get usable results. Is resource use ACTUALLY being reduced? Hard to say when I have to generate SO many more images because this one ignores much of the prompt data. Seems like cutting off your nose to spite your face.

  • superuser9@lemmy.world · edited · 6 days ago

    Flux Schnell and Dev both don't take a separate negative prompt anyway, unlike SD.

    • RR∆S®MinoriMirari®.Prod@lemmy.world (banned from community) · edited · 3 days ago

      We are now using Klein, which is similar to Schnell and built on top of Flux.1 networking… "Flux.2 modeling & super distillation models."

    • RandomPerchanceUser@lemmy.world · 4 days ago

      That's true! Since natural-language text encoders are more complex, the negative of the stuff being encoded is rarely the opposite.

      As in, the model itself (Chroma on Perchance) isn't trained to comprehend negative vectors. Though lodestone (the creator of Chroma) never specified, I assume not.

      Negatives are an offshoot of training that sort of worked in CLIP-based models (SD1.5, SDXL, Pony/Illustrious) and carried over into the natural-language model releases out of habit in the community.
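      For context, here's a toy sketch of the usual mechanism behind that offshoot in SD-style samplers (my own illustration, not Perchance or Chroma code): classifier-free guidance feeds the negative prompt into the "unconditional" branch, so each denoising step pushes away from it.

```python
import numpy as np

# Toy sketch of classifier-free guidance with a negative prompt
# (illustration only, not actual SD/Chroma code): the negative prompt's
# noise prediction stands in for the unconditional branch, so the guided
# estimate moves away from whatever the negative prompt describes.

def cfg_noise(noise_pos, noise_neg, guidance_scale):
    # Start from the negative-conditioned prediction and move
    # guidance_scale times the difference toward the positive one.
    return noise_neg + guidance_scale * (noise_pos - noise_neg)

noise_pos = np.array([1.0, 0.0])  # predicted noise given the positive prompt
noise_neg = np.array([0.0, 1.0])  # predicted noise given the negative prompt
print(cfg_noise(noise_pos, noise_neg, 7.5))  # [ 7.5 -6.5]
```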

      It worked in CLIP models because the CLIP encoder is simple. Write 'ice cream' in CLIP and the text encoding vector will point in roughly the same direction no matter where 'ice cream' sits in the 75-token batch.

      Compare that to the many different answers you can get from ChatGPT or Grok containing the word 'ice cream', and you can see how the 512-token batch encoding of the T5 in Chroma, or the Qwen encoder in Klein / Z-Image, varies drastically depending on how common words are arranged in the text.
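      That contrast can be shown with a toy example (mine, not real CLIP/T5 code): a bag-of-words-style encoder gives 'ice cream' the same contribution wherever it appears, while an order-sensitive encoder does not.

```python
# Toy contrast between a position-independent encoder (crude stand-in
# for CLIP's order-insensitive pooling) and a position-sensitive one
# (crude stand-in for a contextual encoder like T5 or Qwen).

def word_code(word):
    # Deterministic per-word code: sum of character codes.
    return sum(ord(c) for c in word)

def bag_encode(text):
    # Position-independent: same words give the same encoding,
    # regardless of word order.
    return sum(word_code(w) for w in text.split())

def ordered_encode(text):
    # Position-sensitive: each word is weighted by where it appears.
    return sum(i * word_code(w) for i, w in enumerate(text.split(), start=1))

a = "ice cream melting on a cone"
b = "on a cone melting ice cream"

print(bag_encode(a) == bag_encode(b))          # True: word order is invisible
print(ordered_encode(a) == ordered_encode(b))  # False: word order changes the encoding
```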

    • Crimson_Frost@lemmy.world · 7 days ago

      Quite sad… It's getting harder than ever to create something that doesn't look like either a loli or a bimbo, and both are often naked or nearly so…