“It’s like AI gacha for 3D devs” cringe

translation: The stupid bot doesn’t do what you want it to do most of the time

Such efficiency! Woaw so-true

Also of course the example image is a scantily clad young lady

    • Deadend [he/him]@hexbear.net · 30 points · 17 days ago

      Can’t forget rigging and those details that matter.

      Also prompting for a model is dumb. Taking 2D images and trying to generate a shitty 3D model is more useful.

      I hate the whole “AI placeholder” thing, as those placeholders are only useful for showing things to dumb fuckers with no imagination. Like Disney executives.

      • NephewAlphaBravo [he/him]@hexbear.net · 28 points · 17 days ago

        placeholder art SHOULD look like shit, so you don’t forget to replace it! using ai to make something that doesn’t look immediately obviously like a placeholder is defeating the purpose entirely

        • chgxvjh [he/him, comrade/them]@hexbear.net · 25 points · 17 days ago

          Hot pink default texture so you don’t miss it during QA. The entire point of AI placeholder assets is that they already plan to ship them like that, depending on how things shake out.

        • Deadend [he/him]@hexbear.net · 2 points · 14 days ago

          Scumbags like good AI placeholders. Like the Clair Obscur team who, whoops, had a bunch of AI art on day 1, which they then said was placeholder art they’d missed.

          And the reason I mention Disney execs is that they want to look at final pass quality stuff during early phases in games and movies.

          They want ‘vertical slices’ which… are generally a huge waste of time, and the number of slices wanted for internal demos keeps going up.

    • SorosFootSoldier [he/him, they/them]@hexbear.net · 24 points · 17 days ago

      Like if you’re farting around with ai for fun, sure whatever, it’s all good (minus the environmental cost unless you’re running local models on your hardware). But if you’re vibe coding and making people’s lives worse by using duct taped together slop then no, fuck you.

  • SorosFootSoldier [he/him, they/them]@hexbear.net · 21 points · 17 days ago

    afaic prompting is like tea leaf reading, you can write six ways from sunday “blue shirt” “(blue shirt)” “((blue shirt))” “((blue shirt:1.6))” and the sd model will be like “ah yeah red shirt coming right up fam!” People that claim to be “prompt engineers” are blowing smoke up your ass, it’s a crapshoot guessing game.

    • KobaCumTribute [she/her]@hexbear.net · 7 points · 17 days ago

      afaic prompting is like tea leaf reading, you can write six ways from sunday “blue shirt” “(blue shirt)” “((blue shirt))” “((blue shirt:1.6))” and the sd model will be like “ah yeah red shirt coming right up fam!”

      Mostly, yeah, although some of that is UI frontend formatting. For certain frontends and model types, something like “(tag:2)” increases the weight of the tag during the text-encoding stage (turning the text into usable numbers), and it only did anything if that was actually a tag the model or LoRA was trained on. It had some limited ability to push SD1.5- or SDXL-based models more or less towards a concept, but there’s always so much random noise and incoherence that actually making the shitty gacha churn out a desired result means lots and lots of rerolling and poking at the prompt, and it never actually does a good job.
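      For what it’s worth, the weight syntax is simple enough to sketch as a toy parser. A minimal, hypothetical sketch below, assuming the common frontend convention where each layer of parentheses multiplies a tag’s weight by 1.1 and an explicit “tag:1.6” sets it directly (the function name is illustrative, not any real frontend’s API):

```python
import re

def parse_weight(fragment: str) -> tuple[str, float]:
    """Toy parser for SD-style emphasis syntax (assumed convention:
    each "( )" layer multiplies weight by 1.1; "tag:N" sets it)."""
    weight = 1.0
    # Peel off nested parentheses, multiplying by 1.1 per layer.
    while fragment.startswith("(") and fragment.endswith(")"):
        fragment = fragment[1:-1]
        weight *= 1.1
    # An explicit ":number" suffix overrides the paren multiplier.
    m = re.fullmatch(r"(.*):([\d.]+)", fragment)
    if m:
        return m.group(1), float(m.group(2))
    return fragment, round(weight, 2)

print(parse_weight("blue shirt"))        # ('blue shirt', 1.0)
print(parse_weight("((blue shirt))"))    # ('blue shirt', 1.21)
print(parse_weight("(blue shirt:1.6)"))  # ('blue shirt', 1.6)
```

      None of that changes the point: the weight only nudges the text encoder, and the model is free to ignore it.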

      Modern Qwen-based natural-language prompt models are literally just: you describe something in as much detail as possible, and then the image model gives you something that’s still dogshit and still randomly broken, but a little closer to what it was told than the older ones managed.

      There’s no secret to it, and even at its most esoteric it was less complicated than the markup formatting used in reddit or lemmy posts lmao.