“It’s like AI gacha for 3D devs” 
translation: The stupid bot doesn’t do what you want it to do most of the time
Such efficiency! Woaw 
Also of course the example image is a scantily clad young lady
yeah and then import the model into a game engine and have it explode because you have 5 gorillion quads and the topology is shit
Can’t forget rigging and those details that matter.
Also prompting for a model is dumb. Taking 2D images and trying to generate a shitty 3D model from them is more useful.
I hate the whole “AI placeholder” thing, as those placeholders are only useful for showing things to dumb fuckers with no imagination. Like Disney executives.
placeholder art SHOULD look like shit, so you don’t forget to replace it! using ai to make something that doesn’t look immediately obviously like a placeholder is defeating the purpose entirely
Hot pink default texture so you don’t miss it during QA. The entire point of AI placeholder assets is that they already plan to ship them like that, depending on how things shake out.
Thanks, this makes me feel a lot better about all the placeholder art I use in things. I’m always worried it looks really bad, but that’s the point, so I don’t just leave it there.
Scumbags like good AI placeholders. Like the Clair Obscur team, who, whoops, had a bunch of AI art in the game on day 1. Which they then claimed was placeholder art they’d missed.
And the reason I mention Disney execs is that they want to look at final pass quality stuff during early phases in games and movies.
They want ‘vertical slices’ which… are generally a huge waste of time, and the number of slices wanted for internal demos keeps going up.
> only useful for showing things to dumb fuckers with no imagination. Like Disney executives.
So what you’re saying is that AI is actually useful? Just ship it, we don’t have time for this “creativity” nonsense.

That’s their intended goal. “The boss thinks it’s good enough”.
I think it’s for the better if those “Give me the Epstein special, but make it look legal” models explode your engine. Nothing good could come out of that
Introducing promptAI~🌟 Don’t know what to prompt? Have no ideas of your own? Forgot how to read and write? Don’t worry! Just put your thumb print here, say “I consent to everything”, and click!
😔😮💨❓️👌💪🤖🔴❗️🎊🤯😻
I’ve seen plenty of “pay for my course and I’ll teach you the way to make the perfect prompt!” kind of grifters, but that really is the next step of this sort of thing.
learned helplessness as a service
Like if you’re farting around with ai for fun, sure whatever, it’s all good (minus the environmental cost unless you’re running local models on your hardware). But if you’re vibe coding and making people’s lives worse by using duct taped together slop then no, fuck you.
> (minus the environmental cost unless you’re running local models on your hardware)
This is Deepseek erasure
if we’re blowing up copyright as we should anyway why not just rip a model from a game and change it up a little like when you copy homework?
But where’s the avenue for rent seeking in that
Also the users can pretend to be creative and stick it to real artists at the same time
From everything I’ve seen (not done it myself either, for reference), ripping models from some games seems about 200x harder than creating a similar model from scratch.
And modelling looks like hell itself to me.
I recently started learning. It’s actually not too bad. I was dreading it, but have actually really enjoyed learning
depends on the game really. sometimes the models are in 100 pieces and you have to arrange them manually to re-rig, sometimes they’re not a mess and it’s much much easier than making something from scratch.
> depends on the game
afaic prompting is like tea leaf reading, you can write six ways from sunday “blue shirt” “(blue shirt)” “((blue shirt))” “((blue shirt:1.6))” and the sd model will be like “ah yeah red shirt coming right up fam!” People that claim to be “prompt engineers” are blowing smoke up your ass, it’s a crapshoot guessing game.
> afaic prompting is like tea leaf reading, you can write six ways from sunday “blue shirt” “(blue shirt)” “((blue shirt))” “((blue shirt:1.6))” and the sd model will be like “ah yeah red shirt coming right up fam!”
Mostly, yeah, although some of that is UI frontend formatting. For certain frontends and model types, something like “(tag:2)” increases the weight of the tag during the turn-the-text-into-usable-numbers stage, and it only does anything if that’s actually a tag the model or LoRA was trained with. It had some limited ability to nudge SD1.5- or SDXL-based models towards or away from a concept, but there’s always so much random noise and incoherence that getting the shitty gacha to churn out a desired result means lots and lots of rerolling and poking at the prompt, and it still never does a good job.
Modern Qwen-based natural-language prompt models are literally just: you describe something in as much detail as possible, and the image model gives you something that’s still dogshit and still randomly broken, but a little closer to what it was told than the older ones got.
There’s no secret to it, and even at its most esoteric it was less complicated than the markup formatting used in reddit or lemmy posts lmao.
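For the curious, the “(tag:weight)” convention being argued about above can be sketched as a toy parser. This is a rough sketch assuming the common A1111-style rules (each wrapping paren multiplies attention by 1.1, an explicit “:1.6” sets the weight outright); real frontends also handle square brackets, escapes, and arbitrary nesting, which this deliberately skips.

```python
import re

def parse_weight(fragment: str):
    """Toy parser for A1111-style prompt emphasis (sketch, not the real thing).

    Assumed rules: each wrapping "(...)" multiplies the attention weight
    by 1.1, and an explicit "(tag:1.6)" sets the weight directly.
    """
    # Strip nested parens, counting the depth as we go.
    depth = 0
    while fragment.startswith("(") and fragment.endswith(")"):
        fragment = fragment[1:-1]
        depth += 1
    # An explicit ":weight" suffix overrides the paren multiplier.
    m = re.fullmatch(r"(.+):([\d.]+)", fragment)
    if m:
        return m.group(1), float(m.group(2))
    return fragment, round(1.1 ** depth, 4)

print(parse_weight("blue shirt"))          # ('blue shirt', 1.0)
print(parse_weight("((blue shirt))"))      # ('blue shirt', 1.21)
print(parse_weight("((blue shirt:1.6))"))  # ('blue shirt', 1.6)
```

Which is exactly the point made above: the “esoteric” syntax boils down to a couple of regex-sized rules, and none of it helps when the model decides the shirt is red anyway.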
> translation: The stupid bot doesn’t do what you want it to do most of the time
It’s the perfect comparison I never knew I was missing. Hit the button, 99.999% of the time you’ll get shit.
every new industry in america for the past 10 years has been powered by the spirit of just giving up
the ultimate goal of work under capitalism is to escape it
are they deliberately adding “she must have scoliosis” into the prompt
The training data ensures that anyway.

Part of me hopes this takes off, since it’s more work for those who know rigging/3D modeling, and I don’t mind editing work.