I do agree with your “averaging machine” argument. It makes a lot of sense given how LLMs are trained as essentially massive statistical models.
Your conjecture that bad writing is due to roleplaying on the early internet is a bit more… speculative. Without any numbers comparing writing trends over time, I don’t think one can draw such a conclusion.
Large Discord groups and forums are still the proving ground for new, young writers trying to get started crafting their prose, and I have watched that scene for over 30 years. It has changed, dramatically, and I would be remiss to claim I have no idea where the change came from when I’ve also seen the patterns.
Yes, it’s entirely anecdotal; I have no intention of making a scientific argument. But I’m also not the only one worried about the influence of LLMs on creators. It’s already butchering the traditional artistic world, for the very basic reason that 14-year-old Mindy McCallister, who has a crush on werewolves, would at one time have taught herself to draw terrible, atrocious furry art on lined notebook paper, complete with hearts and a self-insert of herself in a wedding dress. That’s where we all get started with art, drawing, and digital art (not specifically werewolf romance, but you get the idea): making bad work, then refining the craft and getting better and better at self-expression. Now there’s a shortcut that skips ALL of that process and just generates your snarling lupine BF for you within seconds. Setting aside the controversy over whether it’s real art or not, what it’s doing is taking the formative process away from millions of potential artists.
For image generation models, I think a good analogy is that it’s not drawing but sculpting: it starts with a big block of white noise and takes away all the parts that don’t look like the prompt. Iterate a few times until the result is mostly stable, that is, until it can’t make the image look much more like the prompt than it already does. That’s why you can get radically different images from the same prompt: the starting block of white noise is different, so which parts of that noise look most prompt-like, and therefore get emphasized, will be different.
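To make the “start from noise, iterate toward the prompt” picture concrete, here’s a toy Python sketch. It is not a real diffusion model: guidance_step, prompt_target, the fixed step count, and the step strength are all made up for illustration. It only shows the mechanism the analogy describes, i.e. that different starting blocks of noise nudged toward the same target still end up different.

```python
import numpy as np

def guidance_step(image, prompt_target, strength=0.1):
    # Stand-in for the model's "remove what doesn't look like the prompt" step:
    # move the current image a small fraction of the way toward the target.
    return image + strength * (prompt_target - image)

def generate(prompt_target, seed, n_steps=20):
    rng = np.random.default_rng(seed)
    image = rng.normal(size=prompt_target.shape)  # the starting block of white noise
    for _ in range(n_steps):
        image = guidance_step(image, prompt_target)
    return image

prompt_target = np.zeros((8, 8))   # pretend this encodes the prompt
a = generate(prompt_target, seed=1)
b = generate(prompt_target, seed=2)
print(np.abs(a - b).max())         # nonzero: same "prompt", different noise, different result
```

The leftover differences between a and b are exactly the “which parts of the noise got emphasized” effect: the target pulls both runs in the same direction, but the residue of the original noise never fully disappears.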