- cross-posted to:
- technology@lemmy.ml
Here’s my Costco receipt for pens…
I’ve got a pen just like that, inherited from my mother - Parker 51
I mean, the best proof is just to write well. AI can’t do that.
I’m actually more confident that bad writers don’t use AI.
That’s because LLMs cannot be interesting; even something poorly written can be interesting if it was written from a human perspective.
There was a bit in an hbomberguy video where he went on a tangent about The Room and speculated that it might be semi-autobiographical. That Tommy Wiseau must have experienced a breakup that, at least on his end, felt like this. Lisa’s actions in the film are so illogical that the only way they make sense is if you assume that the director had bad experiences with women and made a movie instead of just going to therapy or something.
Plagiarism machines have no life experiences, so they can’t make something like The Room.
Hah, that’s a great example. The Room is nothing if not interesting!
Yep, or heck, write really badly! It can’t do that either!
I do my writing on an old laptop that doesn’t know the touch of the internet. When I’m done writing, it goes on a flash drive that goes to an internet-connected computer to send it out. I don’t know how to prove shit about fuck in my text itself. But I have the original device I used and all that, so.
On one hand I understand the need to feel seen as authentic, but on the other, I think there is a danger in giving in to the hysteria surrounding slop by twisting one’s work into knots. If it’s good, it’s good. As Martin Scorsese said: “The most personal is the most creative.”
I know that I’ve personally never seen any LLM-generated content that was interesting beyond its novelty. Adam Savage also said (paraphrasing bc I can’t find it): “Sharing your point of view is what makes something interesting and I have yet to see AI-generated content that has a point of view.”
> If it’s good, it’s good

> “Sharing your point of view is what makes something interesting and I have yet to see AI-generated content that has a point of view.”
The problem is that readers and writers are necessarily playing an adversarial game. If you’re reading, you want to determine whether what you are reading is a waste of your time as quickly as possible. If you’re writing, you want to signal to the reader that it is worth it to keep reading. Even if what you have to say is really deep and genuine and true, and someone reading the whole thing would be able to tell, a reader could see an em dash and bail in the first two seconds. That doesn’t mean they are shallow or not worth writing for; it’s just basic self-defense.
Like it or not, if you are writing something you want people to read, considering how to get past their filters is not optional; they aren’t going to give you a second look if you fail one. IMO that doesn’t have to be a bad thing. Honestly, a lot of formal writing style was already just signaling of a different type: that you are educated and likely know what you are talking about. But a lot of that was always bullshit, unrelated to whether you have something worthwhile to say. At least now that AI has dragged old styles of writing through the mud, there is less incentive to waste time prettying up your words to match a standard.
> If you’re reading, you want to determine whether what you are reading is a waste of your time as quickly as possible.
I have honestly not heard of this behavior, and I myself certainly don’t do this. I wouldn’t determine “what’s worth reading” in the middle of reading, but well before I start. For example, if a piece is published somewhere I trust, or a friend recommended it, or, say, it was posted in a Lemmy community known to have good moderation.
Like I said, I understand why an artist would have a desire to present as authentic, but that is an unwinnable game because:
- The AI is also “trying to present as authentic” (any automated filtering system is beatable)
- Adjusting one’s behavior so as to appear a particular way is definitionally inauthentic.
If you are determining it before you start reading, you are doing so on the basis of reading that others have done, and your trust in their judgment. Going by the source is not infallible; take the Ars Technica scandal as an example, where an AI-hallucinated quote was falsely attributed to someone. As for Lemmy, there are many articles from lesser-known sources that get positive attention here that seem to use AI but do not get removed, I think mostly because they are delivering a political message that is well regarded. To identify them and have them removed without cutting out all blog content (which, given the above example, clearly is not even enough) requires someone to read them, evaluate them, and make the case that they should be removed. That case has to be strong enough to overcome pushback from people biased to believe the articles are authentic, or maybe even suspicious that the real reason for removal is political.
It is a nontrivial task, but it’s one that anyone could contribute to by developing their own sense of what is and isn’t AI output, and reading far enough to make their own judgment. It’s important to be able to do that without a full read, because the main threat of AI content is unlimited scaling.
> any automated filtering system is beatable
What I’m calling for is not an automated system, but for people to develop skill at manually and dynamically identifying and signalling humanity in a way that resists automated systems. Attempting to do this is not inauthentic, just like trying to write poems in defined styles is not inauthentic.
> As for Lemmy, there are many articles from lesser-known sources that get positive attention here that seem to use AI but do not get removed
Well, Lemmy is not one thing; your instance, for example, is explicitly in favor of boosting AI-generated content. So that behavior is what I would expect if I had an account there. I personally wouldn’t go there expecting to see links to human-made content.
I don’t believe it’s possible for human writers to write both authentically and in a way that is coded to verify they are human (as the article discusses) that an LLM couldn’t eventually come to replicate. I also don’t believe it’s possible for an LLM to write from a unique perspective of its own. Therefore, I believe the strongest method for verifying one’s own human-ness is to write from one’s own unique perspective.
> signalling humanity in a way that resists automated systems
I think I would understand your perspective better if you gave an example or two of what signals could be used?
What I’m talking about is posted across all popular instances and is not specific to db0, and imo there is a very big difference between content that is explicitly AI and AI blog posts that portray themselves as being human-written. I support the existence of a space for the former while opposing the latter.
> Therefore, I believe the strongest method for verifying one’s own human-ness is to write from one’s own unique perspective.
I agree, but it is possible to adjust your personal filter to let your unique signature be expressed in different ways, and it’s possible to write with your audience in mind without being inauthentic. Throwing up your hands and giving up is not the right approach, even though it’s a hard problem that by its nature resists specific actionable answers. The article gives an example of a contrived way AI can attempt to falsify such a signal:
> “You’ll be reading someone’s Substack or blog post, and all of a sudden in the middle of a perfect paragraph, there’ll be a mistake sitting out there like a sore thumb,” said O’Bryan, 62. “It’s like, try harder.”
There are lots more, such as reducing the probability of the top-weighted words the LLM chooses from in the last stage of its process. But this level of extra attention to automated signaling isn’t always applied, and I believe it can be defeated by developed intuition, if people will bother to develop it. From the writing side, the approach should be to put more of yourself into more parts of what you write, to try to match the intuitions of readers, and to reduce efforts to converge on concepts of correct writing that could be in conflict with this.
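To make that concrete, here’s a minimal sketch of what such a last-stage tweak could look like. Everything in it is an illustrative assumption, not something from the article or any real sampler: the function name and the `top_n` and `penalty` knobs are made up. The idea is just to subtract a fixed penalty from the highest-weighted logits before sampling, so the model’s most statistically “expected” words come out less often.

```python
import numpy as np

def sample_with_top_penalty(logits, top_n=3, penalty=1.5, rng=None):
    """Sample a token id after down-weighting the model's top choices.

    Illustrative sketch only: top_n and penalty are hypothetical knobs.
    Subtracting a constant from the top_n highest logits makes the
    statistically most "expected" words less likely to be chosen.
    """
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=float).copy()
    top = np.argsort(logits)[-top_n:]      # indices of the top-weighted tokens
    logits[top] -= penalty                 # reduce their probability
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

Fed a toy logit vector, this picks runner-up words noticeably more often than plain softmax sampling would, which is exactly the kind of statistical fingerprint-scrubbing being described.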
> reducing the probability of the top-weighted words the LLM chooses from
My feeling is that a writer who adjusts their word choice to present a particular way is definitionally behaving inauthentically. I would characterize such writing as “slop” even if it’s human-made, because it was still heavily influenced by how LLMs “write”.
Put another way: I don’t believe that “not worrying about appearing as an LLM” is “giving up”; I think it’s a recognition that an LLM is not capable of fighting you in the first place. If you, a creative soul, allow fear of “coming off a certain way” (ANY way) to determine how you write, you have already lost.
To clarify, that quote was not what I am suggesting, rather it’s part of the bar to be overcome.