• hperrin@lemmy.ca
    link
    fedilink
    English
    arrow-up
    31
    arrow-down
    3
    ·
    2 days ago

    I mean, the best proof is just to write well. AI can’t do that.

      • James R Kirk@startrek.website
        link
        fedilink
        English
        arrow-up
        11
        arrow-down
        1
        ·
        1 day ago

        That’s because LLMs cannot be interesting; even something poorly written can be interesting if it was written from a human perspective.

        • Jomega@lemmy.world
          link
          fedilink
          English
          arrow-up
          11
          arrow-down
          1
          ·
          1 day ago

          There was a bit in a hbomberguy video where he went on a tangent about The Room and speculated that it might be semi-autobiographical: that Tommy Wiseau must have experienced a breakup that, at least on his end, felt like this. Lisa’s actions in the film are so illogical that the only way they make sense is if you assume the director had bad experiences with women and made a movie instead of just going to therapy or something.

          Plagiarism machines have no life experiences, so they can’t make something like The Room.

  • Dippy@beehaw.org
    link
    fedilink
    English
    arrow-up
    2
    ·
    1 day ago

    I do my writing on an old laptop that doesn’t know the touch of the internet. When I’m done writing, it goes on a flash drive that goes to an internet-connected computer to send it out. I don’t know how to prove shit about fuck in my text itself. But I have the original device I used and all that, so there’s that.

  • James R Kirk@startrek.website
    link
    fedilink
    English
    arrow-up
    21
    arrow-down
    1
    ·
    2 days ago

    On one hand I understand the need to feel seen as authentic, but on the other, I think there is a danger in giving into the hysteria surrounding slop by twisting one’s work into knots. If it’s good, it’s good. As Martin Scorsese said: “The most personal is the most creative.”

    I know that I’ve personally never seen any LLM-generated content that was interesting beyond its novelty. Adam Savage also said (paraphrasing because I can’t find it): “Sharing your point of view is what makes something interesting, and I have yet to see AI-generated content that has a point of view.”

    • chicken@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      1
      ·
      2 days ago

      If it’s good, it’s good

      “Sharing your point of view is what makes something interesting and I have yet to see AI-generated content that has a point of view”.

      The problem is that readers and writers are necessarily playing an adversarial game. If you’re reading, you want to determine as quickly as possible whether what you’re reading is a waste of your time. If you’re writing, you want to signal to the reader that it’s worth it to keep reading. Even if what you have to say is really deep and genuine and true, and someone reading the whole thing would be able to tell, a reader could see an em dash and bail in the first two seconds. That doesn’t mean they are shallow or not worth writing for; it’s just basic self-defense.

      Like it or not, if you are writing something you want people to read, it’s not optional to consider how to get past their filters; they aren’t going to give you a second look if you fail one. IMO that doesn’t have to be a bad thing. Honestly, a lot of formal writing style was already just signaling of a different type: that you are educated and likely know what you are talking about. A lot of that was always bullshit and unrelated to whether you have something worthwhile to say. At least now that AI has dragged old styles of writing through the mud, there is less incentive to waste time prettying up your words to match a standard.

      • James R Kirk@startrek.website
        link
        fedilink
        English
        arrow-up
        5
        ·
        1 day ago

        If you’re reading, you want to determine whether what you are reading is a waste of your time as quickly as possible.

        I have honestly not heard of this behavior, and I myself certainly don’t do this. I wouldn’t determine “what’s worth reading” in the middle of reading, but well before I start. For example, if a piece is published somewhere I trust, or a friend recommended it, or say, it was posted in a Lemmy community known to have good moderation.

        Like I said, I understand why an artist would have a desire to present as authentic, but that is an unwinnable game because:

        1. The AI is also “trying to present as authentic” (any automated filtering system is beatable)
        2. Adjusting one’s behavior so as to appear a particular way is definitionally inauthentic.
        • chicken@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          3
          ·
          1 day ago

          If you are determining it before you start reading, you are doing so on the basis of reading that others have done, and your trust in their judgment. Going by the source is not infallible; take the Ars Technica scandal as an example, where an AI-hallucinated quote was falsely attributed to someone. As for Lemmy, there are many articles from lesser-known sources that get positive attention here and seem to use AI but do not get removed, I think mostly because they are delivering a political message that is well regarded. Identifying them and having them removed without cutting out all blog content (which, given the above example, clearly is not even enough) requires someone to read them, evaluate them, and make the case that they should be removed, and that case has to be strong enough to overcome pushback from people biased to believe they are authentic, or maybe even suspicious that the real reason for removal is political.

          It is a nontrivial task, but it’s one that anyone could contribute to by developing their own sense of what is and isn’t AI output, and reading far enough to make their own judgment. It’s important to be able to do that without a full read, because the main threat of AI content is unlimited scaling.

          any automated filtering system is beatable

          What I’m calling for is not an automated system, but for people to develop skill at manually and dynamically identifying and signalling humanity in a way that resists automated systems. Attempting to do this is not inauthentic, just like trying to write poems in defined styles is not inauthentic.

          • James R Kirk@startrek.website
            link
            fedilink
            English
            arrow-up
            2
            ·
            1 day ago

            As for Lemmy, there are many articles from lesser known sources that get positive attention here that seem to use AI but do not get removed

            Well, Lemmy is not one thing, your instance for example is explicitly in favor of boosting AI-generated content. So that behavior is what I would expect if I had an account there. I personally wouldn’t go there expecting to see links to human-made content.

            I don’t believe it’s possible for human writers to write both authentically and in a way that is coded to verify they are human (as the article discusses) without an LLM eventually coming to replicate it. I also don’t believe it’s possible for an LLM to write from a writer’s unique perspective. Therefore, I believe the strongest method for verifying one’s own human-ness is to write from one’s own unique perspective.

            signalling humanity in a way that resists automated systems

            I think I would understand your perspective better if you gave an example or two of what signals could be used?

            • chicken@lemmy.dbzer0.com
              link
              fedilink
              English
              arrow-up
              1
              ·
              23 hours ago

              What I’m talking about is posted across all popular instances and is not specific to db0, and imo there is a very big difference between content that is explicitly AI and AI blog posts that portray themselves as being human written. I support the existence of a space for the former while opposing the latter.

              Therefore, I believe the strongest method for verifying one’s own human-ness is to write from one’s own unique perspective.

              I agree, but it is possible to adjust your personal filter to let your unique signature be expressed in different ways, and it’s possible to write with your audience in mind without being inauthentic. Throwing up your hands and giving up is not the right approach, even though it’s a hard problem that by its nature resists specific actionable answers. The article gives an example of a contrived way AI can attempt to falsify such a signal:

              “You’ll be reading someone’s Substack or blog post, and all of a sudden in the middle of a perfect paragraph, there’ll be a mistake sitting out there like a sore thumb,” said O’Bryan, 62. “It’s like, try harder.”

              There are lots more, such as reducing the probability of the top weighted words the LLM chooses from in the last stage of its process. But this level of extra attention to automated signaling isn’t always applied, and I believe it can be defeated by developed intuition if people will bother to try to develop it. From the writing side, the approach should be to put more of yourself into more parts of what you write, to try to match the intuitions of readers, and to reduce efforts to converge on concepts of correct writing that could be in conflict with this.
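
A minimal sketch of that last-stage tweak, purely for illustration: the function name, logit values, and penalty scheme below are all made up, not any real model’s API. The idea is just that subtracting a penalty from the highest-scoring tokens before the softmax makes the model’s most “obvious” word choices less likely.

```python
import numpy as np

def sample_with_top_token_penalty(logits, k=3, penalty=2.0, rng=None):
    """Down-weight the k highest-scoring tokens before sampling.

    Toy sketch: an LLM's final step turns logits into probabilities
    and samples a token; penalizing the top-k logits shifts probability
    mass away from the model's most predictable choices.
    """
    logits = np.asarray(logits, dtype=float).copy()
    top = np.argsort(logits)[-k:]          # indices of the k largest logits
    logits[top] -= penalty                 # reduce their scores
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    rng = rng or np.random.default_rng(0)
    return rng.choice(len(probs), p=probs), probs

# With k=1, the previously dominant first token loses most of its
# probability mass to the alternatives:
idx, probs = sample_with_top_token_penalty([4.0, 1.0, 0.5, 0.2], k=1)
```

Whether doing this actually fools a reader’s intuition is a separate question, of course.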

              • James R Kirk@startrek.website
                link
                fedilink
                English
                arrow-up
                1
                ·
                7 hours ago

                reducing the probability of the top weighted words the LLM chooses from

                My feeling is that a writer who adjusts their word choice to present a particular way is definitionally behaving inauthentically. I would characterize such writing as “slop” even if it’s human-made, because it was still heavily influenced by how LLMs “write”.

                Put another way: I don’t believe that “not worrying about appearing as an LLM” is “giving up”. I think it’s a recognition that an LLM is not capable of fighting you in the first place. If you, a creative soul, allow fear of “coming off a certain way” (ANY way) to determine how you write, you have already lost.