Am I missing something? The article seems to suggest it works via hidden text characters. Has OpenAI never heard of pasting text into a UTF-8 notepad before?

    • deadcade@lemmy.deadca.de · 4 months ago

      Research on this topic exists, and it is possible to alter the output of an LLM in minor ways that statistically “watermark” the results without drastically changing the quality of the output. OpenAI has probably implemented this in ChatGPT.

      https://www.youtube.com/watch?v=2Kx9jbSMZqA

      I think the tool exists, and is (at least close to) as good as they claim it is. They can’t release it, because once the public can tell with high accuracy whether ChatGPT wrote some text, another AI can be developed to circumvent detection from this method, making the tool useless.

      • CameronDev@programming.dev · 4 months ago

        That is a long video, is the paper published somewhere?

        I’m willing to accept that you can statistically “watermark” the text, but I’m not convinced that it would be tamper-resistant, which is a large part of what makes a watermark useful. If it can’t survive an idiot with a thesaurus, it’s probably not gonna be terribly useful.

        • Womble@lemmy.world · 3 months ago

          It can likely also be defeated by adding “In the style of X” to a prompt, changing the distribution and pattern of the responses.

    • The Hobbyist@lemmy.zip · 4 months ago

      I think it exists and works, but that it’s simply not in OpenAI’s business/profit interest for people who use ChatGPT to be found out. I have nothing to back that up; I’ve just lost all faith in OpenAI.

        • pup_atlas@pawb.social · 4 months ago

          I can totally believe that it detects AI-generated content 99% of the time; that’s trivial. What I really wanna know is the false positive rate. If I write a program that flags everything, it’d have a 100% hit rate. It’d also, however, have a crazy high false positive rate.
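
          To put toy numbers on that (invented rates, just to illustrate the base-rate problem), here’s a quick sketch of what the precision of a flag looks like under Bayes’ rule:

          ```python
          # Toy illustration (made-up rates, not OpenAI's published figures):
          # precision = P(actually AI | flagged), by Bayes' rule.
          def precision(tpr: float, fpr: float, base_rate: float) -> float:
              true_pos = tpr * base_rate          # AI text correctly flagged
              false_pos = fpr * (1 - base_rate)   # human text wrongly flagged
              return true_pos / (true_pos + false_pos)

          # Assume 10% of submitted text is AI-written:
          print(precision(tpr=0.99, fpr=0.01, base_rate=0.10))  # ~0.92
          print(precision(tpr=0.99, fpr=0.10, base_rate=0.10))  # ~0.52: a coin flip
          print(precision(tpr=1.00, fpr=1.00, base_rate=0.10))  # 0.10: "flag everything"
          ```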

          • Womble@lemmy.world · 3 months ago

            Yup, noticeable that they use the phrase “99.9% effective”. “Effective” doesn’t have a defined meaning in this context, unlike accuracy, sensitivity or specificity, so that smells of misleading PR speak to me.

    • archomrade [he/him]@midwest.social · 3 months ago

      Not to mention that it would be extremely difficult to implement an effective watermark on text below a certain size.

      There are hundreds of thousands of pixels in an image where you can hide a watermark, but in a text output of a paragraph or less there are only a couple hundred characters.

      How precise is the watermark? Is it a specific sequence of characters? Is it a sequence of words? A number of characters in a row? Non-printing characters?

      How precise the watermark is will determine how easy it is to get around. I imagine some of the most important outputs to detect would be Twitter/social media influence bots, where the output length is 140 characters or less. I find it hard to imagine a watermark on output of that size being effective or reliable.
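
      To put rough numbers on that (a toy model, assuming a token-bias watermark in the style of published research, not whatever OpenAI actually does): the detection signal only grows with the square root of the text length, so short outputs give weak evidence.

      ```python
      # Back-of-the-envelope: suppose a watermark steers 70% of tokens into a
      # keyed "green" half of the vocabulary that unwatermarked text would hit
      # only 50% of the time. The z-score of the green count grows like
      # sqrt(n), so a tweet-sized sample is borderline. (Toy numbers.)
      import math

      def watermark_z(n_tokens: int, green_rate: float = 0.7) -> float:
          expected = 0.5 * n_tokens              # null hypothesis: no watermark
          std = math.sqrt(n_tokens * 0.5 * 0.5)  # binomial noise
          return (green_rate * n_tokens - expected) / std

      for n in (30, 100, 500):  # roughly: a tweet, a paragraph, an essay
          print(f"{n:4d} tokens -> z = {watermark_z(n):.1f}")
      # 30 tokens -> z = 2.2 (borderline); 500 tokens -> z = 8.9 (unambiguous)
      ```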

  • MagicShel@programming.dev · 4 months ago

    Am I the only one who rewrites most of ChatGPT’s output into my own words because its “voice” is garbage anyway? I ask it to write me a cover letter and that gives me a rough outline and some points to make, but I have to do massive editing to avoid redundancy, awkward phrasing, outright lies, etc.

    I can’t imagine turning in raw ChatGPT output. I had one of my developers use Bing AI to write code, and he submitted that shit raw; it was immediately obvious, because some relatively simple code had really weird artifacts, like overwriting a value that had no reason to even be touched.

    • yeehaw@lemmy.ca · 3 months ago

      For me, it sounds too much like a marketing person or something I’d see in an ad or on a website, so I “dumb it down” a bit to make it not sound too corporate. Sometimes telling ChatGPT to do so fixes this, though.

    • JeeBaiChow@lemmy.world · 3 months ago

      Lol. AI gonna take over the developers’ jobs. Like that’s even close to happening.

      • Thorny_Insight@lemm.ee · 3 months ago

        A few years ago the output of GPT was complete gibberish, and a few years before that, even producing such gibberish would’ve been impressive.

        It doesn’t take anyone’s job until it does.

        • bionicjoey@lemmy.ca · 3 months ago

          A few years ago the output of GPT was complete gibberish

          That’s not really true. Older GPTs were already really good. Did you ever see SubredditSimulator? I’m pretty sure that first came around like 10 years ago.

          • Thorny_Insight@lemm.ee · 3 months ago

            The first time I saw text written by GPT, it all seemed alright at first glance, but once you started to actually read it, it was immediately obvious it had no idea what it was talking about. It was grammatically correct nonsense.

      • Angry_Autist (he/him)@lemmy.world · 3 months ago

        LLMs aren’t going to take coding jobs; there are special-case AIs being trained for that. They write code that works but does not make sense to human eyes. It’s fucking terrifying, but EVERYONE just keeps focusing on the LLMs.

        There are at least 2 more dangerous model types being used right now to influence elections and manipulate online spaces and ALL everyone cares about is their fucking parrot bots…

  • count_dongulus@lemmy.world · 3 months ago

    They could inject random zero-width non-joiners to help detection too. Easy to defeat, but something a layperson would have to go through extra effort to filter out. Kinda like how some plagiarism cases have been won by pointing out identical misspelled words.
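
    For what it’s worth, stripping them really is trivial; a sketch (hypothetical marks, nothing OpenAI has confirmed):

    ```python
    # Why zero-width watermarks are fragile: removing them is a one-liner.
    ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}  # ZWSP, ZWNJ, ZWJ, BOM

    def strip_zero_width(text: str) -> str:
        return "".join(ch for ch in text if ch not in ZERO_WIDTH)

    marked = "some\u200cwatermarked\u200ctext"
    print(len(marked), "->", len(strip_zero_width(marked)))  # 21 -> 19
    ```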

    • just another dev@lemmy.my-box.dev · 3 months ago

      Yeah, no chance they’d rely on something that would be so easy to defeat. Watermarking by using word patterns is far more likely.

      Still easy to defeat by just using another LLM to rephrase it though.

        • just another dev@lemmy.my-box.dev · 3 months ago

          They could, but adding random zero-width characters into words would also break every spell checker, giving it away immediately and making sure that even unaware people would filter it out. Doing it outside the words would leave them with too few spots for a proper watermark.

          I think it’s far more likely they’ll use some kind of pattern in the tokens - that way the watermark will survive even when you don’t copy-paste it.

          But yeah, as said, they will never tell how it’s implemented, but it can still be simply subverted.

  • originalucifer@moist.catsweat.com · 4 months ago

    no. i bet it uses an algorithm setting optional words to specific variants over a given set of text.

    but it sounds to me like they are figuring out how to monetize the cure for their disease

    • SzethFriendOfNimi@lemmy.world · 4 months ago

      Which, if ChatGPT has (or is getting) parity with human writers, means that, by definition, it’s going to be impossible to tell the difference.

      And if it can tell the difference, it’s either going to prove their product is substandard, show that they can identify snippets of copyrighted material in their training set, or falsely identify people whose content and styles they’re training on.

      I’m not sure what the angle here is for OpenAI, but it’s problematic for their brand and, potentially, legally, no matter how they go about it.

      • Womble@lemmy.world · 3 months ago

        No, that isn’t the case. You could train up a machine to detect if a passage of text was written by Hemingway (for example) and likely get decent accuracy; that doesn’t mean Hemingway is substandard, just that he has a noticeable style (like GPT’s default style). It’s likely bullshit for other reasons, but it doesn’t imply what you say it does.

  • JackbyDev@programming.dev · 3 months ago

    As someone who fiddled with Stable Diffusion, which also has an optional invisible watermark, this is a good feature. It exists so that AI training will avoid content marking itself as AI-generated. If people want to hide that their content is AI-generated then, sadly, it’s harder to detect.

  • brucethemoose@lemmy.world · 4 months ago

    This has been known in the ML space forever. LLMs don’t actually output words/tokens, but probabilities for a long list of tokens, and the sampler picks one (usually the most likely token). And if you arbitrarily weight these probabilities (e.g. making 50% of possible token outputs more likely than the other 50%, as a random example), it creates a “signature” in any text that’s easy to measure. The sampler randomizes it a tiny bit, but that averages out in long texts.
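
    A minimal sketch of that idea, in the spirit of published watermarking schemes (e.g. the Kirchenbauer et al. “green list” approach), not OpenAI’s actual implementation:

    ```python
    # Sketch of logit-bias watermarking: a keyed pseudorandom half of the
    # vocab gets a logit bump before sampling, so generated text
    # over-represents "green" tokens, and the key holder can count them
    # later to test for the mark. (Assumed scheme, for illustration only.)
    import hashlib
    import math
    import random

    SECRET_KEY = b"demo-key"  # hypothetical watermark key

    def is_green(token_id: int, prev_id: int) -> bool:
        """Keyed pseudorandom vocab split, reseeded by the previous token."""
        h = hashlib.sha256(SECRET_KEY + prev_id.to_bytes(4, "big")
                           + token_id.to_bytes(4, "big")).digest()
        return h[0] % 2 == 0  # ~half the vocab is "green" at each step

    def sample_with_watermark(logits: list[float], prev_id: int,
                              delta: float = 2.0) -> int:
        """Add `delta` to every green token's logit, then sample the softmax."""
        adjusted = [l + (delta if is_green(i, prev_id) else 0.0)
                    for i, l in enumerate(logits)]
        top = max(adjusted)
        weights = [math.exp(l - top) for l in adjusted]
        return random.choices(range(len(logits)), weights=weights, k=1)[0]
    ```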

    It’s defeatable. I’m sure if you make enough OpenAI queries, you can find the bias; I think a paper already tackled this. But this will likely stop the lazy abusers, aka 99% of abusers, who should just use some other LLM if they really care.

    Another open secret in LLM land is that OpenAI is actually falling behind open research efforts, hence it’s hilarious it took them this long to implement something so simple.

      • brucethemoose@lemmy.world · 4 months ago

        It’s not so trivial if OpenAI cycles the logit bias or makes it really convoluted.

        And it’s not like certain “words” or language patterns are more probable with this method; it’s different from what any kind of human- or word-based algorithm would detect, which is what I suspect most “anti AI detection” software does.

        It’s doable… but seems inconvenient for a small business to keep up with. Maybe.

        • Kraiden@kbin.run · 4 months ago

          Remember they’re doing this so that they can detect it themselves. I’m far from an expert, so maybe I’m misunderstanding something but the way I understand it, they’d be defeating their own tool if they go down this route. If they cycle the logit biases, how can they themselves detect if a random piece of text is generated? Which set of biases do they test?

          At the end of the day, you’re talking about raw text. There’s no option to sign it, or embed metadata or anything like that. You can’t even guarantee that you’re seeing the complete sample, or even a single sample! If there is a fingerprint, it’ll be detectable to anyone, and it’ll be easily removed.

          • brucethemoose@lemmy.world · 4 months ago

            They can cycle some biases (dozens?) and test them all. Detokenization is super cheap to run; it’s not AI or anything.

            I’m trying to think of a good analogy for how this would work, and I kinda came up with one. This would be kinda like an image encoder that biases itself towards coding RGB values (0-255) as even numbers. Subtly, say 30% odd, 70% even.

            That’s totally imperceptible to humans. And even a “small” sample of the image would carry this bias if pasted into a larger image verbatim, since the sample size is so large (just as the sample size for a bunch of tokens in text is pretty big).
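
            Something like this sketch of the analogy (with the same made-up 70/30 numbers):

            ```python
            # If an encoder nudged pixel values so ~70% are even (vs ~50%
            # naturally), a bare count exposes the bias, even in a cropped
            # sample, because the number of values is huge.
            import random

            def evenness(values):
                return sum(v % 2 == 0 for v in values) / len(values)

            plain = [random.randrange(256) for _ in range(100_000)]
            # Force a value even 40% of the time -> P(even) = 0.4 + 0.6*0.5 = 0.7
            marked = [v if random.random() < 0.6 else v & ~1 for v in plain]

            print(f"plain:  {evenness(plain):.3f}")   # ~0.50
            print(f"marked: {evenness(marked):.3f}")  # ~0.70
            ```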

            And I’m not saying it’s foolproof… but if that’s indeed what they’re doing, I think it’s a decent way to detect “lazy” OpenAI abusers who aren’t working so hard to scramble and defeat it.

    • PenisDuckCuck9001@lemmynsfw.com (OP) · 4 months ago

      So if cheating on homework, use self-hosted only then. Cool. I mean, they can’t possibly use that algorithm for every model on Hugging Face, especially if I don’t tell anyone which one I use. I’m done with school after this semester anyway; I feel sorry for everyone in the future who has to complete assignments in the age of AI warfare.

      • brucethemoose@lemmy.world · 4 months ago

        You have full control of your logit outputs with local LLMs, so theoretically you could “unscramble” them. And any finetuning would just blow that bias away anyway.

        OpenAI (IIRC) very notably stopped giving the logprobs of their models. They did this for many reasons, and most of them boil down to “profits” and “they are anticompetitive jerks,” but another reason is to enable watermark methods just like this.

        Also, the thing about this is that basically no one uses self-hosted LLMs compared to OpenAI (or really any API) LLMs.

    • archomrade [he/him]@midwest.social · 3 months ago

      It wouldn’t be surprising to me if they’ve had this implemented for a while.

      There’s still some question about why their 3.5 model had an apparent sudden drop-off in quality about a year ago, and among the plausible explanations is that they were fucking with their weights in order to watermark the outputs in exactly the way you’re mentioning. They were also fighting against prompt-injection methods and censoring disapproved uses at the time, so who the fuck knows.

      • brucethemoose@lemmy.world · 3 months ago

        This doesn’t touch the weights at all, it’s just a change to the sampler.

        What lobotomizes their models is cost cutting and trying to make them “safe,” or at least that’s what I suspect.

  • bdonvr · 3 months ago

    What’s the false positive rate tho

  • RecallMadness@lemmy.nz · 4 months ago

    Is “The Algorithm” just “we stuffed all our GPT responses into a Lucene index and look for 80% matches”?

    Because that’s what I’d do.

  • qx128@lemmy.world · 4 months ago

    In other news, mathematicians have been working hard on calculator detector software. Upon request for comment, leading mathematicians suggested a variety of ideas, such as secretly embedding a watermark “58008” (BOOBS) into the decimal parts of pi and e to more easily identify derived calculations. There was consistent sentiment among leading minds that “back in my day we had to work hard to do math, and walk uphill both ways in the snow to school”… and that “there’s nothing wrong with a good ol’ fashioned abacus, dag nabbit!”

    • atrielienz@lemmy.world · 3 months ago

      Just because you can plug information into a calculator (or an LLM) doesn’t mean you understand the math that comes out of it (or the data). Which I think is rather the point of academia not wanting people to use ChatGPT-generated content.

      On the other hand, this is to prevent LLMs training on data generated by other LLMs, which is important because that’s how they degrade in quality.

  • catloaf@lemm.ee · 4 months ago

    That’s cool, but literally any other implementation won’t have that, or will have an incompatible watermark.

  • AbouBenAdhem@lemmy.world · 4 months ago

    Humans instinctively do something analogous with natural language, using poetic forms like rhyme, meter, and alliteration. (For example, the speeches from Shakespeare’s plays are immediately detectable because they’re in iambic pentameter.)

    Imagine you lacked the natural human ability to detect verse, making poetry indistinguishable from prose. As far as you could tell, it would be like an invisible watermark that only specialists could detect. LLMs can use a similar approach, making up their own patterns that are opaque to humans but detectable to themselves.