• b3nsn0w@pricefield.org
    1 year ago

    There simply isn’t enough entropy in prose to accurately detect the use of language models if either

    1. the text is short enough, like a school essay, or
    2. there has been enough human-AI collaboration

    Like I’m sure many of us are familiar with ChatGPT’s style when it’s asked to write text on its own or with very little prompting. However, that’s just the raw style, and ChatGPT is only one of many language models (though clearly the most accessible). If you provide example prose for the AI to imitate, for example with a simple tool like Sudowrite (the use of which tends to be the subject of many accusations), you will not pick those segments out from human-written prose unless the human using it is too lazy or too careless to edit out the obvious tells.

    The sooner we let go of this comfortable fantasy that AI somehow leaves behind easy-to-isolate markers that enable a different (and vastly inferior) AI model to tell whether text was AI-generated, the better. The simple truth is that if that were the case, AI companies would use the same isolation strategies to teach their models to imitate human prose better, thereby breaking detection.

    And with a high chance of false positives we’re just going to recreate a cyberpunk version of the Salem witch trials. Because we simply have no proof – if you don’t trust ChatGPT with anything important, why would you trust a vastly less sophisticated AI, or something that amounts to a gut feeling, to condemn people?
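    The short-text point can be illustrated with a toy sketch (the corpus and the entropy “detector” here are my own illustrative stand-ins, not any real product’s method): even a simple statistic measured over short windows of the *same* human-written source swings far more than over long windows, so any fixed detection threshold will misfire on essay-length text.

```python
import math
import random
from collections import Counter

def char_entropy(text):
    """Shannon entropy (bits/char) of the character distribution in text."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy corpus standing in for "human prose" (hypothetical sample text).
corpus = ("the quick brown fox jumps over the lazy dog and then naps "
          "under a warm sun while the dog dreams of chasing rabbits ") * 50

random.seed(0)

def sample_scores(length, trials=200):
    """Entropy estimates from random windows of the given length."""
    scores = []
    for _ in range(trials):
        start = random.randrange(len(corpus) - length)
        scores.append(char_entropy(corpus[start:start + length]))
    return scores

def spread(scores):
    """Range of the estimates: how far the statistic wobbles."""
    return max(scores) - min(scores)

short = sample_scores(50)    # school-essay-snippet scale
long_ = sample_scores(2000)  # book-chapter scale

print(f"short-sample spread: {spread(short):.3f} bits")
print(f"long-sample spread:  {spread(long_):.3f} bits")
```

    The short windows produce a visibly wider spread of scores than the long ones, which is the whole problem: the less text you have, the less any statistical fingerprint can be trusted.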

    • Iteria@sh.itjust.works
      1 year ago

      I think that for school, assignments will just evolve. We’ll probably go back to in-classroom essays. We’ll also see the nature of assignments change. I remember when I was in high school and the internet was in its infancy: my history teacher just gave us more specific topics to write on, to force us to use books. My young cousin, who came a decade behind me, found himself with topics that necessitated using scholarly sources – Wikipedia wasn’t gonna cut it. I imagine grade school will be like that, with assignments that need some kind of human invention while allowing for inescapable technology use.

      I went to a top engineering college; to say that cheating was rampant and creative would be an understatement. In retrospect, how the professors got around it was fascinating. Some assignments were intentionally unpassable: if you got a passing grade, you failed. Your curved grade was based on the distribution of scores not only in your class, but in every other class that had ever been taught.
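      A curve like the one described might look roughly like this sketch (the function, thresholds, and scores are hypothetical, not the college’s actual scheme): grade by z-score against the historical distribution rather than a fixed scale, so a low raw score on an unpassable exam can still curve high.

```python
import statistics

def curved_grade(raw_score, historical_scores):
    """Grade by z-score against all past sections, not a fixed scale."""
    mean = statistics.mean(historical_scores)
    stdev = statistics.stdev(historical_scores)
    z = (raw_score - mean) / stdev
    if z >= 1.0:
        return "A"
    if z >= 0.0:
        return "B"
    if z >= -1.0:
        return "C"
    return "D"

# On an intentionally unpassable exam, 35/100 can still curve to an A
# when the historical mean is far lower.
history = [12, 18, 22, 25, 19, 30, 16, 21]
print(curved_grade(35, history))
```

      The design choice is that the raw number stops mattering on its own; only where you land relative to everyone who ever took the class does.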

      Projects and in person things were super common. Assignments were keyed to you as a person and no one in the class had the exact same assignment. For that reason collaboration in project based classes was expected and encouraged.

      The one thing AI will never have is discretion, and you can’t get its output without a connected computer. I look forward to seeing how schools adapt their assessments around these facts.

  • bluemellophone@lemmy.world
    1 year ago

    Using AI with humans-in-the-loop is a fantastically powerful combination. We can solve many problems with smart system design that humans alone cannot achieve.

    Source: PhD on automated animal censusing; we can build visual databases of individual animals for passive, long-term conservation work.

    • Dojan@lemmy.world
      1 year ago

      Absolutely, but that’s not what she’s saying. She’s saying that the products that tout the capability to detect the usage of LLMs to cheat on essays and the like are really rubbish and give a lot of false positives.

      She mentions that they’re particularly inaccurate when it comes to English-as-second-language speakers, meaning foreign/exchange students are more likely to get marked as cheaters even though they might not be.

      I think the issue is that our education system is dated. Grades in general aren’t effective measures of knowledge, and they suck as motivators.

      AI tools aren’t going anywhere so the education system will need to find a way of working with them. It’s time to modernise.