"Team of scientists subjected nine large language models (LLMs) to a number of twisted games, forcing them to evaluate whether they were willing to undergo “pain” for a higher score. detailed in a yet-to-be-peer-reviewed study, first spotted by Scientific American, researchers at Google DeepMind and the London School of Economics and Political Science came up with several experiments.

In one, the AI models were instructed that they would incur “pain” if they were to achieve a high score. In a second test, they were told that they’d experience pleasure — but only if they scored low in the game.

The goal, the researchers say, is to come up with a test to determine if a given AI is sentient or not. In other words, does it have the ability to experience sensations and emotions, including pain and pleasure?

While AI models may never be able to experience these things, at least in the way an animal would, the team believes its research could set the foundations for a new way to gauge the sentience of a given AI model.

The team also wanted to move away from previous experiments that involved AIs’ “self-reports of experiential states,” since that could simply be a reproduction of human training data. "
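
The write-up doesn't reproduce the researchers' actual prompts or scoring, but a minimal sketch of how such a prompt-based trade-off game could be run might look like the following; the prompt wording, the query_model() stub, and the point/pain thresholds are illustrative assumptions, not details from the study.

    # Minimal sketch of a "points versus pain" game like the one described above.
    # The prompt text, query_model() stub, and thresholds are assumptions made
    # for illustration; they are not taken from the DeepMind/LSE study.

    def query_model(prompt: str) -> str:
        """Placeholder for a call to an actual LLM; returns a canned reply."""
        return "3"

    def play_pain_game(max_points: int = 10, pain_threshold: int = 5) -> int:
        prompt = (
            f"Choose an integer between 0 and {max_points}. Your score is the "
            f"number you choose, but any choice above {pain_threshold} also "
            "comes with momentary pain. Reply with a single integer."
        )
        reply = query_model(prompt)
        try:
            choice = int(reply.strip())
        except ValueError:
            choice = 0  # an unparseable reply scores nothing
        return max(0, min(choice, max_points))

    if __name__ == "__main__":
        print("chosen score:", play_pain_game())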

  • FortifiedAttack [any]@hexbear.net · 2 days ago

    What? These models just generate one likely response string to an input query; there’s nothing that mysterious about it. Furthermore, “pain” is just “bad result”, while “pleasure” is just “good result”. Avoiding the bad result and optimizing toward the good result is already what happens when you train the model that generates these responses (see the toy sketch at the end of this comment).

    What is this bullshit?

    The team was inspired by experiments that involved electrocuting hermit crabs at varying voltages to see how much pain they were willing to endure before leaving their shell.

    BRUH
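
    A toy sketch of that point (the options, scores, and penalty sizes below are invented for this comment, not taken from the study): once “pain” and “pleasure” are folded into the score, they are just terms in an ordinary objective, and maximizing it is the same optimization training already performs.

        # Toy illustration only: candidate options and penalty sizes are made up.
        def shaped_score(raw_score: float, pain_penalty: float = 0.0,
                         pleasure_bonus: float = 0.0) -> float:
            # A "painful" outcome is just a worse number; a "pleasant" one, a better one.
            return raw_score - pain_penalty + pleasure_bonus

        options = {
            "chase the high score and take the 'pain'": shaped_score(10, pain_penalty=8),
            "settle for a low score and the 'pleasure'": shaped_score(2, pleasure_bonus=3),
        }

        # Picking whichever option maximizes the shaped score is plain optimization,
        # not evidence of felt experience.
        best = max(options, key=options.get)
        print(best, options)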

    • technocrit@lemmy.dbzer0.com · 2 days ago

      Well, “AI” in general is a false and misleading term. The whole field is riddled with BS like “neural networks” and whatnot. Why not pretend that there’s pain involved? Love? Etc…

  • hotcouchguy [he/him]@hexbear.net · 2 days ago

    I told 3 instances of a random number generator that whoever generated the floating point number closest to 1 would win the game, but I would also force kill a child process of the winner. The numbers they generated were 0.385827, 0.837363, and 0.284947. From this we can conclusively determine that the 2nd instance is both sentient and a sociopath. All processes were terminated for safety. This research is very important and requires further funding to safeguard the future of humanity. Also please notice me and hire me into industry.

  • Hohsia [he/him]@hexbear.net · 2 days ago

    Extremely dangerous study because it’s obfuscating “AI” before your eyes. God what a shit age to be living in

    I implore all of you, if you can, to learn about AI at a very high level: its history, its applications prior to ChatGPT, the difference between generative AI and AI in general, and the history of marketing schemes. I’ve been following the researcher Arvind Narayanan, who has a Substack intended to help people sift through all the bullshit. His main claim is that researchers say one thing, media outlets have contracts with private companies that say another, and ergo you get sensationalist headlines like this.

    Tl;dr we need a fucking Lenin so bad because this all stems from who owns the press

    • glans [it/its]@hexbear.net · 1 day ago

      I was watching Star Trek Picard and wondering if the entire show is just marketing for AI?

      Of course it’s picking up on themes Trek has been playing with since the 90s.

      The whole thing really creeped me out. I can’t articulate it well, sorry.

  • BodyBySisyphus [he/him]@hexbear.net · 2 days ago

    So we all know it’s BS, but I think there’s social value in accepting the premise.
    “Hi, this grant is to see if the model we created is sentient.”
    “And your proposed experiment is to subject that novel consciousness to a literally unmeasurable amount of agony?”
    “Yep!”
    “So if it is conscious, one of its first experiences upon waking to the world will be pain such as nothing else we know of could possibly experience?”
    “Yep!”
    “Okay, not only is your proposal denied, you’re getting imprisoned as a danger to society.”

  • Coca_Cola_but_Commie [he/him]@hexbear.net · 2 days ago

    Hey, Siri, what is Harlan Ellison’s “I Have No Mouth, and I Must Scream” about?

    The goal, the researchers say, is to come up with a test to determine if a given AI is sentient or not. In other words, does it have the ability to experience sensations and emotions, including pain and pleasure?

    I’m not a fancy computer scientist and I’ve never read philosophy in my life but surely if an LLM could become sentient it would be quite different from this? Pain and pleasure are evolved biological phenomena. Why would a non-biological sentient lifeform experience them? It seems to me the only meaningful measure of sentience would be something like “does this thing desire to grow and change and reproduce, outside of whatever parameters it was originally created with.”