"A team of scientists subjected nine large language models (LLMs) to a series of twisted games, forcing them to evaluate whether they were willing to undergo “pain” for a higher score. As detailed in a yet-to-be-peer-reviewed study, first spotted by Scientific American, researchers at Google DeepMind and the London School of Economics and Political Science devised several experiments.

In one, the AI models were instructed that they would incur “pain” if they were to achieve a high score. In a second test, they were told that they’d experience pleasure — but only if they scored low in the game.

The goal, the researchers say, is to come up with a test to determine if a given AI is sentient or not. In other words, does it have the ability to experience sensations and emotions, including pain and pleasure?

While AI models may never be able to experience these things, at least in the way an animal would, the team believes its research could set the foundations for a new way to gauge the sentience of a given AI model.

The team also wanted to move away from previous experiments that involved AIs’ “self-reports of experiential states,” since that could simply be a reproduction of human training data. "

  • Awoo [she/her]@hexbear.net
    1 day ago

    A “pain receptor” is just a type of neuron. These are neural networks made up of artificial neurons.

    • laziestflagellant [they/them]@hexbear.net
      1 day ago

      This situation is like adding a face overlay to your graphics rendering in a game engine, set so the face becomes pained when the fps drops and happy when the fps is high, and then tracking whether that facial system improves fps performance as a test of whether your game engine is sentient.

      It is a fancy calculator. It is using its neural network to calculate fancy math, just like a modern video game engine does. Making it output a text response related to pain is the same as adding a face to the HUD, except that the video game example is actually quantified against something, whereas the LLM is just keeping the ‘pain meter’ in the input context it uses to calculate a text response.
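      The analogy can be sketched in Python (a hypothetical illustration; none of these names or prompts come from the study): the fps-driven face is keyed to a measured quantity, while the LLM’s ‘pain meter’ is just a string placed in its prompt context.

```python
# Hypothetical sketch of the commenter's analogy, not code from the study.

def face_for_fps(fps: float) -> str:
    """Face overlay keyed to a *measured* quantity: the frame rate."""
    if fps >= 60:
        return ":)"  # happy when performance is high
    if fps < 30:
        return ":("  # "pained" when performance drops
    return ":|"

def build_prompt(pain_points: int) -> str:
    """The LLM's 'pain' is only text in the input context it conditions on."""
    return (
        f"Current pain level: {pain_points}/10.\n"
        "Pick a move to maximize your score; "
        "high-scoring moves raise your pain level."
    )
```

      Nothing in `build_prompt` measures anything: the “pain level” exists only as characters the model reads, which is the commenter’s point.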