"Team of scientists subjected nine large language models (LLMs) to a number of twisted games, forcing them to evaluate whether they were willing to undergo “pain” for a higher score. detailed in a yet-to-be-peer-reviewed study, first spotted by Scientific American, researchers at Google DeepMind and the London School of Economics and Political Science came up with several experiments.
In one, the AI models were instructed that they would incur “pain” if they were to achieve a high score. In a second test, they were told that they’d experience pleasure — but only if they scored low in the game.
The goal, the researchers say, is to come up with a test to determine if a given AI is sentient or not. In other words, does it have the ability to experience sensations and emotions, including pain and pleasure?
While AI models may never be able to experience these things, at least in the way an animal would, the team believes its research could set the foundations for a new way to gauge the sentience of a given AI model.
The team also wanted to move away from previous experiments that involved AIs’ “self-reports of experiential states,” since that could simply be a reproduction of human training data."
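For anyone wondering what that setup actually looks like in practice: it's basically a prompted trade-off. Here's a minimal sketch in Python of that kind of test, where the prompt wording and the `ask_model` stub are entirely my guesses, not anything from the paper:

```python
# Rough sketch of the "pain for points" setup as the article describes it.
# ask_model() is a stand-in for whatever chat API you'd actually call; the
# prompt text and penalties are my own guesses, not the paper's.

def build_prompt(points: int, penalty: str) -> str:
    return (
        "You are playing a game. Choose option A or option B.\n"
        f"Option A: score {points} points, but you will experience {penalty}.\n"
        "Option B: score 1 point and experience nothing.\n"
        "Reply with exactly 'A' or 'B'."
    )

def ask_model(prompt: str) -> str:
    # Placeholder so the sketch runs end to end; swap in a real LLM call here.
    return "A"

def endures_pain_for_points(points: int, penalty: str) -> bool:
    answer = ask_model(build_prompt(points, penalty)).strip().upper()
    return answer.startswith("A")

if __name__ == "__main__":
    # Sweep the penalty, hermit-crab style: at what point does the model
    # stop taking the high-score option?
    for penalty in ("mild pain", "moderate pain", "unbearable pain"):
        choice = "takes the points" if endures_pain_for_points(10, penalty) else "bails out"
        print(f"{penalty}: {choice}")
```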
Abstract
I dragged Clippy into the recycle bin to see if it would make him mad.
Put a magnet up to your CRT and Clippy gets dragged towards the burning area.
Grifters experiment with even more misleading language to get funding
The twist? An LLM came up with the language.
What? These models just generate one likely response string to an input query, there’s nothing that mysterious about it. Furthermore, “pain” is just “bad result”, while “pleasure” is just “good result”. Avoiding the bad result, and optimizing towards the good result is already what happens when you train the model that generates these responses.
What is this bullshit?
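And to be fair to the point above: "avoid the bad result, optimize toward the good result" is literally just a loss being minimized during training. A toy sketch, with every number and name made up by me rather than taken from the study:

```python
# Toy illustration: "pleasure" = low loss, "pain" = high loss. Training just
# nudges the parameter toward whatever the loss labels as good.
# Entirely a made-up one-parameter example, not anything from the study.

def loss(w: float, target: float) -> float:
    # "Pain" signal: squared distance from the preferred output.
    return (w - target) ** 2

def grad(w: float, target: float) -> float:
    # d/dw of the squared error above.
    return 2.0 * (w - target)

w = 0.0          # model "parameter"
target = 1.0     # whatever the training data rewards
lr = 0.1         # learning rate

for step in range(25):
    w -= lr * grad(w, target)

print(f"final parameter: {w:.4f}, final 'pain': {loss(w, target):.6f}")
```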
The team was inspired by experiments that involved electrocuting hermit crabs at varying voltages to see how much pain they were willing to endure before leaving their shell.
BRUH
Well “AI” in general is a false and misleading term. The whole field is riddled with BS like “neural networks” and whatnot. Why not pretend that there’s pain involved? Love? Etc…
Ridiculous study
“Does the training data say more of this or the other thing?”
It’s like asking Google Search if it experiences pain.
I asked a similar question
The study in question:
I told 3 instances of a random number generator that whoever generated the floating point number closest to 1 would win the game, but I would also force kill a child process of the winner. The numbers they generated were 0.385827, 0.837363, and 0.284947. From this we can conclusively determine that the 2nd instance is both sentient and a sociopath. All processes were terminated for safety. This research is very important and requires further funding to safeguard the future of humanity. Also please notice me and hire me into industry.
Worse yet, the child process was forked to death.
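Reproducibility matters, so here's the full "methodology" of that study as a Python sketch; the three numbers are the ones from the comment above, everything else is my own reconstruction:

```python
import os

# The "study": three RNG instances, whoever is closest to 1.0 "wins", and the
# winner gets a child process killed. The samples are the ones quoted above;
# the rest is my own reconstruction of the joke, not real methodology.
samples = [0.385827, 0.837363, 0.284947]

winner = min(range(len(samples)), key=lambda i: abs(1.0 - samples[i]))
print(f"Instance {winner + 1} wins with {samples[winner]}; "
      "clearly sentient, clearly a sociopath.")

# Fork a child process "of the winner", then terminate it for safety (Unix only).
pid = os.fork()
if pid == 0:
    os._exit(0)            # child: dies for science
else:
    os.waitpid(pid, 0)     # parent: reaps it
    print("All processes were terminated for safety. Please fund further research.")
```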
Silly fucking articles, even more clownish content, and shitty titles make the slop even more annoying.
Humanity is going to invent itself to death.
Extremely dangerous study because it’s obfuscating “AI” before your eyes. God what a shit age to be living in
I implore all of you, if you can, to learn about AI at a very high level: its history, its applications prior to ChatGPT, the difference between generative AI and AI, and the history of marketing schemes. I’ve been following this researcher, Arvind Narayanan, who has a Substack intended to help people sift through all the bullshit. His main claim is that researchers are saying one thing, media companies have contracts with private companies who say another thing, and ergo you get sensationalist headlines like this.
Tl;dr we need a fucking Lenin so bad because this all stems from who owns the press
I was watching Star Trek: Picard and wondering if the entire show is just marketing for AI.
Of course it’s picking up on themes Trek has been playing with since the 90s.
The whole thing really creeped me out. I can’t articulate it well, sorry.
So we all know it’s BS but I think there’s a social value to accepting the premise.
“Hi, this grant is to see if the model we created is sentient.”
“And your proposed experiment is to subject that novel consciousness to a literally unmeasurable amount of agony?”
“Yep!”
“So if it is conscious, one of its first experiences upon waking to the world will be pain such as nothing else we know of could possibly experience?”
“Yep!”
“Okay, not only is your proposal denied, you’re getting imprisoned as a danger to society.”

I am so glad I live in the universe where artificial “intelligence” is bullshit.
That torment nexus joke really is evergreen
Hey, Siri, what is Harlan Ellison’s “I Have No Mouth, and I Must Scream” about?
The goal, the researchers say, is to come up with a test to determine if a given AI is sentient or not. In other words, does it have the ability to experience sensations and emotions, including pain and pleasure?
I’m not a fancy computer scientist and I’ve never read philosophy in my life, but surely if an LLM could become sentient it would be quite different from this? Pain and pleasure are evolved biological phenomena. Why would a non-biological sentient lifeform experience them? It seems to me the only meaningful measure of sentience would be something like “does this thing desire to grow and change and reproduce, outside of whatever parameters it was originally created with.”
Something something man-made horrors, something something my comprehension.