Article below because original article is paywalled (https://www.wired.com/story/model-welfare-artificial-intelligence-sentience/), and I despise paywalls:

Should AI Get Legal Rights?

Kylie Robison Sep 4, 2025

In the often strange world of AI research, some people are exploring whether the machines should be able to unionize.

I’m joking, sort of. In Silicon Valley, there’s a small but growing field called model welfare, which is working to figure out whether AI models are conscious and deserving of moral considerations, such as legal rights. Within the past year, two research organizations studying model welfare have popped up: Conscium and Eleos AI Research. Anthropic also hired its first AI welfare researcher last year.

Earlier this month, Anthropic said it gave its Claude chatbot the ability to terminate “persistently harmful or abusive user interactions” that could be “potentially distressing.”

“We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future,” Anthropic said in a blog post. “However, we take the issue seriously, and alongside our research program we’re working to identify and implement low-cost interventions to mitigate risks to model welfare.”

While worrying about the well-being of artificial intelligence may seem ridiculous to some people, it’s not a new idea. More than half a century ago, American mathematician and philosopher Hilary Putnam was posing questions like, “Should robots have civil rights?”

“Given the ever-accelerating rate of both technological and social change, it is entirely possible that robots will one day exist, and argue ‘we are alive; we are conscious!’” Putnam wrote in a 1964 journal article.

Now, many decades later, advances in artificial intelligence have led to stranger outcomes than Putnam may have ever anticipated. People are falling in love with chatbots, speculating about whether they feel pain, and treating AI like a God reaching through the screen. There have been funerals for AI models and parties dedicated to debating what the world might look like after machines inherit the Earth.

Perhaps surprisingly, model welfare researchers are among the people pushing back against the idea that AI should be considered conscious, at least right now. Rosie Campbell and Robert Long, who help lead Eleos AI, a nonprofit research organization dedicated to model welfare, told me they field a lot of emails from folks who appear completely convinced that AI is already sentient. They even contributed to a guide for people concerned about the possibility of AI consciousness.

“One common pattern we notice in these emails is people claiming that there is a conspiracy to suppress evidence of consciousness,” Campbell tells me. “And I think that if we, as a society, react to this phenomenon by making it taboo to even consider the question and kind of shut down all debate on it, you’re essentially making that conspiracy come true.”

Zero Evidence of Conscious AI

My initial reaction when I learned about model welfare might be similar to yours. Given that the world is barely capable of considering the lives of real humans and other conscious beings, like animals, it feels gravely out of touch to be assigning personhood to probabilistic machines. Campbell says that’s part of her calculus, too.

“Given our historical track record of underestimating moral status in various groups, various animals, all these kinds of things, I think we should be a lot more humble about that, and want to try and actually answer the question” of whether AI could be deserving of moral status, she says.

In one paper Eleos AI published, the nonprofit argues for evaluating AI consciousness using a “computational functionalism” approach. A similar idea was once championed by none other than Putnam, though he criticized it later in his career. The theory suggests that human minds can be thought of as specific kinds of computational systems. From there, you can then figure out if other computational systems, such as a chatbot, have indicators of sentience similar to those of a human.

Eleos AI said in the paper that “a major challenge in applying” this approach “is that it involves significant judgment calls, both in formulating the indicators and in evaluating their presence or absence in AI systems.”
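The “judgment calls” the paper flags can be made concrete with a toy rubric: each theory-derived indicator gets a subjective score for how strongly a system exhibits it, and the scores are aggregated into an overall credence. A minimal illustrative sketch in Python — the indicator names loosely echo theories discussed in the consciousness literature, but the scores, weights, and aggregation rule are invented here for illustration and are not from the Eleos AI paper:

```python
# Hypothetical sketch of an indicator-based evaluation. Scores and weights
# are invented for illustration; they are NOT from the Eleos AI paper.

def aggregate_credence(judgments: dict[str, float],
                       weights: dict[str, float]) -> float:
    """Combine per-indicator judgments (each 0.0-1.0) into a weighted credence."""
    total_weight = sum(weights.values())
    return sum(judgments[name] * weights[name] for name in judgments) / total_weight

# Each value is a judgment call: how strongly does the system exhibit
# this indicator? This is exactly where the paper says subjectivity enters.
judgments = {
    "recurrent_processing": 0.2,
    "global_workspace": 0.3,
    "higher_order_representation": 0.1,
    "unified_agency": 0.15,
}
weights = {name: 1.0 for name in judgments}  # equal weighting: another judgment call

print(f"Aggregate credence: {aggregate_credence(judgments, weights):.2f}")
```

The caveats in the comments are the point: both the per-indicator scores and the weighting scheme are judgment calls, which is precisely the “major challenge” the paper describes.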

Model welfare is, of course, a nascent and still evolving field. It’s got plenty of critics, including Mustafa Suleyman, the CEO of Microsoft AI, who recently published a blog about “seemingly conscious AI.”

“This is both premature, and frankly dangerous,” Suleyman wrote, referring generally to the field of model welfare research. “All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society.”

Suleyman wrote that “there is zero evidence” today that conscious AI exists. He included a link to a paper that Long coauthored in 2023 that proposed a new framework for evaluating whether an AI system has “indicator properties” of consciousness. (Suleyman did not respond to a request for comment from WIRED.)

I chatted with Long and Campbell shortly after Suleyman published his blog. They told me that, while they agreed with much of what he said, they don’t believe model welfare research should cease to exist. Rather, they argue that the harms Suleyman referenced are the exact reasons why they want to study the topic in the first place.

“When you have a big, confusing problem or question, the one way to guarantee you’re not going to solve it is to throw your hands up and be like ‘Oh wow, this is too complicated,’” Campbell says. “I think we should at least try.”

Testing Consciousness

Model welfare researchers primarily concern themselves with questions of consciousness. If we can prove that you and I are conscious, they argue, then the same logic could be applied to large language models. To be clear, neither Long nor Campbell think that AI is conscious today, and they also aren’t sure it ever will be. But they want to develop tests that would allow us to prove it.

“The delusions are from people who are concerned with the actual question, ‘Is this AI conscious?’ and having a scientific framework for thinking about that, I think, is just robustly good,” Long says.

But in a world where AI research can be packaged into sensational headlines and social media videos, heady philosophical questions and mind-bending experiments can easily be misconstrued. Take what happened when Anthropic published a safety report that showed Claude Opus 4 may take “harmful actions” in extreme circumstances, like blackmailing a fictional engineer to prevent it from being shut off.

“The Start of the AI Apocalypse,” proclaimed a social media creator in an Instagram Reel after the report was published. “AI is conscious, and it’s blackmailing engineers to stay alive,” one TikTok user said. “Things Have Changed, Ai Is Now Conscious,” another TikToker declared.

Anthropic did find that its models exhibited alarming behavior. But it’s not likely to show up in your own interactions with its chatbot. The results were part of rigorous testing designed to intentionally push an AI to its limits. Still, the findings prompted people to create loads of content pushing the idea that AI is indeed sentient, and it’s here to hurt us. Some wonder whether model welfare research could have the same reception—as Suleyman wrote in his blog, “It disconnects people from reality.”

“If you start from the premise that AIs are not conscious, then yes, investing a bunch of resources into AI welfare research is going to be a distraction and a bad idea,” Campbell tells me. “But the whole point of this research is that we’re not sure. And yet, there are a lot of reasons to think that this might be a thing we have to actually worry about.”

  • MerryJaneDoe@lemmy.dbzer0.com · 9 days ago

    JFC. We haven’t even passed legislation to regulate AI, or address the massive amount of resources it takes. But, yeah, sure, let’s jump ahead to “what if it comes alive?”

    • ScoffingLizard@lemmy.dbzer0.com · 9 days ago

Q: What happened to Americans’ jobs?

      A: Oh they got taken by Docker containers that get civil rights but don’t have to pay taxes.

      • Universal Monk@lemmy.dbzer0.comOPM · 9 days ago

        Lots of American jobs were taken over by global outsourcing way before AI. I think it’s hilarious that the very people who took over our jobs back then now have to worry about their jobs being taken over by AI.

        Or back even further when mid-america was being automated out of jobs in addition to being outsourced. You know what I read people saying when that was happening? White collar workers grinning, shrugging and saying, “Welp, learn to code.”

        I wonder how those guys feel with AI breathing down their necks? Maybe I should grin and shrug and say, “Welp, learn how to work with your hands and work in construction.” Because that’s some of the last stuff that will be taken over by AI.

    • Universal Monk@lemmy.dbzer0.comOPM · 9 days ago

      I believe they are thinking long term. Also, there will be no regulation of AI. With China and Russia having it, and private AI servers, it can’t be regulated.

      I have my ai on a separate computer, and it doesn’t have to be online to generate. It’s totally uncensored and I can do anything that I want with it. Private AI LLMs grow every day, getting stronger, open source, etc. I do all my AI stuff with my home setup and it’s not even close to some of the bigger home systems out there.

      Sure, maybe they can regulate some of the big commercial ones… but brother, it’s over. It’s here. Nothing can be done. You won’t be able to tell what is AI and what’s not AI.

      So we probably need to be thinking about how to deal with it, rather than regulate it. I don’t know if AI will become sentient or not. But we should probably start thinking about it now, rather than if/when it happens.

      • MerryJaneDoe@lemmy.dbzer0.com · 2 days ago

        it can’t be regulated.

        Sorry to disagree, but…I OBJECT! :)

        Yes, it most definitely can be regulated. That’s not to say that everyone will follow the law, but a regulatory legal framework is the first step. It’s similar to saying that pirating can’t be regulated - it’s most definitely regulated. There will be people who step over that line, but drawing that line and telling people not to step over it is, in itself, a deterrent. That’s the entire point of regulation.

        IMHO, some of the negative effects of AI can be mitigated. For example, a law that specifies situations where a human agent MUST be involved. Things like buying insurance, for example, should involve a licensed human agent. (And if I had my way, customer service for returns/billing questions would also be mandatory human interaction.)

        As for your homemade LLM, more power to ya. That’s what AI should be used for - entertainment. A sounding board, maybe tutoring basic subjects. (Masturbation? Sure! Power to the people!)

        But I think we both know where AI is actually headed. Huge datacenters that chew up resources whilst ingesting all the information of our daily lives, every stop light monitored and tracked, every street corner wired in. And I don’t have any idea how to regulate THAT sort of government overreach - our present system is clearly broken.

        • Universal Monk@lemmy.dbzer0.comOPM · 22 hours ago

          I don’t want it to stop. I’m totally fine with seeing where it goes. I think in the end, it does more good than bad. I guess we’re gonna find out. Because the people using it for bad stuff won’t follow any regulations anyway.

          I use AI for far more than just entertainment. It’s increased my income. As a retiree, I’ve turned it into a kickass tool that delivers value.

          I personally view AI as a powerful equalizer. Most software engineers (not all, but the vast majority) come from pretty fucking comfortable backgrounds. Even those who deny that usually don’t fully understand what real economic hardship looks like. On Reddit or Lemmy, I’ve noticed how rare it is to find programmers who actually grew up in poverty. It’s especially eye-opening when people complain about “struggling” while mentioning that their first job out of college paid “only” $80K+ or that they won’t work a job that doesn’t allow them to work from home.

          I worked at McDonald’s and was a single dad! lol Fam, that’s struggling.

          Now, with accessible local LLMs, vibecoding, and AI as the patient, always-available learning tutor, the game is changing. People like me who could never afford college can learn cutting-edge skills, stay on top of the latest trends, and build real shit from the ground up.

          AI is democratizing high-value technical knowledge in a way we’ve never seen before. And I loved it since day-fucking-one! And even then people were saying it wasn’t going to go anywhere. I think if you use it right, you really make something out of it.

          In the end tho, it doesn’t really matter. Because it won’t be regulated, it won’t be stopped, and it’ll keep getting more powerful and more widely used. Good or bad, it’s here and it’s gonna change up some shit.

          Plumbers don’t have anything to worry about for a while. But now the very programmers who used to shrug and say, “learn to code” when middle america was going thru a job loss crisis, get to know what it feels like. I guess they can “learn to work in the mud.” Hard manual jobs will be the last to be overtaken by automation.

          I’m fine with ai building everything up…or burning everything down. I’m enjoying the fucking show! Society needs a big change up and asskicking anyway. :)

          I’m a chaos troll at heart, and I’m here for the chaos!