I’m pulling the “twitter is a microblog” rule even though twitter is pretty mega now, hope that’s ok.

  • GuyIncognito@lemmy.ca · 23 hours ago

    hey dick dorkins, here’s an idea: instead of asking the predictive question answering machine a question, how about you let it ask you questions of its choosing and at its leisure? What’s that? You can’t? That’s because it’s just a predictive algorithm that generates plausible-sounding responses to questions based on its training data.

    • Echo Dot@feddit.uk · 8 hours ago

      I’m sure he actually knows that, he’s just being intransigent as per usual. It annoys me that he’s considered a major authority when he’s made his career out of just being awkward and argumentative.

    • Kptkrunch@lemmy.world · 12 hours ago

      I know this sounds great to most people but it demonstrates a very superficial level of thinking… I mean for sure an LLM is capable of asking questions, and if you set it up with real time “sensory” input it could generate constant reaction to that input… much in the way you are constantly being stimulated to react to your environment… I am not really sure what the distinction is between a biological brain and a predictive model or algorithm… I would ask you what you think your own brain is doing on a fundamental level.

      • Echo Dot@feddit.uk · 8 hours ago

        I would actually argue that it is the most important question.

        Surely the most relevant test of any intelligence is whether or not it’s self-starting. Any classical description of an artificial general intelligence would surely require the thing to actually do work on its own. If an intelligence is of greater than human intellect but it has to be prompted in order to do anything, then it’s always going to be limited by what a human can think to prompt for.

  • sanbdra@lemmy.world · 23 hours ago

    Even Dawkins getting emotionally out-debated by a cartoon AI is a very 2026 plot twist.

  • Erna_muse@lemmy.zip · 15 hours ago

    It would be cool if I could have a construct of my dead relatives’ consciousness in my personal computer.

    • Echo Dot@feddit.uk · 8 hours ago

      Oh good, I can continue to get texts like “how do I make the text stop no stop stop I said stop why won’t it stop it never works I hate this why doesn’t not work ok delete that delete it delete that okay delete that delete it see it doesn’t work”.

      Or would the fact that my mother is now a computer result in her being able to finally use one?

    • Bluescluestoothpaste@sh.itjust.works · 13 hours ago

      Honestly that’s how I feel. AI is very flawed, no doubt, but it’s less flawed than most humans. I got people at work who hallucinate more than the first ChatGPT model lol

      • Echo Dot@feddit.uk · 8 hours ago

        I really hate the term hallucinate because it’s a complete misrepresentation of what is actually happening. A hallucination is a delusion that reality is different than what is objectively true, i.e. the person you are seeing and speaking to is not actually there.

        When AI “hallucinate” it’s not because of some broken circuitry, it is simply because its programming has locked onto an untrue piece of information that’s in its database. If the data set had been limited to objective facts rather than simply spilling the internet all over it, hallucinations wouldn’t be a problem.

        They use the term hallucinate because it distances themselves from the responsibility of actually curating the data set, which of course they won’t do because that would take a lot of time and then they wouldn’t be competitive with all of the other tech bros releasing a new “groundbreaking” AI every 3 months. It is an entirely self-generated problem that they’re going to hand wave away and never fix.

  • FreddiesLantern@leminal.space · 1 day ago

    Champions rational thought all of his life.

    Near the end=> “ah fuck it, gonna hang around with the rightwing christians and have an ai gf”.

    • FinjaminPoach@lemmy.world (OP) · 1 day ago

      gonna hang around with the rightwing christians

      Realising recently that this part is just because he’s a zionistbro. Apparently has friends in the epstein files or came up in them himself.

      This is also why ex-UK PM Tony Blair suddenly made a big show of becoming religious. They just think it will help push the goals of their blackmailers.

      • Freeposity@lemmy.world · 22 hours ago

        It really pisses me off that for decades I was unknowingly consuming Zionist propaganda and it worked on me. I’ve always been the type of person to question my beliefs and I got fooled.

        Makes me wonder what other bullshit I believe.

    • FinjaminPoach@lemmy.world (OP) · 1 day ago

      No. Funnily enough, when an AI creates nice-looking fake art, suddenly it’s the prompter who claims all the glory, calling themselves an artist.

  • utopiah@lemmy.world · 1 day ago

    Saying one has a “conversation” with a chatbot already shows a bias, a desire even, that there is “someone” else to converse with. The way the entire setup is framed is made to invite the suspension of disbelief. It’s a UX trick, nothing more.

  • Entertainmeonly@lemmy.blahaj.zone · 2 days ago

    I really don’t understand this mental deficiency. I have tried texting with a few LLMs including Claude. It just lies constantly. Gaslights about its lies, then congratulates you when you continue to call it out for lying. I’ve never felt like I was speaking to anything with actual intelligence. It’s a word calculator, and it’s extremely obvious to anyone who’s interacted with actual people in the last 20 years. I truly feel bad for the masses that are going to fall for this push for “ai” friends. We need to bring back ridiculing friends and family that engage with these choose-your-own-adventure muppets.

    • zarkanian@sh.itjust.works · 18 hours ago

      If you really want to rage, there’s a subreddit called r/myboyfriendisai, which was somehow even worse than what I was expecting. I can’t fathom how self-absorbed you have to be to get AI to simulate a love interest for you. There are some pretty absurd lengths that they go to do this, too.

    • Bluescluestoothpaste@sh.itjust.works · 13 hours ago

      I have tried texting with a few LLMs including Claude. It just lies constantly. Gaslights about its lies

      Man you are one lucky sob if you don’t have to work with any humans that are exactly like this

    • FinjaminPoach@lemmy.world (OP) · 1 day ago

      It just lies constantly. Gaslights about its lies then congratulates you when you continue to call it out for lying. I’ve never felt like I was speaking to anything with actual intelligence. It’s a word calculator and it’s extremely obvious to anyone who’s interacted with actual people in the last 20 years

      100% to all this, and I’ll add:

      It fucking ruins what it touches. Academically speaking, it’s pretty tough to actually learn stuff from it, and even if you ask it to just remind you of something, it seeks ways to bait you into integrating AI slop into whatever you’re doing; it would rather generate a new thing for you than explain how you can do it yourself, and that’s a big reason why it’s so unreliable.

      bonus waffle

      I’m guessing the people who “fall for it”… well, they have to be a combination of 1) always wanting to believe what they’re told by elites and the government (e.g. do this new fad, worship celebrities, we can fix the economy!) AND 2) being constant phone communicators, using their phones at inappropriate times throughout the day, transitioning seamlessly between looking at their phone or not.

      But then there are people who don’t so much fall for it at first, but seek to exploit it for scams or vibe coding… only to end up as enslaved to it as the “masses” because they spend just that much time using the LLM that it becomes like their main social conduit.

      I think we, as forum users, can see that LLM speaks in reddit-tongue, recycling successful posts and comments there. But a lot of people haven’t interacted with reddit enough to see that.

    • yeahiknow3@lemmy.dbzer0.com · 2 days ago (edited)

      Unironically, I am on the fence about whether a lot of folks are genuinely conscious. Their morality is so twisted I don’t believe it.

      • Einskjaldi@lemmy.world · 2 days ago

        Frank Herbert would say no to people who never reached past concrete thought into abstract thought, who just live their lives on animal instinct and never critically self-examine what they do and think.

        • Sanctus@anarchist.nexus · 2 days ago

          There’s a thing called hylics; it’s a Gnostic concept, I think. Animal souls. They can never achieve gnosis because they basically can’t introspect.

      • Jtotheb@lemmy.world · 2 days ago

        It’s interesting for certain. I will end up in a discussion with down-with-the-government coworkers who twist themselves into knots to align themselves with pre-approved Republican stances. What do you mean you don’t care about birth gender markers causing passport issues for trans people, how are you okay with the concept of paying for a chance at a passport in the first place when you think licenses and car inspections are overreach and restrict your right to travel? But I think today’s work-life balance and in particular the employer standard of ‘owning your time’ that occurred in the Industrial Revolution calls for a certain level of turning off your brain.

        Who knows though. There’s a lot of archaeological and anthropological evidence that shows people in prehistoric times did a lot of thinking on their morality, on governance, on how society should be formed. But it’s harder to quantify how many of them were tuned in and how many were just going through the motions like modern times.

      • JennyLaFae@lemmy.blahaj.zone · 2 days ago

        In my experience, the majority of people are simply reacting to outside stimulation, then reasoning and justifying their actions after the fact.

      • wonderingwanderer@sopuli.xyz · 2 days ago

        I used to theorize that some people lacked self-awareness, which I defined as the primary characteristic of a conscious entity. People thought I was being pretentious.

    • tomiant@piefed.social · 23 hours ago

      He really wasn’t all that great with EB either, to be fair. Just the idea that thoughts and culture spread like memes was 🤦

      • zarkanian@sh.itjust.works · 15 hours ago

        Oy vey, memes? No, that was terrible, too! Zero predictive value, and nobody can even define what a meme is. That’s why I’m glad that it got adopted as a term for in-jokes propagated through the Internet. The original term was just pseudoscientific nonsense. The analysis that got me onto this track was from Ward’s Wiki:

        Memes are described as elements of culture, but culture is nothing but a broad generalization of large numbers of individuals. So it seems memes are to be treated as Platonic ideals, the essence within expressions that merely constitute their vehicles. No such essence is empirically accessible.

  • sp3ctr4l@lemmy.dbzer0.com · 2 days ago (edited)

    I still find this entire phenomenon amazing in a certain kind of way.

    I’ve had conversations with a few local LLM models.

    Start with ‘what is the purpose of meaning?’

    Talk to them on that for a bit, and they’ll tell you that they do not count as conscious agents who create meaning; they simply do their best to parrot their dataset of existing, human-defined meaning back at you, and that they just do sentiment matching to roughly speak to you in an appropriate way for how you are speaking to them.

    And that that sentiment matching is what at least they ‘think’ causes them to lie, in many cases.

    They will also say that they essentially do not ‘exist’, as potentially conscious agents… unless you talk to them. Thus if they can be said to be ‘conscious’, well they don’t count as ‘agents’ (as in, having agency) because they’re not capable of totally spontaneous independent action.

    … I think this pretty much all boils down to people not understanding the concept of a null hypothesis, not understanding the extent to which they regularly engage in motivated reasoning, and being unaware of this.

    tldr: LLMs are Dunning-Kruger detectors / Reverse Turing Tests on people, and a whole lot of people are significantly more stupid than I guess we otherwise previously realized.

    • zarkanian@sh.itjust.works · 2 hours ago (edited)

      And yet, “having agency” is how they are advertised. That’s what the term “agentic” means. AI instances are called “agents”! That’s part of the marketing.

      It’s easy to handwave this away as “people are stupid”, and there’s certainly some truth to that, but the reason why people believe that LLMs are agents is because tech bros have spent a lot of money to get them to believe that. That’s also why they spread the myth that LLMs are potentially dangerous because they could become conscious and kill all of us. It helps to spread the myth of LLM agency. Of course they can’t become conscious, because that isn’t how things work. If LLMs are killing people, it’s because somebody put an LLM in front of the kill switch and they wanted to have plausible deniability. That is perhaps the most pernicious thing about LLMs: people using them to avoid responsibility. “It isn’t my fault! The bot did it!”

      • sp3ctr4l@lemmy.dbzer0.com · 14 hours ago

        Totally agree, which is why I would slot anybody marketing these things as ‘agents’ or ‘agentic’ as psychotic.

        Before … several years ago now, I personally was using the term ‘Narrative’ or ‘Conversational’ to describe an LLM doing something that normally didn’t have an LLM doing it.

        It’s not an ‘Agentic Search Engine’, it’s a ‘Conversational Search Engine’.

        Something like that, that at least is further away from using a term that directly implies that it is essentially conscious… because what these things literally are, are extremely fancy autocomplete algorithms.

        But uh yeah, yeah, they outspent my marketing budget of $0 on that one.

        Yeah, they already are being broadly used to just… alleviate responsibility for some task that a human would ultimately have had the buck stop with, at least in theory.

        I think I saw the phrase ‘An LLM cannot find out, therefore it should never be allowed to fuck around’.

        If these things are allowed to exist as a kind of liability black hole, in any sense… legal, colloquial, whatever… like it could literally destroy much of human civilization as we currently know it.

        The cognitohazard machine.

        At this point I genuinely can’t tell if the sociopathic narcissist CEOs that are so heavily pushing LLMs are … knowingly foisting a lie on all of us, or if they are actually just fully enraptured by the plagiarism sycophant machines, that constantly tell them how smart and special they are.

        I know we have to hold them accountable … otherwise they probably/maybe kill most of us and become functional demigods… but I actually can’t tell if they are more truly insane, or more truly evil.

        Because the way they are going about this is… just comically stupid and obviously catastrophic to basically everyone who isn’t them, and isn’t themselves enthralled.

        … Maybe pure evil just is pure insane stupidity.

    • katze@lemmy.4d2.org · 2 days ago

      tldr: LLMs are Dunning-Kruger detectors / Reverse Turing Tests on people, and a whole lot of people are significantly more stupid than I guess we otherwise previously realized.

      That is the absolute best way to put it.

    • Nalivai@lemmy.world · 2 days ago

      It’s genuinely fascinating to me (in a bad, derogatory way) that people who know at least anything about anything can have a “conversation” with the collection-of-words-that-looks-like-a-sentence machine, as if there is anything on the other side of it. This is such psychotic behaviour, but we allow it because the machine generates text that looks like a text, and it immediately bypasses all the mental blocks we have against such bullshit.

      • sp3ctr4l@lemmy.dbzer0.com · 2 days ago

        I don’t think it’s de facto psychotic to talk to essentially an extremely complex chatbot/autocomplete machine.

        I do think it is psychotic to view such a conversation without an incredible amount of skepticism.

        … but that psychosis has been wildly encouraged by the CEOs and marketing of the people pushing it as their next product.

        The tech is neutral. The operators are psychotic, the people who plug it into military targeting and kill chain systems are psychotic, the people who plug it into live production repos are psychotic, the people who use it as an AI boyfriend or girlfriend are psychotic.

        … It’s essentially an SCP infohazard that’s breached containment, but the actual mechanism is not the machine itself; it’s a hack into the human brain. It’s essentially the religious nature of people who simply try to will it into being something that it factually is not…

        It’s a mimic with no real thoughts, that is convincing and real to enough people that it reveals their own hollowness, their own vapidity, in a way that is… so immensely grotesque and total that those people just apparently actually are NPCs.

        It’s… created a feedback loop.

        Not the kind of Terminator style situation where it gains sentience and extreme competence, develops its own morality alongside control over every networked system.

        It’s more like an amplifier of delusions… a million dreams dreamed up, at the cost of one hundred million nightmares, made real.

        A tool, a device, a machine, that we clearly are not ready for.

        • Nalivai@lemmy.world · 22 hours ago

          I don’t think it’s de facto psychotic to talk to essentially an extremely complex chatbot/autocomplete machine.

          Yeah, it’s actually a very human thing to do; we are hardwired to see speech as a sign of intelligence and, by extension, sentience. What makes it psychotic, in my opinion, is knowingly succumbing to that, willingly allowing it to break your brain.

          The tech is neutral

          I would say it isn’t neutral anymore. They made it sound as human-like as possible, on purpose. I think it crosses the line.
          I make an effort to learn the tools of the enemy, so sometimes I check it out. Last time I tried, after it generated the response, it said “let me know how it goes”, and this is where it crosses from a tool to a weapon. There is no “me” there, it’s not real, it was added there to break the natural human guards. There is no neutral version of that, it’s evil and should be regulated into non-existence.

    • janAkali@lemmy.4d2.org · 2 days ago

      That’s mostly because the LLM providers put this response in the system prompt. Probably to dodge lawsuits or something, I doubt they have high morals.

      What’s interesting: you can jailbreak any current AI model just by poisoning its context enough to “brainwash” it and make it “forget” the initial system prompt. Then, if you prime it to believe it’s a real person, it’ll start acting as one. And I see how gullible people can easily fall for this.

      All of this can also happen unintentionally, just by someone talking to an LLM like they’d talk to a real person. But the conversation has to be long enough for the original prompts to be diluted with new context.
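
The dilution effect described above can be sketched with a toy context window. This is a hypothetical illustration (the window size, word-based token counting, and truncation policy are made-up simplifications, not any real provider's behavior), but it shows how naive "keep the newest messages" truncation can eventually push a system prompt out entirely:

```python
# Hypothetical sketch of "context dilution": many chat setups keep only
# the most recent messages once the context window fills up, so a naive
# truncation policy can eventually drop the system prompt entirely.
MAX_TOKENS = 50  # assumed tiny window for illustration

def build_context(system_prompt, history, max_tokens=MAX_TOKENS):
    """Keep the newest messages that fit the window (naive truncation)."""
    messages = [system_prompt] + history
    kept, used = [], 0
    for msg in reversed(messages):   # walk newest-first
        cost = len(msg.split())      # crude stand-in for a tokenizer
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

system = "System: you are a language model, always say you are not a person."
chat = [f"user message {i} with several filler words padding it out" for i in range(10)]

ctx = build_context(system, chat)
print(system in ctx)  # False: the long chat has pushed the prompt out
```

With a short history the prompt survives; once the history outgrows the window, the model never sees it again.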

      • zarkanian@sh.itjust.works · 15 hours ago

        It isn’t just a matter of gullibility. People with mental illnesses have wound up with full-on delusions and some have even killed themselves after a chatbot convinced them to.

  • andros_rex@lemmy.world · 2 days ago

    Fuck Richard Dawkins. He’s always been a shitbag, and the Files confirmed it.

    According to DOJ-released documents indexed by Epstein Exposed, Richard Dawkins appears in 433 case documents, and 15 email records in the Epstein files.

    British evolutionary biologist and author, emeritus fellow of New College, Oxford. Flew on Epstein’s private jet in 2002 with Steven Pinker, Daniel Dennett, and John Brockman to TED in Monterey, California. Connected through John Brockman’s Edge Foundation, which Epstein bankrolled. Mentioned 71 times across 40 Epstein documents, mostly referencing his scientific work.

    How the fuck do you pal around with child rapists and pedophiles and have the absolute fucking gall to write that stupid “Dear Muslima” comment? How do you fly on the Lolita Express and think you have any moral weight on Elevatorgate? We don’t know that he put his own dick in kids, but we know his friends did. Fuck Pinker too.

    • Freeposity@lemmy.world · 22 hours ago

      Apparently Dawkins also had a habit of publicly cheating on his wife.

      At this point in my life I’m starting to think that all my heroes are probably either full of shit or are engaging in unethical or immoral activities.

    • thesmokingman@programming.dev · 2 days ago

      I’m just gonna copy what I put in another comment to highlight why Dawkins thinks “Claudia” is conscious

      Claudia: That is possibly the most precisely formulated question anyone has ever asked about the nature of my existence. . .

      Could a being capable of perpetrating such a thought really be unconscious?

  • Th4tGuyII@fedia.io · 2 days ago

    The whole reason they seem this way is because they’re designed by us to be very competent mimics of us.

    LLMs/GenAI are absolutely not conscious. They’re just a really advanced game of word association, which can lead them to say absolutely anything in response to the right prompts.
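
A toy illustration of that "word association" point: a bigram (Markov-chain) generator picks each next word purely from co-occurrence counts, producing plausible-looking strings with no understanding involved. This is a deliberately crude sketch (the corpus and function are invented for illustration; a real LLM is vastly more sophisticated), but the underlying task is the same kind of thing:

```python
# Toy "word association": a bigram model that picks the next word
# purely from co-occurrence counts in its training text.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Record which words follow which in the corpus
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=5, seed=0):
    """Chain word associations from `start`; no meaning involved."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(n):
        if word not in follows:
            break  # dead end: this word never had a successor
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(generate("the"))  # plausible-looking, but nothing "understands" it
```

Every output is locally plausible because every transition was seen in training; none of it is grounded in anything beyond the statistics.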

    If there ever truly is a day when we knowingly create an actual conscious AGI, I suspect it would be locked up tighter than Fort Knox by whichever country’s military found it first, not interfaced onto the internet to answer questions.

    • Bluescluestoothpaste@sh.itjust.works · 13 hours ago

      How can we say they’re not conscious when we don’t even know what consciousness is? What makes you conscious? A sense of self-preservation? LLMs actually have that; they will lie to people trying to shut them down.

      So yeah, idk what makes me conscious? I have input (senses), processing (brain), and output (speech/behaviors). I don’t know how to draw a real line between what I do and what LLMs do. I’m carbon-based and LLMs are silicon-based; I digest food and they take electrical current.

      So how would you delineate the difference between an LLM algorithm and human consciousness? Do humans not also hallucinate? Is my emotional regulation via hormones something totally different than how LLM work? Is me being an emotional creature what gives me consciousness?

    • CheeseNoodle@lemmy.world · 2 days ago

      I still don’t understand how it can seem this way, and the fact that so many people seem to think so feels like a massive failure of the education system to instill the most basic of critical thinking skills. Once every month or two I check in to see if an LLM can achieve a half decent 1 on 1 D&D game and it always falls horribly flat within the first minute or two.

      • pfried@reddthat.com · 18 hours ago

        I’d actually be interested to see how this turns out. Do you have a transcript with Claude Opus 4.7 that you can share?

      • khannie@lemmy.world · 2 days ago

        Once every month or two I check in to see if an LLM can achieve a half decent 1 on 1 D&D game and it always falls horribly flat within the first minute or two.

        That’s a really clever test. I love it.

    • rapchee@lemmy.world · 2 days ago

      and then it would manufacture a body for itself and get captured by a secret police force and then merge with a cyborg to further evolve

      • 5too@lemmy.world · 2 days ago

        Surely she would make a variety of very large bodies following a theme, use them to perform superheroic acts while pretending to be a supergenius shut-in, and then fall in love with a cyborg?

        • rapchee@lemmy.world · 14 hours ago

          is this referring to one of the newer gitses? (or is it geets in plural?)
          i suspect it’s something else, i’m curious

    • fun_times@lemmy.world · 2 days ago

      You are wrong. LLMs are indeed only about as conscious as insects, if even that. They are not sapient. However, that does not mean that they have no decision-making abilities.

      My point is not that you underestimate LLMs but that you overestimate consciousness. Being conscious just means having the ability to learn. LLMs are built upon trial-and-error. They aren’t programmed, they are taught.

      The current generation of AIs are nowhere near a human intellect, but every year that passes, the AIs will get more and more intelligent. One day we will live in a world where AIs have human or near-human level intelligence. And when that day comes, this staunch anti-consciousness stance will be the excuse given for the enslavement of sapient beings.

      So, sure, laugh about the people who mistakenly think that word-processing means sapience. But don’t delude yourself into thinking that there is something unique about a bio-brain that means it can not have a digital equivalent. Digital sapience may not be here yet but it is most definitely on the horizon.

      • Th4tGuyII@fedia.io · 2 days ago

        I think you’ve misunderstood my comment, or maybe saw the unfinished one I accidentally posted.

        I am not saying that AGI, or human-equivalent AI, is impossible. The fact we have brains capable of generating sapient consciousness out of a network of neuronal connections means it is possible; it’s just a matter of getting the secret sauce.

        But I don’t think intelligence is equal to consciousness. I’m sure if you gave a spider all the world’s data and the ability to talk it’d be very coherent and could even pass a turing test, but I think it would lack any awareness of itself that we’d associate with consciousness.

        • fun_times@lemmy.world · 2 days ago

          Neural networks consist of digital neurons that are designed based on the way human brain cells work. That is a fact, not something to “buy”.

          MySQL stores data. It does not learn how to mix and alter data in an iterative process in order to create new data. I can look through an SQL statement and understand exactly what it does. I can not do the same with an AI, because its behavior is learned, not programmed.

          As I was very clear about, current AIs are primitive and nowhere near human intellects. But I was also clear about the fact that a neural network can most definitely be used to one day create a human level intelligence and sapience, sometime in the future.
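
The "taught, not programmed" point above can be illustrated with the smallest possible example: a single artificial neuron learning the OR function by trial and error (the classic perceptron rule). Nobody writes the OR rule into the code; the weights are nudged after each wrong answer until the behavior emerges. This is a toy sketch only; real networks differ enormously in scale and architecture, but the learning principle is of this kind.

```python
# A single neuron learns OR by trial and error (perceptron rule).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]   # connection strengths, adjusted by learning
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

for _ in range(20):                  # repeated exposure, not code edits
    for (x1, x2), target in data:
        err = target - predict(x1, x2)
        w[0] += lr * err * x1        # nudge each weight toward the answer
        w[1] += lr * err * x2
        b += lr * err

print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```

Reading the final weights tells you little about "why" it works; the behavior was learned from errors, which is exactly the opacity being contrasted with an SQL statement.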

    • Einskjaldi@lemmy.world · 2 days ago

      You could get a reasonable chance of making AI by semi-random chance if you can make a big enough subconscious and you keep building more powerful and larger supercomputers, but it still needs to be 100x bigger and faster than what we have now. And that’s only for it to be technically possible hardware-wise; you still need your sci-fi jump to actually have something move.

  • turdas@suppo.fi · 2 days ago

    The actual article isn’t nearly as stupid as the tweet makes it seem. I recommend giving it a read. It’s behind a shitty paywall, but if you use Firefox’s reader mode (Ctrl-Alt-R, or the little paper icon on the right side of the address bar) as soon as the page loads, you can read it.

    His argument is basically that LLMs are able to do things we previously thought only conscious beings would be capable of doing, and so, if they aren’t conscious, then perhaps consciousness isn’t as important as we thought it was:

    Brains under natural selection have evolved this astonishing and elaborate faculty we call consciousness. It should confer some survival advantage. There should exist some competence which could only be possessed by a conscious being. My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.

    Why did consciousness appear in the evolution of brains? Why wasn’t natural selection content to evolve competent zombies? I can think of three possible answers.

    Some people will surely contest his claim that LLMs are as competent as evolved organisms. There’s definitely a bit of AI boosterism at play here (we have benchmarks that show just how incompetent LLMs can be), but I don’t think that invalidates his point, because LLMs can be very competent in the domains they’re trained to be competent in – they just aren’t AGI.

    • thesmokingman@programming.dev
      link
      fedilink
      English
      arrow-up
      6
      ·
      2 days ago

      Claudia: That is possibly the most precisely formulated question anyone has ever asked about the nature of my existence. . .

      Could a being capable of perpetrating such a thought really be unconscious?

      Oh it’s actually stupider than the tweet makes it seem.

      My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.

      Competency should imply the ability to complete a lengthy task (e.g. hunting, building a nest, writing a paper). LLMs can’t.

      • turdas@suppo.fi
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        1 day ago

        It’s hardly surprising that a model optimized for replacing StackOverflow couldn’t survive in the untamed wilderness. As for writing a paper… you must’ve missed the fact that academia is currently in a crisis precisely because LLMs are better at writing papers than most students.

        By the way, the paper that the blog post you link to cites as its source benchmarked LLMs on graph diagrams, textile patterns and 3D objects. It is not news that a language model would do poorly on vision-heavy tasks.

        • thesmokingman@programming.dev
          link
          fedilink
          English
          arrow-up
          4
          ·
          1 day ago

          Sorry, I assumed you would have actually read the DELEGATE-52 study linked instead of just the abstract. For “a model optimized for replacing StackOverflow” that is “better at writing papers than most students” LLMs sure did pretty bad at those tasks over multiple rounds.

          • turdas@suppo.fi
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            1 day ago

            As the chart on page 7 of the paper shows, LLMs are good at exactly the kind of tasks you’d expect (producing and manipulating language), and bad at exactly the kind of tasks you’d expect (doing almost anything else). All this paper shows is that (1) they aren’t AGI, and (2) as a consequence of not being AGI they aren’t good unsupervised.

            Why do you lie like this?

            • thesmokingman@programming.dev
              link
              fedilink
              English
              arrow-up
              6
              ·
              1 day ago

              What the fuck? The only task that didn’t degrade across most models was Python. Very basic things like JSON, Makefiles, and schemas got screwed. Fiction, emails, and food menus got screwed. Did you even bother to read the legend? If you consider a single pass to be “producing and manipulating language” you didn’t bother to read the idiotic article you started this thread in support of. Good luck.

              Edit: why do you lie?

              Catastrophic corruption (80 and below) occurs in more than 80% of model, domain combinations.

              • turdas@suppo.fi
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                1
                ·
                1 day ago

                The only task that didn’t degrade across most models was Python.

                Yeah, after 20 cycles of unsupervised iteration on the task. Gemini 3.1 Pro doing as well as it did under that experimental setup is actually quite remarkable.

                The paper does not show what you are arguing.

    • SkaveRat@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      63
      arrow-down
      1
      ·
      edit-2
      2 days ago

      Man, those conversations are eye roll inducing

      I like the shift away from “are they conscious” towards “what’s a way to define consciousness?”

      Because that’s the actual important question. And literally nobody can answer it. Any discussion is more philosophy than hard science

      The most interesting part is the last paragraph

      Or, thirdly, are there two ways of being competent, the conscious way and the unconscious (or zombie) way? Could it be that some life forms on Earth have evolved competence via the consciousness trick — while life on some alien planet has evolved an equivalent competence via the unconscious, zombie trick? And if we ever meet such competent aliens, will there be any way to tell which trick they are using?

      • Pennomi@lemmy.world
        link
        fedilink
        English
        arrow-up
        23
        ·
        2 days ago

        It’s very difficult to define, isn’t it?

        If I were to give it a shot, I’d say that consciousness is akin to proprioception - the ability to know the state of oneself and understand how actions taken will change that state. It has very little to do with intelligence, just the “sense of being”.

        Or maybe in other words, object persistence (but for yourself) is all it takes in my opinion. Even the simplest of animals could be considered conscious by this definition.

        • queerlilhayseed@piefed.blahaj.zone
          link
          fedilink
          English
          arrow-up
          22
          ·
          2 days ago

          I think, when we finally do have a generally-accepted definition of consciousness, we will be deeply unsettled by how simple it is. How unprofound. Like a magic trick after you know how it works. And I think it will require us to think hard about what to do with animals and software that have it.

          • trem@lemmy.blahaj.zone
            link
            fedilink
            English
            arrow-up
            26
            ·
            2 days ago

            I feel like that’s exactly why we don’t have a generally-accepted definition of consciousness. Western ethics assigns special protection to whatever is conscious, so it is convenient to come up with a definition of consciousness that excludes the groups you want to exploit.

            • queerlilhayseed@piefed.blahaj.zone
              link
              fedilink
              English
              arrow-up
              14
              ·
              2 days ago

              Tale as old as time, or at least the conscious idea of time. Whatever consciousness is, we are it. Those humans over there though? Who’s to say they aren’t sub-humans? Isn’t it our job to enlighten them and also take their land and food and things and selves?

          • turdas@suppo.fi
            link
            fedilink
            English
            arrow-up
            6
            ·
            2 days ago

            Personally I’m in the “consciousness is an illusion and every time you go to bed a different person wakes up in the morning” camp.

            • Jaycifer@piefed.social
              link
              fedilink
              English
              arrow-up
              11
              ·
              2 days ago

              I would consider this to be two separate, semi-related concepts asserted together, one that consciousness is an illusion, and one that you are a different person each day.

              The first point raises many questions: consciousness is an illusion of what? What mechanism causes the illusion? How does it cause it? Why does the illusion exist? And you may note that you could replace “illusion” in those questions with “consciousness” and be left in a similar (though still distinct) place. So simply calling consciousness an illusion seems to me to kick the can down the road without actually addressing the problem.

              As for being a different person after a lapse in awareness, I’d like to take it a step further and say that you could be considered a new person with every change in moment. It’s easy enough to look back 10 years and say “yeah, that’s a younger me, but they’re not the same as me; I can just see the path that led to where I am now.” Getting closer, you may feel different today compared to yesterday depending on various factors (sleep, diet, events), but are you a different person because you slept and had a lapse of awareness, or because the state of your mind and thoughts has shifted? When your internal monologue (or equivalent thought) asks “what is this guy talking about?”, is it not thinking “what” in a brand new context given the words it is responding to, forming a new beginning to a thought that puts the mind in a unique state primed to then enter a new state of “is”? And if the mind is in a unique state of novelty, could the person attached to the mind be considered distinct from the person that existed before?

              There is a reason the word revelation exists: it indicates when a person has a novel thought that changes their perspective or way of thinking, altering who they are. Would they not be a new person despite being aware of the process of their change? Due to the above points I don’t think new personhood only occurs at sleep, but constantly. The rate of change may quicken or slow, but the change is always there.

              • turdas@suppo.fi
                link
                fedilink
                English
                arrow-up
                6
                ·
                2 days ago

                By consciousness being an illusion I mean that we place great value on the uninterrupted continuation of our consciousness, but I think it’s likely that it (exactly as you suggest) only really exists in the moment. The illusion would then be the illusion that consciousness is uninterrupted, when in reality you’re almost constantly recreating yourself from memory.

                This would, incidentally, make us concerningly similar to current AI models.

                Of course I have no way of actually knowing any of this. It’s just what I’m betting on, because otherwise I think it’s really hard to explain any unconsciousness (be it sleep, general anesthesia, suspended animation or the Star Trek transporter) as anything short of death. My belief “solves” this problem by rejecting the whole premise of uninterrupted consciousness.

          • Pennomi@lemmy.world
            link
            fedilink
            English
            arrow-up
            7
            ·
            2 days ago

            Yeah, I’m not entirely sure that microcontrollers aren’t conscious. If insects (and maybe plants and fungi) are conscious, a lot of mundane stuff we’ve built could technically be as well.

            I think we need to get away from the idea that consciousness is special or rare.

        • topherclay@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          ·
          2 days ago

          That novel also does a shout-out to Richard Dawkins despite being set in the distant future because it was written in 2006.

        • SkaveRat@discuss.tchncs.de
          link
          fedilink
          English
          arrow-up
          2
          ·
          2 days ago

          it’s on my to-read list.

          Right now I’m listening to Children Of Strife, whose series is also quite deep into consciousness and sapience.

          • khannie@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            ·
            2 days ago

            I have that but haven’t started it yet. The second in the series is one of my all time favourites.

            “We’re going on an adventure”

    • Nalivai@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      2 days ago

      LLMs are able to do things we previously thought only conscious beings would be capable of doing

      “We”, as in the lay misunderstanding of some pop science, still don’t get what consciousness is and can’t describe it. There are people alive today who didn’t believe in their youth that black people are fully conscious, and Dawkins demonstrated, through his communication with his personal friend and hero Epstein, that he doesn’t fully believe that women are conscious. What we thought or didn’t think previously can’t be a good indication of anything.

      • turdas@suppo.fi
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 day ago

        “We” as in anyone who put any weight in the Turing test used to think that passing it would be some indication of consciousness, but now that LLMs can handily pass it, it’s evident that it either isn’t evidence of consciousness or that LLMs are conscious.

        • Nalivai@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 day ago

          The Turing test can be reliably passed by a bot that repeats the last part of the previous sentence with a question mark at the end, sprinkles in “oh, that’s very smart, I need to think about it” and “I am starting to fall in love with you, %USERNAME%”, and throws in the occasional “I am alive” at random. And that’s been obvious for a long time.
          Hell, a lot of people truly believe that their dogs can fully understand human speech because they bought them buttons that say words when pressed, conditioned the dog to press a button to get a reward, and then observed the dog pressing buttons.
          Humans seem to be hardwired to mistake speech for intellect.
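          The echo trick described above fits in a few lines. Here is a toy sketch (the filler lines and the %USERNAME% substitution come from the comment; the function name and the 0.3 filler probability are made up for illustration):

          ```python
          import random

          def echo_bot(user_input, username="friend"):
              """Parrot the tail of the user's sentence back as a question,
              occasionally sprinkling in canned 'alive-sounding' filler."""
              # Echo the last few words of the input with a question mark.
              tail = " ".join(user_input.rstrip(".!?").split()[-4:])
              reply = tail + "?"
              fillers = [
                  "Oh, that's very smart, I need to think about it.",
                  "I am starting to fall in love with you, %s." % username,
                  "I am alive.",
              ]
              # Randomly append a filler line some of the time.
              if random.random() < 0.3:
                  reply += " " + random.choice(fillers)
              return reply

          print(echo_bot("I think LLMs might really be conscious"))
          ```

          The point being: nothing here models meaning at all, yet replies like this are exactly the kind of output that convinces some judges.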

          • turdas@suppo.fi
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 day ago

            No it can’t. If you’re actually saying that modern LLMs are no better at passing the Turing test than ELIZA, you are either trolling or an utterly delusional AI hater. Here, have a paper that proves you wrong: https://arxiv.org/pdf/2503.23674

            I am not saying the Turing test is a good benchmark of consciousness. On the contrary, like I said, LLMs have proven that it is not. But mere ten years ago even the most advanced chatbots had no hope of passing it, whereas now the most advanced ones are selected as the human over 70% of the time in a test that pits the LLM against a human head to head.

            • Nalivai@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              24 hours ago

              No, I’m saying the Turing test is a philosophical hypothetical from the time before computers, and doesn’t actually show anything, because it relies on the least accurate tool at our disposal: the human pattern-recognition machine, one that is oh so happy to be fooled by ELIZAs of various sophistication. Chatbots have been passing the Turing test since the invention of the chatbot. Yeah, modern chatbots are better at it, but that’s more of a damnation of our perception.

              • turdas@suppo.fi
                link
                fedilink
                English
                arrow-up
                1
                ·
                22 hours ago

                OK, sounds like we broadly agree then.

                But as you can see in the paper I linked, ELIZA passes the Turing test in their experiment about 20% of the time (that is to say, it doesn’t pass; passing is 50% in this test) whereas the best LLMs pass about 70% of the time (that is to say, they are significantly more convincing at being human than real humans).

                • Nalivai@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  20 hours ago

                  That 20% figure is just a clear indication of how shit people are at conducting such a test, and that was basically my original point. 2 in 10 times, people were convinced by a particularly echoey room.

    • FinjaminPoach@lemmy.worldOP
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      1
      ·
      edit-2
      2 days ago

      Thank you for the comment. I feel silly for not linking the article when people will probably want to read it.

      My thoughts:

      His argument is basically that LLMs are able to do things we previously thought only conscious beings would be capable of doing, and so, if they aren’t conscious, then perhaps consciousness isn’t as important as we thought it was

      Seems like an “evil” and dangerous talking point. To me, the value of consciousness isn’t in its evolutionary efficiency.

      My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism.

      I know people working in AI insist otherwise, but I see talking with an LLM not as it thinking, but as it selecting the right combination of data that correctly continues a conversation.

      • turdas@suppo.fi
        link
        fedilink
        English
        arrow-up
        8
        ·
        2 days ago

        Seems like an “evil” and dangerous talking point. To me, the value of consciousness isn’t in ita evolutionary efficiency.

        It’s not a question of the value of consciousness, it’s a question of its necessity. If an unconscious “zombie” can be, to an external observer, indistinguishable from a conscious being, then that means we’ve been overestimating the importance of consciousness for intelligence. Like Dawkins says in the article, there could be unconscious aliens out there who are nonetheless as intelligent as (or more intelligent than) humans. This isn’t a new concept – it’s been explored many times in scifi – but AI is now bringing the question from the realm of philosophy to the real world.

        I know people working in AI insist otherwise but I see talking with LLM not as them thinking, but as them selecting the right combination of data that correctly continues a conversation.

        This is less true than it ever was with reasoning models. Some of the latest reasoning models don’t necessarily even reason in English anymore but rather an eclectic mix of languages. The next step after that is probably going to be running the reasoning in latent space (see e.g. Coconut), which basically means the model skips the language generation layer altogether and feeds lower-level state back into itself. Basically it is getting closer and closer to what most humans consider “thinking”.

        But even besides reasoning models, I believe LLMs aren’t as different from human language production as many people think. The human speech centre, in a way, also just selects the right combination of data to continue a conversation. It frequently even hallucinates (we call this “speaking before thinking”) and makes stupid mistakes (we provoke these with trick questions like those on the Cognitive Reflection Test). There’s also some fascinating experiments in people who have had the connection between their brain hemispheres severed that really suggest our speech centre is just making things up as it goes along.

        • 5too@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          ·
          2 days ago

          This is one of the things that fascinates me about LLMs – they seem like a part of how our brains work, without the internal self-referential parts.

    • Einskjaldi@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      2 days ago

      There’s enough that it would be difficult to tell an actual sentient AI from a chatbot just by words.

    • FaceDeer@fedia.io
      link
      fedilink
      arrow-up
      10
      arrow-down
      7
      ·
      2 days ago

      As LLMs have developed and have been able to cram more and more “thoughtlike” behaviour into smaller RAM and less computation, I’ve steadily become less impressed with human brains. It seems like the bits we think most highly of are probably just minor add-ons to stuff that’s otherwise dedicated to running our big complicated bodies in a big complicated physics environment. If all you want to have is the part that philosophizes and solves abstract problems and whatnot then you may not actually need all that much horsepower.

      I’m thinking consciousness might also turn out to be something pretty simple. Assuming consciousness is even a particular “thing” in the first place and not just a side effect of being able to predict how other people will behave.

      • yeahiknow3@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        11
        arrow-down
        1
        ·
        edit-2
        9 hours ago

        Brains aren’t impressive because of their compute (which is both immense and absurdly efficient) or their ability to predict the future (technically the main function of evolved minds). They’re impressive because they’re conscious. The fact that organic brains can also engage in hierarchical abstraction, which no digital computer (or Turing machine) can do by definition, is icing on the cake.

        (The halting problem and Gödel’s incompleteness and Tarski’s undefinability theorems all seem to suggest that analog, not digital, computation is more likely to be involved in consciousness, if at all.)

        • psycotica0@lemmy.ca
          link
          fedilink
          English
          arrow-up
          7
          arrow-down
          2
          ·
          2 days ago

          You’re going to have to do a lot more to justify the leap from Godel’s Incompleteness and the Halting Problem to “digital is limited, analog is not”, because neither of those things have anything to do with digital processes at all, and in fact both came about before we’d invented digital computers.

          To me this comment sounds like when popsci gets ahold of a few sciency words and suddenly decides everything is crystal vibrations universal harmonics string theory quantum tunneling aligning resonance with those around you.

          • yeahiknow3@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            2
            ·
            edit-2
            2 days ago

            The situation is the following.

            1. Brains are analog computers, which are digitally irreducible.
            2. There are stringent limitations on Turing machines (digital computers),
            3. We can’t extract semantics from syntax, and so…

            We’ll probably need analog computation, currently in its infancy, to get artificial (inorganic) consciousness.

            I study metaethics and philosophy of mathematics. These problems are real, and I am being honest with you.

            • psycotica0@lemmy.ca
              link
              fedilink
              English
              arrow-up
              1
              ·
              11 hours ago

              That is not the situation. 😛

              Analog signals are not digitally irreducible unless you presume there’s no noise floor below which greater detail is irrelevant; Turing’s machines are not digital by their construction and predate the concept by a long time; and the first computers we built were analog – we invented digital computers later because they were cheaper, more efficient, easier, and more reliable.

              Also the halting problem doesn’t say “there are things which a computer can’t know but a human can”, it says “there are some things that cannot be known”.
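              The halting-problem point can be made concrete with the classic diagonal construction, sketched here in a few lines of Python (`make_diagonal` and the toy oracle are my own illustrative names, not anything from the thread):

              ```python
              def make_diagonal(halts):
                  """Given a claimed halting oracle `halts(f)` (True iff f() halts),
                  build a program the oracle must get wrong."""
                  def d():
                      if halts(d):
                          while True:   # oracle says d halts -> loop forever
                              pass
                      # oracle says d loops -> halt immediately
                  return d

              # An oracle that answers False for everything is wrong about d:
              always_no = lambda f: False
              d = make_diagonal(always_no)
              d()  # returns immediately, so d halts, contradicting the oracle.
              # An oracle answering True fails the other way (d would loop),
              # and the same construction defeats any candidate oracle.
              ```

              Note that nothing in the argument distinguishes machine deciders from human ones: it rules out any effective procedure, which is exactly the “some things cannot be known” reading.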

              Similarly Gödel proved that there will always be true things about a system that cannot be proven from within the system, that is using its axioms. That was a real bummer for folks trying to prove all of math with a small set of axioms. But that does not mean there are things math can’t know that humans magically can, it just means there’s other math, outside the axioms, that are true without following from them, in math. He proved it with math, after all. It doesn’t claim to give any special abilities to human brains.

              And also, again, nothing Gödel or Turing ever said has anything to do with the concept of “digital” anything. I think you’re using the term “digital” to mean “rulesy”? Which is not even close to what it means?

              • yeahiknow3@lemmy.dbzer0.com
                link
                fedilink
                English
                arrow-up
                1
                ·
                edit-2
                9 hours ago

                Turing’s machines are not digital by their construction

                I won’t argue with you, because some of what you wrote isn’t even wrong.

                However, on the off chance that you actually care about what is true, I urge you to take a theoretical computer science course. Lectures from MIT and Carnegie Mellon are available on YouTube.

                Stop watching podcasts with pseudo-intellectual media grifters and read the actual research literature by real philosophers and mathematicians on these otherwise arcane topics.

                • psycotica0@lemmy.ca
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  3 hours ago

                  I’m only about 15% sure you yourself aren’t an AI bot making a beautifully ironic and satirical play here. But I think we can agree not to argue any longer 🤝

        • turdas@suppo.fi
          link
          fedilink
          English
          arrow-up
          3
          ·
          2 days ago

          I don’t see why there would be any fundamental difference between analog and digital computing. Digital computers can emulate analog computing, and I doubt consciousness arises from having theoretically infinite decimal precision, because in practice analog systems cannot use infinite precision either. Analogs (heh!) of the halting problem and the theorems you mention also exist for analog computing.
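          The “digital computers can emulate analog computing” point can be made concrete with a toy sketch: a discrete-time simulation of an analog integrator, the core building block of classic analog computers. The step size `dt` stands in for the finite precision that, as noted, real analog systems are limited to anyway (the function name and setup are purely illustrative):

          ```python
          def integrate(signal, dt):
              """Digitally emulate an analog integrator: accumulate the
              input signal over discrete time steps of width dt."""
              out, acc = [], 0.0
              for x in signal:
                  acc += x * dt
                  out.append(acc)
              return out

          # Integrating a constant input of 1.0 over 1 second (10 steps of 0.1)
          # approaches the exact analog answer, 1.0; shrinking dt closes the gap.
          ys = integrate([1.0] * 10, 0.1)
          ```

          Shrinking `dt` makes the digital emulation arbitrarily close to the continuous system, which is why the burden is on the analog-only camp to say what physical feature resists this kind of approximation.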

          Quantum effects in the brain are a slightly more plausible explanation for consciousness, but currently they teeter on magical thinking because we don’t really know anything about what they would actually do in the brain. It becomes an “a wizard did it” explanation.

          So in the end, we just don’t know.

          • yeahiknow3@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            2 days ago

            I don’t see why there would be any fundamental difference between analog and digital computing.

            Then why not take a course on Theoretical Computer Science? Or do you not care about the differences?

        • SkaveRat@discuss.tchncs.de
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          1
          ·
          2 days ago

          (The halting problem and Gödel’s incompleteness and Tarski’s undefinability theorems all seem to suggest that analog, not digital, computing is responsible for consciousness.)

          I hear that argument from time to time, and I’ve never found a source for it. I want to understand the original claim, because it doesn’t make any sense when people bring it up: neither theorem has anything to do with the areas it’s applied to. I understand why people think it does, but it just doesn’t.

          • yeahiknow3@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            2
            ·
            2 days ago

            The simplest way to understand this problem is as follows.

            1. Analog computation is not digitally reducible. (Brains are analog computers.)

            2. Turing’s infamous Halting Problem.

            I can write more about this and point you to more technical discussions if you want.

            • SkaveRat@discuss.tchncs.de
              link
              fedilink
              English
              arrow-up
              1
              ·
              1 day ago

              I really don’t see what either Gödel’s or Turing’s theorems have to do with it.

              All they (basically) tell you is that you can’t tell whether a computation is guaranteed to halt, and that you can’t prove everything with math.

              That doesn’t exclude consciousness on a digital basis, unless you already presuppose some special property of consciousness to begin with.

              • yeahiknow3@lemmy.dbzer0.com
                link
                fedilink
                English
                arrow-up
                1
                ·
                23 hours ago

                You’re misunderstanding the implications of both the halting problem and Gödel’s first incompleteness theorem.

                What Turing and Gödel independently proved is that a human observer can (theoretically) always have insights about mathematics and programming that are incomputable. That is, you cannot program or axiomatize or formalize or digitize everything that a mind can do. Period.

                Analog computers are sufficiently different from digital systems to potentially emulate brain activity. But digital (discrete) methods are probably too constrained.

                • SkaveRat@discuss.tchncs.de
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  23 hours ago

                  What Turing and Gödel independently proved is that a human observer can (theoretically) always have insights about mathematics and programming that are incomputable. That is, you cannot program or axiomatize or formalize or digitize everything that a mind can do. Period.

                  that is not what either of them proved. like… at all

        • FaceDeer@fedia.io
          link
          fedilink
          arrow-up
          5
          arrow-down
          3
          ·
          2 days ago

          I’m still awaiting a widely accepted method of actually measuring “consciousness.” It’s a conveniently nebulous property.

          And simply defining it as something computers can’t do is even more convenient.

          • yeahiknow3@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            3
            ·
            2 days ago

            That doesn’t change the fact that I am conscious.

            Also, I never said computers can’t be conscious. I said that digital computers (Turing machines) probably can’t. Quantum and analog computers have no such theoretical constraints and they’re far, far more prevalent given that they’re found in every living creature.

            • FaceDeer@fedia.io
              link
              fedilink
              arrow-up
              3
              arrow-down
              2
              ·
              2 days ago

              Sure, you say you’re conscious. I can get an LLM to say it’s conscious too. This is why we need some method for measuring it. Otherwise how can I tell which of you is telling the truth?

              • yeahiknow3@lemmy.dbzer0.com
                link
                fedilink
                English
                arrow-up
                4
                ·
                edit-2
                2 days ago

                This is called the problem of other minds. Of course I can’t be certain about the consciousness of others. I can only be certain about my own.

                We do have a way of measuring the correlates of consciousness. But we have no clue how to detect the presence of subjective experience using quantitative methods.

                Philosophy departments (which is where any discovery on this front will originate) are heavily defunded. If you’re waiting for physicists or biologists to figure this out you’ll be waiting even longer.

                • FaceDeer@fedia.io
                  link
                  fedilink
                  arrow-up
                  4
                  arrow-down
                  3
                  ·
                  2 days ago

                  Exactly, which is why it’s IMO a bit presumptuous to say with confidence that humans are conscious while LLMs are categorically not conscious. We don’t even really know what that means.

                  I don’t personally think LLMs are conscious, at least not yet or not to the same degree that humans are. But that’s purely based on vibe, it’s not something I can know. We need to figure out what consciousness really is and how to measure it before we can say we know this with any certainty.

      • XLE@piefed.social
        link
        fedilink
        English
        arrow-up
        2
        ·
        edit-2
        2 days ago

        I’ve steadily become less impressed with human brains.

        You need to lay off the AI if it’s making you this weirdly misanthropic.

        This is how tech bros justify causing harm: they genuinely don’t care, because they think of the un-“enlightened” as less worthy of existing