“Suppose you have a model that assigns itself a 72 percent chance of being conscious,” Douthat began. “Would you believe it?”

Amodei called it a “really hard” question and hesitated to give a yes or no answer.

Be nice to the stochastic parrots, folks.

  • Carl [he/him]@hexbear.net · 38 points · edited 3 days ago

    It would be really funny if a sentient computer program emerged but then it turned out that its consciousness was an emergent effect from an obscure 00s linux stack that got left running on a server somewhere and had nothing to do with llms.

  • AlyxMS [he/him]@hexbear.net · 38 points · 3 days ago

    I swear Anthropic is the drama queen of AI marketing

    First they kept playing the China threat angle, saying if the government doesn’t pump them full of cash, China will hit the singularity or someshit

    Then they claimed Chinese hackers used Anthropic’s weapons-grade AI to hack hundreds of websites before they put a stop to it. People in the industry pressed F to doubt

    And not so long ago they were like “Why aren’t we taking safety seriously? The AI we developed is so dangerous it could wipe us all out”

    Now it’s this

    Why can’t they be normal like the 20 other big AI companies that turn cash, electricity and water into global warming

    • BodyBySisyphus [he/him]@hexbear.netOP · 29 points · 3 days ago

      Will MacAskill became a generational intellectual powerhouse when he discovered you could just put arbitrary probabilities on shit and no one would call you on it, and now he’s inspiring imitators.

        • BodyBySisyphus [he/him]@hexbear.netOP · 2 points · 3 days ago

          Prolly closer to 66% chance if you’re getting the standard level of sleep but the important part is that it’s readily predictable, rather than, say, there being uncountable quadrillions of simulated space people in the distant future.

  • Juice@midwest.social · 41 points · 3 days ago

    If poor people are human, then this machine I spent all this money building has to be better than them, therefore it’s probably conscious, q.e.d

  • mrfugu [he/him, any]@hexbear.net · 37 points · 3 days ago

    I’d believe it if it could show its work on how it calculated 72% without messing up most steps of the calculation

    edit: no actually I wouldn’t shrug-outta-hecks

        • DasRav [any, any]@hexbear.net · 4 points · edited 3 days ago

          That’s a terrible argument. It wasn’t me making the claim, so I don’t know why I gotta prove anything. The frauds making the theft machines have to prove it. If the guy says “Suppose you have a model that assigns itself a 72 percent chance of being conscious” and the thing can’t show its math, how is it on me to prove I can do math I haven’t even seen?

        • purpleworm [none/use name]@hexbear.net · 3 points · 3 days ago

          We can pass the Turing test and it can’t. I don’t see what your point is, and it seems detrimental to the purpose of pushing back on the bullshit in the OOP.

            • purpleworm [none/use name]@hexbear.net · 6 points · edited 3 days ago

              Here’s a post from someone who also doesn’t like the Turing test. As they point out, you can pedantically call it a Turing test, but it’s a version that was very deliberately rigged in favor of the AI, including tests of only ~4-5 exchanges, which is completely ridiculous for a thorough evaluation by this metric. I don’t think it has all that much to do with gullibility, because the limitations of these models become much more apparent over time. It’s just more headline-mill bullshit. I don’t share the author’s view that the “coaching” is a relevant factor in the outcome’s validity, though.

              Granted, I’m also not trying to say that the Turing test is the ultimate metric or anything, just that it’s an extremely low baseline that, employed in good faith, current LLMs plainly do not clear. They often can’t even pass for one prompt if that prompt is “spell strawberry” or something like that.

              Edit: I also think the alternative that they propose is not great because it’s mostly a question of video-processing. It’s getting too hung up on information-processing questions to use something other than text.

  • MolotovHalfEmpty [he/him]@hexbear.net · 32 points · 3 days ago

    This is bullshit and they know it. It’s to flood the zone for SEO/attention reasons, because the executive and engineering rats have been fleeing the Anthropic ship over the last week or two and more will follow.

  • Rom [he/him]@hexbear.net · 29 points · 3 days ago

    Sycophantic computer program known for telling people what they want to hear tells someone what he wants to hear