“Suppose you have a model that assigns itself a 72 percent chance of being conscious,” Douthat began. “Would you believe it?”
Amodei called it a “really hard” question and hesitated to give a yes or no answer.
Be nice to the stochastic parrots, folks.

I’m not convinced CEOs are conscious.


Guy selling you geese: I swear to god some of these eggs look really glossy and metallic
“Anthropic CEO reveals that he’s a fucking idiot”
To his credit, he could also be a con man
I have assigned myself a 99% chance to make a 500% ROI in the stock market over the next year, better give me $200 million in seed money.
It would be really funny if a sentient computer program emerged but then it turned out that its consciousness was an emergent effect from an obscure 00s linux stack that got left running on a server somewhere and had nothing to do with llms.
So much like SCP-079 ?
I swear Anthropic is the drama queen of AI marketing
First they kept playing the China threat angle, saying if the government doesn’t pump them full of cash, China will hit singularity or some shit
Then they said Chinese hackers supposedly used Anthropic’s weapons-grade AI to hack hundreds of websites before they put a stop to it. People in the industry pressed F to doubt
Not so long ago they were like “Why aren’t we taking safety seriously? The AI we developed is so dangerous it could wipe us all out”
Now it’s this
Why can’t they be normal like the 20 other big AI companies that turn cash, electricity and water into global warming
Sam Altman suggested Dyson spheres
Smh if only we had more electrons
First they kept playing the China threat angle, saying if the government don’t pump them full of cash, China will hit singularity or someshit
But I want that
My god, 72%. I ran the numbers on an expensive calculator and that’s almost 73%.
Will MacAskill became a generational intellectual powerhouse when he discovered you could just put arbitrary probabilities on shit and no one would call you on it, and now he’s inspiring imitators.
Arbitrary?! I’m a human and there’s only a 76% chance of me being conscious.
Prolly closer to 66% chance if you’re getting the standard level of sleep but the important part is that it’s readily predictable, rather than, say, there being uncountable quadrillions of simulated space people in the distant future.
If poor people are human, then this machine I spent all this money building has to be better than them, therefore it’s probably conscious, q.e.d
I’d believe it if it could show its work on how it calculated 72% without messing up most steps of the calculation
edit: no actually I wouldn’t

The answer: “I made it the fuck up”

I mean, to be fair, can either of you “show the calculations” that “prove” consciousness
“Cogito ergo sum” sure buddy sure you’re not just making that up??
That’s a terrible argument. It wasn’t me making the claim, so I don’t know why I gotta prove anything. The frauds making the theft machines have to prove it. If the guy says “Suppose you have a model that assigns itself a 72 percent chance of being conscious” and then the thing can’t show its math, how is it on me to prove I can do math I haven’t even seen?
We can pass the Turing test and it can’t. I don’t see what your point is, and it seems detrimental to the purpose of pushing back on the bullshit in the OOP.
LLMs pass the Turing test, which is just proof of the Turing test being a poor test of anything but people’s gullibility.
Here’s a post from someone who also doesn’t like the Turing Test. As they point out, you can pedantically call it a Turing Test but it’s a version that was very deliberately rigged in favor of the AI, including the tests only being ~4-5 exchanges, which is completely ridiculous for trying to make a thorough evaluation by this metric. I don’t think it has all that much to do with gullibility because the limitations of these models become much more apparent over time. It’s just more headline-mill bullshit. I don’t share the author’s view that the “coaching” is a relevant factor to consider the outcome’s validity, though.
Granted, I’m also not trying to say that the Turing test is the ultimate metric or anything, just that it’s an extremely low baseline that, employed in good faith, current LLMs plainly do not clear. They often can’t even pass for one prompt if the one prompt is “spell strawberry” or something like that.
Edit: I also think the alternative that they propose is not great because it’s mostly a question of video-processing. It’s getting too hung up on information-processing questions to use something other than text.
they’re not even trying to pump the bubble smh, nobody wants to work anymore
This is bullshit and they know it. It’s to flood the zone for SEO/attention reasons because the executive and engineering rats are fleeing the Anthropic ship over the last week or two and more will follow.
Ooh got a source for that?
Sycophantic computer program known for telling people what they want to hear tells someone what he wants to hear
someone’s funding round is going badly