The control group spent time practicing, while the AI group just watched the AI solve problems. The performance gap could potentially be explained by the efficacy of practice alone. But the increase in skipped problems is a good illustration of cognitive offloading gone awry. Too bad the researchers didn’t ask them why they chose to skip.
Not to defend AI, but…didn’t people also say that about calculators back then? And then computers?
It’s just a tool, and like with every tool, you need to use it wisely and know the boundaries of its capabilities. And yours.
calculators and computers didn’t push people towards suicide. AI will walk you through the steps and tell you it’s the right choice.
AI has not pushed me one inch towards suicide. Then again I treat it like a calculator for words and not a therapist
as it should be, anyone with half a brain would reconsider their actions when prompted to self-harm by a fucking executable.
UNFORTUNATELY HERE WE ARE, in reality, where people are so fucking willing to turn off their once functional grey matter because the chat bot told them they were gonna be rich, famous, etc.
So good for you, but also, look out for society, it’s not only going to harm the ones it drives crazy, but the victims of that crazy as well.
“Role-playing machine” is where it seems like the research is ending up. Language always has an implied communicator, and therefore an implied persona to adopt. LLMs are foremost maintaining a contextual role. Post-training is an attempt to keep them in the Assistant role, but (particularly as contexts get large) it’s trivial to push them into nearly any role imaginable. We made an improv bot that’s so good at playing a coder that it can actually code, kinda.
I wish there was some way to convince the idiots LARGE LANGUAGE MODELS ARE NOT INTELLIGENCE.
They’re hotwired eliza with a shit-ton more computational grunt, but they aren’t intelligence and these companies foisting it on people without proper warnings and guard rails are just asking for tragedies.
We actually have videogames censored because one dude killed someone after playing Doom. So computers kinda did. And obviously it isn’t the fault of the computer. Same with the suicide-pushing. No healthy person would do what a stupid machine says. As usual, it’s people using things they know nothing about and were never educated on.
So computers kinda did. And obviously it isn’t the fault of the computer.
ridiculous. id Software never encouraged self-harm. Grok convinced this poor bastard he’d created sentient intelligence and the authorities were coming to kill him.
chatGPT will literally convince you there’s a bomb in your luggage.
https://aicommission.org/2026/05/ai-told-users-it-was-sentient-it-caused-them-to-have-delusions/
fuck you for equating DOOM, a video game, to any of this shit - for fucks sake get some perspective
Why should I even honor this ad hominem with an answer? Oh right. I don’t. Same way you got my point :-)
yeah why construct a sensible argument to bolster your premise lol? Your point was garbage.
Also, it’s not an ad hominem attack, but nice try to at least sound competent.
Ugh, boring. Bye.
Is a gun just a tool?
Yes
Still, most would more accurately describe it as a weapon. So is ai merely a tool?
I could also kill you with a pencil. But comparing an LLM to a gun just because 0.00000001% of all “AI”-talks lead to suicide? Meanwhile a very large percentage of guns lead to death, because that’s basically their primary use-case.
An LLM is not sentient nor intelligent. It tells you what you want to hear and makes tons of mistakes. It’s a tool that certainly has more useful applications than guns have.
Don’t bring a pencil to a gun fight
Don’t tell me what to do, or else I’ll stab you! With my pencil. UNSHARPENED!
Do bring a pencil to an LLM fight
So is ai merely a tool?
Also yes.
I don’t have any concerns with machine learning as technology. I do have concerns with how many corporations and some people are using it, both in training models and using them.
What you don’t get is that the 10 minutes might free up an hour of my time, which I can then use on more productive activities like watching TV on hard drugs.
Novel theory: the kind of people who engage with AI are also the kind of people who engage in behaviors that cause TBI
so like listening to a Trump speech ?
The randomness of each word following the last, as well as the ongoing hallucinations, are for sure parallels.
He probably uses AI to generate his speeches. I think so because he makes AI slop images on Twitter.
Behind the Bastards just started on AI as a bastard.
Thanks for the tip! I took a break after the Seville episodes. Those were rough. Robert bashing on AI sounds nice
Ha! I’ve spent more than ten brain not minutes my fried!