Test subjects who consulted AI were overwhelmingly willing to accept its answers without scrutiny, whether correct or not.
Not surprising. We always want to take the path of least resistance. I mean, 20 years ago you could access the world’s information via the internet, but you had to know how to search for it. We slowly went from that to “I’ll just google it” and “Well, Google says.” Now that we have LLMs (which IMHO are mostly just a fancier, faster Google search), we have people legitimately saying “let me ask ChatGPT” and “ChatGPT says.”
I think there is a positive future for LLMs and true AI, but it’s not right now, and it definitely is not in the hands of capitalists.

A lot of people never adopted logical thinking in the first place.
Which makes careless use of big babble machines even more problematic.
It’s not just about factual knowledge and its correctness, but especially about critical thinking abilities. Ever since we started printing books, looking up knowledge has not been the hard part. LLMs don’t change that (even with their limited reliability). What has been the hard part is using that knowledge: reflecting on it, evaluating it, analyzing it, and making something meaningful out of it in order to grow and advance.
If we leave the critical thinking part to machines, we are even more prone to being manipulated, misguided, or becoming victims of consequences based on intentional or accidental errors. We stop functioning as independent humans and become even more gullible than we already are.
This is also what the research points to.
So my bottom line is: use technology to enhance and extend your thinking abilities, not replace them. That’s why care is required when using these tools. It’s easy to just delegate everything to the machine. But that leaves you… empty.
(Sry for the dramatic writing style, I was just in the middle of a text adventure. :'D )
Yes.
I think we have abundant evidence that a sincere belief in alternative facts is pretty destructive.
I mentioned that I don’t use AI today, and the person I was talking to was really surprised. They didn’t understand how I could not use AI for anything.
It’s almost like the world has been turning for millions of years without it.
They were like, “so you don’t Google anything?”
Well, first, yes, I don’t use Google. But second, you can turn AI off! xD
That’s crazy.
Part of why I support age-gating some things is the scary stories I hear from schoolteachers I know: whole classrooms of kids that have trouble concentrating on anything for more than 60 seconds, or how they hear every day, “If AI can do this, why do I have to learn it in the first place?”
(And don’t come at me about age gating because I don’t care to argue about it.)
I saw somebody at work upload a firewall config XML and start querying whether stuff was blocked. I actually thought it was a pretty clever use of it.
I probably wouldn’t trust it to write a config and upload it back, but for an assistant to an untrained eye it was pretty solid.
I’ve also used copilot for silly things like
“Take these 10 lines of process steps, make them sound professional and format them for easy reading”.
Stuff like that isn’t my job, but when it lands on my desk it’s a quick way to get it done and get back to what I’m supposed to be focussing on.
This is a long way of saying, there are definitely use cases, but nobody’s being replaced.
I saw somebody at work upload a firewall config XML and start querying whether stuff was blocked. I actually thought it was a pretty clever use of it.
I would put that somewhere between worrisome and you-should-lose-your-job, depending on how important that firewall is. This might seem exaggerated, but imagine your colleague had shown that config to a child and then asked them yes-or-no questions, a game the child happily played along with. I would consider that exactly as reasonable, and exactly as responsible, as asking an LLM. Imagine someone doing that with an important firewall config… and taking the child’s answers at face value. It would be fair to think that person is grossly unqualified and showing a dangerous lack of judgment.
And that’s just the problem with using a bullshit generator as a source of truth. If the firewall config could be considered sensitive information, uploading it to a third party would be grounds for dismissal for entirely separate reasons.
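If you actually need to know whether something is blocked, you can just check the config directly instead of asking a text predictor. A rough sketch of what that might look like, assuming a made-up `<rule>` schema for illustration (real firewall exports will differ):

```python
# Minimal sketch: answer "is this port blocked?" by parsing the config
# yourself rather than trusting an LLM's summary of it.
# The <rule> schema below is invented for illustration only.
import xml.etree.ElementTree as ET

SAMPLE_CONFIG = """
<firewall>
  <rule action="deny" protocol="tcp" port="23"/>
  <rule action="allow" protocol="tcp" port="443"/>
</firewall>
"""

def is_port_blocked(config_xml: str, port: int, protocol: str = "tcp") -> bool:
    """Return True if any deny rule matches the given port/protocol."""
    root = ET.fromstring(config_xml)
    for rule in root.findall("rule"):
        if (rule.get("action") == "deny"
                and rule.get("protocol") == protocol
                and rule.get("port") == str(port)):
            return True
    return False

print(is_port_blocked(SAMPLE_CONFIG, 23))   # True
print(is_port_blocked(SAMPLE_CONFIG, 443))  # False
```

A real check would obviously have to follow the vendor’s actual rule format and ordering semantics, but the point stands: the answer is deterministic and sitting right there in the file.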
But LLMs are bad at math and logic.
And facts. They are very good at sounding confident though.
Yeah, that’s kinda the problem: they couldn’t think for themselves and would rather trust a hallucinating autocomplete program.
would rather trust a hallucinating autocomplete program
I mean, outsourcing your thinking would still negatively affect your cognitive capabilities even if you were to rely on something actually intelligent.
Depends.
Extend and assist. Not externalize.
It’s fine to use as an assistive tool. But outsourcing thinking is problematic. Unfortunately, this outcome was likely.
Thinking is hard.
Yeah, what a new and scary thing that started with AI and nothing else, ever. Jfc, AI has been the best thing for liberal moralistic arguments about social degeneracy since social media.
duh