Read the article before you comment.
Read about how LLMs actually work before you read articles written by people who don’t understand LLMs. The author of this piece is making arguments that imply that LLMs have cognition. “Lying” requires intent, and LLMs have no intention, they only have instructions. The author would have you believe that these LLMs are faulty or unreliable, when in actuality they’re working exactly as they’ve been designed to.
Well, designed is maybe too strong a term. It’s more like stumbling on something that works and expanding from there. It’s all still built on the foundations of the nonsense generator that was GPT-2.
Given how dramatically LLMs have improved over the past couple of years I think it’s pretty clear at this point that AI trainers do know something of what they’re doing and aren’t just randomly stumbling around.
A lot of the improvement came from finding ways to make it bigger and more efficient. That approach is running into its inherent limits, so the real work on other kinds of models has only just started.
And from reinforcement learning (specifically, making it repeat tasks where the answer can be checked by a computer).
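To make “checked by a computer” concrete, here’s a minimal, hypothetical sketch: for tasks with a computable ground truth (like arithmetic), the model’s answer can be graded automatically, and the pass/fail result becomes the reinforcement-learning reward. The function name and setup are illustrative, not any real training API.

```python
def reward(model_answer: str, expected: int) -> float:
    """Hypothetical 'verifiable reward': 1.0 if the model's answer
    parses to the expected value, else 0.0 -- no human grader needed."""
    try:
        return 1.0 if int(model_answer.strip()) == expected else 0.0
    except ValueError:
        # Unparseable output counts as a wrong answer.
        return 0.0

# Example: the task "what is 17 * 3?" has a computable ground truth.
print(reward("51", 17 * 3))  # correct answer -> 1.0
print(reward("50", 17 * 3))  # wrong answer   -> 0.0
```

The point is that the training loop only needs a checker, not a human judge, which is why these methods scale to tasks like math and code.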
So working as designed means presenting false info?
Look, no one is ascribing intelligence or intent to the machine. The issue is that the machines aren’t very good and are being marketed as awesome. They aren’t.
Yes. It was told to conduct a task. It did so. What part of that seems unintentional to you?
That’s not completing a task. That’s faking a result for appearance.
Is that what you’re advocating for?
If I ask an LLM to tell me the difference between Aeolian mode and Dorian mode in music, and it gives me the wrong info, then no, it’s not working as intended.
See, I chose that example because I know the answer. The LLM didn’t. But it gave me an answer. An incorrect one.
I want you to understand this. You’re fighting the wrong battle. The LLMs do make mistakes. Frequently. So frequently that any human who made the same number of mistakes wouldn’t keep their job.
But the investment, the belief in AI, is so ingrained for some of us who so want a bright and technically advanced future that you are now making excuses for it. I get it. I’m not insulting you. We are humans. We do that. There are subjects, I am sure, that you could point at where I do this as well.
But AI? No. It’s just wrong so often. It’s not its fault. Who knew that when we tried to jump ahead in the tech timeline, we should have actually invented guardrail tech first?
Instead we put the cart before the horse, AGAIN, because we are dumb creatures, and now people are trying to force things that don’t work correctly to somehow be presented as correct.
I know. A mouthful. But honestly, AI is poorly designed, poorly executed, and poorly used.
It is hastening the end of man, because those who have been singing its praises are too invested to admit it.
It simply ain’t ready.
Edit: changed “would” to “wouldn’t”
That was the task.
No, the task was to tell me the difference between the two modes.
It provided incorrect information and passed it off as accurate. It didn’t complete the task.
You know that though. You’re just too invested to admit it. So I will withdraw. Enjoy your day.
It completed the task, it was just wrong.
No. It gave the wrong answer, therefore it didn’t complete the task. It gave the wrong answer. Task incomplete.
That’s literally how a task works.
I’ve read the article. If there is any dishonesty, it is on the part of the model creator or LLM operator.
You need to understand that Lemmy has a lot of users who actually understand neural networks and the nuanced mechanics of machine learning FAR better than the average layperson.
It’s just semantics in this case. Catloaf’s argument is entirely centered around the definition of the word “lie,” and while I agree with that, most people will understand the intent behind the usage in the context it is being used in. AI does not tell the truth. AI is not necessarily accurate. AI “lies.”
AI returns incorrect results.
In this case semantics matter, because using terms like hallucinations, lies, honesty, and all the other anthropomorphic bullshit is designed to make people think neural networks are far more advanced than they actually are.
It’s not “anthropomorphic bullshit”, it’s technical jargon that you’re not understanding because you’re applying the wrong context to the definitions. AI researchers use terms like “hallucination” to mean specific AI behaviours, they use it in their scientific papers all the time.
No. It’s to make people who don’t understand LLMs be cautious about placing their trust in them. To communicate that clearly, language that is understandable to people who don’t understand LLMs needs to be used.
I can’t believe this is the supposed high level of discourse on Lemmy.
Lemmy users and AI have a lot of things in common, like being confidently incorrect and making things up to further their point. AI at least agrees and apologises when you point out that it’s wrong; it doesn’t double down and cry to the mods to get you banned.
I know. It would be a much better world if AI apologists could just admit they are wrong.
But nah. They’re better than others.
AI doesn’t lie, it just gets things wrong but presents them as correct with confidence, like most people.
deleted by creator
As someone on Lemmy, I have to disagree. A lot of people claim they do and pretend they do, but they generally don’t. They’re like AI, tbh. Confidently incorrect a lot of the time.
People frequently act like Lemmy users are different to Reddit users, but that really isn’t the case. People act the same here as they did/do there.
And A LOT of people who don’t and blindly hate AI because of posts like this.
That’s a huge, arrogant, and quite insulting statement. Your making assumptions based on stereotypes.
I’m pushing back on someone who’s themselves being dismissive and arrogant.
No. You’re mad at someone who isn’t buying that AIs are anything but a cool parlor trick that isn’t ready for prime time.
Because that’s all I’m saying. They are wrong more often than right. They do not complete the tasks given to them, and they really are garbage.
Now, this is all regarding the publicly available AIs. Whatever new secret voodoo one thinks the military has, I can’t speak to.
Uh, just to be clear, I think “AI” and LLMs/codegen/imagegen/vidgen in particular are absolute cancer, and are often snake oil bullshit, as well as being meaningfully societally harmful in a lot of ways.
*you’re
You’re just as bad.
Let’s focus on a spell check issue.
That’s why we have Trump.