Trying to treat the discussion as a philosophical one is giving more nuance to ‘knowing’ than it deserves. An LLM can spit out a sentence that looks like it knows something, but it is just pattern-matching the frequency of word associations, which is mimicry, not knowledge.
I’ll preface by saying I agree that AI doesn’t really “know” anything and is just a randomised Chinese Room. However…
Acting like the entire history of the philosophy of knowledge is just some attempt to make “knowing” seem more nuanced is extremely arrogant. The question of what knowledge is is not just relevant to the discussion of AI, but is fundamental to understanding how our own minds work. When you form arguments about how AI doesn’t know things, you’re basing them purely on the human experience of knowing things. But that calls into question how you can be sure you even know anything at all. We can’t just take it for granted that our perceptions are a perfect example of knowledge; we have to interrogate that and see what it is that we can do that AIs can’t, or, worse, discover that our assumptions about knowledge, and perhaps even about our own abilities, are flawed.
This is a philosophical discussion and I doubt you are educated or experienced enough to contribute anything worthwhile to it.
Dude… the point is I don’t have to be. I just have to be human and use it. If it sucks, I am gonna say that.
Insulting, but also correct. What “knowing” something even means has a long philosophical history.
That is not what I said. In fact, it is the opposite of what I said.
I said that treating the discussion of LLMs as a philosophical one gives ‘knowing’, in that context, more nuance than it deserves.
I never said discussing LLMs was itself philosophical. I said that as soon as you ask the question “but does it really know?” then you are immediately entering the territory of the theory of knowledge, whether you’re talking about humans, about dogs, about bees, or, yes, about AI.
I asked ChatDVP for a response to your post and it said you weren’t funny.
I can tell you’re a member of the next generation.
Gonna ignore you now.
A 3-day-old account being a dick on Lemmy?
I’m shocked.
At first I thought that might be a Pepsi reference, but you are probably too young to know about that.