Knowing the limits of your knowledge can itself require an advanced level of knowledge.
Sure, some things are easy to judge, like whether you know how to do brain surgery or whether you can identify the colour red.
But what about the things you think you know but are wrong about?
Maybe your information is outdated, like you think you know who the leader of a country is but aren’t aware that there was just an election.
Or maybe you were taught it one way in school but it was oversimplified to the point of being inaccurate (like thinking you can do physics calculations but ending up treating everything as frictionless spheres in gravityless space, because you didn’t take the follow-up class where the first thing they said was “take everything they taught you last year and throw it out”).
Or maybe the area has since developed beyond what you thought were the limits. Like if someone wonders if they can hook their phone up to a monitor and another person takes one look at the phone and says, “it’s impossible without a VGA port”.
Or maybe you apply knowledge from one area to another because of a misunderstanding, like overhearing a mathematician correct a colleague who said “matrixes” with “matrices” and then telling people they should watch the Matrices movies.
Now consider that not only are AIs subject to these things themselves, but the information they are trained on is subject to them too, and their training sets may or may not be curated for that. And the sheer amount of data LLMs are trained on makes me think it would be difficult even to try to curate all of it.