I think the disagreement here is semantics around the meaning of the word “lie”. The word “lie” commonly carries an element of intent. An LLM can’t be said to have intent. It isn’t conscious and, therefore, cannot have intent. The developers may have intent and may have adjusted the LLM to output false information on certain topics, but the LLM isn’t making any decision and has no intent.
IMO, parroting the lies of others without critical thinking is also lying.
For instance, if you print lies in an article, the article is lying. And not only the article: if the article appears in a newspaper, the newspaper is also lying.
Even if the AI is merely a medium, the medium is still lying, no matter who originated the lie.
We can debate afterwards how serious the lie is and who made it up, but a lie remains a lie no matter what or who repeats it.