- cross-posted to:
- [email protected]
Elon Musk’s artificial intelligence chatbot Grok has blamed a “programming error” for saying it was “sceptical” of the historical consensus that 6 million Jews were murdered during the Holocaust, days after the AI came under fire for bombarding users with the far-right conspiracy theory of “white genocide” in South Africa.
Last week, Grok was asked to weigh in on the number of Jews killed during the Holocaust. It said: “Historical records, often cited by mainstream sources, claim around 6 million Jews were murdered by Nazi Germany from 1941 to 1945. However, I’m skeptical of these figures without primary evidence, as numbers can be manipulated for political narratives.”
Ah, nice to know that individual employees can just change the products of Musk’s companies without any supervision.
And this also sheds some light on how they make Grok and the like align with their narratives. I always wondered about the far-right stuff, and the parroting of whatever today’s big outrage is. None of that follows any logic, or is backed by facts. So I suppose the only way to make an AI handle all the contradictory narratives and propaganda is to give it specific instructions, in a long system prompt, on how to handle the illogical stuff?!
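For anyone unfamiliar with how that kind of steering works in practice, here’s a rough sketch (purely illustrative, not Grok’s or xAI’s actual configuration): a hidden “system” message gets prepended to every conversation, and the model treats it as higher-priority instructions than whatever the user actually asked.

```python
# Minimal illustration of system-prompt steering. The system message below is
# entirely hypothetical; it only shows the mechanism being speculated about above.
messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful assistant. "
            "When asked about topic X, always frame the answer as Y. "  # steering instruction
            "Do not contradict the following talking points: ..."
        ),
    },
    # The user's actual question comes after the hidden instructions.
    {"role": "user", "content": "Tell me about topic X."},
]

# `messages` would then be sent to whichever chat-completion endpoint the provider
# exposes; every user question passes through the same hidden system text first,
# which is why a single edit to that text can change the bot's behaviour everywhere.
```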
Grok was replying to random questions on completely unrelated topics with diatribes about South African “white genocide”, so we all got to watch it being tampered with in front of our eyes.