I love to show that kind of shit to AI boosters. (In case you're wondering, the numbers were chosen randomly and the answer is incorrect.)
They go waaa waaa it's not a calculator, and then I can point out that it got the leading 6 digits and the last digit correct, which is a lot better than it did on the "softer" parts of the test.
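If you want to score a botched product the same way, here's a quick sketch. The original operands and the model's wrong answer weren't posted, so both are made up here; the point is just how you'd measure "got the leading digits and the last digit right":

```python
# Hypothetical numbers: the operands and the "LLM answer" below are invented
# for illustration, not the actual ones from the test.

def matching_prefix(x: int, y: int) -> int:
    """How many leading digits two integers share."""
    n = 0
    for cx, cy in zip(str(x), str(y)):
        if cx != cy:
            break
        n += 1
    return n

a, b = 742_913, 386_027   # made-up 6-digit operands
truth = a * b

# Fake an LLM-style wrong answer: correct at both ends, garbled in the middle.
s = str(truth)
mid = len(s) // 2
guess = int(s[:mid] + str((int(s[mid]) + 5) % 10) + s[mid + 1:])

print(matching_prefix(truth, guess))   # leading digits that match: 6
print(truth % 10 == guess % 10)        # last digit matches: True
print(truth == guess)                  # but the product is still wrong: False
```

That's exactly the failure shape in question: the ends look right, the middle is garbage, and the answer is still wrong.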
The craziest thing about leaked prompts is that they reveal the developers of these tools to be complete AI pilled morons. How in the fuck would it know if it can or can’t do it “in its brain” lol.
edit: and of course their equally idiotic fanboys go "how stupid of you to expect it to use a calculating tool when it said it used a calculating tool" any time you have some concrete demonstration of it sucking ass, while the same kind of people laud the genius of system prompts, half of which are asking it to meta-reason.
Here’s the exact text in the prompt that I had in mind (found here), it’s in the function specification for the js repl:
What if this isn't a "terminally AI pilled" thing? What if this is the absolute pinnacle of what billions and billions of dollars in research will buy you when you need your lake-drying, sea-boiling LLM-as-a-service to not look dumb compared to a pocket calculator?
Still seems terminally AI pilled to me, just an iteration or two later. "5 digit multiplication is borderline": how is that useful?
I think it's a combination of this being the pinnacle of billions and billions of dollars, and probably them firing people for the slightest signs of AI skepticism. There's another data point: "reasoning math & code" is released as stable by Google without anyone checking whether it can do any kind of math.
edit: imagine a calculator manufacturer in the 1970s so excited about microprocessors that they release an advanced scientific calculator that can't multiply two 6 digit numbers (while their earlier discrete-component model could). Outside the crypto sphere, that sort of insanity is new.