I mean that’s literally a line from IBM’s 1979 training manual:
A computer can never be held accountable
Therefore a computer must never make a management decision
Yeah I’m pretty sure the OP is a reference to that, not an entirely original thought
Yeah, it’s written to mimic that image. My bet’s on it being a reference
Some MBA reading this was probably like, a computer can never be held accountable? The perfect manager!
I mean that’s literally what they’re trying to do right now with “AI”
Funny because management is also never held accountable as long as their decisions makes the line go up next quarter.
And I saw that photo both here and on reddit yesterday.
Due in OP’s post is a poser and not a very good one
Dude … OP is one of the people that is significantly responsible for the Perl programming language and its powerful modules. Also crazy well-read
Yeah - he could be the king of USA and I’d still say that post is a lame effort to sound smart.
You think someone repackaged annold quote to sound smart? Or do you think it’s more likely to point out the insanity that’s happening daily
I agree with the underlying premise that AI should not be given the reigns to anything of importance.
I disagree that they can’t find out.
The Amazon servers in the UAE and Bahrain found out just recently.
Human owners are the ones who find out which is exactly what the phrase is meant to caution against.
Why isn’t the blame thrown onto the AI company and their lack of guardrails to the program? Shouldn’t they face backlash and lawsuits regardless of what the terms of service specify?
It’s not possible to add guardrails due to how the technology works.
The fact of the matter is that it should not be used for what it’s being used for at all.
Whenever system prompts get leaked, it’s always depressingly hilarious how much of it is “Hello Mr. AI. You will not do any bad things, and will only do good things.”
The “guardrails” are just the same damn way end-users prompt them, but inserted behind the scenes before every “user prompt”.
Guardrails are considering the AI another user with low privilege. The amount of breaches happening are because the company has low security and adds AI (high security risk) without separating it from critical data.
because they are very good at marketing
And lobbying.
Printers on the other hand…
thats why we should invent sentient ai so we can torture it







