This just in: computer that kinda looks like it can learn, but definitely can’t, failed to learn from its mistake.
Tldw?
9 seconds the magic number
It can delete my data two and a third times if I go to pee.
AI is like a five-year-old (sorry for the inslut, kids): it has loads of initiative and very few inhibitions to stop it from doing something stupid/dangerous
inslut
I’m officially a kid, because I giggled
This is why it is so bafflingly stupid that anyone is allowing a chatbot to do things. When that idea was first suggested, I and everyone else with a tragic faith in the intelligence of our society laughed it off as another absurd techbro idea, like all of the bullshit Musk and Zuckerberg spew on stage… but they did it ;-;
They made chatbots, an inherently unpredictable aggregation of all their training data, and then built systems that link that output to actual commands. The ability to turn untrusted text into executed commands is the kind of thing that would normally be deemed a critical security vulnerability on its own. In December of 2021, an exploit was disclosed that could run commands through maliciously-designed strings in Log4j, a Java logging library, and it was a red alert across the cybersecurity world. That is, like, the highest of high vulnerabilities, and for good reason!!!
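To make the comparison concrete, here's a minimal Python sketch of the pattern being described. The function names and the allow-list are hypothetical illustrations, not any real agent framework; the point is only the contrast between piping model text straight into a shell versus treating it as untrusted data.

```python
import shlex
import subprocess

def run_agent_action(model_output: str) -> None:
    # DANGEROUS pattern (hypothetical): model text goes straight to a shell.
    # A prompt-injected reply like "ok; rm -rf ~/data" would run verbatim,
    # which is exactly the untrusted-string-to-command class of bug.
    subprocess.run(model_output, shell=True, check=True)

def run_agent_action_safely(model_output: str) -> str:
    # Safer pattern (hypothetical): treat model output as *data*, not code.
    # Only explicitly allow-listed commands may run, with no shell involved.
    ALLOWED = {"ls", "whoami", "date"}
    argv = shlex.split(model_output)
    if not argv or argv[0] not in ALLOWED:
        raise PermissionError(f"refusing to run: {model_output!r}")
    return subprocess.run(argv, capture_output=True, text=True).stdout
```

Even the "safe" version is a sketch; real agent sandboxing needs far more than an allow-list, which is rather the commenter's point.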
Everyone who designs these chatbots should be sued to hell.
Did you pay A.I. a living wage???
Only a living wage can prevent data dumps.
Upper management can’t even see it.