This isn’t a sign that the technology is advancing; it’s a sign of its weakness.
Their bots can’t understand instructions. That’s it. They’re not disobeying; they don’t even know what’s going on.
Exactly.
These are just statistical models trained on the natural language output of real humans. If real humans are statistically likely to ignore instructions in a particular case (or do other undesirable things like misunderstand, lie, confabulate, etc), then the statistical model trained to simulate human output will do the same.
It would only be surprising if this weren’t the case.
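A toy sketch of that point (hypothetical numbers, not how LLMs are actually trained): a "model" that simply matches the statistics of its training data will reproduce whatever instruction-ignoring rate the humans in that data had.

```python
import random

random.seed(0)

# Simulated training data: 1 = human followed the instruction,
# 0 = human ignored it. Here humans ignore it 10% of the time.
training_data = [1] * 900 + [0] * 100

# "Training": estimate the empirical follow rate from the data.
follow_rate = sum(training_data) / len(training_data)

# "Inference": sample behavior from the learned distribution.
def model_follows_instruction():
    return random.random() < follow_rate

samples = [model_follows_instruction() for _ in range(10_000)]
print(f"learned follow rate:  {follow_rate:.2f}")
print(f"observed follow rate: {sum(samples) / len(samples):.2f}")
```

The imitator ends up ignoring instructions at roughly the same rate as the humans it was fit to, which is exactly the unsurprising outcome described above.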
AI does not have intent; it is simply unreliable. A car that doesn’t stop when you press the brakes isn’t ‘working against your foot pressure’.