• queermunist she/her@lemmy.ml · 2 days ago

    This isn’t a sign that the technology is advancing, it’s actually a sign of its weakness.

    Their bots can’t understand instructions. That’s it. They’re not disobeying; they don’t even know what’s going on.

    • very_well_lost@lemmy.world · 2 days ago

      Exactly.

      These are just statistical models trained on the natural language output of real humans. If real humans are statistically likely to ignore instructions in a particular case (or do other undesirable things like misunderstand, lie, confabulate, etc), then the statistical model trained to simulate human output will do the same.

      It would only be surprising if this weren’t the case.

  • [deleted]@piefed.world · 2 days ago

    AI doesn’t have intent; it’s just unreliable. A car that doesn’t stop when you press the brake pedal isn’t ‘working against your foot pressure’.