Axon’s Draft One debuted last summer at a police department in Colorado, immediately raising questions about the potential negative impacts of AI-written police reports on the criminal justice system. The tool relies on a ChatGPT variant to generate police reports from body camera audio; cops are then supposed to edit the drafts to correct any mistakes, assess the AI outputs for bias, and add key context.

But the EFF found that the tech “seems designed to stymie any attempts at auditing, transparency, and accountability.” Not every department requires cops to disclose when AI is used, and Draft One does not save drafts or retain a record showing which parts of reports are AI-generated. Departments also don’t retain different versions of drafts, making it difficult to compare one version of an AI report against another and help the public determine whether the technology is “junk,” the EFF said. That raises the question, the EFF suggested, “Why wouldn’t an agency want to maintain a record that can establish the technology’s accuracy?”

  • dastanktal@lemmy.ml · 8 days ago

    Oh for fuck’s sake. Police have avoided accountability and scrutiny for the majority of the institution’s history.

    Why is this necessary? Because: profit.

    Guess I’m going to be working in my local area to ensure this stupid fucking AI tool isn’t used.