Google’s AI model will potentially listen in on all your phone calls — or at least ones it suspects are coming from a fraudster.
To protect the user’s privacy, the company says Gemini Nano operates locally, without connecting to the internet. “This protection all happens on-device, so your conversation stays private to you. We’ll share more about this opt-in feature later this year,” the company says.
“This is incredibly dangerous,” says Meredith Whittaker, president of the Signal Foundation, which develops the end-to-end encrypted messaging app Signal.
Whittaker, a former Google employee, argues that the very premise of the anti-scam call feature is a threat: Google could program the same technology to scan for other keywords, like requests for access to abortion services.
“It lays the path for centralized, device-level client-side scanning,” she said in a post on Twitter/X. “From detecting ‘scams’ it’s a short step to ‘detecting patterns commonly associated w/ seeking reproductive care’ or ‘commonly associated w/ providing LGBTQ resources’ or ‘commonly associated with tech worker whistleblowing.’”
There are a few ways this could work, but it hardly seems worth the effort if it’s not phoning home.
They could have an on-device database of red flags and use on-device voice recognition against that database. But then what? Pop up a “scam likely” screen while you’re already mid-call? Maybe include an option to report scams back to Google with a transcript? I guess that could be useful.
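The “database of red flags” approach imagined above could be as simple as weighted phrase matching against an on-device transcript. A minimal sketch, where the phrases, weights, and threshold are all hypothetical, not anything Google has described:

```python
# Hypothetical on-device red-flag matcher. The phrase list, weights, and
# threshold are invented for illustration; Google hasn't said how Gemini
# Nano's scam detection actually works.
RED_FLAGS = {
    "gift card": 3,
    "wire transfer": 3,
    "social security number": 3,
    "verify your account": 2,
    "urgent": 1,
}

def scam_score(transcript: str) -> int:
    """Sum the weights of red-flag phrases found in a call transcript."""
    text = transcript.lower()
    return sum(w for phrase, w in RED_FLAGS.items() if phrase in text)

def is_likely_scam(transcript: str, threshold: int = 3) -> bool:
    # Everything stays on-device; only a boolean drives the warning UI.
    return scam_score(transcript) >= threshold
```

Something like this could power a “scam likely” banner without any network traffic — the question is whether a static keyword list is useful enough to justify listening in at all.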
Anything more than that would be a privacy nightmare. I don’t want Google’s AI deciding which of my conversations are private and which get sent back to Google. Any non-zero false positive rate would simply be unacceptable.
Maybe this is the first look at a new cat-and-mouse game: AI to detect AI-generated voices? AI-generated voice scams are already out in the wild and will only become more common.