YouTube has (“voluntarily” or otherwise) assumed the role of a private business entity that “supports elections.”
Google’s video platform detailed in a blog post how this is supposed to play out, in this instance, in the EU.
With the European Parliament (EP) election just around the corner, YouTube set out to present “an overview of our efforts to help people across Europe and beyond find helpful and authoritative election news and information.”
The overview is the usual hodgepodge of reasonable ideas, such as promoting information on how to vote or register to vote, how to learn about election results, etc., that quickly morphs into yet another battle in the “war on disinformation.”
And what better way to “support” an election (and, by extension, democracy) than to engage in another round of mass censorship? /s
But YouTube was happy to share that in 2023 alone, it removed 35,000 videos uploaded in the EU, having decided that this content violated the platform’s policies, including those around what the blog post calls “certain types of elections misinformation” (raising the logical question of whether some types of “election misinformation” might be allowed).
As for who is doing this work, YouTube suggests it is a well-oiled machine working around the clock and “at scale,” made up of “global teams of reviewers” and machine learning algorithms.
The blog post first states that one of the goals of YouTube’s efforts is to help users “learn about the issues shaping the debate.” But then, in the part of the post that goes into how the platform is “dealing with harmful content,” it starts to look as if the giant might be trying to shape that debate itself.
“Our Intelligence Desk has also been working for months to get ahead of emerging issues and trends that could affect the EU elections, both on and off YouTube,” reads the post.
In case somebody missed the point, YouTube reiterates it: “This helps our enforcement teams to address these potential trends before they become larger issues.”
And while machine learning (aka AI) is mentioned as a positive when it comes to powering YouTube’s ability to censor at scale, later in the post the obligatory mention is made of AI as a tool potentially dangerous to elections and democracy.
YouTube also states that coordinated influence campaigns are banned on the platform – and promises that this is true “regardless of the political viewpoints they support.”
And when Google’s Threat Analysis Group (TAG) decides it has spotted such a campaign, this information is shared with law enforcement, including Europol.