Android app developers will have to adhere to a new set of rules if they want to publish on the Google Play Store.
Critics see the “guidance” as yet another wave of sweeping AI-related censorship, as Google continues to crack down on what it considers hate speech, profanity, bullying, harassment, and other content it lists as “restricted.”
One category of content developers are now banned from generating concerns “sensitive events,” and Google’s description looks like another deliberately vague definition, left open to arbitrary interpretation.
Namely, the rules ban apps that “capitalize on or are insensitive toward a sensitive event with significant social, cultural, or political impact.”
On its support pages, Google tells developers that the intent behind the new policies is to ensure AI-generated content is “safe for all users.” The giant also wants developers to let users flag what they see as offensive, and to incorporate that “feedback” for the sake of “responsible innovation.”
According to the rules, developers are instructed to use user reports “to inform content filtering and moderation in their apps.”
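In practice, that requirement means building some kind of in-app report flow. Here is a minimal Kotlin sketch of one possible approach, assuming a hypothetical ModerationQueue and report threshold; the policy mandates a reporting mechanism but does not prescribe any particular API or design.

```kotlin
// Hypothetical sketch of an in-app flagging flow. Google's policy requires
// user reporting that "informs content filtering and moderation" but leaves
// the implementation entirely up to the developer.

data class GeneratedContent(val id: String, val text: String)

data class UserReport(
    val contentId: String,
    val reason: String,  // e.g. "hate_speech", "harassment"
    val timestampMs: Long = System.currentTimeMillis()
)

class ModerationQueue(private val blockThreshold: Int = 3) {
    private val reports = mutableMapOf<String, MutableList<UserReport>>()
    private val blocked = mutableSetOf<String>()

    // Record a user flag; once an item accumulates enough reports,
    // stop serving it pending review. The threshold is an assumption,
    // not anything specified by the policy.
    fun flag(report: UserReport) {
        val list = reports.getOrPut(report.contentId) { mutableListOf() }
        list.add(report)
        if (list.size >= blockThreshold) blocked.add(report.contentId)
    }

    fun isServable(content: GeneratedContent): Boolean =
        content.id !in blocked
}

fun main() {
    val queue = ModerationQueue()
    val item = GeneratedContent("gen-42", "…model output…")
    repeat(3) { queue.flag(UserReport(item.id, "harassment")) }
    println(queue.isServable(item))  // false: filtered after repeated flags
}
```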
Google also gives examples of what the restrictions cover: apps whose central feature is an AI chatbot, as well as apps that generate images from text, image, or voice prompts.
The full range of restricted content categories is listed and described on the “Inappropriate Content” page: sexual content and profanity; hate speech (with the note that even educational, scientific, and documentary content about Nazis may get an app blocked “in certain countries”); violence and violent extremism (including terrorist “or other dangerous organizations”); sensitive events; bullying and harassment; and apps that facilitate the sale of dangerous products such as firearms and firearms accessories, marijuana, tobacco, and alcohol.