Google to Regulate AI-Generated Content in Android Apps, Strengthening User Protections

In response to the proliferation of generative AI technologies, Google is taking steps to regulate AI-generated content within Android apps. Starting early next year, Google will require Android apps using AI-generated content to incorporate a simple mechanism for users to report offensive material. This reporting process should be seamless, allowing users to flag inappropriate content without leaving the app, similar to existing in-app reporting systems.

The new policy is designed to address various forms of AI-generated content, such as AI chatbots, apps creating AI-generated images, and those producing voice or video content through AI manipulation. However, apps that host AI-generated content, use AI solely for summarizing purposes (e.g., books), or employ AI as a feature in productivity apps are exempt from this policy.

Google's policy outlines examples of problematic AI content, including nonconsensual sexually explicit deepfakes, AI-generated recordings of real people intended for fraud, misleading election content, generative AI apps whose primary purpose is sexually explicit material, and the creation of malicious code.

While acknowledging the rapidly evolving nature of generative AI, Google has indicated its willingness to revisit its AI policies as the technology advances. Beyond AI content regulation, Google is also reinforcing its Play Store's policies regarding photo and video permissions. The aim is to limit extensive access to users' personal media, ensuring greater privacy and security. Apps that genuinely require broad access to photos and videos will continue to receive general permissions, while those with limited media use will be required to utilize a photo picker for enhanced privacy protection.

