Google is deepening its AI integration across Android, including in Gmail and messaging apps, a change affecting more than 2 billion users. The update leverages Google’s Gemini AI to assist with tasks such as sending messages, making calls, and setting reminders.
However, the rollout has raised privacy concerns. A recent email from Google suggested that Gemini AI would access data from apps like Messages and WhatsApp regardless of whether users had disabled Gemini Apps Activity. Google later clarified that while Gemini can operate with Apps Activity turned off, it will not use chats to train its AI models or review their content.
Previously, enabling Gemini for messaging apps required keeping Apps Activity on, which stored user interactions. The update separates these functions: turning off Apps Activity now prevents data from being used to improve Google’s AI, though interactions are still retained temporarily for up to 72 hours.
The update’s deeper AI integration has sparked debate over individual privacy and data security, especially since sensitive information in call logs and private messages can be temporarily stored and accessed. This has led to calls for clearer user controls, similar to those Google provides for Gemini on Android, to be extended to Gmail.
Meanwhile, similar AI enhancements are emerging on other platforms. WhatsApp has introduced AI-generated summaries of text threads, stirring debate over the trade-off between convenience and privacy. In education, Google plans to deploy Gemini AI for lesson planning and feedback, raising concerns about AI’s impact on learning and student privacy.
Incogni’s recent “Gen AI and LLM Data Privacy Ranking 2025” report highlights growing data privacy challenges as AI tools become embedded in daily workflows. The report notes that users often lack awareness of how their data is collected and used by AI, with big tech companies like Google, Microsoft, and Meta under scrutiny for data handling practices.
Despite Google’s privacy commitments—such as not using student data to train AI models in its education offerings—experts warn that the rapid pace of AI integration demands ongoing transparency and user education to address complex privacy risks.