Bankston discovered Gemini had summarized his tax return without consent.
Google disputes the claims, stating that content is used in real time to generate responses when Gemini is enabled but isn't saved without permission.
Google's AI assistant Gemini scans private PDFs in users' Google Drive accounts without permission, according to Kevin Bankston.
Users cannot prevent Gemini from accessing their private documents and have no control over what data it accesses.
Google has disputed accusations made by Kevin Bankston, Senior Advisor on AI Governance at the Center for Democracy and Technology, who claims that the company's AI assistant Gemini scans private PDFs in users' Google Drive accounts without permission. According to Bankston's findings, Gemini summarized his tax return without his consent. Google rejects these claims, stating that content is used in real time to generate responses when Gemini is enabled but isn't saved without permission.
Bankston first discovered the issue when he realized that Google Gemini had summarized his tax return, which he had opened in Google Docs. He then tried to find settings that would prevent Gemini from accessing his private documents, but was unsuccessful. After seeking assistance from Gemini itself, Bankston found Workspace privacy commitments stating that the AI doesn't use inputted data for training. However, that did not address his underlying concern: controlling his own data and preventing Gemini from accessing it in the first place.
In response to these allegations, Google emphasized that its generative AI features are designed to give users choice and keep them in control of their data. Using Gemini in Google Workspace requires a user to proactively enable it, and when they do, their content is used in a privacy-preserving manner to generate useful responses to their prompts but isn't saved without permission.
Despite Google's clarification, there is still no word on whether the company plans to let users exclude specific content entirely, for those who want more granular control over what Gemini accesses. The incident raises concerns about user privacy and the need for greater transparency in how AI technologies operate.
Kevin Bankston found that Google Gemini automatically summarized his private tax return, opened in Google Docs, without his authorization.
Google disputes Bankston’s claims, stating that content is used in real time to generate responses when Gemini is enabled, but isn’t saved without permission.
Accuracy
Kevin Bankston discovered that Gemini was summarizing his private documents without his consent.
Google disputes Bankston’s claims, stating that content is used in real time to generate responses when Gemini is enabled, but isn’t saved without permission.
Deception (30%)
The article engages in selective reporting: the author includes only details that support his position that Google's Gemini AI summarizes private documents without user consent. The author also makes emotionally manipulative statements, such as 'This is something that, in theory, the AI assistant very much shouldn’t be able to do without express authorization from the user.' and 'Regardless of the reasons behind the glitch, this sort of behavior from the AI system has significant privacy implications for users.' He further implies that Google's actions are deceitful without providing any evidence.
This is something that, in theory, the AI assistant very much shouldn’t be able to do without express authorization from the user.
Regardless of the reasons behind the glitch, this sort of behavior from the AI system has significant privacy implications for users.
Fallacies (75%)
The article contains a few informal fallacies and an example of a dichotomous depiction. The author presents Kevin Bankston's findings as evidence that Google Gemini may be violating user privacy without initially presenting an official response from Google, creating an imbalance that implies the issue is more significant than the glitch it ultimately appears to be. The author also uses inflammatory rhetoric, stating that 'this sort of behavior from the AI system has significant privacy implications for users' and quoting Bankston's concerns about past data leaks from Google's AI products. Finally, the article presents an either-or scenario regarding Gemini's access to private documents, suggesting that either the system is deliberately lying or there is a problem with Google's servers; this dichotomous depiction oversimplifies the situation.
Update: Included Google’s response to Bankston’s thread at the bottom of the post.
This is something that, in theory, the AI assistant very much shouldn’t be able to do without express authorization from the user.
While the access to additional documents on which to refine its responses would help improve performance, doing so without transparency and the permission of the content owners will only further erode the public’s already slim trust in AI.
Bias (95%)
The author expresses concern that Google's AI can access private documents without user consent and criticizes the company for not making clear how to disable this behavior. He also cites past data leaks involving Google's AI products. These statements demonstrate a bias against Google's handling of user privacy.
Google disagreed with multiple aspects of Bankston’s experience, including that data ingestion is happening at all.
So, because he summarized a different PDF using Gemini during the chat, the system appears to have granted itself access to all PDFs opened throughout the session.
This is something that, in theory, the AI assistant very much shouldn’t be able to do without express authorization from the user.
While users need to pay for a $20/month AI Premium subscription to enjoy expanded commitments regarding how their personal data will be protected.