“Well, color me stupid” is how I start the video below, and for good reason. I had just made an enormous mistake that I needed to fix pretty quickly. What was it?
STOP! Why Uploading Your Private Data to ANY AI (Like Gemini) Is An ENORMOUS Mistake
I was using Google Gemini (their fantastic chat tool, which we have a wonderful short video tutorial on for both beginners and pros HERE) for what I thought was an innocent task: comparing two insurance policies. To make the comparison easy, I uploaded a screenshot of one of my current policies. The problem? That screenshot included all my private information: my personal phone number, home address, all kinds of policy details, everything I absolutely do not want floating around the internet.
Now, I was able to fix it, but the experience highlighted a terrifying truth that everyone using ANY AI tool needs to understand.
The Terrifying Truth: Your Data Becomes AI Food
Here’s the absolute, terrifying truth you need to know about all Large Language Models (LLMs) like Gemini, ChatGPT, Claude, and any others that let you upload files or chat: They don’t forget, and they don’t treat your input as a one-time transaction.
Everything you feed into these AI tools, whether it’s a simple text prompt, a massive spreadsheet, or an image attachment, is generally considered fair game. That data, yes, your data, can be used as training data to improve the model and shape its future answers.
Think about that for a second. That contract you uploaded, that detailed financial analysis, or in my case, that insurance document full of private information, gets ingested by the AI.
The Risk: Your Policy Showing Up in Someone Else’s Chat
So, what does it mean when your data is used for training?
It means your highly confidential screenshot, the one containing your private phone number, address, and policy details, could be ingested and processed by the model. When another user, a total stranger, asks a related but general question, your private information could show up in their chat thread as an answer, a summary, or a partial output.
Imagine asking Gemini, “What are common policy exclusions for a house in [Your Town Name]?” And the AI decides to pull text from the policy you uploaded, perhaps even including your name or policy number, to craft its answer.
Oh my, that’s bad. We are talking about the potential exposure of personally identifiable information (PII) to the entire user base of a major AI platform.
The Lesson: Avoidance Is The Only Real Fix
Yes, there are steps you can take to delete the history:
- You can go into the chat thread and hit “Delete” (assuming you catch the mistake quickly)
- You can go to myactivity.google.com, select “Delete Activity By”, filter down to the Gemini product, and choose a time range (the last hour or day) to PERMANENTLY ERASE your chat history and attached files from Google’s servers
I did the latter, and it worked: my embarrassing and dangerous chat thread vanished.
The Wrap
The best practice is to NEVER upload sensitive documents to an AI tool in the first place. Deleting after the fact means you are counting on a digital eraser working perfectly every single time, across massive and complex systems. Don’t trust it.
Treat every single AI chat box as a public forum or, at best, a temporary intern’s email. Once you hit “enter” or “upload,” that data is out of your control, available for the AI to devour and potentially reuse. Protect your information. Stop uploading your private life to the AI.