ChatGPT processes an estimated 2.5 billion prompts each day, with around 330 million coming from users in the U.S. Unlike a traditional search engine, it replies in natural language, which can feel more like a conversation with a person than a list of links. That familiarity can make it easier to overshare.
At the same time, chatbots are not closed, static systems. They rely on user input and, in some cases, use those interactions to improve their models. Recent incidents have underlined the risks: in January 2025, Meta AI fixed a bug that briefly exposed private prompts; earlier versions of ChatGPT were vulnerable to prompt injection attacks that could reveal personal data; and search engines have, in some cases, indexed shared ChatGPT conversations, making them publicly searchable.
The same basic digital hygiene rules that apply elsewhere online should apply to AI chatbots — and, given the technology’s rapid evolution, extra caution is warranted. Here are five types of information you should not share with ChatGPT.
Personally identifiable information
Personally identifiable information (PII) includes details such as your full name, home address, government ID numbers, phone numbers, email addresses, and usernames or passwords. Research highlighted by Cyber Security Intelligence, drawing on an analysis of 1,000 publicly available ChatGPT conversations by the cybersecurity group Safety Detectives, found that users regularly share exactly this kind of data.
This is especially concerning as more tools emerge that build on ChatGPT, such as agentic AI browsers that can act on your prompts. Once PII is entered into a chatbot, it may be stored, logged, or later exposed through bugs, misconfigurations, or user error. That information could then be misused for identity theft or other abuse.
If you are asking for help with something like a résumé or a cover letter, you rarely need to provide real personal details. Use placeholders for names, addresses, phone numbers, and other identifiers, and fill in the specifics yourself before sending the final document.
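One simple way to do this, sketched below in Python, is to keep a local map from your real details to neutral placeholders, share only the placeholder version, and swap the real values back into whatever the chatbot returns. The names, contact details, and placeholder labels here are invented for illustration; this is not an official tool or API, just one way to approach it.

```python
# Illustrative sketch: replace real identifiers with placeholders before
# pasting text into a chatbot, then restore them in the final document.
# All names and contact details below are made up for demonstration.

RESUME_DRAFT = """Jane Doe
jane.doe@example.com | (555) 010-4477
123 Maple Street, Springfield
Experienced project manager seeking a senior role."""

# Map each real detail to a neutral placeholder.
PLACEHOLDERS = {
    "Jane Doe": "[FULL_NAME]",
    "jane.doe@example.com": "[EMAIL]",
    "(555) 010-4477": "[PHONE]",
    "123 Maple Street, Springfield": "[ADDRESS]",
}

def redact(text: str, mapping: dict[str, str]) -> str:
    """Swap real identifiers for placeholders before sharing."""
    for real, placeholder in mapping.items():
        text = text.replace(real, placeholder)
    return text

def restore(text: str, mapping: dict[str, str]) -> str:
    """Swap placeholders back for the real identifiers, e.g. in the
    edited draft the chatbot returns."""
    for real, placeholder in mapping.items():
        text = text.replace(placeholder, real)
    return text

safe_to_share = redact(RESUME_DRAFT, PLACEHOLDERS)
print(safe_to_share)  # this is the version you would paste into the chatbot
# ...later, run restore() locally on the chatbot's edited output.
```

Because the mapping stays on your own machine, the real identifiers never leave it; the chatbot only ever sees the placeholders.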
Users can also adjust settings to limit how their chats are used to train models, following instructions on the OpenAI website. This can reduce, but not eliminate, the risk of inadvertent exposure. The safest approach is straightforward: avoid sharing PII with ChatGPT at all.
Financial details
Many people turn to ChatGPT as an informal financial assistant, whether to draft a budget, compare savings strategies, or understand a loan offer. Even OpenAI warns that its system is fallible:
“ChatGPT can make mistakes. Check important info.”
That caveat applies not only to financial advice but also to what you share. There is no reason to enter sensitive financial data such as bank account numbers, credit card details, online banking logins, investment account credentials, or full tax records.
Chatbots are not subject to the same security, compliance, and audit controls that govern banks or payment processors. Once uploaded, financial data can sit outside the protections normally applied to transactions, creating opportunities for fraud, identity theft, phishing schemes, or other criminal activity if it is ever exposed.
If you need help understanding a financial document, use redacted versions that remove names, account numbers, and other identifiers, or replace real numbers with approximate figures that still illustrate the problem without revealing your actual data.
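For a rough sense of what that redaction can look like in practice, the short Python sketch below uses a few regular expressions to blank out digit sequences that resemble card, account, or Social Security numbers before a document is shared. The patterns and the sample statement are illustrative only and would need tuning for real paperwork.

```python
import re

# Simplified, illustrative patterns; real statements vary widely and
# these expressions are not exhaustive.
PATTERNS = [
    (re.compile(r"\b\d{13,16}\b"), "[CARD_NUMBER]"),    # long digit runs (card-like)
    (re.compile(r"\b\d{8,12}\b"), "[ACCOUNT_NUMBER]"),  # shorter account-style numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),    # US Social Security format
]

def redact_numbers(text: str) -> str:
    """Replace digit sequences that look like account identifiers."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

statement = "Account 123456789 was charged $42.10 on card 4111111111111111."
print(redact_numbers(statement))
# -> Account [ACCOUNT_NUMBER] was charged $42.10 on card [CARD_NUMBER].
```

Pattern-based redaction is blunt, so it is still worth rereading the output yourself before pasting anything into a chat window.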
Medical details
AI chatbots are increasingly used to look up health information. A recent poll suggests that about one in six adults use AI chatbots at least once a month for health-related questions; among younger adults, the figure rises to roughly one in four.
The concern is not just whether ChatGPT should be used for medical advice — it is that people often share very specific, highly personal health information. Detailed diagnoses, test results, full medical histories, prescription lists, or descriptions of mental health struggles can quickly become sensitive, especially when tied to identifying details like names or dates of birth.
Unlike records held by licensed healthcare providers, data you share with a general-purpose chatbot typically sits outside regulated health privacy frameworks. Once disclosed, you have limited visibility into where that information is stored, how long it is kept, or who might eventually gain access to it.
The conversational tone of ChatGPT can make it feel more private or understanding than a web search, encouraging people to open up. It is safer to keep health questions general — for instance, asking for a plain-language explanation of a medical term — and to discuss anything personal or specific with a qualified medical professional instead.
Work-related and confidential materials
Another risky category is professional information tied to your employer, clients, or ongoing projects. This includes internal documents, proprietary research, trade secrets, unreleased product details, legal or financial reports, and any other material that is not meant for public circulation.
It can be tempting to paste a draft contract, clinical note, strategy memo, or report into ChatGPT to summarize, proofread, or simplify it. For instance, a clinician might consider using a chatbot to clean up a referral letter or patient summary. But doing so can push sensitive information outside the secure systems and policies specifically designed to protect it.
Creative work and intellectual property—such as unpublished manuscripts, source code, design documents, or confidential pitches—also fall into this category. Once that material is entered into a third-party AI service, it becomes difficult to control where it resides or how it might be used.
When in doubt, avoid sharing confidential work content. If you must use an AI tool, remove names, client details, identifiers, and any information that could reveal the organization, project, or individuals involved, and check your employer’s policies before using external services.
Anything illegal or related to criminal activity
Finally, avoid using ChatGPT to discuss, plan, or facilitate illegal activity. OpenAI states that it may disclose user data in response to valid legal processes in the U.S. and can also cooperate with international law enforcement requests.
Laws can change quickly, and behavior that feels harmless today may be treated differently in the future. Treat anything you type into a chatbot as something that could, in principle, be reviewed later.
OpenAI has safeguards intended to stop ChatGPT from helping users commit crimes, spread fraud, or incite harm. The system uses specialized pipelines to detect and filter prompts that suggest plans to harm others. Nonetheless, there have been documented attempts to misuse AI tools to generate malicious code or automate social engineering schemes, underscoring that these defenses are not perfect.
The safest mindset is to assume that anything you tell ChatGPT could eventually become accessible to others. If you would not publish a piece of information on a public website or share it freely with third parties, it does not belong in an AI chat window.
