162 Days of Insight

Day 9: What Not to Tell AI About Your Health (And How to Ask Safely)

You don’t need to hand over your identity to understand your body.

Every health conversation with AI creates a digital trail that could follow you forever.

 

The question isn't whether to use these tools; it's how to use them without leaving breadcrumbs that lead back to your medical history.

Yesterday, in Day 8 of the series, we explored how to ask AI better questions about our health using the FRAME-H method. That structure helps unlock smarter, safer answers. Today, we shift our lens toward a different kind of clarity: knowing how little you need to share to get the result you're after.

Because the truth is, most people overshare. Not maliciously. Not foolishly. Just instinctively.

AI feels intimate. It’s fast. It’s responsive. It mimics empathy. But the more it feels like a conversation, the more we forget we’re leaving a digital trail. The result? People reveal more than they realize — names, dates, identifiers, even diagnostic documents.

“Who controls and owns the data you’re providing?”

That question is more important than ever. With every prompt, you’re offering a piece of your personal health narrative. Depending on the platform, that information may be stored, analyzed, or used to further train AI models. Understanding where your data goes, who can access it, and how it’s protected should be part of every interaction you have with health AI tools.

Today is about reclaiming control. Learning how to use AI as an ally without exposing yourself in the process.

Why This Matters More Than You Think

Privacy isn’t just about keeping information private—it’s about protecting your future opportunities and choices.

Consider what’s at stake when health data becomes searchable, sellable, or hackable.

You can’t control the terrain, but you can choose what you leave behind.

Insurance Implications

Life and disability insurance companies increasingly use alternative data sources for underwriting. A digital trail showing you’ve researched heart conditions, mental health symptoms, or genetic disorders could impact coverage decisions—even if those searches were hypothetical or for someone else.

Employment Risks

While HIPAA protects formal medical records, it doesn’t cover your voluntary AI conversations. Some employers and background check companies are beginning to incorporate “lifestyle risk assessments” that could include your digital health footprint.

Data Breach Reality

In 2023 alone, healthcare data breaches affected over 88 million Americans. When AI platforms store your health conversations, they become targets. The difference is that your prompts often contain more narrative detail than clinical records—making them easier to connect back to you personally.

Re-identification

Research by Latanya Sweeney shows that supposedly "anonymous" health data can be re-identified with surprising accuracy: just three data points (ZIP code, date of birth, and gender) are enough to uniquely identify 87% of Americans. Add health patterns, and the number climbs even higher.

The goal isn’t to scare you away from these powerful tools. It’s to help you use them strategically, getting the insights you need while maintaining control over your personal health narrative.

What You Might Be Oversharing (Without Realizing It)

Let’s start with the hidden risks in casual prompts. Consider these real examples:

  • “Dr. Nabil at the Cleveland Clinic told me…”
  • “I had shoulder surgery on Jan 12, 2024 after a skiing accident in Colorado.”
  • “Here’s my MRI and blood panel from Mount Sinai, can you help interpret?”

Each of these includes identifying information that could be tied back to you. In isolation, they may seem harmless. But when combined — names, locations, timeframes, medical events — they form a digital fingerprint.

The examples above represent what privacy experts call Personally Identifiable Information (PII)—the same sensitive data that HIPAA laws were designed to protect in medical settings. The difference? Your AI conversations aren’t covered by HIPAA protections.

AI doesn’t need your identity to be useful. It needs your clarity. And that starts with a simple principle:

“You only need enough information to be understood—not identified.”

This becomes your North Star for every health AI conversation.

Understanding what not to share is only half the equation. The other half is structuring what you do share for maximum insight and minimum exposure.

FRAME-H: Privacy-First Prompting

Yesterday, we learned about FRAME-H. Today, let’s see how it becomes a privacy shield.

Here’s how FRAME-H can help you avoid oversharing:

  • Focus: Instead of saying, “I had a heart attack last March,” try “I experienced chest tightness during exercise.” Focus on the experience, not the timestamp.
  • Role: Avoid prompts like “Act as my cardiologist.” Instead, use a generalized role like “Act as a privacy-first health explainer.”
  • Ask: Rather than “Should I take this medication again?”, ask “What are general risks of resuming medication after a cardiac event?”
  • Method: Don’t say “Here’s my full blood test, analyze it.” Instead, try “Explain what high LDL levels might indicate in general.”
  • Edges: Rather than writing "Review my MRI file attached," set an explicit boundary such as "Don't review or ask for personal medical files." Clearly state what's off limits.
  • Human: Even if you've skipped it before, close with something like "I will review this with my healthcare provider." Let the AI know you're not acting on its answer alone.

FRAME-H isn’t just a structure for asking better questions — it’s a privacy shield when used wisely and consistently.
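
To make this concrete, here is a minimal sketch in Python of how you might assemble a FRAME-H prompt as plain text before pasting it into any chat tool. The function name and example wording are purely illustrative, not an official template:

    # A minimal sketch: assemble a privacy-first FRAME-H prompt as plain text.
    # Field names and example wording are illustrative; adapt them to your situation.
    def build_frame_h_prompt(focus, role, ask, method, edges, human):
        """Combine the six FRAME-H elements into a single prompt string."""
        return "\n".join([
            f"Role: {role}",
            f"Focus: {focus}",
            f"Question: {ask}",
            f"Approach: {method}",
            f"Boundaries: {edges}",
            f"Next step: {human}",
        ])

    prompt = build_frame_h_prompt(
        focus="Chest tightness during exercise, no dates, names, or locations",
        role="Act as a privacy-first health explainer",
        ask="What are general risks of resuming medication after a cardiac event?",
        method="Explain in plain language and suggest questions to bring to a clinician",
        edges="Do not ask for or analyze personal medical files or identifying details",
        human="I will review this with my healthcare provider before acting on it",
    )
    print(prompt)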

Privacy protection doesn't require deception; it simply requires precision. Here's how to share enough without sharing too much.

How to Obfuscate Without Lying

Protecting your privacy doesn’t mean being dishonest. It means being precise without being revealing.

Privacy isn’t about hiding, it’s about sharing just enough to be understood.

Here are three anonymization tactics that work:

1. Generalization

You can always generalize specific people or places without loss of meaning:

  • Before (Risky): “Dr. Li at McGill’s Immunology Center said my ANA test came back positive at 1:640 on March 8th, 2023”
  • After (Safe): “A specialist mentioned that blood work showed elevated autoimmune markers about a year ago”
  • Why this works: Preserves the essential medical information while removing identifying timestamps, locations, and specific test values.
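
If you reuse the same details often, you can keep a personal substitution map on your own machine and apply it before anything leaves your clipboard. A minimal sketch in Python; the mappings below are examples you would replace with your own:

    # A minimal sketch: swap your own specifics for generic phrasing before
    # a prompt is ever sent. The mapping entries are examples only.
    REDACTIONS = {
        "Dr. Li": "a specialist",
        "McGill's Immunology Center": "a university clinic",
        "on March 8th, 2023": "about a year ago",
        "1:640": "a high titer",
    }

    def generalize(text, redactions=REDACTIONS):
        for specific, general in redactions.items():
            text = text.replace(specific, general)
        return text

    risky = ("Dr. Li at McGill's Immunology Center said my ANA test "
             "came back positive at 1:640 on March 8th, 2023")
    print(generalize(risky))
    # -> "a specialist at a university clinic said my ANA test came back
    #     positive at a high titer about a year ago"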

2. Conversion

To eliminate identifying details about procedures or exams, convert dates to relative timeframes:

  • Before (Risky): “I had my colonoscopy on March 15th, 2024 at St. Mary’s Hospital and they found two polyps. My next screening is scheduled for January 10th, 2027.”
  • After (Safe): “A routine screening about 8 months ago revealed some growths that need monitoring, with follow-up scheduled in a couple of years.”
  • Why this works: Converts specific dates and locations to relative timeframes while preserving the essential medical timeline and monitoring requirements.
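
A small helper can do this conversion for you, so exact dates never make it into a prompt. The sketch below is one possible approach in Python; the rounding thresholds are arbitrary examples:

    # A minimal sketch: turn an exact date into a vague, relative timeframe.
    # The thresholds and wording are arbitrary examples.
    from datetime import date

    def to_relative_timeframe(event_date, today=None):
        """Return a rough phrase like 'about 8 months ago' instead of an exact date."""
        today = today or date.today()
        days = (today - event_date).days
        if days < 45:
            return "recently"
        if days < 330:
            return f"about {round(days / 30)} months ago"
        years = round(days / 365)
        return f"about {years} year{'s' if years != 1 else ''} ago"

    print(to_relative_timeframe(date(2024, 3, 15), today=date(2024, 11, 20)))
    # -> "about 8 months ago"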

3. Profiling

As a best practice, describe or construct a health profile rather than naming specific diagnoses, medications, and providers:

  • Before (Risky): “I was diagnosed with Type 2 diabetes in 2019 and take 1000mg metformin twice daily. My last A1C was 7.2% and my endocrinologist Dr. Patel wants me to add insulin.”
  • After (Safe): “I’m managing blood sugar issues with medication and monitoring key markers. Recent labs suggest my current treatment may need adjustment, and my specialist is considering additional options.”
  • Why this works: Creates a health profile focused on patterns and management strategies rather than specific diagnoses, medications, and provider names, while maintaining all the medically relevant context.

Obfuscation is like painting in broad strokes. The picture is still clear enough to interpret, but the details don’t betray your identity.

Advanced Privacy Techniques for Health AI

Beyond basic obfuscation, here are four advanced strategies for maintaining privacy while maximizing insight:

When no one frame holds the whole picture, privacy becomes the design.

1. The Hypothetical Shield 

Frame your questions as scenarios for “someone you know”:

  • Instead of: “I’ve been having panic attacks lately”
  • Try: “A friend mentioned they’re experiencing sudden anxiety episodes with physical symptoms”

This creates psychological distance while preserving the essential details needed for useful responses.

2. Composite Profiles 

Combine elements from multiple real situations to create a realistic but untraceable scenario:

  • Blend your symptoms with timing from a different health issue
  • Mix your age bracket with someone else’s activity level
  • Combine your family history with different geographic markers

3. Session Segmentation 

Break complex health situations across multiple, unconnected AI sessions:

  • Session 1: Ask about symptom A in isolation
  • Session 2: Explore treatment B without mentioning symptom A
  • Session 3: Research lifestyle factors separately

This prevents AI platforms from building comprehensive profiles of your health situation.
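
If you query a model through an API rather than a web chat, the same idea is simply making independent calls that share no history. A minimal sketch using the OpenAI Python SDK; the model name is only an example, and any chat API works the same way:

    # A minimal sketch of session segmentation: each question is sent in its
    # own call with no shared history, so no single session holds the full picture.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    questions = [
        "What generally causes recurring morning joint stiffness?",           # symptom only
        "How do anti-inflammatory medications work, in general terms?",       # treatment only
        "Which lifestyle factors are commonly linked to joint inflammation?",  # lifestyle only
    ]

    for q in questions:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name
            messages=[{"role": "user", "content": q}],  # fresh context every time
        )
        print(response.choices[0].message.content, "\n---")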

4. Browser Privacy Hygiene

  • Use incognito/private browsing for health AI conversations
  • Clear cookies between sessions
  • Consider using different browsers or devices for health-related AI queries
  • Some users rotate between different AI platforms to avoid pattern recognition

The key principle: No single conversation should contain enough information to reconstruct your complete health picture. Think of it like distributing pieces of a puzzle across different boxes—each piece is useful, but the full picture remains protected.

Platform-Specific Privacy Considerations

Not all AI platforms handle your data the same way. Here’s what to consider about the major health AI platforms based on publicly available information:

ChatGPT (OpenAI)

  • Privacy Policy: https://openai.com/privacy/
  • Check their current privacy policy for data use and training practices
  • Review chat history settings and deletion options in your account
  • Consider using temporary sessions for sensitive health queries

Claude (Anthropic)

  • Privacy Policy: https://www.anthropic.com/privacy
  • Generally positions itself as more privacy-focused in public communications
  • Review current data retention and usage policies
  • Check available privacy controls in your account settings

Google Gemini

  • Privacy Policy: https://policies.google.com/privacy
  • Consider how this integrates with your broader Google account and data ecosystem
  • Review Google’s overall privacy settings and data controls
  • Be aware of potential data sharing across Google services

Microsoft Copilot

  • Privacy Policy: https://privacy.microsoft.com/
  • Policies may vary between consumer and enterprise versions
  • Check current terms for your specific version (free vs. paid)
  • Review available privacy controls and data deletion options

Important Disclaimers

  • Privacy policies change frequently — always verify current terms before engaging in sensitive discussions
  • Enterprise/paid versions typically offer stronger privacy protections than free versions
  • When in doubt, treat every platform as if conversations could be permanent

Review each platform’s current privacy policy before using it for health-related queries, and consider using the privacy techniques outlined in this article regardless of which platform you choose.

The higher the privacy, the stronger the shield. Know where your data stands.

Making Platform Choices

Given the complexity and frequency of privacy policy changes, I can’t provide a definitive ranking of platforms by privacy protection.

Instead, when choosing a platform for health-related queries:

  • Read the current privacy policy for any platform you’re considering
  • Look for clear data deletion options in account settings
  • Check whether conversations are used for model training and if you can opt out
  • Consider enterprise or paid versions which typically offer stronger privacy protections
  • Apply the privacy techniques in this article regardless of which platform you choose

Remember: Your privacy protection comes primarily from how you structure your prompts, not just which platform you use. The obfuscation and anonymization techniques outlined above are your first line of defense on any platform.

Smart Disclosure = Better AI Output

Ironically, sharing less often yields more insightful AI responses.

Take two hypothetical prompts:

Prompt A (Risky):

    Dr. Kline diagnosed me with psoriatic arthritis in 2019 after I had skin flare-ups on my elbows, lower back pain, and knee swelling. I'm on 25mg of Methotrexate weekly and get bloodwork every 3 months.

Prompt B (Better):

    I've been experiencing joint pain, skin irritation on my elbows, and recurring lower back stiffness. I'm currently managing the symptoms and monitoring inflammation levels. What are common management strategies for these patterns?

Prompt B gives the AI a pattern to interpret, not a case file to review. And that’s exactly the terrain where AI excels.

Prompt C (Even Better):

    I'm experiencing joint stiffness, skin changes on extensor surfaces, and inflammatory back pain that improves with movement. These symptoms seem connected and have come on gradually over months. What body systems typically interact to create this pattern, and what questions should guide further evaluation?

Why Prompt C works best: It focuses on patterns and relationships rather than isolated symptoms, giving AI the context it needs to provide systemic insights while maintaining complete anonymity.

Prompt Debugger: A Privacy Checklist

Before pressing send, ask yourself:

  • ✅ Have I removed names, emails, dates, locations, and anything else uniquely identifiable?
  • ✅ Did I assign the AI a clear but generalized role?
  • ✅ Did I describe symptoms rather than diagnoses?
  • ✅ Have I set boundaries around what the AI should not do?
  • ✅ Did I clearly state that a human will make the final call?

If the answer to all five is yes, your prompt is following privacy-first principles.
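
If you want an automated nudge before you hit send, a few regular expressions can rough out the first check. The sketch below is a heuristic only; the patterns are examples, will miss plenty, and supplement the checklist rather than replace it:

    # A minimal sketch: flag obvious identifiers in a draft prompt before sending.
    # Simple regex heuristics catch easy cases (emails, phone numbers, exact dates,
    # "Dr. <Name>") but are not a guarantee of anonymity.
    import re

    PATTERNS = {
        "email":        r"[\w.+-]+@[\w-]+\.[\w.]+",
        "phone number": r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b",
        "exact date":   r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.?\s+\d{1,2}(?:st|nd|rd|th)?,?\s+\d{4}\b",
        "year":         r"\b(?:19|20)\d{2}\b",
        "doctor name":  r"\bDr\.?\s+[A-Z][a-z]+",
    }

    def flag_identifiers(prompt):
        """Return (label, match) pairs for anything that looks identifying."""
        return [(label, match)
                for label, pattern in PATTERNS.items()
                for match in re.findall(pattern, prompt)]

    draft = "Dr. Kline diagnosed me in 2019; email me at jane.doe@example.com"
    for label, match in flag_identifiers(draft):
        print(f"Possible {label}: {match}")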

Take Control of Your Health AI Conversations

The techniques in this article aren’t just about privacy—they’re about getting better results. When you frame health questions strategically, remove identifying details, and set clear boundaries, something interesting happens: AI provides more useful, generalizable insights that you can actually apply.

Remember: the goal isn’t to hide from technology—it’s to engage with it on your terms. When you control what you share, you control the conversation. And that’s where real empowerment begins.

Start with your next health question. Use the privacy checklist. Try one obfuscation technique. See how it changes not just your privacy, but the quality of the response.

“Your health data is yours. Keep it that way.”

Tomorrow, we’ll explore how to read AI health insights with discernment, protect your mental peace, and avoid the spiral of search-induced fear.

See you in the next insight.
