AI in Healthcare: Powerful Tools, Human Hearts
Artificial intelligence is showing up in healthcare settings more and more — from drafting internal communications to summarizing lengthy regulations. For those working in healthcare, the opportunity is real. So is the responsibility. Before leaning into these tools, it is worth understanding where the line is, what to protect, and how to use AI in ways that genuinely improve care without putting clients or staff at risk.
The HIPAA problem nobody talks about enough
Most AI platforms used today — ChatGPT, Claude, Gemini, Copilot — operate under privacy policies that say your conversations are not shared with other users. That is true. But here is what many people do not realize: the models themselves may still be trained on the data you send them, depending on the platform, the account tier, and the settings you have enabled.
This means that if you type a client's name, diagnosis, or treatment history into a chat window, that information may become part of a training dataset used to improve the model for everyone. That can be a HIPAA violation — not because someone read your message, but because protected health information left a controlled environment.
What not to share with AI tools
When using any AI platform in a healthcare context, never include the following:
- Client names, nicknames, or initials combined with any identifying detail
- Dates of birth, ages, or addresses
- Diagnoses, medications, or treatment histories
- Insurance information or Medicaid/Medicare IDs
- Case numbers or file references tied to identifiable individuals
- Progress notes, assessments, or session content — even paraphrased
- Any combination of details that could identify a specific person even without a name
The rule of thumb: if you would not say it in a crowded elevator, do not type it into a chat window. When in doubt, anonymize everything. Replace names with "a client" or "a person I support." Strip out dates, locations, and any clinical specifics. What remains can often still get you the help you need.
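For teams that want a mechanical first pass before anything reaches a chat window, the anonymization steps above can be sketched in a few lines. This is an illustrative sketch only, not a HIPAA-compliant de-identification tool — the function name, patterns, and placeholder strings are all assumptions for the example, and pattern matching will miss nicknames, rare details, and context that can still identify someone. A human review is still required.

```python
import re

def scrub(text, known_names):
    """Rough first-pass scrub: replace known names and obvious
    identifiers before pasting text into an AI chat window.
    NOT a substitute for proper de-identification or human review."""
    # Replace each known name with a generic phrase
    for name in known_names:
        text = re.sub(re.escape(name), "a client", text, flags=re.IGNORECASE)
    # Mask dates like 03/14/2024 or 2024-03-14
    text = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[date]", text)
    text = re.sub(r"\b\d{4}-\d{2}-\d{2}\b", "[date]", text)
    # Mask long digit runs (case numbers, IDs, phone numbers)
    text = re.sub(r"\b\d{6,}\b", "[id]", text)
    return text

note = "Saw Jane Doe on 03/14/2024, case 1234567, to review meds."
print(scrub(note, ["Jane Doe"]))
# -> Saw a client on [date], case [id], to review meds.
```

Even with a pass like this, the elevator test still applies to whatever remains: combinations of non-obvious details (a rare diagnosis plus a small town, for instance) can identify a person without any name or number present.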
Where AI genuinely helps
Used responsibly, AI can take a serious load off healthcare workers in areas that do not require PHI at all.
Drafting communications
Writing emails, letters, flyers, and newsletters takes time. AI can draft a family communication about an upcoming event, write a reminder notice about medication pickups, create a program flyer, or help polish a grant narrative. You provide the structure and key details — AI handles the language. Then you review, edit, and personalize it before it goes out.
Summarizing regulations and policies
Regulations in healthcare are long, technical, and constantly updated. AI can take a dense regulatory document and give you a plain-language summary in seconds. This does not replace reading the source material, but it helps you understand what you are looking at before you dig in. It is especially useful for new staff getting oriented to compliance requirements or for supervisors preparing training content.
Researching clinical supports and evidence-based practices
AI can help you quickly survey what research says about a particular intervention, behavioral strategy, or diagnostic approach. Ask it to summarize what is known about a specific population, a treatment modality, or a support strategy. Use it to build a reading list or to orient yourself before consulting a specialist. Treat the output as a starting point, not a final answer.
Brainstorming activities and direct support ideas
Some of the most valuable uses of AI in direct support settings have nothing to do with documentation. Staff can use AI to brainstorm activity ideas tailored to a person's interests, abilities, and goals — things to do together, conversation starters, sensory-friendly outings, creative projects, or community inclusion ideas. This keeps the human relationship at the center while AI handles the ideation legwork. The result is more time spent actually doing things with people rather than trying to think of what to do.
A moving target: the tech changes fast
One of the most disorienting things about working with AI tools right now is how quickly they change. A feature that did not exist last Tuesday may ship this Thursday. A model that was slow or inconsistent six months ago may now be best-in-class. Policies around data retention and training are also evolving — and sometimes changing without major announcements.
This matters practically. The privacy settings on a platform you are using today may be different from what they were when you first set up your account. It is worth revisiting those settings regularly, checking the platform's terms of service when they update, and staying connected with colleagues who are tracking these changes.
Healthcare organizations would do well to designate someone — or a small team — to stay current on AI developments and translate them for the rest of the staff. Not because everyone needs to be an expert, but because the field is moving too fast to navigate by memory alone.
AI will not replace the humans in healthcare
Let's be direct about this: AI is not going to replace direct support professionals, nurses, therapists, case managers, or clinicians. The work of healthcare is fundamentally relational. People receiving care need to be seen, known, and supported by other humans. That will not change.
What AI can do is reduce the administrative friction that pulls healthcare workers away from the people they serve. Less time drafting, searching, and formatting means more time present. That is the real value proposition — not replacement, but relief.
There is also the matter of diagnosis. No matter how capable a model becomes at pattern recognition or clinical language, a diagnosis must come from a licensed medical professional. AI can surface possibilities, flag relevant literature, or help a clinician think through a differential — but the determination and the responsibility remain with the human. This is not a limitation to work around. It is a feature of a system designed to protect people.
A growing field, not a shrinking one
One of the more interesting effects of AI in healthcare may be that it grows certain roles rather than eliminates them. When AI helps catch documentation errors, flag medication interactions, or identify patterns in behavioral data, it creates more work to follow up on — not less. More leads to investigate means more need for skilled humans to interpret findings, make decisions, and provide care.
The same is true at the systems level. As AI tools become embedded in clinical workflows, organizations will need people who understand both the technology and the care context — people who can train staff, audit outputs, manage compliance, and advocate for the clients whose data flows through these systems.
Healthcare is not shrinking because of AI. It is getting more complex. And complexity in a field this important will always need more thoughtful humans, not fewer.
The bottom line
AI in healthcare is a real and growing opportunity. Use it to draft, summarize, research, and brainstorm. Keep PHI out of it entirely. Stay current because the tools will keep changing. And hold onto the truth that what makes healthcare work — trust, presence, human judgment — is exactly what no model can replicate.
The tools are getting better. The people using them matter more than ever.