Demystifying AI: What Every Primary Care Clinician Should Know
For a long time, primary care has strained under the weight of too many patients, too little time, and technology not built for the work we do. But now, with the emergence of artificial intelligence (AI), we stand on the brink of a new era, one that may finally offer solutions that enable primary care to flourish. AI is more than a buzzword or a flashy gadget; it is a powerful capability that can help primary care clinicians reclaim their time, focus on patients, and drive healthcare improvement.
AI is new, and it is unfamiliar to most of us. It requires a new literacy, and we don’t yet know what we don’t know. That not knowing can have grave consequences in healthcare, and frontline clinicians are conditioned to fiercely protect the safety of their patients from such unknowns. In this guide, we’ll help you navigate the world of AI in a way that’s practical, approachable, and tailored to the realities of primary care. The priority, however, is not how AI can make us go faster and do more, but how AI can augment, rather than replace, the human element in healthcare, an approach perhaps more important in primary care than anywhere else in medicine.
Understanding the Basics: Key Terms You Should Know
AI is a jargon-filled field, but you don’t need a computer science degree to speak the lingo. Perhaps you’re at a dinner party and someone starts mentioning terms like “machine learning” or “large language models.” A little knowledge of these concepts can help even the least tech-savvy among us join the conversation with confidence. Here’s a quick primer:
- Artificial Intelligence (AI): Think of AI as machines doing tasks that usually require human intelligence, like recognizing patterns or generating text.
- Machine Learning (ML): This is how AI gets smarter while a new model or product is being developed. It learns from data to improve its performance, much like humans learn from experience. Once a model has been released, however, it generally does not continue to learn and improve in real time for individual users (despite what many AI users may believe).
- Large Language Model (LLM): These are advanced AI systems trained on massive amounts of text to understand and generate human-like language. ChatGPT and other chatbot products are built on an LLM core, which gives them the powerful functionality many of us are starting to experience in daily life.
- Prompt: This is the input you give an AI, whether it’s a question, a command, or a paragraph to summarize.
- Training: Think of training an AI model like teaching a child. You provide the child with numerous examples and experiences (i.e., data). Through these, they learn patterns, relationships, and how to respond in different situations. The legality of using data to train AI remains hotly debated, especially when the data involves “owned inputs”: any data protected by copyright, privacy, or intellectual property laws.
- Inference: Once an AI model has been trained, it enters the “inference” stage. This is where it applies what it already knows to a new request or prompt it hasn’t seen before, using the patterns and relationships learned during training to generate outputs or make predictions.
- Ambient Documentation: This is a game-changer for clinicians. It’s technology that listens to your conversations with patients and drafts clinical notes so you don’t have to.
- Temperature: Most LLMs have a built-in setting called “temperature” that controls creativity. A higher temperature means more variety and less predictability; a lower temperature means more consistent and conservative output. If consistency is important (as in a template), some tools allow you to adjust this setting to make responses more stable (see the sketch just after this list).
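If you’re curious what that setting looks like under the hood, here is a minimal sketch using the openai Python library, purely as an illustration: the model name is a placeholder, the prompt contains no PHI, and whether a given clinical tool exposes this knob at all is up to the vendor.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Draft a one-sentence patient-friendly reminder about flu shots."

# Ask the same question at a low and a high temperature and compare.
for temperature in (0.0, 1.2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, not an endorsement
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # low = consistent, high = varied
    )
    print(f"temperature={temperature}:")
    print(response.choices[0].message.content)
```

Run this twice and the low-temperature reminder will usually read nearly the same both times, while the high-temperature one wanders.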
With this terminology in your back pocket, let’s tackle some common questions.
FAQs: What Primary Care Teams Want to Know
Will AI replace doctors or medical assistants?
AI will do whatever we wield it to do, but we believe healthcare is fundamentally human. AI’s role should be to support you, not replace you. Think of it as an extra set of hands for the tasks that eat up your time and your cognition, like charting, scouring dense clinical records, or crafting routine and repetitive portal messages. That frees you to focus on what only you can do. AI can also serve up evidence-based clinical decision support right in your workflows, sparing you from researching a complex topic or relying on outdated information. Used this way, by design, it holds the potential to dramatically improve the practice of medicine.
How accurate is AI-generated documentation?
It’s improving daily, but it’s not perfect. AI can draft a solid starting point for your notes and even tee up draft orders based on ambient listening to your patient conversations, but it still needs your review to ensure accuracy and completeness. Think of it as a helpful assistant, not the final author.
Is AI HIPAA-compliant?
That depends on the vendor. Always confirm that any AI tool you use is designed to protect personal health information (PHI) by checking the terms of service, terms of use, or the business associate agreement (BAA) you have with your vendor. If in doubt, don’t input PHI into an AI tool.
Do I need patient consent to use an ambient scribe?
Legal and ethical guidelines related to the use of AI in healthcare are rapidly developing. While regulations specifically addressing AI scribe consent are still evolving, the principles of informed consent embedded in existing HIPAA regulations, state laws, and professional ethical guidelines strongly favor clear disclosure and obtaining patient consent before using ambient AI scribe tools. Whether you are required to obtain consent at every visit, just once, or at all currently depends on your state’s regulations and your local policies. Some practices document consent in the patient’s chart, while others include it in their intake forms or in every encounter note.
Can AI give medical advice?
In its current state of development, AI can support decision-making by summarizing guidelines or generating draft care plans, but it remains too unreliable to be trusted with clinical judgment on its own. For now, any AI-generated clinical guidance should be curated and adapted by the clinician, informed by their understanding of the patient in front of them.
Why does an AI tool sometimes give different answers to the same question?
LLMs are designed to generate responses based on probability, not exact lookup. That means there isn’t one single answer stored in the model’s “brain,” but rather a wide range of possible, reasonable responses, and the model samples among them each time it answers. The toy example below shows the idea.
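To make that concrete, here is a toy simulation of how an LLM picks its next words by weighted chance. Nothing here is a real model; the words and scores are invented purely to show why two runs of the same prompt can diverge.

```python
import math
import random

def sample_next_phrase(phrases, scores, temperature=1.0):
    """Pick one phrase by weighted chance, the way an LLM samples tokens.

    The scores play the role of the model's raw preferences (logits).
    Dividing by temperature sharpens (low) or flattens (high) the odds.
    """
    scaled = [s / temperature for s in scores]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return random.choices(phrases, weights=weights)[0]

# Invented continuations of "The most likely cause of this cough is ..."
phrases = ["a viral infection", "post-nasal drip", "asthma", "reflux"]
scores = [2.5, 1.8, 1.2, 0.6]  # made-up preferences, not medical fact

# The same "question" asked five times can yield different answers.
for attempt in range(5):
    print(f"Run {attempt + 1}: {sample_next_phrase(phrases, scores)}")
```

Because the choice is weighted rather than fixed, the most probable answer comes up most often, but not every time, which is exactly the behavior clinicians notice when re-asking a chatbot the same question.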
Risks with AI & How to Manage Them
- Hallucinations: AI hallucinations loosely mirror the human phenomenon: the AI makes up something that sounds convincing but is false. These can be hard to predict, so always verify that the output is factual.
- Bias: Because AI learns from data, much as humans learn from personal experience, AI trained on biased healthcare data may reproduce that bias. Careful human oversight is key to ensuring equity in healthcare decision making and delivery.
- Over-reliance: It can be convenient, even seductive, to depend too much on AI. It’s important to set guardrails that keep AI a tool, not a crutch.
- Privacy: Ensure that any AI tools handling PHI are HIPAA-compliant. When in doubt, ask.
- Data Horizon Limitation: An AI model’s knowledge is cut off at a specific point in time, so it doesn’t know anything that happened after that date. This can affect AI in health tech wherever the most relevant information postdates that cutoff, so always verify the most recent updates to things like medications and test results.
Agentic AI and an Agent-to-Agent World
Agentic AI refers to a type of artificial intelligence system that goes beyond simply responding to specific prompts and commands. Instead, it shows some autonomy and goal-directed behavior. It has a sense of “agency”: instead of you having to prompt it for every single step, it can “figure things out” on its own to achieve a higher-level, open-ended goal with less human oversight.
Agent-to-agent work in healthcare could have many useful applications. Consider, for example, a set of AI agents whose job is to secure prior authorization for a new medication prescribed by the physician. Various agents might be responsible for initiating the request to the insurance company, checking the individual patient’s coverage and insurance rules, gathering clinical documentation, filling out the necessary forms, and submitting the request for review. Throughout this process, the agents share the higher-level goal of completing the prior authorization, communicating and exchanging information with each other along the way with little direct human oversight.
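As a rough sketch of what that hand-off might look like in code: everything below is hypothetical (the class names, the payer rule, the data), meant only to show agents passing a shared case among themselves, not any real product or framework.

```python
from dataclasses import dataclass, field

@dataclass
class PriorAuthCase:
    """Shared state the agents pass between themselves."""
    medication: str
    patient_id: str
    coverage_rules: str = ""
    clinical_docs: list = field(default_factory=list)
    form: dict = field(default_factory=dict)
    submitted: bool = False

class CoverageAgent:
    def run(self, case: PriorAuthCase) -> PriorAuthCase:
        # Would query the payer for this patient's plan rules.
        case.coverage_rules = f"Step therapy required before {case.medication}"
        return case

class DocumentationAgent:
    def run(self, case: PriorAuthCase) -> PriorAuthCase:
        # Would pull supporting notes and labs from the chart.
        case.clinical_docs.append("Note documenting failed first-line therapy")
        return case

class FormAgent:
    def run(self, case: PriorAuthCase) -> PriorAuthCase:
        # Would fill the payer's form from the gathered material.
        case.form = {"drug": case.medication,
                     "justification": case.clinical_docs[0]}
        return case

class SubmissionAgent:
    def run(self, case: PriorAuthCase) -> PriorAuthCase:
        # Would transmit the completed request to the payer for review.
        case.submitted = True
        return case

# The orchestration: each agent does its piece and hands the case onward.
case = PriorAuthCase(medication="ExampleDrug", patient_id="demo-0001")
for agent in (CoverageAgent(), DocumentationAgent(),
              FormAgent(), SubmissionAgent()):
    case = agent.run(case)

print(f"Prior auth for {case.medication} submitted: {case.submitted}")
```

In a real deployment, each agent’s step would involve LLM reasoning, payer APIs, and error handling, and a human checkpoint before submission would be wise.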
Embracing AI as a Partner in Healthcare
As we navigate this new era of AI in primary care, it's crucial to remember that, as with any technology, if it distracts from rather than enhances the human connection at the heart of medicine, we’re doing it wrong. By understanding the basics, asking the right questions, and remaining vigilant about potential risks, we can harness the power of AI to streamline workflows, improve decision-making, and ultimately, provide even better care for our patients. The future of primary care is not about humans versus machines, but about humans and machines working together to create a more efficient, more effective, more accessible, and more compassionate healthcare system.