Personal health records (PHRs) are being redefined by generative AI. While PHRs have traditionally offered patients a consolidated view of their health, AI is turning them into conversational, always-on interfaces that promise to translate clinical records and wellness data into personalized guidance. General-purpose assistants such as ChatGPT and Claude are at the forefront of this shift, though patients could theoretically use any large language model (LLM) to analyze their health data and get answers to medical questions.
But introducing AI to electronic medical records brings new complexities and risks. Questions around data completeness, clinical safety, governance, privacy, and cybersecurity remain unresolved—particularly as many tools operate across fragmented data sources and outside established regulatory frameworks. While APIs may exist for patients to connect these AI platforms to their patient portals, data access is typically limited.
To explore the opportunities and risks, Guidehouse convened a roundtable of healthcare technology experts to discuss how AI-powered PHRs are being defined, adopted, and governed—and what health system leaders should do now to prepare for their impact. Highlights from that discussion follow.
Jubak: From my perspective, Apple Health, MyChart®, and ChatGPT Health all fit into the PHR bucket. The core patient record is your EHR, and a portal like MyChart is a patient-facing extension. Layered on top are platforms like Apple Health, which consist more of day-to-day health data inputs from patients, whether that be from a wearable, the phone itself, or direct input. That data needs to be synced back to the EHR, and most portals support that.
When we think about agentic AI-powered PHRs like ChatGPT Health, we’re getting into a broader category that leverages existing apps and data to frame a patient’s unique healthcare needs. Which app patients use is going to depend on exactly how they’re looking to interact with their patient record, but in order for an app like ChatGPT to be a one-stop shop, it’s going to need a full picture of your health and your entire patient record from all of your providers, including things like visit summaries, labs, and imaging results—not just what the patient inputs or what’s currently available through APIs. Comprehensive data aggregation has been a massive challenge, even within traditional EHRs.
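To make the aggregation challenge concrete, here is a minimal sketch of what merging records from multiple portals involves, assuming lab results have already been retrieved as simplified FHIR-style JSON (the field names and portal data here are invented for illustration; a real implementation would handle full FHIR Observation resources, authentication, and far messier duplication):

```python
# Simplified FHIR-style Observation records pulled from two hypothetical
# patient portals (field names abbreviated for illustration).
portal_a = [
    {"code": "4548-4", "display": "Hemoglobin A1c", "value": 6.1, "date": "2024-03-01"},
    {"code": "2093-3", "display": "Total cholesterol", "value": 188, "date": "2024-03-01"},
]
portal_b = [
    # Exact duplicate of the A1c result above, as often happens when
    # records are shared between providers.
    {"code": "4548-4", "display": "Hemoglobin A1c", "value": 6.1, "date": "2024-03-01"},
    {"code": "4548-4", "display": "Hemoglobin A1c", "value": 5.8, "date": "2023-09-15"},
]

def aggregate(*sources):
    """Merge observations from multiple portals, dropping exact
    duplicates and ordering the result chronologically."""
    seen = set()
    merged = []
    for source in sources:
        for obs in source:
            key = (obs["code"], obs["date"], obs["value"])
            if key not in seen:
                seen.add(key)
                merged.append(obs)
    return sorted(merged, key=lambda o: o["date"])

timeline = aggregate(portal_a, portal_b)
for obs in timeline:
    print(obs["date"], obs["display"], obs["value"])
```

Even this toy version shows why aggregation is hard: deduplication only works when records match exactly, and real portals return the same lab under different codes, units, and timestamps.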
Ciccolone: PHRs traditionally have been a passive data repository—a digital filing cabinet with no interpretation layer. Patients have had access to data but not access to insight. The fundamental shift with AI is that this passive repository can become an intelligent interface. These systems now can synthesize structured data, unstructured notes, and longitudinal trends and translate them into plain-language insight for patients.
That being said, these new AI-powered platforms need to be designed to support patient understanding without compromising patient safety. They're generating responses based on learned patterns versus clinical judgment, and that can lead to dangerous results. They need to be paired with strong governance, clinical validation, oversight, and security.
Holston: I’m okay with Apple Health serving as an intermediary, collecting data about my wellness and feeding it to my EHR, but we have to better understand what GenAI tools are going to accomplish with access to our medical records.
In my clinical practice, I’ve been exploring how LLMs can support physician reasoning—strictly without entering patient-identifiable information. When you ask an LLM for a differential diagnosis, there are immediately clear gaps that could put patients at risk—so we still need doctors to do their own analysis and think through all of the possibilities and tests that should be ordered. It’s not good for a simple, “my chest hurts, what could it be?” It can be very useful, however, in diagnosing rare genetic conditions—because it’s very good at taking in a lot of data, looking for patterns, and suggesting possibilities.
Holston: In theory, the industry could agree that we want a free flow of data so that patients can be informed about their care. In reality, there are real risks with doing that, and there are economic drivers that have long prevented it.
Obstacles to data access are both a safeguard and a challenge. AI shouldn’t be using data to make a differential diagnosis, as it could be missing key data points. But if it’s making recommendations on diet, lifestyle, and wellness based on lab data it has access to, that’s a nice benefit. Some people need to be reminded about the right choices for their health. An AI assistant can do that in a consistent and conversational manner.
Jubak: AI is just as limited as patients are in its ability to take actions like scheduling a visit—whether it’s an EHR vendor’s chatbot or a third-party platform. If a patient isn’t able to accomplish a task via their patient portal, an AI bot isn’t going to be able to do it either. That’s particularly important with administrative functions like scheduling. If you’re only able to schedule with your established primary care physician but not with physicians outside of your existing care team, AI will be just as limited. So we first need to think about how we can expand functionality and self-service options within the EHR and patient-facing websites—and open up appointment availability so that clinics can meet the expanded demand that comes when access is streamlined. This is important for new patient growth, but also retention—the more that patients can do themselves quickly and efficiently, the better chance you’ll keep them within your system. This process of pairing an open-access philosophy with upgraded features to optimize the EHR is something we call “EHR orchestration”—and it’s something we’re working on with a lot of clients.
Ciccolone: It’s not just about having the full breadth of data. One of the most significant clinical and technological challenges we have to overcome is, can these AI bots distinguish between important information and “noise” in the record? We don’t know if AI is going to fully crawl a record and interpret scanned docs, data from retired legacy systems, outdated problem lists, or diagnoses that haven’t been reconciled. AI systems are now being asked to apply temporal weighting, context analysis, and problem list reconciliation to distinguish an active condition from historical artifacts—but as Dr. Holston said, the technology shouldn’t be making those judgments alone. There needs to be a clinician involved, because you may create more risk if AI amplifies irrelevant details instead of clarifying what truly matters.
Ciccolone: These platforms expand the cyberattack surface dramatically. Health data is extremely valuable, and these PHRs are aggregating clinical records, claims, wearable data, and sensitive behavioral health information all in one place. That is super attractive to someone with nefarious motivations.
AI vendors introduce risk related to data leakage and manipulation, and consumer-facing PHRs are not bound by HIPAA, so they may not be as well-protected. Ultimately, if a PHR is borrowing data from your health system’s EHR and a cyber breach occurs, you could be looking at reputational damage to your health system—even if you didn’t build the platform. Trust is foundational in healthcare, and it’s very difficult to rebuild once compromised.
Holston: I don’t think it’ll change things much for experienced doctors. Patients are always coming in and referencing Google, conversations with friends, and other unverified sources. This is just one more tool in the patient’s toolbelt to engage in meaningful conversation with their physicians. The challenge for clinicians is that GenAI sounds authoritative, so patients often find the content credible. If I were to develop a better AI bot for medical care, I would make sure it ends every sentence with, “But please speak with your physician about this.”
Ciccolone: These apps can both strengthen and strain relationships with providers. Clinicians may have to spend additional time responding to questions that arise from conversations with AI—even with a bot that is developed by a health system. Positioning is going to be key. Clinicians and health system leaders need to emphasize that while these tools are fantastic for patient wellness, they’re not a substitute for clinical judgment.
Health systems absolutely should not be ignoring these applications; they should look for opportunities to inform their development. If you ignore AI, you risk losing influence over governance, content standards, and data integrity.
Jubak: Don’t sit and wait. Start establishing a plan if you don’t already have one—not just for PHRs but for AI in general. How is your health system going to continue to incorporate AI as it evolves every day? As I’ve said, a lot of the important work that needs to be done here is turning on existing functionality within your EHR and scheduling systems to allow patients to do more via AI platforms. Not every type of appointment should be self-scheduled, but for those that can and should, it’s time to turn on that feature. You need to build the foundation so you can be prepared—and competitive—as AI’s reach expands.
Holston: Over the next six months, health system leaders need to understand their plan for meaningful and mindful integration of GenAI tools within their healthcare IT ecosystem. I think patient education and patient engagement—what we’ve focused on today—is often overlooked. What is your current patient touchpoint when they come in? Are you educating patients on how to effectively and meaningfully interact with GenAI and other search tools so they can have a thoughtful conversation with their provider? We give patients a lot of information, but what if we empowered them to use GenAI and our patient portal? Now is the time to engage with your EHR vendors and system integrators to ask these questions and figure out your next steps toward enabling meaningful use of GenAI in patient engagement.
Ciccolone: Innovation in healthcare must be anchored in governance or it won’t be sustainable. If a health system is going to partner with an AI bot or open up its data to an AI vendor, they need to start with discipline and governance, mapping out AI oversight, compliance, privacy, security, and clinical standards. That also includes analysis of HIPAA risk, API cybersecurity exposure, and governance of these AI models. Define patient use cases before scaling, and measure safety, trust, and impact. Don’t pursue AI innovation just for the optics—pursue it thoughtfully.
Guidehouse is a global AI-led professional services firm delivering advisory, technology, and managed services to the commercial and government sectors. With an integrated business technology approach, Guidehouse drives efficiency and resilience in the healthcare, financial services, energy, infrastructure, and national security markets.