Exploring how adaptive AI can support clinicians after patient visits by learning from feedback and integrating into existing workflows.

Background
Healthcare systems should enable faster and more informed decisions, not add more steps.
Clinicians manage follow-up care after every patient visit. What should take three minutes often takes fourteen. In that gap, many necessary follow-ups are missed, preventable readmissions increase, and clinician burnout intensifies.
The issue is not the absence of information. It is that information is fragmented across systems, decisions are delayed, and actions are often missed.
What was asked
What if AI did more than surface information?
What if it understood the decisions being made, acted autonomously when appropriate, deferred when uncertain, and learned from every override?
What was realised
This is not just a UX problem. It is a decision-making system problem, and solving it requires a fundamentally different design approach.
Challenge
When I mapped post-visit follow-up workflows, I identified three types of opportunities for AI intervention.

The Core Problem
Current systems require clinicians to manually manage all three types of decisions. They navigate more than six systems, rely on mental checklists, and often skip the gap-detection step due to its repetitive nature.
Result
Care quality often depends on how efficiently a clinician can work on a given day, rather than on actual clinical need.
Opportunity
What if AI could handle routine decisions autonomously, suggest actions when needed, and surface relevant context within the existing workflow?
That is where intelligent automation becomes a form of invisible assistance.

Approach
Sense → Shape → Steer: Designing AI as a Trusted Colleague
I followed a Sense, Shape, and Steer methodology, a three-phase AI-first design approach that ensures AI complements human expertise rather than replacing it. The goal was to design a system that behaves like a thoughtful colleague, not an autopilot.

Design Stages
/Phase 1 - SENSE
Defining Decision Boundaries and Data Realities
Mapping the Ecosystem
I observed real workflows and interviewed clinicians to understand how decisions are made in practice.
Clinical: Strategic decisions such as whether to refer or escalate a case.
AI: Routine decisions such as protocol matching and gap detection.
Both: Contextual decisions where the AI suggests and the clinician confirms, such as reviewing flagged care gaps.
Assessing the Data Landscape
AI systems rely on the quality of the data they learn from. I audited data sources across EHRs, lab systems, and clinical protocols, evaluating their quality, accessibility, and reliability.

Finding
The required data exists, but it is fragmented, inconsistently updated, and not always fully trusted.
Clinical Workarounds
To manage this, clinicians had developed informal systems of their own.

Insight
Clinicians understand what needs to be done but lack the tools and time to do it consistently.
Where AI Fits
I mapped potential AI interventions and prioritized three for the MVP, balancing clinical value with technical feasibility.
Protocol Matching
Recognizes diagnoses and surfaces relevant care protocols.
Example: Type 2 Diabetes → A1C check due 3 months after medication start.
Gap Detection
Compares what should be scheduled with what is currently scheduled.
Example: Labs ordered but no appointment scheduled; gap identified.
Autonomous Task Creation
Generates structured tasks for the appropriate team members.
Example: Nurse to finalize lab order. Admin to contact the patient and confirm.
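
As an illustration, gap detection reduces to a comparison between the actions a protocol requires and what is actually on the schedule. The sketch below is illustrative only; the data shapes, names, and rule format are assumptions, not the production system:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative protocol rule: for a diagnosis code, a follow-up action
# is due some number of days after a triggering event. All names and
# shapes here are assumptions for the sketch, not a real data model.
@dataclass
class ProtocolRule:
    diagnosis_code: str     # e.g. ICD-10 "E11" for Type 2 Diabetes
    action: str             # e.g. "A1C check"
    due_after_days: int     # e.g. 90 days after medication start

@dataclass
class ScheduledItem:
    action: str
    scheduled_for: date

def find_care_gaps(diagnosis_code: str,
                   anchor_date: date,
                   rules: list[ProtocolRule],
                   schedule: list[ScheduledItem]) -> list[str]:
    """Compare what *should* be scheduled against what *is* scheduled."""
    scheduled_actions = {item.action for item in schedule}
    gaps = []
    for rule in rules:
        if rule.diagnosis_code != diagnosis_code:
            continue
        if rule.action not in scheduled_actions:
            due = anchor_date + timedelta(days=rule.due_after_days)
            gaps.append(f"Care gap: {rule.action} due by {due}, nothing scheduled.")
    return gaps

# The case-study example: Type 2 Diabetes -> A1C check due 3 months
# after medication start, with no matching appointment on the books.
rules = [ProtocolRule("E11", "A1C check", 90)]
print(find_care_gaps("E11", date(2024, 1, 15), rules, schedule=[]))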
Defining Autonomy Boundaries
I defined when the AI should act, suggest, or defer based on confidence levels and the criticality of each decision.


Principle: Automate only what's predictable, suggest what's contextual, and surface what's complex.
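
That principle can be read as a policy over two inputs: confidence and decision criticality. A minimal sketch follows; the thresholds and criticality labels are assumptions chosen for illustration, not tuned values:

```python
from enum import Enum

class Mode(Enum):
    ACT = "act autonomously"
    SUGGEST = "suggest, clinician confirms"
    DEFER = "defer to clinician"

# Thresholds are illustrative assumptions; in practice they would be
# tuned per decision type and adjustable via practice-level settings.
def decide_mode(confidence: float, criticality: str) -> Mode:
    if criticality == "high":        # strategic decisions stay human
        return Mode.DEFER
    if confidence >= 0.90 and criticality == "low":
        return Mode.ACT              # predictable, routine work
    if confidence >= 0.70:
        return Mode.SUGGEST          # contextual: propose, don't act
    return Mode.DEFER                # complex or uncertain: surface only

print(decide_mode(0.95, "low"))      # Mode.ACT
print(decide_mode(0.84, "medium"))   # Mode.SUGGEST
print(decide_mode(0.62, "medium"))   # Mode.DEFER
```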
/Phase 2 - SHAPE
Defining Behavior, Transparency, and Control
With decision boundaries mapped in the Sense phase, the focus shifted to shaping how the AI behaves in practice: when it acts, how it explains its reasoning, and how it hands control back the moment confidence drops.
Defining the AI’s Character
I designed the AI to function as an expert colleague rather than a passive assistant. It:
Acts decisively on routine tasks
Suggests actions on complex decisions
Explains its reasoning clearly
Communicates uncertainty when confidence is lower
Learns from overrides and adapts over time
IMMEDIATE
Post-Visit Summary
After a clinician closes a visit, the AI reviews the note and surfaces relevant follow-up actions.
“Based on the diagnosis, these are the next steps. Add or remove?”


REAL TIME
Care Gap Detection
Detects unscheduled follow-ups:
“Follow-up appointment already scheduled. No action needed.”
“Care gap detected. Recommend scheduling a diabetes check.”
ONGOING
Learning from Patterns
The system adapts based on clinician feedback over time.
“This practice does not follow X in Y cases. I will stop suggesting it.”

Transparency by Design
Each recommendation includes:
Confidence level, for example 84% confidence
Reasoning behind the suggestion, including diagnosis, data sources, and protocol references
Data provenance, such as EHR sections, lab dates, and medication history
Principle: Show your work. Clinicians trust what they understand.
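
One way to make “show your work” enforceable is to require every recommendation to carry its confidence, reasoning, and provenance as structured fields. A minimal sketch; the schema below is an assumption for illustration, not a real EHR or FHIR model:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    # Field names are illustrative, not a production schema.
    action: str            # what the AI proposes
    confidence: float      # e.g. 0.84 -> shown as "84% confidence"
    reasoning: list[str]   # diagnosis and protocol references
    provenance: list[str]  # EHR sections, lab dates, medication history

    def render(self) -> str:
        lines = [f"{self.action} ({self.confidence:.0%} confidence)"]
        lines += [f"  why: {r}" for r in self.reasoning]
        lines += [f"  source: {s}" for s in self.provenance]
        return "\n".join(lines)

rec = Recommendation(
    action="Schedule A1C check within 3 months",
    confidence=0.84,
    reasoning=["Type 2 Diabetes diagnosis",
               "Care protocol: A1C due 3 months after medication start"],
    provenance=["EHR problem list (2024-01-15)",
                "Medication history: metformin started 2024-01-15"],
)
print(rec.render())
```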
Handling Uncertainty
When confidence is low, the system defers to the clinician and prompts for review.
“Cannot verify medication list. Recommend manual check.”
“Low confidence (62%). Please review manually.”
“Data conflict detected. Diabetes diagnosis present but no recent labs. Clarify before proceeding.”
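
The third message implies the system runs consistency checks on the record before acting. A hypothetical sketch of one such check; the diagnosis code, the staleness window, and the wording are assumptions:

```python
from datetime import date
from typing import Optional

def check_for_conflicts(diagnosis_code: str,
                        last_lab_date: Optional[date],
                        today: date,
                        max_lab_age_days: int = 180) -> Optional[str]:
    """Return a deferral message when the record contradicts itself.
    The 180-day staleness window is an assumption for this sketch."""
    if diagnosis_code == "E11" and last_lab_date is None:
        return ("Data conflict detected. Diabetes diagnosis present "
                "but no recent labs. Clarify before proceeding.")
    if last_lab_date and (today - last_lab_date).days > max_lab_age_days:
        return "Labs are stale. Recommend manual check before acting."
    return None  # no conflict; normal act/suggest policy applies

print(check_for_conflicts("E11", None, date(2024, 6, 1)))
```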
Human Control and Oversight
To ensure clinicians remain in control, the system allows them to:
Reject or edit suggestions, for example “We do not follow this in our practice”
Define practice patterns, such as never referring to a specific specialty or always using a set interval
Adjust autonomy levels from conservative to balanced to aggressive
Pause or resume the system when needed, especially during review or high workload periods
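
These controls imply a set of per-practice settings the AI consults before acting. A minimal sketch; the preset names map to the autonomy levels above, but the threshold values and field names are assumptions:

```python
from dataclasses import dataclass, field

# Illustrative presets; the numbers are assumptions, not tuned values.
AUTONOMY_PRESETS = {
    "conservative": {"act": 0.97, "suggest": 0.85},  # AI rarely acts alone
    "balanced":     {"act": 0.90, "suggest": 0.70},
    "aggressive":   {"act": 0.80, "suggest": 0.60},  # more autonomous action
}

@dataclass
class PracticeSettings:
    autonomy_level: str = "balanced"
    paused: bool = False                          # pause during review periods
    suppressed_suggestions: set[str] = field(default_factory=set)

    def thresholds(self) -> dict[str, float]:
        return AUTONOMY_PRESETS[self.autonomy_level]

settings = PracticeSettings()
settings.suppressed_suggestions.add("refer to specialty X")  # "we don't do this here"
settings.autonomy_level = "conservative"
print(settings.thresholds())   # {'act': 0.97, 'suggest': 0.85}
```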
Adaptive Learning
The AI continuously evolves through four feedback loops, the most direct of which is learning from clinician overrides.
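The override loop from the examples above (“This practice does not follow X in Y cases. I will stop suggesting it.”) can be sketched as a rejection counter that suppresses a suggestion once it crosses a threshold. The class name and the threshold are assumptions, not the production logic:

```python
from collections import Counter

class OverrideLearner:
    """Suppress a suggestion once it has been rejected often enough.
    The rejection threshold of 3 is an assumption for this sketch."""
    def __init__(self, rejection_threshold: int = 3):
        self.rejections = Counter()
        self.suppressed = set()
        self.threshold = rejection_threshold

    def record_override(self, suggestion: str) -> None:
        self.rejections[suggestion] += 1
        if self.rejections[suggestion] >= self.threshold:
            self.suppressed.add(suggestion)   # stop proposing this one

    def should_suggest(self, suggestion: str) -> bool:
        return suggestion not in self.suppressed

learner = OverrideLearner()
for _ in range(3):
    learner.record_override("suggest protocol X for case type Y")
print(learner.should_suggest("suggest protocol X for case type Y"))  # False
```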

/Phase 3 - STEER
Measurement, Governance, and Evolution
Designing trustworthy behavior was only part of the challenge. The real test began when the AI was introduced into live clinical workflows.
As the prototype moved into real use, the focus shifted from design to measuring performance, governance, and continuous improvement. The system needed to learn safely, strengthening where it was effective and stepping back when confidence was low.
Success Metrics
A three-level framework was defined to evaluate system performance across accuracy, usage, and clinical impact.
Level 1: AI Accuracy
Protocol match accuracy. Whether identified protocols are correct
Gap detection accuracy. Whether flagged gaps are valid
Confidence calibration. Whether stated confidence aligns with actual accuracy
Level 2: User Behavior
Acceptance rate. Percentage of AI suggestions accepted by clinicians
Override patterns. Identifying frequently rejected suggestions
Usage frequency. Number of interactions per clinician per day
Level 3: Clinical Outcomes
Follow-up completion. Percentage of recommendations completed
Care gap closure. Number of gaps successfully addressed
Readmission rate. Reduction in preventable readmissions
Patient outcomes. Improvement in coordinated care results
Time saved. Reduction in time spent on follow-up coordination
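
Confidence calibration, the Level 1 metric, can be checked by bucketing recommendations by their stated confidence and comparing each bucket’s observed accuracy. A minimal sketch with invented sample data; the bucket width is an assumption:

```python
# Well-calibrated means: in the 80-90% bucket, roughly 80-90% of
# recommendations turn out to be correct.
def calibration_report(predictions: list[tuple[float, bool]],
                       bucket_width: float = 0.1) -> dict[float, float]:
    buckets: dict[float, list[bool]] = {}
    for confidence, was_correct in predictions:
        lo = round(int(confidence / bucket_width) * bucket_width, 1)
        buckets.setdefault(lo, []).append(was_correct)
    return {lo: sum(hits) / len(hits) for lo, hits in sorted(buckets.items())}

# Invented sample: (stated confidence, was the recommendation correct?)
preds = [(0.85, True), (0.82, True), (0.88, False), (0.65, True), (0.62, False)]
print(calibration_report(preds))   # {0.6: 0.5, 0.8: 0.666...}
```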
Governance: Alert Escalation Framework
A five-level governance model was defined to monitor and manage system behavior in production. Each level included clear triggers and escalation paths to ensure that no incorrect suggestion goes unnoticed.
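
The five-level model itself is not reproduced here, so the levels and triggers below are illustrative assumptions showing how such a framework might map override rates and safety incidents to escalation paths:

```python
from enum import IntEnum

# Hypothetical levels and triggers, sketched from the governance goal:
# no incorrect suggestion goes unnoticed.
class AlertLevel(IntEnum):
    INFO = 1        # logged, reviewed in routine analytics
    WATCH = 2       # override rate creeping up for one suggestion type
    REVIEW = 3      # flagged for clinician review
    RESTRICT = 4    # autonomy downgraded to suggest-only for that task
    HALT = 5        # task type paused pending investigation

def escalation_level(override_rate: float, safety_incident: bool) -> AlertLevel:
    if safety_incident:
        return AlertLevel.HALT
    if override_rate > 0.50:
        return AlertLevel.RESTRICT
    if override_rate > 0.30:
        return AlertLevel.REVIEW
    if override_rate > 0.15:
        return AlertLevel.WATCH
    return AlertLevel.INFO

print(escalation_level(0.35, safety_incident=False))  # AlertLevel.REVIEW
```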

Continuous Refinement Cycle
Post-launch learning was treated as an extension of the design process. Over an eight-week refinement cycle, I used structured observation, analytics, and clinician feedback to guide controlled iterations.

Why this approach works
Unlike traditional UX efforts that end at launch, this project required ongoing design governance. The value of the system depended not only on its initial version, but on how responsibly it evolved over time.
By incorporating safeguards, feedback loops, and defined learning thresholds, I ensured the system could improve in a controlled and clinician-trusted manner, increasing precision without compromising patient safety.

Impact & Takeaways
Before you go, here are some of the things achieved by the project.
Clinicians Began Treating the AI as “a Colleague Who Never Forgets”
By learning quietly from feedback and respecting human judgment, the system surfaced insights only when necessary, allowing clinicians to rely on it without feeling overridden.
A Shift From Automation to Collaboration
Instead of merely completing tasks, the system supported decision-making within existing workflows, enabling professionals to stay in control while AI assisted in the background.
Less Mental Overhead in Complex Workflows
Information was organized and surfaced at the right moments, helping teams navigate complex processes without constantly switching tools or holding excess information in memory.
Sense–Shape–Steer Approach
Insights from this project later informed a framework for designing adaptive AI systems that evolve through feedback, build trust gradually, and integrate naturally into team environments.






