Tackling IBM's challenge to design trust and transparency for AI-powered healthcare

Timeline

Nov. 2024 – May 2025 (8 months)

My role

Team lead, UX Designer

Team

2 UX Researchers

3 UX Designers

Context

Mitigating bias in AI healthcare is one of the biggest challenges faced worldwide

During the 2025 Student Service Design Challenge, IBM challenged teams to design equitable services that help humans navigate bias. After secondary research, we chose to tackle the challenge in the complex space of AI healthcare, exploring and paving the way toward an equitable and unbiased future.

$4.9 trillion

spent on U.S. healthcare annually, creating pressure for more efficient, tech-enabled solutions

> 40% underdiagnosis

experienced by underserved populations

Among 11 high-income countries…

U.S. ranked last in health equity, reflecting deep disparities in care

Key achievements

Led the design of Echo, awarded 2nd place out of 200+ international teams in the IBM Design Challenge

As team lead, I managed project planning and timelines, weighed critical decisions, and guided research and design efforts to keep the team aligned and collaborative, meeting every milestone across the 8-month design challenge

Collaborated across disciplines by engaging clinicians, researchers, and AI experts, facilitating cross-disciplinary conversations to surface pain points and align diverse perspectives toward equitable AI solutions

Proposed scalable design solutions by leading workshops with patients, clinicians, and AI engineers to balance stakeholder needs with technical feasibility

Summary

Echo is a digital platform that fosters transparent communication between speech-language pathologists (SLPs) and families around the use of AI, addressing critical gaps in AI trust, cultural responsiveness, and explainability within pediatric speech therapy.

Problem

Families and SLP clinicians lacked a clear way to communicate about how AI was being used in pediatric speech therapy, creating mistrust, misdiagnoses, and anxiety.

Outcome

An AI-powered platform that captures family context and makes AI use transparent, fostering trust and culturally responsive care.

The application process is streamlined online and includes a dedicated section with information about how AI is used.

Problem framing

From clinical researchers to speech-language pathologists

To deeply understand the problem space, I spoke with a diverse range of professionals working in the healthcare system:

  • Clinical researchers in Seattle focused on recruitment and trial design

  • Health equity experts at UW studying access and bias

  • Speech-Language Pathologists (SLPs) working in various settings (hospitals, schools, private clinics)

  • AI/ML engineers building healthcare tools

  • Patients and caregivers with lived experience in speech therapy

Key insights

AI-specific challenges

1

Transparency Gap in AI Decision-Making

72% of patients demanded transparency in how healthcare AI makes decisions

2

Clinician-as-Interpreter

80% of patients were comfortable with AI in their care, but only if clinicians understood and could explain it

3

Privacy Risks

48.4% of clinicians agreed that using AI tools can increase risks to patient privacy

I mapped the system of stakeholder relations in the clinical research context to understand the interactions and value exchanges between them

Problem statement

How might we create a single source of transparency that helps patients communicate clearly and supports clinicians in using AI ethically and efficiently, despite time constraints, cultural gaps, and limited system guidance?

Design objectives

I set three clear compasses to guide our design direction

Transparency

Make AI’s role explainable and visible in care decisions

Voice

Ensure families can share nuanced cultural and linguistic context

Reflection

Support clinicians in reviewing, adapting, and communicating AI outputs

Ideation & Workshops

I made ideas spark through complexity

I organized brainstorming sessions where we explored human-AI collaboration models, balancing automation with clinician oversight. While we had many great ideas, I made sure to prioritize concepts that would best bring out our design objectives of transparency, voice, and reflection.

Ideas brainstormed based on critical user needs

We didn't implement all of them because…

Could stigmatize patients by labeling them based on race or diagnosis, reducing trust instead of building it.

Information overload in waiting rooms could confuse patients and overwhelm clinicians.

Low technical feasibility would create friction for patients and providers instead of integrating seamlessly into existing workflows.

Service model

Connecting stakeholders' values and business needs

Service blueprint of the Echo service

Patient side features

AI Explanation Layer

Simple visuals and language showing what the AI analyzed and why (sketched below)

Voice of the Patient

A channel for patients to share feedback on AI-driven results

Research opportunity

A voluntary option for patients to contribute data to strengthen AI models

Wireframe of patient-side phone screen
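To make the AI Explanation Layer concrete, here is a minimal sketch of the kind of payload the patient app might render as visuals and plain language. Everything here is hypothetical: the AIExplanation type and its fields are illustrative assumptions, not Echo's actual data model.

```typescript
// A hypothetical AI Explanation Layer payload (all names illustrative).
// The patient app would render this as simple visuals and plain language.
interface AIExplanation {
  featureAnalyzed: string;        // what the AI looked at, in lay terms
  whyItMatters: string;           // plain-language rationale for the family
  confidence: "low" | "medium" | "high";
  dataSources: string[];          // inputs the model used
  clinicianReviewed: boolean;     // surfaced only after clinician sign-off
}

const example: AIExplanation = {
  featureAnalyzed: "Pronunciation patterns in recorded practice sessions",
  whyItMatters:
    "The AI compared your child's recordings against typical development ranges.",
  confidence: "medium",
  dataSources: ["In-session audio recordings", "Intake language background"],
  clinicianReviewed: true,
};

console.log(example.whyItMatters);
```

Keeping the explanation a structured object, rather than free text, is what would let the same content drive both the simple visuals and the written summary.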

Clinician side features

Bias Flags

Alerts for potential dataset mismatches (e.g., bilingual cases), sketched below

Feedback Loop

Patient feedback integrated into clinicians’ workflows for continuous improvement

Insight community

A collaborative space for clinicians to access research insights from the community to facilitate diagnosis

Wireframe of clinician-side screen
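As an illustration of how a bias flag could work under the hood, the sketch below assumes the platform can compare a patient's language profile against a model card listing the languages in the AI's training data, and raise a warning on a mismatch. The types and the checkDatasetMismatch function are hypothetical, named here only for this example.

```typescript
// Illustrative-only bias-flag rule: flag when a patient's home languages are
// not covered by the speech model's training data (e.g., bilingual cases).
interface PatientProfile {
  languagesSpokenAtHome: string[];
}

interface ModelCard {
  trainingLanguages: string[]; // languages the speech model was trained on
}

interface BiasFlag {
  severity: "info" | "warning";
  message: string;
}

function checkDatasetMismatch(
  patient: PatientProfile,
  model: ModelCard
): BiasFlag | null {
  // Collect any home languages missing from the model's training coverage
  const uncovered = patient.languagesSpokenAtHome.filter(
    (lang) => !model.trainingLanguages.includes(lang)
  );
  if (uncovered.length === 0) return null;
  return {
    severity: "warning",
    message:
      `Model training data may under-represent: ${uncovered.join(", ")}. ` +
      "Review AI output carefully before sharing results with the family.",
  };
}

// Example: a bilingual child whose home language is missing from training data
const flag = checkDatasetMismatch(
  { languagesSpokenAtHome: ["English", "Cantonese"] },
  { trainingLanguages: ["English", "Spanish"] }
);
if (flag) console.log(flag.message);
```

The point of the sketch is that this kind of mismatch check is cheap and could run automatically at intake time, before the clinician ever sees the AI output.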

Co-Design & Iteration

Validating designs with a real-world use case

I conducted a co-design session with a speech-language pathologist, who confirmed the value of contextual intake and AI explanation features. I iterated based on their feedback, simplifying workflows and clarifying client-facing language.

1

Adding a summary of critical information about community resources helps clinicians use them quickly and effectively.

2

Showing personalized bias considerations for a patient's overall file is more useful than listing general bias considerations for AI tools.

3

Adding features like a communication log and an action guide helps clinicians tailor medical communication to each patient's level of concern about AI.

Final design and key features

A more transparent and equitable experience for patients and caregivers.

1

Context-rich intake form

Collects detailed patient information to ensure AI considers personal, cultural, and contextual factors

2

AI-usage preference

Documents patients' comfort with AI usage to enable a personalized interaction with the clinician throughout treatment (a rough schema sketch follows)
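As a rough sketch of what "context-rich" might mean in data terms, the intake form and AI-usage preference could map to a schema like the one below. Every field name is invented for illustration and is not Echo's real data model.

```typescript
// Hypothetical intake-form schema (field names invented for illustration).
interface IntakeForm {
  child: {
    age: number;
    languagesSpokenAtHome: string[];
    culturalNotes?: string;      // free text for context the family wants shared
  };
  family: {
    preferredLanguage: string;
    communicationStyle?: string; // e.g. "prefers written summaries"
  };
  aiUsagePreference: {
    comfortLevel: "opt-out" | "cautious" | "comfortable";
    wantsExplanationBeforeUse: boolean;
  };
}

const example: IntakeForm = {
  child: { age: 5, languagesSpokenAtHome: ["English", "Vietnamese"] },
  family: { preferredLanguage: "Vietnamese" },
  aiUsagePreference: { comfortLevel: "cautious", wantsExplanationBeforeUse: true },
};

console.log(example.aiUsagePreference.comfortLevel);
```

Capturing the AI-usage preference as structured data is what would let the clinician-side views adapt their guidance to each family's comfort level.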

Detailed patient information and AI integration guidance that make clinicians feel confident.

1

Context-rich intake view

Provides clinicians with a comprehensive patient profile to guide accurate, personalized decisions

2

Clinician network board

Connects clinicians to shared research, cases, and AI insights across their professional network

Community research resources that boost engagement and continuous improvement in the healthcare system.

1

Research resource board

Central hub for accessing datasets, guidelines, and tools to design, monitor, and refine AI systems

Impact metrics

How I would evaluate impact

Increased satisfaction with the application process

Parent users submit an enriched intake form that lets them add contextual and cultural details; their perceived usability or satisfaction scores would be compared against the previous form.

Reduction in clinician prep time

Time spent preparing for sessions when using Echo's AI-supported summaries and context views, which flag potential biases.

Improved perceived transparency of AI use

Parents' understanding of how AI is (or isn't) used in their child's care, measured by a post-visit survey, together with clinicians' perceived transparency after using Echo Insights for guidance.
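If these metrics were instrumented, each reduces to a simple percent-change comparison. The sketch below shows the arithmetic with placeholder numbers only; none of these values are real results, and the inputs (survey scores, prep minutes) are assumptions about what Echo would log.

```typescript
// Placeholder arithmetic for the three metrics above; no real data.
function percentChange(before: number, after: number): number {
  return ((after - before) / before) * 100;
}

// 1. Satisfaction with the application process (hypothetical 1-5 survey scores)
const satisfactionDelta = percentChange(3.2, 4.1); // vs. the previous form

// 2. Reduction in clinician prep time (hypothetical minutes per session)
const prepTimeSavedPct = -percentChange(45, 30);   // positive = time saved

// 3. Perceived transparency of AI use (hypothetical post-visit survey, 1-5)
const transparencyDelta = percentChange(2.8, 4.0);

console.log({ satisfactionDelta, prepTimeSavedPct, transparencyDelta });
```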

Learnings

0 to 1 is a process of deconstructing and rebuilding the mindset

Although this was not my first time working in the healthcare domain, leading a team to explore an unfamiliar space pushed me to be even more proactive. A key challenge I faced was the need to quickly adjust our project direction based on emerging research. To keep the team aligned, I organized debriefing sessions after each interview and reflection meetings after every research round, which proved effective in fostering shared understanding and agility.

I’m proud that through hard work and the courage to step outside our comfort zones, we achieved a well-deserved accomplishment!

The amazing Healthcare and Humanity team! What a journey we've been through together!!

Thanks for stopping by;)

Glad I was able to share a piece of me with you.

The story goes on…


@ Janet Chen, 2025
