Tackling IBM's challenge to design trust and transparency for AI-powered healthcare

Timeline

Nov. 2024 – May 2025 (8 months)

My role

Team lead, UX Designer

Team

2 UX Researchers

3 UX Designers

Context

Mitigating bias in AI healthcare is one of the biggest tasks facing healthcare systems worldwide

During the 2025 Student Service Design Challenge, IBM challenged teams to design equitable services that help humans navigate bias. After secondary research, we chose to tackle the challenge in the complex space of AI healthcare, exploring and paving the way toward an equitable, unbiased future.

$4.9 trillion

spent on U.S. healthcare, creating pressure for more efficient, tech-enabled solutions

> 40%

underdiagnosis rate from AI diagnostic tools experienced by underserved populations

Among 11 high-income countries…

U.S. ranked last in health equity, reflecting deep disparities in care

Key achievements

Led the design of Echo, which won second place in the IBM design challenge among 200+ international teams!

The design is featured on the official website!

As team lead, I managed project planning and timelines, balanced critical decisions, and guided research and design efforts to keep the team aligned and collaborative, successfully meeting every milestone throughout the 8-month design challenge

Collaborated across disciplines by engaging clinicians, researchers, and AI experts, facilitating cross-disciplinary conversations to surface pain points and align diverse perspectives toward equitable AI solutions

Proposed scalable design solutions by leading workshops with patients, clinicians, and AI engineers to balance stakeholder needs with technical feasibility

Summary

ECHO is a digital platform that fosters transparent communication between speech-language pathologists (SLPs) and families around the use of AI, addressing critical gaps in AI trust, cultural responsiveness, and explainability within pediatric speech therapy.

Problem

Families and SLP clinicians lacked a clear way to communicate about how AI was being used in pediatric speech therapy. This created mistrust, misdiagnoses, and anxiety, especially for immigrant families.

Outcome

An AI-powered platform that captures family context and makes AI use transparent, fostering trust and culturally responsive care.

The application process is streamlined online and includes a dedicated section on AI usage information.

Problem framing

From clinical researchers to speech-language pathologists

To deeply understand the problem space, I spoke with a diverse range of professionals working in the healthcare system:

  • Clinical researchers in Seattle focused on recruitment and trial design

  • Health equity experts at UW studying access and bias

  • Speech-Language Pathologists (SLPs) working in various settings (hospitals, schools, private clinics)

  • AI/ML engineers building healthcare tools

  • Patients and caregivers with lived experience in speech therapy

1

72% of patients demanded transparency in how healthcare AI makes decisions

2

80% of patients were comfortable with AI in their care, but only if clinicians understood and could explain it

3

48.4% of clinicians agreed that using AI tools can increase risks to patient privacy

I mapped the system of stakeholder relationships in the clinical research context to understand the interactions and value exchanges between them

Problem statement

How might we create a single source of transparency that helps patients communicate clearly and supports clinicians in using AI ethically and efficiently, despite time constraints, cultural gaps, and limited system guidance?

Design objectives

Setting a clear compass to guide our design direction

Transparency

Make AI’s role explainable and visible in care decisions

Voice

Ensure families can share nuanced cultural and linguistic context

Reflection

Support clinicians in reviewing, adapting, and communicating AI outputs

Ideation & Workshops

Making ideas spark amid complexity

I initiated brainstorming sessions where we explored human-AI collaboration models, balancing automation with clinician oversight. While we had many great ideas, I made sure to prioritize concepts that would best bring out our design objectives of transparency, voice, and reflection.

Patient side features

AI Explanation Layer

Simple visuals and language showing what the AI analyzed and why

Voice of the Patient

A channel for patients to share feedback on AI-driven results

Research opportunity

Voluntary option for patients to contribute data to strengthen AI models

wireframe of patient side phone screen

Clinician side features

Bias Flags

Alerts for potential dataset mismatches (e.g., bilingual cases)

Feedback Loop

Patient feedback integrated into clinicians’ workflows for continuous improvement

Insight community

A collaborative space for clinicians to access research insights from the community to facilitate diagnosis

Wireframe of clinician side screen

Service blueprint of Echo

Co-Design & Iteration

Validating designs with a real-world use case

I conducted a co-design session with a speech-language pathologist, who confirmed the value of contextual intake and AI explanation features. I iterated based on their feedback, simplifying workflows and clarifying client-facing language.

1

Adding a summary of critical information about community resources helps clinicians use them quickly and effectively

2

Showing personalized bias considerations based on the patient's overall file is more useful than listing general bias considerations of AI tools

3

Adding features like a communication log and action guide helps clinicians navigate medical communication tailored to each patient's level of AI concern

Final design and Key features

Future healthcare is where trust and AI power "echo"


A more transparent and equitable experience for patients and caregivers.

1

Context-rich intake form

2

AI-usage preference

Detailed patient information and AI integration guidance that make clinicians feel confident.

1

Context-rich intake view

2

Clinician network board

Community research resources that boost engagement and drive continuous improvement in the healthcare system.

1

Research board

Impact metrics

How I would evaluate impact

Increased satisfaction with the application process

Parent users submit an enriched intake form that lets them add contextual and cultural details; we would compare the perceived usability and satisfaction scores against the previous form.

Reduction in clinician prep time

Time spent preparing for sessions when using Echo's AI-supported summaries and context views, which flag potential biases.

Improved perceived transparency of AI use

Parents' understanding of how AI is (or isn't) used in their child's care, as measured by a post-visit survey, and clinicians' perceived transparency after using Echo Insights for guidance.

Learnings

Going from 0 to 1 is a process of deconstructing and rebuilding your mindset

Although this was not my first time working in the healthcare domain, leading a team to explore an unfamiliar space pushed me to be even more proactive. A key challenge I faced was the need to quickly adjust our project direction based on emerging research. To keep the team aligned, I organized debriefing sessions after each interview and reflection meetings after every round, which were effective approaches that fostered shared understanding and agility.

I’m proud that through hard work and the courage to step outside our comfort zones, we achieved a well-deserved accomplishment!

The amazing Healthcare and Humanity team! What a journey we've been through together!!