
Team: Lead Product Designer (that's me!), Project Manager, 4-5 Developers, QA
Tools Used: Figma, Maze, Confluence, Jira, GSuite, Slack
SonderMind is a mental health platform that connects individuals with licensed therapists and mental health professionals. Clinical questionnaires (CQs), also known as clinical assessments, are a key component in this process, playing a crucial role in diagnosing conditions, developing personalized treatment plans, and tracking progress. SonderMind leverages these assessments to provide clients with tailored care and to equip therapists with essential information for delivering effective treatment.
The Problem
Although CQs are a proven tool that improves treatment outcomes and benefits both clients and therapists, our data indicates significant underutilization: only 66% of SonderMind clients complete their initial Baseline CQs, and only 24% complete subsequent Ongoing CQs. Additionally, only 29% of therapists regularly review their clients' CQ results.
Our Goal
Increase client and therapist engagement with CQs by making substantial UX/UI improvements to touchpoints throughout the CQ journey.
My Process
1. Conducted an in-depth analysis of recent company research and current data on clinical assessments, and created artifacts to present key findings to stakeholders. 🕵️
2. Facilitated in-depth stakeholder discussions of the current clinical assessment user journey with Product, Clinical, and C-Suite to gather additional insights. 🤿
3. Analyzed the competitive landscape of companies utilizing clinical assessments. 🏞️
4. Developed high-level recommendations to address identified pain points for users and providers throughout the CQ journey. 💡
What are Clinical Questionnaires (CQs)?
In a nutshell, clinical questionnaires are standardized self-assessments that clients may complete at the beginning of and throughout treatment to measure different mental health symptoms.
For example, the "GAD-7" is a seven-question assessment measuring Generalized Anxiety Disorder symptoms with questions like "Over the last two weeks, how often have you had trouble relaxing?"
These self-assessments are a quick, easy, and important way for clients to share how they’re really doing with their therapists.
Mapping User Journeys to Identify Pain Points and Plan Development
To gain insight into current pain points and identify opportunities for improvement at various touchpoints, I developed several key artifacts, including user journey maps, which were crucial for facilitating stakeholder discussions and achieving alignment.
Impact:
- A shared understanding of how each part of the CQ experience affected the rest of the system.
- Clearer prioritization by breaking the work into phases, from quick wins to more involved follow-up efforts.
- Alignment on scope and priorities, resulting in a clear roadmap and product brief.

Timeline of notifications and assessment dispatch in the current client CQ journey with key pain points

Comparing current client CQ journey with proposed MVP and later phase improvements
Improvements throughout the CQ journey
Although our proposal encompassed numerous individual updates, the following examples highlight some of our key recommended solutions for the project:
PRIOR TO CQ COMPLETION
Improved dynamic email notifications and messaging to demonstrate value and increase buy-in for clients
We previously sent several redundant email notifications (e.g., three different emails for three separate assessments), and the email content was outdated and visually unengaging.
To reduce reported email fatigue, I proposed new batching logic to consolidate relevant (or remove irrelevant) email content. I also outlined how to increase the value of these emails by incorporating more engaging visuals, psychoeducation, and personalized progress updates.


COMPLETING THE CQ
Increasing client trust & sense of safety
A user survey revealed that concerns about data privacy and discomfort with personal questions were contributing to drop-off during assessments.
To address this, I designed an in-context modal that provides clear explanations for why each question is asked. By giving people the option to pause and understand the purpose behind sensitive questions, the experience feels more transparent and respectful, helping build trust and encouraging users to continue with the assessment.
CQ EXPERIENCE
Creating a more supportive environment for completing clinical questionnaires
Survey feedback showed that 61% of respondents missed clinical questionnaires simply because they forgot to complete them, not because they were unwilling. Rather than treating this as a compliance issue, I reframed it as a timing and context problem.
To address this, I proposed and designed a virtual “waiting room” that opens 15 minutes before a session, allowing clients to complete their questionnaires at a moment when they are already mentally preparing for care. This approach reduced reliance on reminders, respected clients’ real-world constraints, and made completion feel like a natural part of the care experience rather than an added burden.


POST-CQ
More intelligent and comprehensive score interpretations
Previously, clients received little context or value from their assessment results, which limited understanding and reduced long-term engagement with clinical questionnaires.
I addressed this gap through a multi-pronged approach focused on insight, action, and shared understanding:
1) I designed more personalized result interpretations and next-step recommendations, using AI to surface patterns and suggest meaningful, actionable follow-ups rather than static scores.
2) I introduced a way for clients to flag results for discussion with their therapist, directly addressing survey feedback that CQ results were often not reviewed during sessions and helping integrate assessments into real care conversations.
3) I designed a new holistic results view that brought all CQ dimensions into a single visualization, replacing a fragmented, one-metric-at-a-time experience with a more comprehensive picture of mental health over time, which is invaluable for both clients and providers.

Measuring success
To evaluate the impact of these changes over time, I partnered with stakeholders to define clear success signals across both client and therapist experiences. This included designing separate satisfaction surveys for clients and therapists, as well as identifying key behavioral metrics such as CQ completion rates and provider review of results, to understand whether the improvements were meaningfully supporting care.

Preview of the final client satisfaction survey I created in Maze
Outcomes & Future Opportunities
This work spanned roughly five months and evolved significantly as new insights emerged from research, stakeholder input, and shifting business priorities. Navigating that ambiguity required close collaboration across design, product, clinical, and engineering partners, and I’m proud of how the team adapted while continuing to move the experience forward in thoughtful, user-centered ways.
Although the initiative was ultimately deprioritized following a round of layoffs, several of the UX and design patterns introduced through this work have since been adopted by other teams. Even without full impact metrics, the project helped establish clearer principles for how clinical questionnaires can feel more supportive, transparent, and integrated into care. I see this work as a strong foundation for future roadmap efforts and a meaningful step toward improving how clients and providers engage with clinical data.

