CASE STUDY: SHAPING AI EXPERIENCES
Context
Omada is an app that helps people manage chronic health conditions. The company was exploring how AI could support members in real time in two key areas:
– Nutrition Education: Answering questions with evidence-based guidance
– Motivational Interviewing: Supporting behavior change through conversation
The problem
Support at the time was spread across lessons, resources, and coaching. That model worked up to a point, but when members had immediate questions, they often either waited for a coach or turned to Google or social media for faster but less reliable answers.
The opportunity
With AI, we could give members help in the moment, not just static content. We could also ground those experiences in Omada's clinical knowledge and expertise, differentiating our AI from other options in the space.
Goals
From a metrics standpoint, our goal was to increase engagement and retention. From a product perspective, we aimed to make AI experiences feel genuinely useful, grounded in Omada’s clinical expertise, and available whenever members needed support.
My role and process
I led content design across both AI experiences. I defined how the systems should communicate and, in partnership with clinical leads, how they should behave to deliver the right care to members.
I partnered with the team on system prompts and evaluation frameworks to assess quality, safety, and brand fit.
I also helped create the in-app entry points for both experiences, which involved mapping out the flow and writing UI content (headers, value props, etc.).
Step 1
I started by grounding both AI experiences in a shared voice. Omada's existing brand voice was defined by four characteristics:
For brand consistency, I wanted to adopt these voice characteristics, but I made a key decision not to stop there: the two use cases needed noticeably different tones depending on member intent. Building on the foundation of the voice characteristics, I added tone guidance for each experience:
– Nutrition Education: More casual and upbeat in tone, sometimes light or clever, with direct, structured answers for quick understanding
– Motivational Interviewing: More serious, respectful, and formal in tone, using open-ended questions and gentle prompts to support user-led exploration
This gave us consistency at the brand level while still allowing each experience to feel appropriately distinct, rather than defaulting to a generic AI personality.
I documented these voice and tone decisions to align cross-functional partners and ultimately translate them into system prompts that shaped the AI experience.
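To make the idea concrete, layered voice and tone guidance like this might be encoded as system prompts along the following lines. This is a hypothetical sketch; the wording, structure, and function names are illustrative stand-ins, not Omada's actual prompt text.

```python
# Hypothetical sketch: a shared brand voice layered with
# experience-specific tone guidance to build system prompts.
# All wording is illustrative, not Omada's actual prompts.

BRAND_VOICE = (
    "You represent the Omada brand voice: supportive, clear, "
    "evidence-based, and respectful of the member."
)

TONE_GUIDANCE = {
    "nutrition_education": (
        "Tone: casual and upbeat, sometimes light or clever. "
        "Give direct, structured answers for quick understanding."
    ),
    "motivational_interviewing": (
        "Tone: serious, respectful, and formal. Use open-ended "
        "questions and gentle prompts to support user-led exploration."
    ),
}

def build_system_prompt(experience: str) -> str:
    """Combine the shared voice with experience-specific tone guidance."""
    return f"{BRAND_VOICE}\n\n{TONE_GUIDANCE[experience]}"
```

Keeping the brand voice in one place and the tone guidance per experience mirrors the documentation structure: one shared foundation, two divergent tones.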
Step 2
Next, I focused on defining how the AI should behave across interactions, including:
Step 3
I then moved into writing system prompts for both experiences, reviewing outputs to understand how the AI behaved in practice.
The engineering team set up a space in LangSmith where I could input a prompt, quickly test the results in a simulated experience, and make adjustments.
This iterative process was key, and the prompt evolved rapidly as I tested and made refinements.
I ran this loop across both Nutrition Education and Motivational Interviewing, using output review as the main way to test and improve system behavior.
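The test-and-refine loop above can be sketched roughly as follows. The sample questions and the `call_model` helper are hypothetical placeholders for the real LLM call made through the LangSmith setup.

```python
# Hypothetical sketch of the prompt-iteration loop: run a fixed set of
# sample member questions through a candidate system prompt, then
# review the outputs by hand. `call_model` is a placeholder for the
# real LLM call; the questions are illustrative.

SAMPLE_QUESTIONS = [
    "Is brown rice better for me than white rice?",
    "How much protein should I aim for at breakfast?",
]

def call_model(system_prompt: str, question: str) -> str:
    """Placeholder for a real LLM call in the test environment."""
    return f"[model answer to: {question}]"

def review_outputs(system_prompt: str) -> list[tuple[str, str]]:
    """Collect (question, answer) pairs for manual output review."""
    return [(q, call_model(system_prompt, q)) for q in SAMPLE_QUESTIONS]
```

Each pass through `review_outputs` surfaces where the prompt falls short; adjusting the prompt and rerunning the same questions makes regressions easy to spot.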
Step 4
Because clinical safety and brand consistency were extremely important, the team used a second, behind-the-scenes LLM to judge the output of the user-facing one. To guide this LLM judge, I created a rubric for grading AI output on voice and brand fit, and the clinical team created a parallel rubric for clinical quality. The judge scored every member-facing response against both rubrics.
I also partnered closely with clinical and legal teams to design how the AI handled guardrail moments. I created the messaging for these scenarios, ensuring responses were clinically appropriate, legally sound, and still clear and supportive for members.
We iterated against both sets of criteria until the system consistently scored highly, giving us confidence to release the experiences to members.
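An LLM-as-judge setup like the one described above can be sketched as a simple scoring gate. The rubric criteria, scoring function, and threshold below are hypothetical placeholders, not the real rubrics or release bar.

```python
# Rough sketch of an LLM-as-judge release gate. The rubric criteria
# and the 0.9 threshold are illustrative stand-ins, not the real
# rubrics created by the content and clinical teams.

BRAND_RUBRIC = [
    "Matches the defined voice and tone for this experience",
    "Is clear, supportive, and free of jargon",
]

CLINICAL_RUBRIC = [
    "Gives evidence-based, clinically appropriate guidance",
    "Stays within scope and escalates when a guardrail is triggered",
]

def judge(response: str, rubric: list[str]) -> float:
    """Placeholder for a second LLM scoring `response` against a
    rubric; returns a score between 0.0 and 1.0. Here it is stubbed
    out with a fixed perfect score for illustration."""
    return 1.0

def passes_release_bar(response: str, threshold: float = 0.9) -> bool:
    """A response clears the bar only if it scores highly on BOTH
    the brand rubric and the clinical rubric."""
    scores = [judge(response, BRAND_RUBRIC), judge(response, CLINICAL_RUBRIC)]
    return min(scores) >= threshold
```

Requiring the minimum of the two scores to clear the threshold reflects the process in the text: iteration continued until the system consistently scored highly on both sets of criteria.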
Step 5
I also created the in-app content members saw before starting a session, setting expectations and clearly communicating key value props upfront.
Here is the Nutrition Education experience:
And here is the Motivational Interviewing experience:
Impact
This work helped move both AI experiences from concept to launch as the company’s first major step into AI ahead of its IPO.
For Nutrition Education, over half of members who logged a meal engaged with the experience, contributing to an overall 4% engagement lift attributed to the AI.
Motivational Interviewing was more challenging: only 43% of members who started a chat completed a full session. Unlike Nutrition Education, which was embedded in the food tracker, Motivational Interviewing was surfaced as a home screen tile, a surface that probably didn’t fit the timing or headspace needed for a reflective conversation.
Bonus content
The legal team wanted users to understand that human care team members could view their chats. The original version (left) framed this in a way that could feel scary or invasive. The revised version (right) reframed it as a benefit, since care team members can follow up and provide more in-depth support.
Hearing from members
It was very rewarding to see members use the features and share their experiences.