
Case Study 03 · Stanford LLM x Law Hackathon, 2nd place, 2025

Voice AI legal assistant

36 hours, voice first, what we'd ship next.

Role
Product and GTM
Event
Stanford LLM x Law Hackathon 2025
Outcome
2nd place, jury award
Modality
Voice-first, chat fallback
Emma with teammates on stage at the Stanford LLM x Law Hackathon, runner-up announcement on screen behind them

Problem

Personal-injury intake breaks at the human edge: a client calls a clinic in distress, often just after an accident, sometimes with limited English, and the lawyer’s capacity to take that first call is the binding constraint. Chatbots fail this user.

Hypothesis

A voice-first assistant beats a chatbot for first-touch legal intake on three axes that matter: accessibility, emotional register, and completion rate. Voice isn’t novelty here; it’s the right modality for this user in this moment.

User flow

Five phases: greeting, incident capture, witness and insurance, urgency triage, lawyer handoff. Each phase had an explicit voice script and a chat fallback.
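A minimal sketch of how that flow could be wired up, assuming a simple phase-to-phase state machine in TypeScript. The phase names mirror the flow above; the prompt copy, types, and field names are illustrative, not what we shipped in 36 hours.

```ts
// Illustrative sketch of the five-phase intake flow as a state machine.
// Each phase carries a voice script and an equivalent chat fallback prompt.

type Phase =
  | "greeting"
  | "incident_capture"
  | "witness_and_insurance"
  | "urgency_triage"
  | "lawyer_handoff";

interface PhaseConfig {
  voiceScript: string;  // what the assistant says aloud
  chatFallback: string; // equivalent text prompt if the user switches to chat
  next: Phase | null;   // null marks the end of intake
}

const INTAKE_FLOW: Record<Phase, PhaseConfig> = {
  greeting: {
    voiceScript: "Hi, I'm here to help after your accident. Take your time.",
    chatFallback: "Hi, I can help you start your claim. What happened?",
    next: "incident_capture",
  },
  incident_capture: {
    voiceScript: "Can you tell me what happened, in your own words?",
    chatFallback: "Please describe the incident: when, where, and what happened.",
    next: "witness_and_insurance",
  },
  witness_and_insurance: {
    voiceScript: "Was anyone else there? Do you have insurance details handy?",
    chatFallback: "List any witnesses and your insurance details, if available.",
    next: "urgency_triage",
  },
  urgency_triage: {
    voiceScript: "Are you hurt right now, or is anything urgent I should flag?",
    chatFallback: "Is anything urgent (injuries, deadlines, police involvement)?",
    next: "lawyer_handoff",
  },
  lawyer_handoff: {
    voiceScript: "Thanks. I'm sending a summary to the lawyer, who will call you back.",
    chatFallback: "Your summary is on its way to the lawyer for a callback.",
    next: null,
  },
};
```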

Voice vs chat: the decision

Chat-first felt safer to build but failed our test users. Voice-first with a chat escape hatch let the distressed user complete intake without losing the user who hates voice. Asymmetric beat symmetric.
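A minimal sketch of that escape hatch, assuming the switch to chat is triggered by an explicit request or repeated speech-recognition misses; the signals and thresholds here are illustrative, not the hackathon build.

```ts
// Illustrative modality switch: voice by default, chat as a one-way escape hatch.

type Modality = "voice" | "chat";

interface SessionSignals {
  userAskedForChat: boolean;    // e.g. "can I just type?"
  failedTranscriptions: number; // consecutive ASR misses on the current question
}

function chooseModality(current: Modality, signals: SessionSignals): Modality {
  if (current === "chat") return "chat"; // asymmetric: never force a user back to voice
  if (signals.userAskedForChat) return "chat";
  if (signals.failedTranscriptions >= 2) return "chat";
  return "voice";
}
```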

What I would ship next

A clinic-ready beta with the structured-summary handoff as the core feature and voice as the entry point. The product is the brief; the voice is the invitation.
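A minimal sketch of what that structured summary could look like, assuming the fields a lawyer would want before the first callback; the schema is illustrative, not a spec.

```ts
// Illustrative shape of the brief handed to the lawyer after intake.

interface IntakeSummary {
  client: { name: string; phone: string; preferredLanguage: string };
  incident: { date: string; location: string; description: string };
  witnesses: string[];
  insurance: { carrier?: string; policyNumber?: string };
  urgency: "routine" | "time_sensitive" | "emergency";
  transcriptExcerpts: string[]; // key quotes, in the client's own words
}
```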

Result

2nd

Out of 50+ teams. Jury feedback emphasized the user-flow and modality decisions over the underlying model choice, which is the right read. The product is the design.