2025
JUNO AI
I led the end-to-end design of the agentic intake workflow for litigation, transforming intake from a passive upload step into an active, guided process.
Role
Design Owner
Team
1 product manager
1 designer
Juno Engineering Team
Skills
Design System
Competitive Analysis
Prototyping
Agent Interaction Design
Branding/Visual Design
Duration
6 months
Official Website
junolaw.ai
Where does this information come from?
Is it accurate?
What should I do next?

MENTAL MODEL
The mental model of this experience is built around the review journey: starting with selecting a document, moving through understanding its content, and ending with validating AI-extracted data associated with that document.
By grounding the design in this flow—Navigate → Read → Validate—we provide users with a clear sense of progression while keeping the complexity of AI-assisted review intuitive and manageable.

“3-Step” Structure
Step 1 of 3: Uploading
Document upload takes time, and edge cases—like failed uploads—require a dedicated space where users can track progress.

“3-Step” Structure
Step 2 of 3: Verifying
At the same time, we need to surface the auto-filled questionnaire alongside its sources, so users can review documents while verifying the information.

“3-Step” Structure
Step 3 of 3: Completing
Finally, a summary at the end of the intake process provides users with a clear sense of completion.

FEEDBACK
When tested with lawyers,
they said the flow reflects their real workflows: the panel arrangement mirrors how they arrange their monitors, and everything stays transparent and explainable during verification. However, one lawyer pointed out that he couldn't tell why tasks were generated, since they appeared only at the end; in practice, most of their tasks surface while reviewing one or more files.
When delivered to the engineering team,
while category-specific extraction offered a more structured solution, implementing it reliably across document types would require significant time and engineering investment.
Final Workflow — bring the questionnaire back, and merge tasks into review.
After the feedback, I made two design decisions:
Merge Tasks Into Review
I integrated suggested tasks into the review step to reduce context switching and support action at the moment of insight.
Final Workflow
Merge Tasks Into Review
Task discovery happens naturally during reading, because users are most engaged with the context at that moment. This reduces cognitive switching.

Final Workflow
The questionnaire is back
As the previously proven way to structure data, the questionnaire gives users a familiar format instead of asking them to piece together scattered extracted data.

Design Principles Learned
Transparency builds confidence.
Showing document sources and end-of-flow summaries makes AI results trustworthy.
Automation needs context.
Structured guidance helps AI output stay understandable and reviewable.
Balance ambition with capability.
Design vision must scale with real AI performance — ambitious but grounded.
Why This Matters - The Impact
Industry studies show that traditional litigation intake is highly time-consuming: a paralegal typically spends 5–7 hours reviewing, naming, categorizing, and extracting information from about 100 documents before a case can begin (ABA Legal Technology Survey, 2023; Thomson Reuters Legal Ops Benchmark, 2022).
With Juno AI Intake, the same workload is completed in under two minutes of automated processing, producing a structured, ready-to-review questionnaire. Even with 30–45 minutes of human verification, this represents an 85–90% reduction in onboarding time, turning what was once a full-day administrative task into a single, guided review session.
More importantly: we shifted AI from a passive tool to an active participant in legal work.


