
2025

JUNO AI

I led the end-to-end design of the agentic intake workflow for litigation, transforming intake from a passive upload step into an active, agentic process.

Role

Design Owner

Team

1 product manager

1 designer

Juno Engineering Team

Skills

Design System

Competitive Analysis

Prototyping

Agent Interaction Design

Branding/Visual Design

Duration

6 months

Official Website

junolaw.ai

Legal AI today can answer questions and draft documents—but it can’t run a case.

When a personal injury case comes in, legal teams spend 5–7 hours manually reviewing and structuring case documents before any meaningful work can begin.

But handing control to AI? Lawyers hesitate—“That might be too ambitious.”
The core challenge here is trust.

At the case intake stage, lawyers don’t just need answers. They need to understand:


1. Where does this information come from?

2. Is it accurate?

3. What should they do next?

This created a key design question:


How might we design an AI intake flow that is autonomous enough to save time, but transparent enough to be trusted in high-stakes legal work?


Design Evolution — From Extraction to Collaboration

1

Early Prototype

This is what the UI looked like when I first joined the project.

2

“3-Step + 3-Panel” Structure

Context-aware, document-driven workflow.

3

Integrated, Agentic Workflow

Synthesizes reviewing into a scalable and adaptive layout.

1

Early Prototype — a linear flow implemented through sequential modal windows.


This flow introduced three major usability flaws:


No transparency

Users saw the questionnaire filled in without knowing where the data came from; everything was hidden in a "black box".

Disconnected workflow

Documents and extracted data were separated, preventing users from validating information in context and forcing constant back-and-forth.

Constrained interface

The modal-based UI made it impossible to review real documents; users had to scroll and switch between content constantly.

SO WE LEARNED

Automation without transparency creates mistrust. Users need to see and understand how AI reaches conclusions.

2

“3-Step + 3-Panel” Structure — context-aware, document-driven. Brings the process out of the "black box".

MENTAL MODEL

The mental model of this experience is built around the review journey: starting with selecting a document, moving through understanding its content, and ending with validating AI-extracted data associated with that document.

By grounding the design in this flow—Navigate → Read → Validate—we provide users with a clear sense of progression while keeping the complexity of AI-assisted review intuitive and manageable.

SOLUTION: 3-PANEL STRUCTURE

With AI assisting in extracting and structuring information, the opportunity emerged to design a workspace where navigation, reading, and validation could coexist seamlessly.

By bringing these steps into a full-page, integrated view, the design reduces context switching and enables a continuous review experience where users can verify information efficiently and with confidence.


“3-Step” Structure

Step 1 of 3: Uploading

Document upload takes time, and edge cases—like failed uploads—require a dedicated space where users can track progress.


Step 2 of 3: Verifying

At the same time, we need to surface the auto-filled questionnaire alongside its sources, so users can review documents while verifying the information.


Step 3 of 3: Completing

Finally, a summary at the end of the intake process provides users with a clear sense of completion.

FEEDBACK

When tested with lawyers,

they said the flow reflects their real workflows; the panel arrangement mirrors how they arrange their monitors, and everything felt transparent and explainable during verification. However, one lawyer pointed out that he didn't understand why tasks were generated only at the end: in practice, most of their tasks come up while reviewing one or more files.

When delivered to the engineering team,

while category-specific extraction offered a more structured solution, implementing it reliably across document types would require significant time and engineering investment.

3

Final Workflow — bring the questionnaire back, and merge tasks into review.

After the feedback, I made two design decisions:

Merge Tasks Into Review

I integrated suggested tasks into the review step to reduce context switching and support action at the moment of insight.

Bring questionnaire back

I reintroduced the questionnaire to provide a scalable structure for information while enabling faster delivery.


Final Workflow

Merge Tasks Into Review

Task discovery happens naturally during reading, because users are most engaged with context at that moment. This reduces cognitive switching.


Questionnaire is back

As a previously proven way to structure data, the questionnaire gives users a familiar format instead of forcing them to piece together scattered extracted data.

Design Principles Learned

Transparency builds confidence.

Showing document sources and end-of-flow summaries makes AI results trustworthy.

Automation needs context.

Structured guidance helps AI output stay understandable and reviewable.

Balance ambition with capability.

Design vision must scale with real AI performance — ambitious but grounded.

Why This Matters - The Impact

Industry studies show that traditional litigation intake is highly time-consuming: a paralegal typically spends 5–7 hours reviewing, naming, categorizing, and extracting information from about 100 documents before a case can begin (ABA Legal Technology Survey, 2023; Thomson Reuters Legal Ops Benchmark, 2022).

With Juno AI Intake, the same workload is completed in under two minutes of automated processing, producing a structured, ready-to-review questionnaire. Even with 30–45 minutes of human verification, this represents an 85–90% reduction in onboarding time, turning what was once a full-day administrative task into a single, guided review session.

More importantly: we shifted AI from a passive tool to an active participant in legal work.