UX Research App
An AI-powered UX research assistant with 6 specialized tools—from personas to accessibility audits—built with Next.js 16, Vercel AI SDK, and Google Gemini.
Demo exchange:

Assistant: "Hi! I can help with UX research. Describe your product and I'll generate personas, empathy maps, and more."
User: "Create a persona for a fitness app targeting busy parents"
Tool: Calling createPersona... Generating structured output...
Persona card: Sarah Chen, 34 • Marketing Manager • Denver, CO
"I need workouts that fit between school drop-off and my first meeting."
Overview
The Problem
UX research is expensive and time-consuming. Small teams skip it entirely, leading to products built on assumptions rather than user understanding. Professional research tools cost thousands per year.
The Solution
A chat-based AI assistant with 6 specialized research tools that generates personas, empathy maps, user flows, pain point analyses, accessibility audits, and heuristic evaluations from natural language descriptions.
My Role
Sole AI engineer and frontend developer — designed the system prompt, built 6 research framework functions, implemented tool orchestration with streaming, and created 6 rich card components to visualize results.
Key Deliverables
6 AI tools with Zod-validated schemas, 6 visual card components, a streaming chat interface with tool-aware message routing, and a companion MCP server for IDE integration.
Architecture & Design Decisions
Every decision was driven by one goal: make professional UX research accessible through a simple chat interface.
Key Technical Decisions
Pure Functions Over API Calls
Each research tool is a pure TypeScript function that takes structured input and returns structured output. No external API calls — the AI model does the creative work, the framework enforces the structure.
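The pattern can be sketched in a few lines. This is an illustrative simplification, not the app's actual source; the interface fields and function name are hypothetical stand-ins for the real framework code:

```typescript
// Hypothetical sketch of the pure-function pattern. The AI model supplies
// the creative fields (goals, frustrations); the function only enforces
// shape and derives structure. No I/O, no API calls.

interface PersonaInput {
  product: string;
  targetUser: string;
  goals: string[];
  frustrations: string[];
}

interface PersonaOutput extends PersonaInput {
  goalCount: number;
  summary: string;
}

function buildPersona(input: PersonaInput): PersonaOutput {
  // Pure: the same input always yields the same output.
  return {
    ...input,
    goalCount: input.goals.length,
    summary: `${input.targetUser} using ${input.product}`,
  };
}
```

Because nothing here touches the network, the function can be unit-tested with plain fixtures.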
Tool Orchestration
The AI decides which tool(s) to call based on the user's message. It can chain tools naturally — a persona leads to an empathy map, which reveals pain points to analyze.
Streaming with Rich Cards
Tool results render as visual cards (not raw JSON) while text streams token-by-token. The polymorphic message router matches tool names to card components in real-time.
Polymorphic Message Routing
A TOOL_CARDS record maps each tool name to its React component. When a tool result arrives, the router looks up the component and renders the correct card — no switch statements needed.
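The routing idea can be shown with rendering simplified to string-returning functions; in the real app the record values are React components, so treat this as a minimal sketch of the lookup, not the actual component code:

```typescript
// Sketch of the TOOL_CARDS polymorphic router (hypothetical entries).
// Real values are React card components; strings stand in for JSX here.

type CardRenderer = (data: Record<string, unknown>) => string;

const TOOL_CARDS: Record<string, CardRenderer> = {
  createPersona: (d) => `PersonaCard(${d.name})`,
  createEmpathyMap: (d) => `EmpathyMapCard(${d.userType})`,
};

// Look up the card by tool name; fall back to raw JSON for unknown tools.
function renderToolResult(toolName: string, data: Record<string, unknown>): string {
  const Card = TOOL_CARDS[toolName];
  return Card ? Card(data) : JSON.stringify(data);
}
```

Adding a new tool means adding one entry to the record; the router itself never changes.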
Zod Schema Contracts
Every tool has a Zod-validated input schema that constrains what the AI can pass. This creates a type-safe contract between the AI model and the framework functions.
Engineering Principles
Let the AI Create
The model generates creative content (names, bios, insights); the framework enforces structure
Show, Don't Tell
Rich visual cards communicate research findings more effectively than walls of text
Chain Naturally
Tools suggest logical next steps — a persona leads to a user flow, pain points lead to an accessibility audit
Validate at Boundaries
Zod schemas at the tool boundary ensure type safety between the AI model and framework code
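What the boundary validation enforces can be sketched without the Zod library: a runtime guard that refuses malformed input before it reaches a framework function. The app itself uses Zod schemas; this hand-rolled check only illustrates the contract, and the field names are hypothetical:

```typescript
// Hand-rolled stand-in for a Zod input schema: a type guard at the
// AI/framework boundary. Anything that fails the check never reaches
// the pure framework function.

interface PersonaInput {
  product: string;
  targetUser: string;
  goals: string[];
  frustrations: string[];
}

function isPersonaInput(value: unknown): value is PersonaInput {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.product === "string" &&
    typeof v.targetUser === "string" &&
    Array.isArray(v.goals) && v.goals.every((g) => typeof g === "string") &&
    Array.isArray(v.frustrations) && v.frustrations.every((f) => typeof f === "string")
  );
}
```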
The 6 Research Tools
Six specialized tools the AI calls based on your description — converting natural language into structured UX research artifacts.
createPersona
User Persona Generator
createPersona({ product: "fitness app", targetUser: "busy parent", goals: [...], frustrations: [...] })
Inputs
- product — the product or service name
- targetUser — type of user (e.g., 'busy parent')
- goals — what the user wants to achieve (2-4)
- frustrations — what frustrates them (2-4)
Returns
Full persona with name, age, occupation, bio, quote, goals, frustrations, behaviors, motivations, and tech comfort level
When Used
When a user describes a product and target audience. The AI extracts goals and frustrations from natural language to populate the input.
createEmpathyMap
Says/Thinks/Does/Feels Mapper
createEmpathyMap({ userType: "new user", context: "onboarding flow", observations: [...] })
Inputs
- userType — the type of user being studied
- context — scenario being mapped
- observations — user quotes, behaviors, thoughts (6-10 items)
Returns
Empathy map with categorized Says, Thinks, Does, Feels quadrants plus actionable insights and user needs
When Used
After a persona is created or when a user describes observed behaviors. Often chained after createPersona.
mapUserFlow
User Journey Mapper
mapUserFlow({ userGoal: "complete checkout", startingPoint: "product page" })
Inputs
- userGoal — what the user is trying to accomplish
- startingPoint — where the journey begins
- steps — specific steps in the flow (optional)
Returns
Journey map with touchpoints, emotional states (positive/neutral/frustrated/confused/anxious), pain points, and opportunities per step
When Used
When mapping a specific user task or workflow. Steps are auto-generated if not provided, making it easy to use with minimal input.
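The per-step emotional states feed directly into pain point analysis. A minimal sketch of that link, with hypothetical field names (the real UserFlow output has more structure):

```typescript
// The emotional states mapUserFlow assigns to each step. Steps with a
// negative state are candidate pain points for the next tool in the chain.

type EmotionalState = "positive" | "neutral" | "frustrated" | "confused" | "anxious";

interface FlowStep {
  action: string;
  emotion: EmotionalState;
}

function extractPainPoints(steps: FlowStep[]): string[] {
  const negative: EmotionalState[] = ["frustrated", "confused", "anxious"];
  return steps
    .filter((s) => negative.includes(s.emotion))
    .map((s) => s.action);
}
```

This is the shape of the chaining the AI performs: a user flow's frustrated steps become the input list for analyzePainPoints.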
analyzePainPoints
Pain Point Prioritizer
analyzePainPoints({ painPoints: [...], product: "e-commerce site" })
Inputs
- painPoints — list of pain points from research
- product — the product being analyzed
Returns
Prioritized list with severity × frequency matrix, category breakdown, impact descriptions, and top recommendation
When Used
After user flows reveal pain points or when a user lists known issues. Categories include usability, performance, accessibility, trust, content, navigation, onboarding, and pricing.
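The severity × frequency prioritization is straightforward to sketch. This is an illustrative reduction (the real tool also categorizes and writes impact descriptions), with 1-5 scales assumed:

```typescript
// Sketch of severity × frequency prioritization. Scales assumed 1-5;
// the product of the two gives a simple priority score.

interface ScoredPainPoint {
  description: string;
  severity: number;  // 1 (cosmetic) to 5 (blocking)
  frequency: number; // 1 (rare) to 5 (every session)
}

function prioritize(points: ScoredPainPoint[]): (ScoredPainPoint & { score: number })[] {
  return points
    .map((p) => ({ ...p, score: p.severity * p.frequency }))
    .sort((a, b) => b.score - a.score);
}
```

The top-scored item becomes the tool's "top recommendation"; ties and categories are handled by the fuller framework logic.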
auditAccessibility
WCAG 2.2 Accessibility Auditor
auditAccessibility({ uiDescription: "...", userGroups: [...], features: [...] })
Inputs
- uiDescription — description of the UI layout and elements
- userGroups — groups to consider (e.g., 'screen reader users')
- features — UI features to audit (e.g., 'navigation menu')
Returns
Issues mapped to WCAG criteria with severity, compliance checklist (perceivable/operable/understandable/robust), inclusive design suggestions, and overall score
When Used
When evaluating a design for accessibility. Covers visual, motor, cognitive, and auditory disability types with A/AA/AAA level compliance.
evaluateHeuristics
Nielsen's 10 Heuristics Evaluator
evaluateHeuristics({ uiDescription: "...", evaluationFocus: ["navigation", "error handling"] })
Inputs
- uiDescription — description of the UI being evaluated
- systemContext — context about the product type (optional)
- evaluationFocus — specific areas to focus on (optional)
Returns
Score for each of Nielsen's 10 heuristics, violation details with severity (cosmetic/minor/major/critical), and top recommendations
When Used
For expert UX review of a design or prototype. Evaluates visibility of system status, error prevention, consistency, and 7 more heuristics.
The Tool Pipeline
From natural language to visual research artifacts in 5 steps — each tool follows the same pipeline.
User Describes
The user describes a product, user type, or design in natural language. No forms, no structured input required.
Why: Natural language lowers the barrier — you don't need to know UX methodology to use the tools.
AI Interprets
The AI reads the description and decides which tool(s) to call. It extracts goals, frustrations, and context to build the tool input.
Why: The system prompt guides the AI to pick the right tool and ask clarifying questions if needed.
Framework Executes
The pure TypeScript framework function takes the Zod-validated input and produces structured output — a persona, empathy map, or audit report.
Why: Pure functions make each tool independently testable with Vitest.
Structured Output
Each tool returns a typed object (PersonaOutput, EmpathyMap, UserFlow, etc.) that matches the card component's expected props.
Why: TypeScript interfaces ensure the framework output always matches what the card expects.
Card Renders
The TOOL_CARDS router maps the tool name to its React component, rendering a rich visual card with the structured data.
Why: Polymorphic routing means adding a new tool only requires adding one entry to the record.
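The steps above can be compressed into one end-to-end sketch, with the framework function and card lookup inlined. All names here are hypothetical simplifications; the real code splits these across modules and renders React components:

```typescript
// End-to-end pipeline sketch: structured input -> framework executes ->
// structured output -> card renders. Hypothetical names throughout.

interface PersonaInput { product: string; targetUser: string }
interface PersonaOutput extends PersonaInput { summary: string }

// Step 3: a pure framework function produces structured output.
const tools = {
  createPersona: (input: PersonaInput): PersonaOutput => ({
    ...input,
    summary: `${input.targetUser} using ${input.product}`,
  }),
};

// Step 5: the card record maps the tool name to a renderer.
const cards: Record<string, (data: PersonaOutput) => string> = {
  createPersona: (d) => `[PersonaCard] ${d.summary}`,
};

function runPipeline(toolName: keyof typeof tools, input: PersonaInput): string {
  const output = tools[toolName](input); // framework executes
  return cards[toolName](output);        // card renders
}
```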
Tools in Action
Three scenarios showing how the AI reasons through requests—from single tool calls to multi-tool chaining.
Single Tool Call
A straightforward request — the AI picks the right tool, extracts structured inputs from natural language, and renders a persona card.
User: "Create a persona for a meditation app
targeting stressed college students"
Step 1 → createPersona({
product: "meditation app",
targetUser: "stressed college student",
goals: ["reduce anxiety", "build consistency",
"improve sleep"],
frustrations: ["no time", "apps feel generic",
"hard to stay motivated"]
})
Step 2 → Persona Card renders:
"Alex Rivera, 21 — Psychology Student"
"I know meditation helps, but I can never
stick with it past day 3."

Tool Chaining
The AI chains two tools in sequence — generating a persona first, then using its details to create a matching empathy map.
User: "Help me understand users of a budget
tracking app for freelancers"
Step 1 → createPersona({
product: "budget tracking app",
targetUser: "freelancer",
goals: ["track irregular income", ...],
frustrations: ["unpredictable cash flow", ...]
})
→ Persona Card: "Jordan Kim, 28"
Step 2 → createEmpathyMap({
userType: "freelance designer",
context: "managing irregular income",
observations: [extracted from persona...]
})
→ Empathy Map Card renders
Step 3 → "Want me to map the user flow for
how Jordan tracks a new payment?"

Clarification Before Action
When the request is too vague, the AI asks 1-2 clarifying questions before calling a tool — following the system prompt's instructions.
User: "Audit my app"
Step 1 → Response (NO tools called):
"I'd love to help! To run a useful audit,
could you tell me:
• What kind of app is it?
• Which part of the UI should I focus on?
• Are there specific user groups you're
concerned about (e.g., elderly users,
screen reader users)?"
User: "It's an e-commerce checkout page,
worried about screen reader users"
Step 2 → auditAccessibility({
uiDescription: "e-commerce checkout page",
userGroups: ["screen reader users"],
features: ["form fields", "payment flow",
"order summary"]
})

Development Process
System Prompt Design
Designed the system prompt with tool descriptions, behavior rules (when to call tools vs. ask questions), chaining guidance, and tone instructions. Iterated to balance helpfulness with precision.
Framework Engineering
Built 6 pure TypeScript framework functions with typed inputs and outputs. Each function enforces structure (e.g., severity × frequency matrix for pain points, WCAG criteria for accessibility) while letting the AI fill in creative content.
Tool Orchestration
Defined 6 Vercel AI SDK tools with Zod input schemas. Connected each to its framework function via execute callbacks. Configured streamText with Google Gemini for real-time tool calling.
Card Components & UI
Built 6 specialized card components (PersonaCard, EmpathyMapCard, etc.) with the TOOL_CARDS polymorphic router. Added streaming chat with useChat from @ai-sdk/react and deployed to Vercel.
Key Technical Features
Five capabilities that make the system extensible, type-safe, and visually rich.
6-Tool Orchestration
The AI dynamically selects from 6 specialized tools based on the user's message. It can chain tools in sequence — a persona naturally leads to an empathy map or user flow.
Rich Visual Cards
Each tool renders a dedicated React card component with structured data. The TOOL_CARDS polymorphic router maps tool names to components without switch statements.
Streaming Chat
Text streams token-by-token while tool results render as complete cards. Built with Vercel AI SDK's streamText and useChat for real-time interaction.
Type-Safe Pipeline
Zod schemas validate tool inputs at the AI boundary. TypeScript interfaces enforce the contract between framework functions and card components — no runtime type errors.
MCP Server Companion
A companion MCP server (ux-research-mcp) exposes the same 5 core tools to any MCP client — bringing the research capabilities to IDEs and other AI agents.
Tech Stack & Architecture
Four layers working together—from the AI model to the visual output.
AI / Model Layer
- Google Gemini 2.5 Flash
- Vercel AI SDK v6 (streamText)
- System prompt with tool guidance
- Natural language to structured input
Tool Layer
- 6 typed tools with Zod schemas
- Pure function frameworks
- Typed inputs and outputs
- Independently testable with Vitest
Component Layer
- 6 visual card components
- TOOL_CARDS polymorphic router
- Structured data visualization
- Tool-aware message rendering
Frontend Layer
- Next.js 16 App Router
- useChat from @ai-sdk/react
- Tailwind CSS styling
- Deployed on Vercel
Learnings & Outcomes
What I Learned
- Tool orchestration is the core skill — deciding which tool to call and how to chain them is more important than any single tool's implementation
- Pure framework functions make AI tools testable — separating structure from creativity lets you unit test the framework independently
- Zod schemas at the boundary between AI and code eliminate an entire class of runtime errors and make the contract explicit
- Visual cards communicate research findings far more effectively than walls of text — showing a persona card is more impactful than describing one
- System prompt design is iterative — getting the AI to chain tools naturally took multiple rounds of prompt refinement
- Building both the web app and MCP server showed me how the same tool logic can serve different interfaces and contexts
Explore the UX Research App
Chat with the AI research assistant or explore the source code.