Karen has no patience.
If a button is disabled without explanation, she gets annoyed.
If an empty state looks like an error, she assumes the system is broken.
If a loading spinner doesn’t explain what’s happening, she asks for the manager.
Karen isn’t a real person.
She’s a synthetic user.
And she might be one of the most useful ways I’ve found to stress-test a design before putting it in front of real users.
What Is a Synthetic User?
A synthetic user is a constrained AI decision agent embedded in a controlled simulation framework.
It is not just a profile. It is a structured behavioral model with:
- Identity (role + expertise)
- Intent (clear objective)
- Limits (constraints + forbidden assumptions)
- Logic (behavioral and abandonment rules)
- Boundaries (strict evaluation scope)
- Accountability (structured output requirements)
It operates only within what is defined and cannot compensate for ambiguity, missing signals, or structural gaps in the interface.
A synthetic user is not:
- A fictional persona or a storytelling device
- A predictive AI that guesses user preferences
- An intelligent assistant that fixes unclear design
A synthetic user interacts strictly with what is visible in the interface and nothing more. It does not infer intent, fill gaps, or compensate for ambiguity. When the path forward is unclear, it hesitates. That hesitation is not failure. It is the signal that reveals structural friction.
What a Synthetic User Needs to Work
If you want this to be more than “ChatGPT pretending to be someone,” you need structure. You must define the following (one way to encode it is sketched in code after this list):
- Functional Role: Who this user is in operational terms (Operations Manager reviewing trip segments).
- Domain Expertise Level: How much they understand the subject matter (6 months in logistics, still learning edge cases).
- Technical Proficiency: How comfortable they are with software (Uses dashboards daily, avoids advanced filters).
- Explicit Objective: What they must accomplish in this session (Confirm whether a trip contains excursions).
- Success Criteria: What level of certainty is required to consider the task complete (Needs explicit confirmation, not inference from a map).
- Motivations: What they prioritize when making decisions (Speed over exploration).
- Constraints: Operational limits that shape behavior (Low tolerance for ambiguity, under time pressure).
- Behavioral Rules: How they interpret and act on information (If unclear after 3 seconds, move to another visible option).
- Abandonment Rules: When they stop the flow (If the same friction appears twice, they exit).
- Forbidden Assumptions: What they cannot infer or mentally “fix” (Cannot assume disabled filters require prior calculation unless explicitly stated).
- Evaluation Scope: What part of the experience they are allowed to simulate (Only the “Segments” tab, not the full dashboard).
- Structured Output Format: How the simulation must report results (Step → Action → Clarity → Doubt → Reason → Highest friction).
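To make that structure concrete, here is a minimal sketch in Python of one way to encode the spec and compile it into a system prompt. The class name, field names, and the to_system_prompt helper are my illustrative assumptions, not a standard schema; the example values come straight from the list above.

```python
from dataclasses import dataclass

@dataclass
class SyntheticUser:
    # Field names are illustrative assumptions, not a fixed standard.
    functional_role: str
    domain_expertise: str
    technical_proficiency: str
    objective: str
    success_criteria: str
    motivations: list[str]
    constraints: list[str]
    behavioral_rules: list[str]
    abandonment_rules: list[str]
    forbidden_assumptions: list[str]
    evaluation_scope: str
    output_format: str

def to_system_prompt(user: SyntheticUser) -> str:
    """Render the spec into a system prompt for an LLM-driven simulation."""
    lines = [
        f"You are {user.functional_role}.",
        f"Expertise: {user.domain_expertise}. Technical level: {user.technical_proficiency}.",
        f"Objective: {user.objective}. Done only when: {user.success_criteria}.",
        "Motivations: " + "; ".join(user.motivations),
        "Constraints: " + "; ".join(user.constraints),
        "Behavioral rules: " + "; ".join(user.behavioral_rules),
        "Abandonment rules: " + "; ".join(user.abandonment_rules),
        "You may NEVER assume: " + "; ".join(user.forbidden_assumptions),
        f"Evaluate only: {user.evaluation_scope}.",
        f"Report every step as: {user.output_format}.",
        "Act only on what is explicitly visible. If unclear, hesitate and say why.",
    ]
    return "\n".join(lines)

# Example instance, using the values from the list above:
karen = SyntheticUser(
    functional_role="Operations Manager reviewing trip segments",
    domain_expertise="6 months in logistics, still learning edge cases",
    technical_proficiency="Uses dashboards daily, avoids advanced filters",
    objective="Confirm whether a trip contains excursions",
    success_criteria="Explicit confirmation, not inference from a map",
    motivations=["Speed over exploration"],
    constraints=["Low tolerance for ambiguity", "Under time pressure"],
    behavioral_rules=["If unclear after 3 seconds, move to another visible option"],
    abandonment_rules=["If the same friction appears twice, exit"],
    forbidden_assumptions=["Disabled filters require prior calculation"],
    evaluation_scope="Only the 'Segments' tab, not the full dashboard",
    output_format="Step -> Action -> Clarity -> Doubt -> Reason -> Highest friction",
)
```

The code itself matters less than the effect: every field becomes an explicit, inspectable constraint instead of an implied one, so the simulation can't quietly smooth over gaps.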
What I Learned About Using Synthetic Users
Synthetic users don’t validate whether something “works.” What they actually do is expose where a design forces users to interpret instead of confirming things explicitly. They surface structural ambiguity that often goes unnoticed in internal reviews and help distinguish between friction that affects everyone and friction that only impacts less experienced users.
In practice, they make design discussions more concrete because you’re no longer debating opinions; you’re observing constrained behavior. They don’t replace usability testing, but they significantly improve how prepared you are before running it.
How to Start Using Synthetic Users
If you want to try it today:
- Define a synthetic user with strict rules
- Write a clear objective
- Declare your "forbidden assumptions"
- Provide the flow step-by-step
- Force a structured output (see the sketch below)
If the synthetic user never hesitates, your constraints are too weak.
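As one hedged example of the last step, you can force the report into JSON and programmatically flag runs with zero hesitation. The schema and the check_report helper below are illustrative assumptions, not part of any standard:

```python
import json

# Illustrative output contract appended to the prompt; the exact schema
# is an assumption, adapt it to your own template.
OUTPUT_INSTRUCTION = """Respond ONLY with JSON in this shape:
{"steps": [{"step": 1, "action": "...", "clarity": "clear|ambiguous|blocked",
            "doubt": "...", "reason": "..."}],
 "highest_friction": "..."}"""

def check_report(raw_reply: str) -> dict:
    """Parse the simulation report and flag suspiciously frictionless runs."""
    report = json.loads(raw_reply)
    hesitations = [s for s in report["steps"] if s["clarity"] != "clear"]
    if not hesitations:
        # Zero hesitation usually means the constraints were too weak,
        # not that the design is flawless.
        print("No hesitation recorded: tighten the constraints and rerun.")
    return report
```

A machine-checkable report also makes runs comparable across design iterations, which is where most of the value shows up.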
I’ve pulled together the exact resources I use:
- Download the template and prompts
- Read the full framework, with a step-by-step explanation of how to apply it in practice
This Is Still Early
Agent-based simulation is not a new idea.
What is still underdeveloped is how to apply it in a structured, practical way inside UX workflows. There is no widely adopted standard yet. No clear implementation pattern most teams follow.
What I’m sharing here is not an academic breakthrough. It’s a working implementation.
It can evolve. It can scale into automation.
But even in its current form, it has helped me detect structural friction before running formal usability testing. That alone makes it worth exploring.
