For the past year I've been designing AI assistants and GenAI products, and it got me thinking: why does designing these cutting-edge products feel different, and does it really have to be? How are traditional design processes evolving? Are we adapting traditional user-centered design to keep up with designing for AI?
Good UX design takes a lot of collaboration. Whenever I'm working on a design initiative, even when it's a cutting-edge AI product, my focus always remains on the user. Building a common understanding of the user needs in the team is crucial. Now, more than ever, we need to collaborate to understand: who are we designing for? Why?
Below I'll describe my go-to design processes (as used on an initiative to build an enterprise AI Assistant) sprinkled with recommendations on how to tweak standard methods to accommodate the nuances of designing for AI. It consists of 4 parts:
1. Discovery: expert interviews, user interviews.
2. Definition: user journey map, use cases, ideation, prioritization.
3. Develop: design principles definition, UX Design.
4. Deliver: usability testing, insight synthesis and analysis.
This process doesn't always keep up with the speed of fast-moving AI development. That's why, at the very end, I describe an alternative design process where the steps are slightly inverted:
1. Start by developing a POC.
2. Test it with users.
This approach saves time, but it's best suited to later stages of the product life cycle, when a lot of research has already been done and confidence is high. Product discovery can then run in parallel to development.
My go-to approach
Step 1: Discovery
A study (the 2024 State of Marketing AI Report by the Marketing AI Institute) shows that directors and managers have a better understanding of AI than people in mid- and entry-level roles. Interestingly, higher AI understanding went hand in hand with higher AI confidence. That's why in my research I spoke to both.
To begin, I ran interviews with leaders who are responsible for implementing AI strategy and adoption in their organizations to understand how they were planning to adopt AI in their work processes.
At the same time, I interviewed mid-entry level hands-on specialists (creatives, media planners, strategists) to understand their day-to-day workflows and to get a feel for their goals, needs and pains.
Step 2: Definition
User journey
I presented the interview findings in the form of a user journey map, showing the current common workflow from the point of view of each role.

Aside from the usual user journey swimlanes (steps, actions, emotions, needs, pains), I include a Quotes swimlane. Here I paste direct quotes from user sessions alongside the corresponding step. This lets anyone reading the user journey map step into the user's shoes, promoting empathy.
To adapt this template to the evolving process of designing for AI, I added a Data swimlane. Here I described which data will be used to generate output, the form it exists in now, and its source. This is crucial, as the solution depends directly on the type, quality, availability and accessibility of the data.
It’s crucial to socialize the user journey map with the team and use it as a tool for building a common understanding of the needs we are solving.
Ideation
Some parts of the journey map will stand out particularly. These are the steps in the user's workflow where something is not working, a solution is missing or is inadequate to solve the user’s pain points. These are the opportunities we need to tackle.
In these steps I'll add two more swimlanes.
How Might We’s. Here, I distill the actual problem by piecing together the information from the swimlanes above (step, action, emotion, need, pain, touchpoint, data).
Ideation. This is the place for collaboration in ideation sessions. Ideation sessions should be the first point in the whole process where we start talking about AI.
AI is a solution, not a user need. People don’t need a drill; they need to make a hole in the wall to hang a painting. People don’t need a vase; they need a way to display their flowers.
Here are some examples of what parts of the user journeys ended up looking like:
Role: strategist
Step: pre-project
Need: find similar past projects, case studies in archives
Pain: don't know where to look
Data: SharePoint, Google Drive, Google Slides
How might we: help the user easily find relevant information
Ideation: RAG search would allow searching by use case, question or topic.
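To make the RAG idea concrete, here is a minimal sketch of the retrieval step behind such a search. The archive contents and file names are hypothetical, and a real build would use embeddings and a vector store rather than this bag-of-words similarity; the point is only to show the shape of "retrieve relevant past projects, then hand them to the model as context."

```python
# Toy retrieval step for a RAG search over an internal project archive.
# Documents and their descriptions are made up for illustration.
import math
from collections import Counter

ARCHIVE = {
    "retail-campaign-2022.pdf": "case study retail social media campaign results",
    "automotive-launch.pptx": "automotive product launch strategy deck",
    "retail-insights.docx": "retail shopper insights research summary",
}

def similarity(a: str, b: str) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k archive documents most similar to the query."""
    ranked = sorted(ARCHIVE, key=lambda d: similarity(query, ARCHIVE[d]),
                    reverse=True)
    return ranked[:k]

# The retrieved documents would then be passed to an LLM as context
# for answering the strategist's question.
print(retrieve("past retail case studies"))
```

In production, the similarity function is what an embedding model replaces; the overall retrieve-then-generate flow stays the same.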
Role: data scientist
Step: research
Need: understanding data quickly, getting insights from large amounts of data
Pain: manual, tedious labor with large amounts of data
Data: internal data, 3rd party data sources
How might we: help data scientists find datasets and surface insights quicker
Ideation: multi step AI agent to search for data and then to summarize it
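The "multi-step agent" idea can be sketched as a simple chain of two steps, where the output of the search step feeds the summarization step. The function names, the dataset catalog, and the stub implementations here are all hypothetical; in a real build each step would call an LLM with tool access rather than these placeholders.

```python
# Sketch of a two-step "find data, then summarize" agent flow.
# The catalog and both step implementations are illustrative stubs.

CATALOG = {
    "sales_2023.csv": "monthly sales figures by region",
    "web_traffic.parquet": "daily site visits and bounce rates",
    "surveys.json": "customer satisfaction survey responses",
}

def find_datasets(question: str) -> list[str]:
    """Step 1: pick catalog entries whose description shares a word
    with the question (stand-in for an LLM-driven search tool)."""
    words = set(question.lower().split())
    return [name for name, desc in CATALOG.items()
            if words & set(desc.split())]

def summarize(datasets: list[str]) -> str:
    """Step 2: stand-in for an LLM summarization call over the data."""
    return f"Summary of {len(datasets)} dataset(s): " + ", ".join(datasets)

def agent(question: str) -> str:
    """Chain the steps: search output becomes summarization input."""
    return summarize(find_datasets(question))

print(agent("what were sales by region last year"))
```

The value of the multi-step structure is that each step can be tested, swapped, and improved independently, which matters when the output of any single LLM call is unpredictable.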
Role: project manager
Step: prepare timelines or exec summaries
Need: a bird's-eye view to quickly understand project status and delivery times
Pain: too many people and comments involved
Data: Google sheets constantly updated, project management tools, Teams
How might we: help project managers get a real-time view of how a project is doing
Ideation: an agent-based solution that finds relevant resources, works with dynamically changing data, and integrates with project management tools.
Step 3: Design & develop
Before designing, my team and I defined some design principles based on the most recurring themes I discovered in research:
Discoverability – making sure the user understands what AI can do and what is possible
Utility and relevance – making sure the AI is actually solving a real problem and providing relevant results
Learnability – making the AI tool easy to learn
Transparency – creating a feeling of trust by helping people understand how AI works
(More detail about these principles – in another post coming soon!)
Step 4: Deliver & deploy
Usability testing, insight synthesis and analysis. (More on testing coming soon!)
Alternative Approach
This alternative approach is slightly inverted and starts with developing a POC and then testing it with users.
Clickable prototypes in Figma are becoming less capable of simulating the real experience a user might have with an AI product. The output is unpredictable, and so much depends on the data that powers it that testing good old clickable prototypes with semi-realistic content no longer tells us much.
Sure, a prototype can still assess the usability of the user interface, but to evaluate the AI experience itself we need a real, functioning product (or at least a POC). Therefore, the design sprint inverts from ideate-prototype-test-build to ideate-build-test-iterate.
We might use this approach when we've done enough discovery and testing (maybe through other initiatives) and have enough confidence in this feature. This approach works if the team is open to iteration and wants to move quickly. Additional product discovery can run in parallel to development and inform the next iteration.
At the end of the day...
It’s easy to get so caught up in this AI craze, where stakeholders are pushing for AI to be injected into products in any shape or form, that sometimes teams forget that ultimately AI should solve a need. Designers must remain advocates for user needs. Now, more than ever, is the time we have to stand by the core principle of human centred design: a product must solve a user need.
A user need is – always – a verb, not a noun. A human doesn’t need an AI product. A human needs to stay organized, search better, get their work done faster, come up with more diverse ideas. An AI product/service/feature is only one of the multitude of possible solutions.