Creative Studio Agent Builder

Driving the agentic AI transformation at WPP

Overview

This case study

As the Senior Product Designer, I led the end-to-end UX strategy, research, design, and validation for Creative Studio, a launchpad that brings AI agent building and usage directly into the daily workflows of 130,000 global marketing professionals. Within six months, the platform onboarded 48,000+ monthly active users and saw 50,000+ agents created, driving AI transformation across the organization.

 

AI products: Agent Builder (configure & publish custom AI agents), Agent Evals & analytics, AI Canvas (visual orchestration of agents, steps, and data), Artifact Creator (generate reusable templates), AI App Creator (generate reusable tools), Multi-agent Chat (chain-of-thought reasoning, tool use, and document handling), and AI Inbox (centralized hub for AI-generated alerts and outputs).

UX for AI: AI-specific UX principles (transparency, control, learnability, feedback); adapting traditional processes to AI product development.

My role

Senior Product Designer

Year

July 2024 – present

The client

World’s largest advertising company

GenAI Suite.png


Product

The platform is a daily destination for the company's 130,000 employees and their clients. It consolidates data, services, and products, helping break down silos and foster seamless collaboration between teams.

 

In this case study, we’ll focus on Creative Studio, which is a launchpad for all things AI within the enterprise platform:

Chat. Interact with publicly available models using chain-of-thought reasoning, tools, multilingual support, and document attachments.

AI tools. Prompt templates tailored to common marketing agency workflows.

AI Canvas. A multi-user environment with artifact and workflow generation.

Problem

AI adoption. Enterprise users were already experimenting with consumer AI agent builders outside of work, but lacked a streamlined, integrated way to leverage AI agents where their work happens.

Steep learning curve. Existing AI agent builders were either too technical (requiring coding) or too generic (lacking business context), making it difficult for non-technical marketers, strategists, and creatives to build AI agents.

Inconsistent user experiences. Different departments put together ad-hoc automations, resulting in wildly varying user experiences.

Custom workflows. Off-the-shelf AI agents didn't always map to internal workflows (e.g. previous campaign analysis, strategic research, creative ideation, content approvals), so users had to manually translate between external tools and their daily tasks.

Goal

The company needed to unlock the power of AI agents for its 130,000 employees. Our goal was to empower business-savvy enterprise users to build custom AI agents where they already do their daily work, without ever leaving the enterprise ecosystem.

Challenge

  1. Balancing complexity & usability. The builder had powerful features (model settings, data use, steps, reasoning, evals, feedback gathering, and much more). The challenge was to design one coherent, intuitive flow.

  2. Designing for novice & power users. How do we make sure the agent builder is easy to use for non-technical users, but powerful enough for advanced users?

  3. Lack of established AI patterns. There are very few commonly established patterns and guidelines that prioritize learnability, explainability, and trustworthiness of AI interfaces in an enterprise context. How do we design new patterns that fit users' mental models and stand the test of time?

  4. Tight deadlines & POC-first releases. Features were implemented as proofs-of-concept before full UX validation, requiring on-the-fly adjustments.

  5. UX concerns. When designing for AI, learnability, explainability, transparency, and trust are primary concerns.

My responsibilities

 

  • Collaborated daily with product and engineering teams

  • Ran expert interviews to align on capabilities & constraints

  • Led use-case workshops with business stakeholders

  • Ran competitor and internal solutions analysis 

  • Conducted user interviews to map mental models & expectations

  • Synthesized insights into journey maps with real quotes, pain points & HMWs

  • Developed information architecture and user flows to align disparate stakeholder visions

  • Designed high fidelity visual mockups 

  • Built interactive prototypes in Figma to validate UX concepts

  • Ran usability testing (remote interviews & surveys) to gather user feedback​

Process (1).png

Discovery

  • Conducted 20+ expert interviews (AI engineers, data scientists) and user interviews (marketers, strategists, creatives) to map mental models and pain points.

  • Benchmarked against Microsoft Copilot Studio, Zapier, Make, OpenAI Custom GPT, and internal solutions to identify gaps.

Synthesis

  • Personas & use cases: Defined two primary roles: Agent Authors (builders) and Agent Consumers (end users).​

  • User journey maps: Captured steps, emotions, needs, and direct quotes (e.g., “I need to test my agent before publishing.”), distilled into How‑Might‑We questions.​​​

IA Agent builder.png

Prototypes

  • Information Architecture. Aligned stakeholders on a cohesive sitemap covering Agent Builder, Marketplace, AI Canvas, Chat.

  • Interactive Prototypes. In Figma, I designed the first iteration of the Agent Builder. I decided on this layout because it aligned with existing platform patterns and common market standards: left-hand side for chat setup and settings, right-hand side for preview.

Agent builder - Very first iterations (1).png

Design

I refined the design, prioritizing a simple, uncluttered look while keeping the layout consistent with existing platform patterns and common market standards.

Agent builder - Iteration.png

Launch & Usability Testing

  • Released the first version of the Agent Builder to pilot teams.

  • I ran remote moderated sessions and surveys with pilot teams.​

  • Users said it was “Intuitive, easy to set up, quick to use.”

  • There was a need for better guidance: templates and examples to help users build faster.

  • Users valued adding their own context, which improved relevance.

  • Requests: integration with internal tools (e.g. SharePoint), web browsing, audio input/output.

Impact: outputs from the Agent Builder were used directly in client-facing decks, giving teams a competitive edge and positioning the platform as a differentiator against competing agencies.

Usability Insights Agent Builder.png

Iteration

Based on user feedback, we began introducing additional functionality: capabilities, settings, integrations, feedback mechanisms, and evals.

Adding steps, tasks, and workflows

  • Problem. Agents were limited to one action and not robust enough to encode real business processes.

  • Goal. Enable users to create reusable multi-step workflows with inputs, sequences of actions, checkpoints (LLM or human) and defined outputs, making agents truly agentic.

  • Design challenge. Granularity: what defines a step, task, agent and workflow?

  • My approach. I mapped out the anatomy of steps, tasks, agents, and workflows with examples, then designed wireframes to explore two approaches:

    • Dedicated task builder. Create granular tasks to reuse across agents.

    • In-builder task creation. Define tasks directly inside the agent builder.

Delivery

  • Solution. Prototypes showed that tasks and agents shared many identical attributes: inputs, capabilities, actions, and artifacts. Allowing users to add tasks directly in the agent builder proved the better solution, as it avoided duplication during setup.

    • In the design, this was represented by assembling cards into a workflow and deep-diving into each card for setup (step name, instruction, output artifact).

  • Impact. ​Agents evolved from performing single actions into handling multi-step workflows. Users could now codify complex business processes into reusable, repeatable flows.
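To make the card-based model concrete, a workflow can be thought of as an ordered list of step cards, each carrying its own setup (step name, instruction, output artifact) plus an optional checkpoint. The sketch below is a hypothetical TypeScript data model; all type and field names are my own illustration, not the platform's actual schema.

```typescript
// Hypothetical data model for a multi-step agent workflow.
// All names are illustrative, not the platform's real schema.

type Checkpoint = "none" | "llm-review" | "human-approval";

interface StepCard {
  name: string;           // e.g. "Summarize campaign brief"
  instruction: string;    // prompt-style instruction for this step
  outputArtifact: string; // e.g. "summary-doc"
  checkpoint: Checkpoint; // optional review gate before the next step
}

interface AgentWorkflow {
  agentName: string;
  inputs: string[];  // what the user supplies when running the agent
  steps: StepCard[]; // ordered cards assembled into a workflow
}

// Basic validation: every step card needs a name, an instruction,
// and an output artifact before the agent can be published.
function validateWorkflow(wf: AgentWorkflow): string[] {
  const errors: string[] = [];
  if (wf.steps.length === 0) errors.push("Workflow has no steps");
  wf.steps.forEach((s, i) => {
    if (!s.name.trim()) errors.push(`Step ${i + 1}: missing name`);
    if (!s.instruction.trim()) errors.push(`Step ${i + 1}: missing instruction`);
    if (!s.outputArtifact.trim()) errors.push(`Step ${i + 1}: missing output artifact`);
  });
  return errors;
}
```

Modeling tasks and agents over the same attribute set is what made in-builder task creation possible without duplicating setup screens.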

Updates continued shipping weekly:

  • Expanded integrations (e.g., SharePoint and Outlook via MCP).

  • Added more artifact types (podcasts, videos, music) as they became available.

  • As functionality expanded, the design also iterated: setup now occupied two thirds of the screen, while the preview was reduced to one third on the right.

Key UX Challenges Solved​

 

Learnability

Designed dynamic hints, simplified terminology, and intuitive workflows to onboard users with no AI background.

Hint text based on agent role selected.png

Dynamic hints based on agent type selected

Trust

Embedded feedback loops, transparent evaluations, and logic visibility.

Evaluation - actions.png

Mental Models

Introduced visual orchestration (canvas) for building workflows.

Agent builder - Reorder.png

Transparency

Surfaced system status throughout: the step plan, step and sub-step progress, tool calls, and queries to other agents along with their outputs.

Steps.png

Scalability

Created reusable UX patterns and naming systems to guide future AI tools.

Instructions - Artifacts copy.png

Next steps

  • Enabling agent collaboration on the AI Canvas

  • Scheduled agent tasks 

  • Integrating third-party data sources and partner APIs through MCP

  • Advanced analytics dashboards for agent performance monitoring

  • Quantitative survey to measure productivity gains and adoption

Takeaways

Adapt classical UX. Core frameworks (journey maps, HMWs) remain invaluable but must be tailored to AI’s unique challenges.

Iterate and ship fast. With new AI capabilities emerging weekly, our team learned to ship fast, validate quickly, and iterate on the product in real-time. 

🔮 Key learning: Shipping rough drafts of features to early adopters helped validate ideas before investing in full visual design and feature refinement.

Test the real thing. Real-environment usability testing uncovers issues prototypes can’t.

Design for transparency. Explainability isn’t a nice-to-have. It’s essential for users to learn AI and become part of the AI transformation.

Outcomes & Impact

  • 48K+ MAU on the platform within 6 months 

  • 50K+ agents built, enabling business processes at scale 

  • Weekly feature releases and improvements 

  • Positive user feedback:

    • “It is intuitive.”

    • “Easy to use and set up.”

    • “It’s faster to build the agent than to assemble the knowledge base.”


© 2026 by Irina Nalivaiko 🇺🇦
