The agentic age does not merely extend the designer's remit. It replaces the surface on which design has always operated. For three decades, design leadership meant mastering screens, flows, and interfaces. That era is ending. The systems now being built act, remember, coordinate, and transact on behalf of humans who are not present. These systems require eight new design capabilities with no precedent in traditional UX practice. This essay names them, defines them, and maps each to the AXD frameworks that make them operational.
I. Introduction: The Capability Gap
Something has shifted beneath the design profession, and most design leaders have not yet felt the ground move. For thirty years, the discipline of design has been organised around a stable set of capabilities: information architecture, interaction design, visual design, user research, content strategy, and service design. These capabilities were built for a world of screens, flows, and interfaces. They assumed a human user, present and attentive, navigating a designed surface.
That world is ending. Not slowly, and not at the margins. The systems now being deployed by Google, OpenAI, Shopify, Mastercard, Stripe, and dozens of others are agentic AI systems that act autonomously, coordinate with other agents, persist memory across sessions, and transact on behalf of humans who are not present. At Shoptalk 2026, six of the most powerful companies in commerce made structural commitments to agentic infrastructure in a single week. The question is no longer whether agentic systems will reshape commerce and experience. The question is whether design teams are equipped to shape them.
The answer, for most organisations, is no. A recent Salesforce analysis of enterprise AI deployments found that failures are not caused by inadequate models but by inadequate architecture - systems designed without the structural capabilities that autonomous agents require. The same pattern holds for design. The gap is not in talent or ambition. It is in capability. Design teams trained to create interfaces are being asked to design relationships between humans and autonomous systems. The skills required are fundamentally different.
This essay identifies eight new capabilities that every Head of Design should begin building now. These are not extensions of existing UX competencies. They are new disciplines, each with its own vocabulary, its own methods, and its own design questions. Together, they constitute the capability foundation of Agentic Experience Design.
II. Why Traditional Design Skills Are Not Enough
The distinction matters because it determines how organisations invest. If agentic design is merely an extension of UX, then existing teams can absorb the work with modest upskilling. If it is a fundamentally new set of capabilities, then organisations need to build new competencies, hire differently, and restructure how design teams operate. The AXD Institute's position, articulated in the founding manifesto, is that the latter is true.
Consider the core assumptions of traditional UX. The user is present. The interface is visible. The interaction is synchronous. The designer controls what appears on screen. Every major UX methodology - from user-centred design to design thinking to jobs-to-be-done - is built on these assumptions. Now consider an agentic system. The user is absent. The interface may not exist. The interaction spans hours, days, or weeks. The designer specifies outcomes, not screens. The assumptions that underpin thirty years of design practice simply do not hold.
This is not a theoretical concern. As the AXD Institute documented in Beyond the Interface, the age of screen-based UX is giving way to a new paradigm where the most consequential experiences happen when no one is watching. The Invisible Layer essay demonstrated that the best agentic experience may be no visible experience at all. And the Designing for AI Autonomy essay established that creating systems that act without us requires design methods that traditional practice never developed.
The eight capabilities that follow are the AXD Institute's answer to this gap. Each addresses a design challenge that has no precedent in traditional UX. Each maps to one or more of the Institute's twelve practice frameworks. And each is already being demanded - explicitly or implicitly - by organisations deploying agentic systems at scale.
III. Capability 1: Intent Architecture
What it means: Designing from goals and delegated outcomes, not just tasks and screens.
In traditional design, the designer organises content for human navigation. The user browses, selects, and acts. In agentic design, the user delegates. They express an intent - "find me a flight under five hundred pounds that arrives before noon" - and an agent acts on that intent autonomously. The design challenge is no longer "what should appear on screen" but "how should intent be captured, structured, constrained, and verified."
Intent architecture is the capability of designing the systems through which humans express goals to autonomous agents. It encompasses outcome specification - telling agents what you want without telling them how - and delegation scope - the grammar of authority that defines what an agent is permitted to do. The AXD Institute's Intent Architecture Framework provides the structural foundation, defining how intent flows from human expression through agent interpretation to autonomous execution.
The practical implication is significant. Design teams accustomed to wireframing screens must learn to design intent schemas - structured representations of what a human wants, under what constraints, with what fallback conditions. This is closer to contract design than interface design. It requires understanding intent engineering - the discipline of encoding organisational and individual purpose into forms that agentic systems can understand and optimise toward.
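As a minimal sketch of what an intent schema might look like, the flight example above can be encoded as a structured record with a goal, hard constraints, soft preferences, and a fallback condition. Every field name here is hypothetical and illustrative, not drawn from the AXD Intent Architecture Framework:

```python
from dataclasses import dataclass

@dataclass
class IntentSchema:
    """A structured expression of delegated intent (illustrative only)."""
    goal: str                       # the outcome, not the method
    constraints: dict               # hard limits the agent may not cross
    preferences: list               # soft criteria to optimise toward
    fallback: str                   # behaviour when no option satisfies constraints
    requires_approval_above: float  # spend threshold that triggers a human check

flight_intent = IntentSchema(
    goal="book a one-way flight LHR to JFK",
    constraints={"max_price_gbp": 500, "arrive_before": "12:00"},
    preferences=["fewest stops", "aisle seat"],
    fallback="pause and ask the principal",
    requires_approval_above=400.0,
)
```

The contract-like character is visible in the structure: the agent optimises toward the goal within the constraints, and the fallback defines what happens when the delegation cannot be satisfied.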
IV. Capability 2: Orchestration Design
What it means: Shaping how agents, tools, humans, and services coordinate.
Agentic systems do not operate in isolation. A shopping agent may coordinate with a payment agent, a loyalty agent, a delivery agent, and a human approval step - all within a single transaction. The design of how these participants discover each other, communicate, negotiate, and resolve conflicts is orchestration design. It is the choreography of autonomous action.
The AXD Institute's Multi-Agent Orchestration Visibility Model provides the framework. It addresses how multiple agents coordinate without creating opacity - how the human principal maintains visibility into what is happening, who is acting, and why decisions are being made across a network of autonomous participants. The agentic AI protocols essay documented the communication infrastructure - MCP for tool access, A2A for inter-agent communication, ACP for commercial transactions - that makes orchestration technically possible.
For design leaders, orchestration design means moving beyond single-user, single-interface thinking. It means designing the coordination patterns between agents, the handoff protocols between agents and humans, and the visibility mechanisms that allow humans to understand what a network of agents is doing on their behalf. Salesforce's March 2026 analysis of agentic enterprise architecture identified unified observability across agent actions, reasoning, context, governance, and business outcomes as a critical design requirement. The design team that cannot think in terms of multi-agent coordination will struggle to design for the systems now being built.
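One way to picture the visibility requirement is a shared, append-only trace that every participant writes to, so the principal can always answer who acted and why. This is an illustrative sketch under assumed names, not the Multi-Agent Orchestration Visibility Model itself:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CoordinationEvent:
    """One visible step in a multi-agent workflow (illustrative only)."""
    actor: str      # which agent or human acted
    action: str     # what they did
    rationale: str  # why, in terms the principal can audit

class OrchestrationTrace:
    """An append-only trace so coordination never becomes opaque."""
    def __init__(self) -> None:
        self.events: List[CoordinationEvent] = []

    def record(self, actor: str, action: str, rationale: str) -> None:
        self.events.append(CoordinationEvent(actor, action, rationale))

    def summary_for_principal(self) -> List[str]:
        # The human sees who acted and why, not raw inter-agent traffic.
        return [f"{e.actor}: {e.action} ({e.rationale})" for e in self.events]

trace = OrchestrationTrace()
trace.record("shopping-agent", "selected basket", "matched delegated preferences")
trace.record("payment-agent", "reserved funds", "within authorised spend limit")
trace.record("human", "approved checkout", "approval step required by policy")
```

The design choice worth noting is the summary method: visibility is delivered as a principal-facing digest, not as the full message traffic between agents.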
V. Capability 3: Trust and Intervention Design
What it means: Deciding when the system should act, ask, explain, pause, or escalate.
This is the capability at the heart of trust architecture - the AXD Institute's founding concept. In traditional design, trust is largely implicit. The user sees the interface, evaluates it, and decides whether to proceed. In agentic design, the user is absent. Trust must be designed into the system's behaviour - its decisions about when to act autonomously, when to pause and ask, when to explain its reasoning, and when to escalate to a human.
The AXD Institute's Trust Calibration Model provides the theoretical foundation. It defines how trust is established, maintained, calibrated, and recovered across the lifecycle of a human-agent relationship. The Interrupt Pattern Library provides the practical toolkit - a catalogue of designed moments where an agent breaks the silence to re-engage the human. The Interrupt Frequency essay explored the calculus of when to break the silence - too often and the agent becomes a nuisance; too rarely and the human loses confidence.
For design leaders, trust and intervention design is perhaps the most consequential of the eight capabilities. It determines the fundamental character of the human-agent relationship. An agent that never interrupts is opaque. An agent that always interrupts is useless. The design of the intervention threshold - the conditions under which an agent should pause, explain, or escalate - is a design decision with direct commercial and ethical consequences. As the Consumer Trust Ceiling essay demonstrated, consumer willingness to delegate has a structural limit. Trust and intervention design determines where that limit falls.
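The intervention threshold described above can be sketched as a small policy function mapping stakes and confidence to one of the agent's options: act, explain, ask, or escalate. The threshold values are placeholders for illustration, not figures from the Trust Calibration Model:

```python
def intervention_action(stakes: float, confidence: float,
                        act_threshold: float = 0.8,
                        escalate_stakes: float = 0.9) -> str:
    """Map an agent's situation to act / explain / ask / escalate.

    Thresholds are illustrative placeholders, not AXD-specified values.
    """
    if stakes >= escalate_stakes:
        return "escalate"   # hand the decision to the human
    if confidence >= act_threshold and stakes < 0.5:
        return "act"        # proceed silently: low stakes, high confidence
    if confidence >= act_threshold:
        return "explain"    # proceed, but surface the reasoning
    return "ask"            # pause and request guidance
```

A policy of this shape makes the interrupt-frequency trade-off tunable: raising `act_threshold` makes the agent more cautious (more interruptions), lowering it makes the agent quieter (more autonomy).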
VI. Capability 4: Context and Memory Design
What it means: Determining what should persist, what should be temporary, and how context should be used safely.
Agentic systems remember. Unlike traditional interfaces where each session begins fresh, agents accumulate history - past preferences, prior transactions, learned patterns, relationship context. This memory is what makes agents useful over time. It is also what makes them dangerous. An agent that remembers everything may act on outdated preferences. An agent that remembers nothing cannot build a relationship. The design of what persists, what expires, and how context is used safely is a capability traditional UX never needed.
The AXD Institute's Agent Memory and Context Continuity Framework addresses this directly. It defines the architecture of agent memory - what should be stored, for how long, under what conditions it should be surfaced, and how the human can review, correct, or delete what an agent remembers. The Relational Arc essay explored how human-agent relationships evolve over time, demonstrating that memory design is not a technical concern but a relationship design concern.
For design leaders, context and memory design raises questions that traditional UX never confronted. Should an agent remember that a user searched for engagement rings three months ago? Should it use that context when the user's partner is present? Should memory persist across different agents operated by the same principal? The Know Your Human essay introduced the concept of authority drift - the gradual divergence between a human's current state and the delegation they originally granted. Memory design must account for the fact that humans change, and delegations that were appropriate yesterday may not be appropriate today.
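The engagement-ring question above can be made concrete with a memory record that carries an explicit lifespan: a remembered fact that has outlived its lifespan is treated as stale and must be re-confirmed rather than acted on. This is a hypothetical sketch of the idea, not the Agent Memory and Context Continuity Framework:

```python
from dataclasses import dataclass
import time

@dataclass
class MemoryRecord:
    """One remembered fact with an explicit lifespan (illustrative only)."""
    fact: str
    stored_at: float
    ttl_seconds: float       # after this, the memory must be re-confirmed
    reviewable: bool = True  # the principal can inspect, correct, or delete it

    def is_stale(self, now: float) -> bool:
        # Staleness models authority drift: the human may have changed since
        # this preference was recorded, so it should not silently drive action.
        return now - self.stored_at > self.ttl_seconds

ring_search = MemoryRecord(fact="searched engagement rings",
                           stored_at=time.time() - 100 * 86400,  # ~100 days ago
                           ttl_seconds=30 * 86400)               # 30-day lifespan
```

Expiry alone does not answer the harder questions (who is present, which agents share the record), but it demonstrates the core move: persistence becomes a designed property of each memory, not a default.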
VII. Capability 5: Human Override Design
What it means: Creating meaningful review, correction, and stop controls.
Every agentic system needs a way for humans to intervene - to review what an agent has done, correct its course, or stop it entirely. This sounds straightforward. It is not. The challenge is not building a stop button. The challenge is designing override mechanisms that are meaningful - that give the human enough context to make an informed decision, enough control to change the outcome, and enough confidence that the override will actually take effect.
The AXD Institute's Failure Architecture Blueprint provides the framework for designing systems that degrade gracefully when things go wrong. The Failure Architecture essay argued that graceful degradation is the highest form of agentic design - that the true test of an agentic system is not how it performs when everything works but how it behaves when something fails. Human override design extends this principle to the human's ability to intervene.
The practical challenge is what Codebridge's March 2026 analysis of regulated AI workflows called the "approval loop problem" - designing human intervention points that are neither so frequent that they defeat the purpose of automation nor so rare that they provide only the illusion of control. The Operational Envelope essay defined the boundaries that make autonomy safe - the constraints within which an agent operates freely and beyond which human override is triggered. Design leaders must build the capability to define these boundaries, design the override interfaces, and ensure that human control is genuine rather than performative.
VIII. Capability 6: Explainability by Design
What it means: Making reasoning, provenance, and next steps understandable enough to trust.
An agent that cannot explain itself cannot be trusted. This is the central insight of the AXD Institute's Explainability and Observability Design Standard. Explainability is not a feature to be added after the system is built. It is a design capability that must be embedded from the beginning - shaping how agents communicate their reasoning, how they surface the provenance of their decisions, and how they make their next steps legible to the humans they serve.
The Agent Observability essay explored the paradox at the heart of this capability: making autonomous action legible without making it visible. The goal is not to show the human everything the agent does - that would be overwhelming and would defeat the purpose of delegation. The goal is to make the agent's reasoning available on demand, in a form that supports informed trust. When the agent recommends a particular hotel, the human should be able to understand why - what criteria were applied, what alternatives were considered, what trade-offs were made.
For design leaders, explainability by design requires a new kind of content strategy - one that designs not for human browsing but for human understanding of autonomous decisions. It requires designing explanation interfaces that are proportionate to the stakes involved. A low-stakes recommendation needs a brief rationale. A high-stakes financial transaction needs a detailed audit trail. The EU AI Act, reaching full applicability in August 2026, will make this capability a regulatory requirement for many organisations. Design teams that treat explainability as an afterthought will find themselves retrofitting systems that should have been designed for transparency from the start.
IX. Capability 7: Multi-Surface Continuity
What it means: Designing coherent experiences across chat, voice, apps, notifications, and service layers.
Agentic experiences do not live on a single screen. A user might initiate a delegation via voice, receive a notification on their phone, review options in a chat interface, approve a transaction through a banking app, and receive a confirmation via email - all within a single agentic workflow. The design of coherent experience across these surfaces - maintaining context, preserving trust, and ensuring the human always knows where they are in the process - is multi-surface continuity.
The Composable Interfaces essay explored what happens when agents build the experience - when the interface itself is assembled dynamically based on context, capability, and user state. Multi-surface continuity extends this concept across channels. It is not merely responsive design (adapting layout to screen size) but experience continuity design (maintaining coherent meaning across fundamentally different interaction modalities).
For design leaders, this capability requires thinking beyond individual touchpoints. It requires designing the thread that connects a voice command to a push notification to a chat response to an app screen. It requires ensuring that an agent's explanation of its reasoning is coherent whether delivered as text, speech, or a visual summary. And it requires designing for the transitions between surfaces - the moments when a user moves from one channel to another and expects the experience to follow them seamlessly. This is service design elevated to the agentic layer, where the service is not delivered by humans but by autonomous systems operating across every channel simultaneously.
X. Capability 8: Governance-Aware Experience Design
What it means: Embedding policy, compliance, and safeguards into the interaction model.
The final capability is perhaps the most unfamiliar to design teams, yet it is becoming the most urgent. Governance-aware experience design is the practice of embedding regulatory requirements, organisational policies, and ethical safeguards directly into the interaction model - not as constraints imposed after design is complete but as design materials that shape the experience from the beginning.
The Regulatory Reckoning essay documented the convergence of five jurisdictions toward mandatory agent identification for agentic commerce. The Consent Horizon essay explored the design of permission for systems that never stop acting. The Autonomous Integrity essay examined what happens when agents must act against their principal's wishes to comply with regulation or ethical constraints. Each of these concerns is a governance design problem.
For design leaders, governance-aware design means treating compliance not as a checklist but as a design material. It means designing experiences where regulatory requirements are met through the interaction itself - where consent is captured naturally, where audit trails are generated as a byproduct of good design, and where policy constraints shape agent behaviour in ways that feel helpful rather than restrictive. IBM's March 2026 blueprint for building an agentic trust framework, developed in partnership with Salesforce, argued that governance must move beyond static compliance to dynamic, runtime trust verification. Design teams must build the capability to embed this governance into the experience layer.
XI. The Capability Map
The following table maps each capability to its definition, its corresponding AXD practice framework, and the key design question it addresses.
| Capability | What It Means | AXD Framework | Key Design Question |
|---|---|---|---|
| Intent Architecture | Designing from goals and delegated outcomes, not just tasks and screens | Intent Architecture Framework; Outcome Specification | How does the human express what they want? |
| Orchestration Design | Shaping how agents, tools, humans, and services coordinate | Multi-Agent Orchestration Visibility Model | How do participants find, coordinate, and resolve conflicts? |
| Trust & Intervention Design | Deciding when the system should act, ask, explain, pause, or escalate | Trust Calibration Model; Interrupt Pattern Library | When should the agent break the silence? |
| Context & Memory Design | Determining what should persist, what should be temporary, and how context should be used safely | Agent Memory & Context Continuity Framework | What should the agent remember, and for how long? |
| Human Override Design | Creating meaningful review, correction, and stop controls | Failure Architecture Blueprint; Absent-State Audit | How does the human regain control? |
| Explainability by Design | Making reasoning, provenance, and next steps understandable enough to trust | Explainability & Observability Design Standard | Can the human understand why? |
| Multi-Surface Continuity | Designing coherent experiences across chat, voice, apps, notifications, and service layers | Onboarding & Capability Discovery Framework | Does the experience hold together across channels? |
| Governance-Aware Design | Embedding policy, compliance, and safeguards into the interaction model | Ethical Constraint & Value Alignment Architecture | Is compliance a design material or an afterthought? |
XII. Conclusion: The Design Leader's Mandate
The eight capabilities described in this essay are not aspirational. They are operational requirements. Organisations deploying agentic systems today - and after Shoptalk 2026, that includes most major retailers, financial institutions, and technology platforms - need design teams that can work in intent architecture, orchestration design, trust calibration, context management, human override, explainability, multi-surface continuity, and governance-aware design. These are not optional specialisations. They are the baseline competencies of the agentic age.
The mandate for design leaders is clear. First, assess your team's current capabilities against these eight dimensions. Where are the gaps? Which capabilities exist in nascent form, and which are entirely absent? Second, begin building. Not all eight capabilities need to be mature simultaneously. Start with the ones most relevant to your organisation's agentic ambitions - for most, that will be intent architecture and trust design. Third, connect to the emerging body of knowledge. The AXD Institute's twelve practice frameworks, sixty-five research essays, and sixty-four canonical terms provide the theoretical and practical foundation for each capability.
The design profession has reinvented itself before. It moved from print to digital. It moved from desktop to mobile. It moved from features to experiences. Each transition required new capabilities, new methods, and new ways of thinking about what design is for. The transition to agentic design is the most significant of these shifts, because it changes not just the medium but the fundamental relationship between the designer and the user. The user is no longer present. The interface is no longer visible. The experience is no longer synchronous. What remains is trust, delegation, and the design of autonomous relationships.
That is what these eight capabilities are for. They are the tools with which design leaders will shape the agentic age - or be shaped by it.
Sources
Salesforce, "8 Design Principles for the Agentic Enterprise," 23 March 2026.
Forbes, "Agentic AI Hits A Governance Wall: Are Product Leaders Ready for 2026 and Beyond?" 25 March 2026.
Harvard Business Review, "Create an Onboarding Plan for AI Agents," March 2026.
Harvard Business Review, "To Scale AI Agents Successfully, Think of Them Like Team Members," March 2026.
IBM/Salesforce, "Blueprint for Building an Agentic Trust Framework," March 2026.
UX Tigers, "Intent by Discovery: Designing the AI User Experience," March 2026.
UX Tigers, "The Capability Maturity Model for AI in Design," March 2026.
Codebridge, "Human in the Loop AI: Approval Loops for Regulated Workflows," March 2026.
Elementum AI, "Human-in-the-Loop vs Human-on-the-Loop: Enterprise Patterns," March 2026.
EW Solutions, "Explainable AI: The Executive Case for Transparent AI," March 2026.
SpiralScout, "AI Agent Governance: Architecture vs. Policy," March 2026.
Ada/NewtonX, "Agentic CX in 2026: What Most Enterprises Miss," March 2026.
