AXD Concept
Agent Legibility
The design requirement that autonomous AI agents must be readable - to humans, to institutions, and to other agents.
Definition
Agent legibility is the design principle that an autonomous AI agent's identity, capabilities, constraints, actions, and reasoning must be machine-readable and human-interpretable. It is the prerequisite for trust in agentic systems: an agent that cannot be read cannot be trusted, and an agent that cannot be trusted cannot be delegated to. Agent legibility operates at three levels - identity legibility (who is this agent and who authorised it?), action legibility (what has it done and why?), and capability legibility (what can it do and what are its limitations?). In the context of Agentic Experience Design (AXD), legibility is not a feature to be added after the system works; it is a structural requirement that must be designed into the agent from the beginning.
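The three levels above can be sketched as a machine-readable agent manifest. Everything here is illustrative - the field names and the `is_legible` check are assumptions for the sake of the example, not part of any standard:

```python
# Hypothetical agent manifest illustrating the three levels of legibility.
# All field names are illustrative, not drawn from any real schema.
AGENT_MANIFEST = {
    "identity": {                      # identity legibility: who is this agent, who authorised it?
        "agent_id": "agent-7f3a",
        "operator": "example-org",
        "delegated_by": "user-1292",
    },
    "capabilities": {                  # capability legibility: what can it do, what are its limits?
        "allowed_actions": ["search", "draft_email"],
        "constraints": {"max_spend_usd": 0},
        "known_limitations": ["no real-time data"],
    },
    "action_log": [                    # action legibility: what has it done and why?
        {"action": "search", "reason": "user asked for flight options"},
    ],
}

def is_legible(manifest: dict) -> bool:
    """An agent is legible only if all three levels are present and populated."""
    required = ("identity", "capabilities", "action_log")
    return all(key in manifest and manifest[key] for key in required)
```

The point of the sketch is structural: an agent missing any one of the three levels - say, a populated action log but no identity block - fails the legibility check outright, rather than being "partially" trusted.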
What Agent Legibility Means in Agentic AI
The Three Dimensions of Agent Legibility
Machine Legibility: When Agents Must Read Other Agents
Agent Legibility and Trust Architecture
Designing for Agent Legibility in Practice
Frequently Asked Questions
What is agent legibility?
Agent legibility is the design requirement that an autonomous AI agent's identity, actions, reasoning, capabilities, and limitations must be readable by humans, institutions, and other agents. It operates at three levels: identity legibility (who authorised this agent?), action legibility (what has it done and why?), and capability legibility (what can and can't it do?). Legibility is the prerequisite for trust in agentic systems.
How is agent legibility different from AI explainability?
AI explainability focuses on why a model made a specific prediction or decision. Agent legibility is broader - it encompasses the entire agent: its identity and authority chain, its complete action history, its operational constraints, and its known limitations. An agent can be explainable (its model is interpretable) without being legible (its overall behaviour and authority chain remain unreadable). Legibility is the design requirement; explainability is one component of it.
What is machine legibility in agentic AI?
Machine legibility is the requirement that agents must be readable by other agents, not just by humans. In multi-agent systems, agents must verify each other's credentials, evaluate track records, and understand capabilities through structured, machine-readable formats. Machine legibility enables agent-to-agent trust without human intermediation. Infrastructure like Agent Registries and the Universal Commerce Protocol (UCP) address machine legibility at scale.
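A minimal sketch of how one agent might read another's structured credential before delegating to it. The schema and the three checks are assumptions for illustration - they are not the UCP or any real registry API:

```python
def verify_peer(credential: dict, trusted_registries: set) -> bool:
    """Decide whether a peer agent's credential is legible enough to trust.

    All fields are hypothetical; real registries define their own schemas.
    """
    # Identity legibility: the credential must name an issuer we recognise.
    if credential.get("issuer") not in trusted_registries:
        return False
    # Capability legibility: the peer must declare what it can do.
    if not credential.get("declared_capabilities"):
        return False
    # Track record: require a minimum number of audited past actions.
    return credential.get("audited_actions", 0) >= 10

peer_credential = {
    "issuer": "agent-registry.example",
    "declared_capabilities": ["payments"],
    "audited_actions": 42,
}
```

Because every check reads a structured field rather than free text, the whole evaluation can run agent-to-agent with no human in the loop - which is exactly what machine legibility is meant to enable.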
Why does agent legibility matter for trust?
Trust cannot be formed in an agent whose actions are opaque. Agent legibility enables the three components of trust architecture: competence trust (believing the agent can do what it claims), integrity trust (believing it will stay within scope), and benevolence trust (believing it acts in the human's interest). Without legibility, trust calibration is impossible - humans will either over-trust or under-trust their agents.
How do you design for agent legibility?
Design for legibility at three layers: identity (verifiable credentials, delegation chains, Agent Registry registration), action (audit trails, decision logs, reasoning traces, post-hoc explanations), and capability (operational envelope documentation, constraint specifications, limitation disclosures). The AXD Practice provides frameworks for each: the Explainability Standard, the Onboarding Framework, and the Multi-Agent Orchestration Visibility Model.
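The capability layer above - operational envelope, constraints, limitation disclosures - can be made concrete as a machine-checkable envelope. This is a sketch under assumed field names, not an AXD-specified format:

```python
# Hypothetical operational envelope: the agent's declared scope,
# rate limit, and the actions that always need human sign-off.
ENVELOPE = {
    "allowed_actions": {"read_calendar", "draft_email"},
    "max_actions_per_hour": 20,
    "requires_approval": {"send_email"},
}

def check_action(action: str, actions_this_hour: int, envelope: dict) -> str:
    """Classify a proposed action against the agent's declared envelope."""
    if action in envelope["requires_approval"]:
        return "needs_human_approval"
    if action not in envelope["allowed_actions"]:
        return "out_of_scope"
    if actions_this_hour >= envelope["max_actions_per_hour"]:
        return "rate_limited"
    return "allowed"
```

Publishing the envelope alongside the agent gives humans, institutions, and other agents the same answer to "what can it do and what can't it do?" - the capability legibility the answer above calls for.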