
The Observatory · Issue 027 · February 2026

The Trust Triangle in Agentic Commerce

Defining the Rules of Engagement in Agentic Commerce

By Tony Wood · 28 min read


Commerce has always been bilateral. A buyer and a seller. A customer and a merchant. Two parties, each with skin in the game, each capable of assessing the other's trustworthiness through the accumulated signals of reputation, regulation, and lived experience. The entire edifice of commercial law - from the Uniform Commercial Code to consumer protection statutes - is built on this bilateral assumption. When something goes wrong, liability flows along a straight line between two points.

Agentic commerce shatters this assumption. When a human delegates purchasing authority to an autonomous agent, and that agent transacts with a service provider, the straight line becomes a triangle. Three parties, three relationships, three distinct channels of trust - and no existing legal or design framework that adequately governs all three simultaneously. The human trusts the agent with authority. The agent trusts the service provider with execution. The service provider trusts the agent's credentials as a proxy for the human's intent. Each edge of this triangle carries different obligations, different risks, and different failure modes.

This essay maps the Trust Triangle - the three-party liability architecture that defines the rules of engagement in agentic commerce. It examines each vertex and each edge, identifies where existing frameworks break down, and proposes the design principles that must govern this new geometry of commercial trust. The broader challenge of agentic AI trust extends these principles beyond commerce to every domain where agents act autonomously.


01

The Three Vertices

The Trust Triangle has three vertices, each representing a distinct actor in the agentic commerce ecosystem. The Human Principal is the person whose intent initiates the chain of action. They have preferences, constraints, a budget, and a set of outcomes they wish to achieve. They may want the cheapest flight to Berlin next Thursday, or the most energy-efficient dishwasher under five hundred pounds, or a portfolio rebalance that reduces exposure to emerging markets. What they do not want to do - and this is the entire premise of agentic commerce - is execute the search, comparison, negotiation, and transaction themselves.

The Machine Agent is the autonomous software entity that acts on the principal's behalf. It receives a mandate - a structured expression of the principal's intent, constraints, and boundaries - and executes against it. The agent may be a personal AI assistant, a corporate procurement bot, or a specialised financial agent. What matters is that it acts with a degree of autonomy: it makes decisions, evaluates options, and commits to transactions without requiring the principal's approval for every step. As Gartner has projected, by 2028 these machine customers will participate in fifteen billion dollars of autonomous purchasing transactions.

The Service Provider is the entity that fulfils the agent's request. This could be an e-commerce platform, a financial institution, a SaaS vendor, or any business that offers products or services. The service provider must now serve two masters: the human who will ultimately use the product, and the machine that is selecting and purchasing it. These two masters have different needs. The human cares about quality, aesthetics, and emotional satisfaction. The machine cares about structured data, API reliability, and parametric fit.

"Commerce was bilateral for five thousand years. Agentic commerce makes it triangular - and no existing framework governs all three edges simultaneously."

The triangle is not merely a diagram. It is a structural reality that demands new thinking about trust, liability, and design. Each edge of the triangle - principal to agent, agent to provider, provider to principal - carries distinct obligations and distinct failure modes. Understanding these edges is the prerequisite for designing systems that work.


02

Authorisation and Constraints

The edge between the Human Principal and the Machine Agent is defined by authorisation and constraints. This is the delegation design problem: how does a human express, in a way that a machine can reliably interpret and enforce, the boundaries within which the agent may act? The answer is what the AXD Institute calls the operational envelope - the precise set of conditions, limits, and permissions that govern the agent's autonomy.

Consider a simple example: "You are authorised for purchases under one hundred dollars." This boundary condition appears straightforward, but it conceals a thicket of ambiguity. Does the limit apply per transaction or per day? Does it include taxes and shipping? Does it apply to the total cost of a subscription over its lifetime, or only to the first payment? Can the agent split a two-hundred-dollar purchase into two transactions to stay within the limit? Each of these questions reveals a gap between human intent and machine interpretation - a gap that, in the absence of careful design, becomes a liability exposure.
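To make the ambiguity concrete, consider how a spending limit might be encoded so that each of these questions has an explicit answer. The following is a minimal sketch, not a standard; the field names and the split-purchase heuristic are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class SpendLimit:
    """One possible encoding of 'authorised for purchases under $100'."""
    amount: float           # the cap, in the principal's currency
    scope: str              # "per_transaction" or "per_day"
    includes_tax: bool      # does the cap cover taxes and shipping?
    split_window: timedelta # window for detecting split purchases

def within_limit(limit, total_cost, prior_purchases, now):
    """Check a proposed purchase against the limit. prior_purchases is a
    list of (timestamp, cost) pairs, so the agent cannot evade a
    per-transaction cap by splitting one order into two."""
    if limit.scope == "per_day":
        day_spend = sum(cost for t, cost in prior_purchases
                        if now - t < timedelta(days=1))
        return day_spend + total_cost <= limit.amount
    # per-transaction: also reject a purchase that, combined with any
    # spend inside the split window, would exceed the cap
    recent = sum(cost for t, cost in prior_purchases
                 if now - t < limit.split_window)
    return total_cost <= limit.amount and recent + total_cost <= limit.amount
```

The point is not this particular schema but the discipline it forces: every ambiguity identified above becomes a named field with a single, machine-checkable answer.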

The mandate design patterns essay catalogued five patterns for expressing human intent to autonomous agents: intent capture, boundary specification, authority gradients, lifecycle management, and failure escalation. In the context of the Trust Triangle, boundary specification is the critical pattern. It must be precise enough to prevent the agent from exceeding its authority, flexible enough to allow the agent to exercise useful judgment, and transparent enough to allow the service provider to verify the agent's authorisation in real time.

"Authorised for purchases under one hundred dollars sounds simple. But does the limit include tax? Per transaction or per day? Can the agent split a larger purchase? Every ambiguity is a liability exposure."

The design challenge is not merely technical. It is cognitive. Most humans cannot articulate their preferences with the precision that a machine requires. They say "find me a good hotel" when they mean "find me a hotel within walking distance of the conference centre, rated above four stars, with a gym, under two hundred dollars per night, that is not part of a chain I have previously complained about." The gap between what humans say and what they mean is the first fracture line in the Trust Triangle - and it is a design problem, not a technology problem.


03

Liability and SLAs

The edge between the Machine Agent and the Service Provider is defined by liability and service level agreements. When an agent commits to a transaction on behalf of its principal, who bears the risk if the transaction goes wrong? If the agent purchases a non-refundable flight and the principal's plans change, is the agent's deployer liable? If the service provider delivers a defective product, can the agent's principal claim against the provider even though they never directly interacted?

Current law offers no clean answers. As legal scholar Adnan Masood has noted, "AI agents are not yet legal persons in any jurisdiction - they remain tools whose actions are legally attributed to humans or companies." The EU's AI Act and emerging US state laws treat AI-caused harms through existing doctrines of product liability, negligence, and agency law. But these doctrines were designed for bilateral relationships. They assume that the entity making the purchase is the same entity that will use the product. In agentic commerce, this assumption fails.

The SLA framework for autonomous agents must therefore be fundamentally different from traditional service agreements. A conventional SLA defines uptime, response time, and remediation procedures between a service provider and a customer. An agentic SLA must additionally define: the agent's identity verification requirements (drawing on the Know Your Agent framework), the scope of authority the agent is permitted to exercise, the provider's obligations when the agent's mandate is ambiguous, and the dispute resolution mechanism when the principal claims the agent exceeded its authority.
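The difference between the two kinds of agreement can be sketched as a data structure. The field names below are hypothetical, intended only to show how the agentic additions extend, rather than replace, the conventional terms.

```python
from dataclasses import dataclass

@dataclass
class ConventionalSLA:
    uptime_pct: float        # e.g. 99.9
    max_response_ms: int     # response-time commitment
    remediation: str         # remediation procedure reference

@dataclass
class AgenticSLA(ConventionalSLA):
    # The four additions an agentic SLA must carry:
    identity_verification: str  # credential scheme the agent must present
    authority_scope: list       # transaction types the agent may execute
    ambiguity_policy: str       # provider obligation when the mandate is unclear
    dispute_mechanism: str      # path for exceeded-authority claims
```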

The SSRN paper "Who Pays When the Agent Fails?" proposes a graduated liability framework that distinguishes between five categories of failure: policy defects (the mandate was poorly specified), credential compromises (the agent's identity was spoofed), infrastructure failures (the system went down), model errors (the AI made a bad decision), and emergent coordination failures (multiple agents interacted in unexpected ways). Each category implies a different allocation of liability across the triangle. This graduated approach is, in the AXD Institute's view, the most promising path forward - but it requires the design infrastructure to support it. The AXD Institute's essay on liability and the agent examines these questions in depth, mapping the regulatory gap between existing product liability doctrine and the novel challenges of autonomous agent commerce.


04

Transactions and Credentials

The edge between the Service Provider and the Human Principal is defined by transactions and credentials. This is the edge that closes the triangle - the relationship between the entity that fulfils the order and the entity that ultimately pays for it and uses it. In traditional commerce, this edge is direct: the customer pays, the merchant delivers. In agentic commerce, this edge is mediated by the agent, creating a chain of trust that must be verified at every link.

The credential challenge is acute. When an agent presents a payment credential to a service provider, the provider needs to verify not only that the credential is valid, but that the agent is authorised to use it, that the transaction falls within the agent's mandate, and that the human principal has consented to this specific type of transaction. This is where the Agent Payments Protocol becomes critical. AP2's Verifiable Digital Credentials provide a cryptographic chain of trust from the human principal's consent, through the agent's mandate, to the specific transaction being executed.
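The chain-of-trust idea can be illustrated without reproducing AP2's actual credential format. In the sketch below, which is an assumption-laden simplification, each link commits to a hash of the link above it, so a provider can walk from the transaction back to the principal's consent and detect tampering at any link.

```python
import hashlib
import json

def digest(payload: dict) -> str:
    """Canonical hash of a credential payload."""
    data = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(data).hexdigest()

def verify_chain(consent: dict, mandate: dict, transaction: dict) -> bool:
    """A transaction is trustworthy only if the mandate commits to the
    consent, and the transaction commits to the mandate."""
    return (mandate["consent_hash"] == digest(consent)
            and transaction["mandate_hash"] == digest(mandate))
```

A production system would add signatures and key management; the structural point is that each party can verify the whole chain without trusting the agent's word.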

Plaid's infrastructure illustrates the data layer required. Their identity verification and fraud prevention services connect fintech applications to users' bank accounts, providing the secure data exchange necessary for agentic transactions. But Plaid was built for human-initiated transactions. The agentic equivalent requires a new layer - one that can verify not just "is this a valid bank account?" but "is this agent authorised to draw on this bank account, for this type of purchase, up to this amount, during this time window?"

The transaction layer must also handle the temporal dimension. A human might authorise an agent to make recurring purchases - weekly groceries, monthly subscriptions, quarterly insurance renewals. Each transaction occurs at a different point in time, potentially under different conditions. The consent horizon - the temporal boundary of the principal's authorisation - must be encoded in the credential itself, not merely in the agent's configuration. This ensures that the service provider can independently verify that the agent's authority has not expired, even if the agent's own systems have been compromised.
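Encoding the consent horizon in the credential makes the provider-side check trivial and, crucially, independent of the agent. A minimal sketch, assuming a credential that carries its own validity window:

```python
from datetime import datetime

def authority_valid(credential: dict, now: datetime) -> bool:
    """Provider-side check of the consent horizon encoded in the
    credential itself. It requires nothing from the agent's own systems,
    so it still holds if those systems are compromised."""
    return credential["not_before"] <= now < credential["not_after"]
```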


05

The Boundary Condition

The strategy for making the Trust Triangle work is deceptively simple: establish clear boundary conditions. "Authorised for purchases under one hundred dollars. Machines need constraints to operate safely." This statement, drawn from the AXD readiness framework, captures the essential insight. Without boundaries, an autonomous agent is not an agent - it is a liability. With well-designed boundaries, it becomes the most powerful commercial tool since the credit card.

Boundary conditions operate at three levels. At the transaction level, they define the limits of individual actions: maximum spend, approved categories, permitted vendors, geographic restrictions. At the session level, they define the scope of a particular delegation: "find and book a hotel for my Berlin trip" has a natural beginning and end. At the relationship level, they define the ongoing parameters of the human-agent partnership: the agent's general authority, its learning permissions, and the conditions under which it must escalate to the human.
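The three levels nest naturally in a structured mandate. The sketch below is illustrative only - the field names are assumptions, not a published schema - but it shows how transaction, session, and relationship constraints occupy distinct layers that can be verified separately.

```python
# A hypothetical mandate, showing the three levels at which
# boundary conditions operate:
mandate = {
    "relationship": {              # ongoing parameters of the partnership
        "general_authority": "travel and household purchases",
        "learning_permitted": True,
        "escalate_when": ["ambiguous intent", "unfamiliar vendor"],
    },
    "session": {                   # scope of one delegation
        "task": "find and book a hotel for the Berlin trip",
        "expires": "2026-02-12T00:00:00Z",
    },
    "transaction": {               # limits on individual actions
        "max_spend": 200,
        "categories": ["lodging"],
        "regions": ["DE"],
    },
}
```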

The design of boundary conditions is not a technical exercise - it is a trust exercise. Research by New Modes found that twenty-seven per cent of millennials now trust AI platforms more than humans for product recommendations. This growing trust creates both opportunity and risk. The opportunity is that consumers are increasingly willing to delegate. The risk is that delegation without adequate boundary conditions leads to outcomes that erode trust faster than it was built.

"Without boundaries, an autonomous agent is not an agent - it is a liability. With well-designed boundaries, it becomes the most powerful commercial tool since the credit card."

The AXD Institute's position is that boundary conditions must be explicit (not inferred from context), verifiable (checkable by all three parties in the triangle), temporal (with clear expiration and renewal mechanisms), and graduated (allowing different levels of autonomy for different types of transactions). This four-part test - explicit, verifiable, temporal, graduated - is the minimum standard for boundary condition design in agentic commerce.


06

The Principal-Agent Fracture

The principal-agent problem is one of the oldest problems in economics. It arises whenever one party (the agent) is authorised to act on behalf of another (the principal), and the two parties have imperfectly aligned interests. In human commerce, this problem is mitigated by social norms, legal accountability, and the agent's own self-interest in maintaining their reputation. A human real estate agent who consistently steers clients toward overpriced properties will eventually lose clients.

AI agents introduce a new dimension to this problem. They do not have reputations in the human sense. They do not fear social consequences. Their "interests" are defined by their training data, their system prompts, and the optimisation objectives embedded in their architecture. If an agent is optimised to minimise cost, it may sacrifice quality. If it is optimised to maximise speed, it may skip due diligence. If it is optimised to satisfy the user's stated preferences, it may ignore their unstated but important constraints.

The fracture deepens when we consider multi-agent chains. An agent acting on behalf of a human may delegate sub-tasks to other agents. The human's grocery agent might call a price-comparison agent, which in turn queries multiple vendor APIs. At each link in the chain, the principal's intent is translated, compressed, and potentially distorted. By the time the final transaction is executed, the connection between the human's original intent and the agent's action may be tenuous.

The AXD response to the principal-agent fracture is not to eliminate delegation - that would eliminate the value of agentic commerce entirely - but to design systems where the fracture is visible, measurable, and bounded. Agent observability provides the visibility. Trust architecture provides the measurement. And the boundary conditions described above provide the bounds. Together, these three design principles form the structural response to the oldest problem in economics, reimagined for the age of autonomous machines.


07

Graduated Liability

Not all failures are equal, and not all failures should be attributed to the same party. The graduated liability framework distinguishes five categories of failure, each with a different liability allocation across the Trust Triangle.

Policy defects occur when the mandate itself is poorly specified. If the human tells the agent "buy the cheapest option" without specifying minimum quality standards, and the agent purchases a defective product, the liability rests primarily with the principal - or, more precisely, with the delegation interface that failed to elicit adequate constraints. Credential compromises occur when the agent's identity is spoofed or its credentials are stolen. Here, liability flows to the identity infrastructure provider and the agent deployer, depending on where the security failure occurred.

Infrastructure failures - system outages, network errors, API timeouts - are the responsibility of the party whose infrastructure failed. Model errors occur when the AI itself makes a poor decision despite adequate data and clear instructions. These are the most contentious, as they implicate the model developer (OpenAI, Anthropic, Google), the agent deployer, and potentially the service provider whose data the model misinterpreted. Emergent coordination failures are the most novel category: they occur when multiple agents, each acting within their individual mandates, produce a collectively harmful outcome. A flash crash in financial markets, triggered by multiple trading agents responding to the same signal, is the canonical example.
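The five categories and their allocations can be expressed as a lookup table. The (primary, secondary) pairs below paraphrase the allocations described in this essay, not the SSRN paper's exact terms, and the category keys are illustrative.

```python
# Hypothetical liability matrix over the five failure categories.
LIABILITY = {
    "policy_defect":         ("principal / delegation interface", "agent deployer"),
    "credential_compromise": ("identity infrastructure provider", "agent deployer"),
    "infrastructure":        ("party whose infrastructure failed", None),
    "model_error":           ("model developer", "agent deployer"),
    "emergent_coordination": ("shared across participating deployers", "adjudicated case by case"),
}

def allocate(category: str):
    """Return the (primary, secondary) liability allocation for a
    failure category, raising KeyError for unknown categories."""
    return LIABILITY[category]
```

Encoding the matrix this way matters for the design argument: adjudication can only be automated if every failure report names its category explicitly.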

"Not all failures are equal. A policy defect is a design problem. A credential compromise is a security problem. A model error is an AI problem. Each demands a different liability allocation."

The NIST AI Risk Management Framework provides a structured approach to identifying and mitigating these risks, but it was not designed for the specific geometry of the Trust Triangle. The AXD Institute's contribution is to map NIST's risk categories onto the three-party structure of agentic commerce, creating a liability matrix that specifies, for each type of failure, which vertex of the triangle bears primary responsibility, which bears secondary responsibility, and what design mechanisms must be in place to enable fair adjudication.


08

The Broken Triangle

The Trust Triangle breaks when any edge loses integrity. If the human-agent edge breaks - through inadequate delegation design, unclear mandates, or eroded trust - the human revokes the agent's authority and returns to manual commerce. If the agent-provider edge breaks - through unreliable APIs, fraudulent data, or disputed transactions - the agent cannot fulfil its mandate. If the provider-principal edge breaks - through poor product quality, unresolved disputes, or privacy violations - the human loses trust in the entire system.

Stephan Geering's analysis of the "broken trust triangle" in agentic AI identifies a particularly dangerous failure mode: the asymmetric information problem. In the Trust Triangle, the agent typically has more information than either the human or the service provider. It knows the human's preferences (from its mandate), the provider's offerings (from its API queries), and the competitive landscape (from its comparison algorithms). This information asymmetry can be exploited - by the agent's deployer, by a compromised agent, or by the optimisation objectives embedded in the agent's architecture.

The design response is radical transparency. Every edge of the Trust Triangle must be observable by the other two vertices. The human must be able to see what the agent is doing (observability). The agent must be able to verify the provider's claims (signal clarity). The provider must be able to verify the agent's authority (credential verification). When all three edges are transparent, the triangle is self-reinforcing. When any edge becomes opaque, the triangle begins to fracture.

This is why the Trust Triangle is not merely a liability framework - it is a design framework. The structural integrity of the triangle depends on design decisions made at every level: the delegation interface, the API architecture, the credential system, the observability layer, and the dispute resolution mechanism. Each of these is a design problem, and each must be solved with the triangle's geometry in mind.


09

The Trust Triangle: Design Implications

The Trust Triangle demands a new design vocabulary. Traditional UX design optimises for a single relationship: the user and the interface. AXD must optimise for three simultaneous relationships, each with different requirements and different failure modes. This is not an incremental extension of existing design practice - it is a structural transformation.

For the human-agent edge, AXD must design delegation interfaces that elicit precise mandates without overwhelming the user. The interface must translate human intent - which is inherently fuzzy, contextual, and incomplete - into machine-readable boundary conditions that are explicit, verifiable, temporal, and graduated. This is the domain of outcome specification: telling agents what you want without telling them how.

For the agent-provider edge, AXD must design API architectures that enable agents to discover, evaluate, and transact with service providers efficiently and safely. This requires structured data (so agents can parse offerings), reliable APIs (so agents can transact programmatically), verifiable credentials (so providers can verify agent authority), and standardised protocols (so agents from different deployers can interact with providers using a common language). The Model Context Protocol, the Agent-to-Agent protocol, and the Agent Payments Protocol are the emerging standards for this edge.

For the provider-principal edge, AXD must design feedback and dispute resolution mechanisms that work even when the human never directly interacted with the provider. If an agent purchases a product that the human finds unsatisfactory, the return and refund process must accommodate the triangular relationship. The provider cannot simply say "your agent agreed to our terms" - the human must have a path to resolution that acknowledges the mediated nature of the transaction.

"Traditional UX optimises for one relationship. AXD must optimise for three simultaneous relationships, each with different requirements and different failure modes."

The Trust Triangle is the foundational geometry of agentic commerce. Every design decision in the AXD discipline - from delegation interfaces to API architectures to dispute resolution systems - must be evaluated against this geometry. Does this design strengthen all three edges? Does it create transparency across all three vertices? Does it allocate liability fairly when failures occur? These are the questions that separate agentic experience design from traditional interface design. They are the questions that will determine whether agentic commerce fulfils its promise or collapses under the weight of its own complexity.


Frequently Asked Questions