
The Observatory · Issue 062 · March 2026

Liability and the Agent

When Delegation Design Determines Where Liability Falls

By Tony Wood · 24 min read


When an AI agent purchases the wrong product, overpays for a service, or commits to a contract the human principal never intended - who is liable? The human who delegated the authority? The agent platform that executed the transaction? The merchant who accepted the order? This question - the liability question - is the unresolved foundation of agentic commerce. Every other design decision in the AXD framework - delegation design, trust architecture, consent architecture - has liability implications. And yet liability in agentic commerce remains largely unaddressed by law, regulation, and industry practice.

This essay argues that liability in agentic commerce is not primarily a legal problem. It is a design problem. The clarity of the delegation mandate, the robustness of the consent architecture, the transparency of the agent’s decision-making, and the quality of the failure architecture - these design decisions determine where liability falls when things go wrong. Organisations that treat liability as an afterthought - something for the legal team to sort out after the product launches - will discover that their design decisions have created liability exposure they cannot manage. Organisations that treat liability as a design constraint from the beginning will build systems where liability allocation is clear, fair, and enforceable.

I. The Liability Gap in Agentic Commerce

Traditional commerce has a clear liability chain. The customer makes a purchase decision. The merchant fulfils the order. If the product is defective, the merchant is liable under product liability law. If the merchant misrepresents the product, the merchant is liable under consumer protection law. If the customer changes their mind, the customer bears the cost (subject to return policies and cooling-off periods). The liability chain is linear: customer → merchant → supplier.

Agentic commerce introduces a new actor - the agent - and the liability chain becomes a triangle. The human principal delegates authority to the agent. The agent evaluates merchants and executes a purchase. The merchant fulfils the order. If something goes wrong, the liability question has three possible answers: the principal (who delegated the authority), the agent platform (which exercised the authority), or the merchant (which accepted the order). And the answer depends on where the failure occurred.

Consider four failure scenarios. First, the mandate violation: the agent purchases a product category the principal did not authorise. Liability falls on the agent platform, which failed to constrain the agent within its delegation mandate. Second, the merchant misrepresentation: the agent purchases a product based on the merchant’s data, but the product does not match the description. Liability falls on the merchant, as in traditional commerce. Third, the evaluation failure: the agent selects a merchant with poor reliability when better options were available. Liability is ambiguous - the agent platform may be liable for poor evaluation, but the principal may have contributed by setting constraints that limited the agent’s options. Fourth, the mandate ambiguity: the principal’s delegation was vague, the agent interpreted it reasonably, but the outcome was not what the principal intended. Liability is shared - the principal wrote the ambiguous mandate, but the agent platform should have flagged the ambiguity.
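The default allocations in these four scenarios can be made explicit in code. The sketch below is illustrative, not a legal determination: the scenario names and party labels are assumptions introduced here, and the shared-liability cases simply record that more than one party may bear responsibility.

```python
from enum import Enum

class Failure(Enum):
    MANDATE_VIOLATION = "mandate_violation"            # agent acted outside the mandate
    MERCHANT_MISREPRESENTATION = "misrepresentation"   # product did not match merchant data
    EVALUATION_FAILURE = "evaluation_failure"          # agent chose poorly among options
    MANDATE_AMBIGUITY = "mandate_ambiguity"            # vague mandate, reasonable reading

# Default liable parties for each scenario; shared sets mark ambiguous cases.
LIABILITY = {
    Failure.MANDATE_VIOLATION: {"agent_platform"},
    Failure.MERCHANT_MISREPRESENTATION: {"merchant"},
    Failure.EVALUATION_FAILURE: {"agent_platform", "principal"},
    Failure.MANDATE_AMBIGUITY: {"principal", "agent_platform"},
}

def liable_parties(failure: Failure) -> set[str]:
    return LIABILITY[failure]
```

Encoding the allocation as data rather than prose forces each ambiguous case to be confronted explicitly at design time.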

The liability gap is the space between these scenarios and the current legal framework, which was designed for the linear customer → merchant chain and does not account for the agent as an intermediary with decision-making authority.

II. Consumer Protection in Agent-Mediated Transactions

Consumer protection law assumes a human consumer. The UK Consumer Rights Act 2015 grants rights to “consumers” - defined as individuals acting for purposes outside their trade, business, craft, or profession. The EU Consumer Rights Directive provides similar protections. These frameworks assume that the consumer directly interacted with the merchant: they saw the product description, they agreed to the price, they clicked the purchase button.

When an agent mediates the transaction, these assumptions break down. The human principal did not see the product description - the agent processed it. The human did not agree to the specific price - the agent evaluated it against the mandate’s constraints. The human did not click a purchase button - the agent executed the transaction. Does the human still qualify as the “consumer” for the purposes of consumer protection law? Almost certainly yes - the human is the beneficial party to the transaction. But the protections may not function as intended.

Consider the right to information. Consumer protection law requires merchants to provide clear information about the product before the purchase. When the customer is a human, this means displaying the information on a product page. When the customer is an agent, the information must be provided in a machine-readable format that the agent can process. If the merchant provides accurate human-readable information but inaccurate machine-readable data, and the agent makes a purchase based on the inaccurate data, is the merchant liable for misleading the “consumer”? The law does not clearly answer this question.

Consider the right to cancel. The EU Consumer Rights Directive provides a 14-day cooling-off period for distance contracts. This right exists because the consumer could not physically examine the product before purchase. When an agent makes the purchase, the human principal could not examine the product either - but the agent may have processed detailed specifications, images, and reviews. Does the cooling-off period apply differently when the purchase decision was made by an agent with access to comprehensive product data? These questions will require legislative clarification or judicial interpretation.

III. Regulatory Frameworks for Autonomous Agents

The EU AI Act, which entered into force in 2024, is the most comprehensive regulatory framework for artificial intelligence. It classifies AI systems by risk level and imposes requirements proportional to the risk. Agentic commerce systems - AI agents that make autonomous purchasing decisions on behalf of humans - are likely to be classified as high-risk under the Act, particularly when they involve financial transactions or affect consumer rights.

The AI Act’s requirements for high-risk systems include: transparency (the user must be informed that they are interacting with an AI system), human oversight (the system must allow for human intervention), data governance (the system must use high-quality training data), and record-keeping (the system must log its decisions for audit). These requirements align well with AXD principles - transparency maps to consent architecture, human oversight maps to interrupt design, and record-keeping maps to accountability surfaces.

However, the AI Act was not designed specifically for agentic commerce. It does not address the unique liability questions that arise when an AI agent acts as a purchasing intermediary. It does not define the agent platform’s liability for agent decisions. It does not specify how consumer protection rights apply when the consumer’s agent, not the consumer, interacts with the merchant. Sector-specific regulation for agentic commerce is likely to emerge in the coming years, and the AXD Institute argues that this regulation should be informed by delegation design principles - because the quality of the delegation determines the quality of the agent’s decisions, and therefore the allocation of liability.

IV. Dispute Resolution When the Agent Acted

When a human customer has a problem with a purchase, they contact the merchant. They explain the issue, provide evidence, and negotiate a resolution. The dispute resolution process assumes a human customer who can articulate the problem, provide context, and exercise judgment about acceptable outcomes.

When the purchase was made by an agent, dispute resolution is more complex. The human principal may not have detailed knowledge of the transaction - they delegated the purchase to the agent and may not know which merchant was selected, what alternatives were considered, or what terms were agreed. The agent has this information, but the agent is a software system, not a party to the dispute in the legal sense.

Three dispute resolution models are emerging. Agent-mediated dispute resolution means the agent that made the purchase also handles the dispute. The agent contacts the merchant, presents the issue, and negotiates a resolution on behalf of the principal. This is efficient but raises questions about the agent’s objectivity - the agent may have an incentive to minimise the dispute rather than maximise the principal’s outcome. Platform-mediated dispute resolution means the agent platform provides a dispute resolution service that arbitrates between the principal, the agent, and the merchant. This provides a neutral third party but adds complexity and cost. Automated dispute resolution means disputes are resolved algorithmically based on pre-agreed rules - if the product does not match the specification, the merchant automatically issues a refund; if the delivery is late beyond the SLA, the merchant automatically provides compensation.

The AXD Institute argues that automated dispute resolution, governed by pre-agreed rules embedded in the checkout contract, is the most scalable and fair model. It removes the need for human intervention in routine disputes, ensures consistent outcomes, and creates clear expectations for all parties. Complex disputes that cannot be resolved algorithmically should escalate to platform-mediated resolution with human arbitration.
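An automated resolver of this kind is, at heart, a small rules engine over the checkout contract. The sketch below assumes a hypothetical contract with two pre-agreed rules (specification mismatch triggers a refund; lateness beyond the SLA grace period triggers compensation); the compensation formula is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Dispute:
    spec_matched: bool           # did the delivered product match the agreed specification?
    delivery_delay_hours: float  # hours past the agreed delivery window
    sla_grace_hours: float       # grace period allowed by the checkout contract
    price_paid: float

def resolve(d: Dispute) -> dict:
    """Apply pre-agreed rules from the checkout contract; escalate if none fire."""
    if not d.spec_matched:
        return {"outcome": "refund", "amount": d.price_paid}
    if d.delivery_delay_hours > d.sla_grace_hours:
        # Illustrative rule: 5% of the price per day late, capped at 50%.
        days_late = d.delivery_delay_hours / 24
        comp = min(0.05 * days_late, 0.5) * d.price_paid
        return {"outcome": "compensation", "amount": round(comp, 2)}
    return {"outcome": "escalate", "amount": 0.0}  # to platform-mediated resolution
```

The final branch matters as much as the rules: anything the algorithm cannot decide escalates to human arbitration rather than being forced into an automated outcome.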

V. Insurance and Risk Transfer Models

Insurance is the traditional mechanism for managing liability risk. Product liability insurance protects merchants against claims for defective products. Professional indemnity insurance protects service providers against claims for negligent advice. In agentic commerce, new insurance products are needed to cover the novel risks that arise from agent-mediated transactions.

Agent platform liability insurance would cover the agent platform against claims arising from agent decisions - mandate violations, evaluation failures, and transaction errors. The premium would be calibrated to the platform’s track record: platforms with better delegation design, stronger constraint enforcement, and lower error rates would pay lower premiums. This creates a market incentive for platforms to invest in AXD quality.

Delegation insurance would cover the human principal against losses arising from agent-mediated purchases. The premium would be calibrated to the delegation scope: broader mandates (more agent discretion) would carry higher premiums than narrower mandates (less agent discretion). This creates a market incentive for principals to write clear, well-constrained mandates - because clearer mandates reduce risk and therefore reduce insurance costs.
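The calibration logic can be sketched as a toy pricing function. Every coefficient here is invented for illustration; the point is only the shape of the incentive: premiums grow with mandate breadth and shrink when the mandate includes guardrails such as escalation triggers.

```python
def delegation_premium(base_rate: float, spend_ceiling: float,
                       category_count: int, has_escalation_triggers: bool) -> float:
    """Illustrative pricing: premium grows with mandate breadth, shrinks with guardrails."""
    breadth = 1.0 + 0.1 * (category_count - 1)   # more categories -> more agent discretion
    guardrail = 0.8 if has_escalation_triggers else 1.0  # discount for escalation triggers
    return round(base_rate * spend_ceiling * breadth * guardrail, 2)
```

Under this toy model, a narrow single-category mandate with escalation triggers prices well below a broad five-category mandate without them, which is exactly the market incentive the essay describes.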

Transaction guarantee insurance would cover specific transactions against failure - the product does not arrive, the product does not match the description, the merchant goes bankrupt before fulfilment. This is analogous to existing buyer protection schemes (PayPal Buyer Protection, credit card chargeback rights) but extended to cover the specific risks of agent-mediated transactions. The emergence of these insurance products will be a signal that agentic commerce has matured from an experimental technology to a mainstream commercial channel.

VI. Liability as a Design Problem

The central argument of this essay is that liability in agentic commerce is a design problem, not merely a legal problem. The design decisions made by agent platforms, merchants, and principals determine where liability falls when things go wrong. And these design decisions can be made well or badly.

Delegation design is liability design. A well-designed delegation mandate with clear boundaries, explicit constraints, defined escalation triggers, and unambiguous authority creates clear liability allocation. The agent is liable for actions within the mandate. The principal is liable for the mandate’s scope. The merchant is liable for its representations. A poorly designed mandate with vague boundaries and no escalation triggers creates liability ambiguity - and when things go wrong, the parties dispute who was responsible.
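A well-designed mandate is concrete enough to be a data structure. The sketch below is one possible shape, with field names assumed for illustration: explicit category boundaries, spend constraints, and named escalation triggers, plus a check that makes "within the mandate" a computable question rather than a dispute.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DelegationMandate:
    principal_id: str
    allowed_categories: frozenset[str]     # explicit boundary on what may be bought
    per_item_ceiling: float                # constraint: max price per purchase
    monthly_budget: float                  # constraint: total spend per month
    escalation_triggers: tuple[str, ...]   # e.g. "price_above_ceiling", "new_merchant"

def within_mandate(m: DelegationMandate, category: str, price: float,
                   spent_this_month: float) -> bool:
    """True only if the proposed purchase falls inside every mandate boundary."""
    return (category in m.allowed_categories
            and price <= m.per_item_ceiling
            and spent_this_month + price <= m.monthly_budget)
```

Because the mandate is explicit, the liability allocation follows mechanically: a purchase for which `within_mandate` returns false is the platform's failure to constrain the agent, not the principal's.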

Consent architecture is liability architecture. A well-designed consent architecture ensures that the principal has explicitly authorised the agent’s actions - either through a specific approval (staged checkout) or through a standing mandate that clearly covers the transaction. A poorly designed consent architecture allows the agent to act without clear authorisation, creating liability exposure for the agent platform.

Accountability surfaces are liability evidence. A well-designed accountability surface logs the agent’s decisions, the data it processed, the alternatives it considered, and the reasoning behind its choice. When a dispute arises, this log provides the evidence needed to determine where the failure occurred and who is liable. A system without accountability surfaces creates a black box - and when things go wrong, no one can determine what happened or why.
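A minimal sketch of such a record, with a field layout assumed here for illustration: each decision captures the chosen offer, the rejected alternatives, and the reasoning, and carries a content hash so tampering with the log is detectable after the fact.

```python
import hashlib
import json
from datetime import datetime, timezone

def decision_record(agent_id: str, mandate_id: str, chosen: dict,
                    alternatives: list[dict], reasoning: str) -> dict:
    """Build one append-only record of an agent decision, hashed for tamper evidence."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "mandate_id": mandate_id,
        "chosen_offer": chosen,        # the merchant/offer the agent selected
        "alternatives": alternatives,  # offers considered and rejected
        "reasoning": reasoning,        # why the chosen offer won
    }
    # Digest over the canonical JSON form; recomputing it later detects tampering.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record
```

In a dispute, re-deriving the digest over the stored fields verifies that the record presented as evidence is the record that was written at decision time.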

The organisations that build agentic commerce systems with liability as a design constraint from the beginning will create systems where liability allocation is clear, disputes are resolvable, and trust is maintainable. The organisations that treat liability as an afterthought will discover that their design decisions have created exposure they cannot manage, disputes they cannot resolve, and trust they cannot recover. Liability is not the legal team’s problem. It is the design team’s problem. And it must be addressed at the point of design, not the point of failure.

Frequently Asked Questions