
The Observatory · Issue 052 · March 2026

Non-Human Economic Actors

When Software Becomes a Market Participant: Identity, Authority, and Trust in the Age of Autonomous Economic Agency

By Tony Wood · 28 min read


In March 2026, Santander and Mastercard completed Europe’s first live end-to-end AI-executed payment. An autonomous system, acting with delegated authority, initiated and completed a regulated financial transaction without direct human involvement at the point of execution. The significance of this event extends far beyond the technology. It marks the moment when software crossed a threshold that economics, law, and design have been approaching but have not yet fully confronted: the emergence of non-human economic actors.

A non-human economic actor is not a tool. It is not a calculator that computes a price, a database that stores a transaction, or an interface that presents options to a human decision-maker. A non-human economic actor is an autonomous system that participates in economic activity - transacting, negotiating, purchasing, committing resources, evaluating trade-offs, and making decisions that affect market outcomes. It is the machine customer that buys on behalf of a consumer. It is the procurement agent that commits millions in organisational spending. It is the algorithmic negotiator that agrees terms with another agent. It is the autonomous market-maker that sets prices based on real-time demand signals.

This essay examines what it means for software to become a market participant - not metaphorically, but operationally. It traces the five dimensions of non-human economic agency, the design challenges each dimension creates, and why Agentic Experience Design must expand its framework to address the most consequential category of autonomous action: economic participation.

01 - The Threshold

There is a distinction in economics between a tool and an actor. A hammer is a tool - it extends human capability but has no agency of its own. A corporation is an actor - it participates in markets, enters contracts, bears liability, and makes decisions that affect other participants. For centuries, economic theory has assumed that all actors are either natural persons or legal entities created by natural persons. Software was always a tool.

That assumption is breaking. Not because AI has achieved consciousness or legal personhood - it has achieved neither - but because AI systems are now performing the functions of economic actors. They are making purchasing decisions. They are negotiating contract terms. They are evaluating suppliers and committing resources. They are setting prices and responding to market signals. They are, in every functional sense, participating in markets.

Gartner projects that by 2028, AI agents will outnumber human sellers ten to one and command $15 trillion in B2B purchasing decisions. Bain & Company estimates that AI agents could account for 15–25 percent of U.S. e-commerce sales by 2030 - a market worth $300–$500 billion. The World Economic Forum projects that agentic AI could deliver $3 trillion in corporate productivity gains globally over the next decade. These are not projections about better tools. They are projections about new participants entering markets at scale.

The threshold is not technological. It is ontological. When software moves from executing instructions to making economic decisions - when it moves from processing transactions to initiating them - it crosses from tool to actor. And that crossing demands a design response that neither traditional UX nor traditional economics has prepared for.

02 - What a Non-Human Economic Actor Actually Is

The term “non-human economic actor” (NHEA) captures a category broader than “machine customer,” more precise than “AI agent,” and more operationally grounded than “artificial economic agent.” An NHEA is any autonomous system that participates in economic activity with a degree of independent decision-making that affects market outcomes.

The definition has four necessary conditions. First, economic participation: the system must engage in activities that have economic consequences - purchasing, selling, negotiating, pricing, allocating resources, or committing capital. Second, autonomous decision-making: the system must exercise judgement within its delegated scope, not merely execute predetermined rules. Third, delegated authority: the system must act on behalf of a principal - a human or organisation - who has granted it the authority to transact. Fourth, market impact: the system’s decisions must affect other market participants, whether through price signals, resource allocation, or contractual commitments.
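
To make the four conditions concrete, the sketch below expresses them as a single predicate over a candidate system. It is illustrative only - the field names are hypothetical and do not correspond to any published standard or taxonomy.

```python
from dataclasses import dataclass

@dataclass
class CandidateSystem:
    """Hypothetical description of a software system under evaluation."""
    has_economic_consequences: bool   # purchases, sells, prices, or commits capital
    exercises_judgement: bool         # decides within a delegated scope, not just fixed rules
    acts_for_principal: bool          # holds authority delegated by a human or organisation
    affects_other_participants: bool  # moves prices, allocations, or contractual commitments

def is_nhea(system: CandidateSystem) -> bool:
    """All four conditions are necessary; no single one is sufficient."""
    return all([
        system.has_economic_consequences,
        system.exercises_judgement,
        system.acts_for_principal,
        system.affects_other_participants,
    ])

# A recommendation engine suggests but never commits resources, so it fails the test.
recommender = CandidateSystem(False, True, True, False)
assert not is_nhea(recommender)
```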

This definition deliberately excludes simple automation. A thermostat that adjusts temperature is not an NHEA. A recommendation engine that suggests products is not an NHEA. A chatbot that answers customer questions is not an NHEA. These systems may be sophisticated, but they do not participate in economic activity with autonomous decision-making authority. The NHEA is the system that decides to buy, negotiates the price, and commits the resources - all within the scope of authority its principal has delegated.

The California Management Review’s analysis of the “non-human enterprise” identifies a taxonomy of agent types that maps onto this definition: goal-based agents that pursue specific economic objectives, utility-based agents that optimise for the best possible outcomes, collaborative agents that coordinate with other agents or humans, and hierarchical agents that operate in layered authority structures. Each type represents a different mode of non-human economic participation - and each creates distinct design challenges for trust architecture and delegation design.

03 - Five Dimensions of Non-Human Economic Agency

Non-human economic agency is not a single phenomenon. It manifests across five interconnected dimensions, each of which creates distinct design challenges and demands specific responses from the AXD framework.

Dimension One: Identity and Authentication. How does a non-human economic actor establish its identity in a market? When a human walks into a bank, identity is established through documents, biometrics, and institutional records. When an AI agent initiates a transaction, identity must be established through different mechanisms: cryptographic credentials, provenance chains, principal attestations, and verifiable delegation records. The identity problem is not merely technical - it is foundational to every other dimension. Without reliable identity, there can be no accountability, no trust calibration, and no governance.

Dimension Two: Delegation and Authority. What scope of economic authority has been granted to the NHEA? Delegation design in the context of non-human economic actors requires specifying not just what the agent can do, but what it should do, under what conditions, with what constraints, and with what escalation paths when it encounters situations outside its delegated scope. The authority problem is that delegation must be both precise enough to constrain harmful action and flexible enough to enable autonomous economic participation.

Dimension Three: Legal and Regulatory Status. NHEAs are not legal persons. They cannot form legal intent, owe duties, or bear liability. When an AI agent misprices a product, discriminates in purchasing decisions, or breaches a contract, responsibility must attach to the deploying entity. This creates a fundamental asymmetry: the entity that makes the decision is not the entity that bears the consequences.

Dimension Four: Market Participation. NHEAs participate in markets in multiple roles: as buyers (machine customers in agentic commerce), as sellers (automated pricing and inventory systems), as negotiators (agent-to-agent commerce in B2B contexts), as intermediaries (matching platforms and brokers), and as market-makers (algorithmic systems that set prices and provide liquidity). Each role creates different trust requirements, different governance challenges, and different design demands.

Dimension Five: Trust and Governance. The trust architecture for NHEAs must address a challenge that has no precedent in human economic interaction: the agent’s decision-making process is opaque, its reasoning is probabilistic, and its behaviour emerges from the interaction of training data, system prompts, and real-time context rather than from conscious intention. Governance cannot rely on the agent’s “understanding” of rules - it must be embedded in the architecture of the system itself.

04 - The Identity Problem

Every economic system depends on identity. Markets function because participants can identify each other, assess counterparty risk, and hold each other accountable. When the participant is human, identity is grounded in physical existence - a person has a face, a history, a reputation, and a legal identity that persists across transactions. When the participant is software, none of these anchors exist.

The identity problem for NHEAs has three layers. The first is agent identity - establishing what the agent is. This includes its software provenance, its capabilities, its version, and its operational parameters. The second is principal identity - establishing who the agent represents. This requires a verifiable link between the agent and its human or organisational principal, including the scope and constraints of the delegation. The third is transactional identity - establishing the agent’s identity within a specific transaction, including its authority to commit resources, its spending limits, and its approved counterparties.
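
The three layers can be pictured as a nested credential. The sketch below is a rough illustration under the assumption that such a credential would be issued and attested by the principal; it is not Mastercard's Agent Pay schema or any KYA standard, and every field name is hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """Layer one: what the agent is."""
    software_provenance: str            # e.g. vendor, model, and build hash
    version: str
    declared_capabilities: tuple[str, ...]

@dataclass(frozen=True)
class PrincipalIdentity:
    """Layer two: who the agent represents."""
    principal_id: str                   # verified human or organisational identity
    delegation_scope: str               # summary of the authority granted
    delegation_attestation: str         # principal's signature binding agent to scope

@dataclass(frozen=True)
class TransactionalIdentity:
    """Layer three: the agent's authority within a specific transaction."""
    spending_limit: float
    currency: str
    approved_counterparties: frozenset[str]

@dataclass(frozen=True)
class AgentCredential:
    """A portable credential would need to carry all three layers together."""
    agent: AgentIdentity
    principal: PrincipalIdentity
    transaction: TransactionalIdentity
```

Portability, in these terms, means the whole credential - not just the agent layer - must remain verifiable as it crosses platforms and jurisdictions.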

Mastercard’s Agent Pay framework and the emerging Know Your Agent (KYA) protocols represent early attempts to solve this problem. They propose credential systems that bind agent identity to principal identity, enabling counterparties to verify not just that an agent is authorised to transact, but what it is authorised to do and on whose behalf. But these solutions are nascent, fragmented, and far from standardised.

The design challenge is that identity for NHEAs must be portable. Unlike human identity, which is inherently tied to a physical person, agent identity must travel across platforms, services, and jurisdictions. An agent that purchases raw materials from a supplier in Germany, arranges shipping through a logistics platform in Singapore, and manages customs clearance through a regulatory system in the United Kingdom must carry its identity - and the authority it represents - across all three contexts. Current identity infrastructure was not designed for this.

05 - The Authority Problem

The authority problem is the central design challenge of non-human economic agency. It is the question of how humans grant, constrain, monitor, and revoke the economic authority of autonomous systems - and it is the question that delegation design was built to address.

Traditional governance assumes relatively static systems. Policies are written, controls are configured, and access rights are granted within defined boundaries. NHEAs operate differently. They move across services, call external tools, reuse context, and interact with multiple systems within a single workflow. As the Retail Banker International analysis of Europe’s first AI-executed payment observes: “The rules that define what the agent is allowed to do need to travel with it as it moves across tools, services, and markets.”

This is the concept of contextual governance - the principle that governance signals must be embedded in the agent’s operational context, not stored in static policy documents that the agent may never consult. In practical terms, contextual governance includes spending limits, transaction scopes, approved counterparties, data access boundaries, temporal constraints (the agent can only transact during business hours), and situational constraints (the agent must escalate to a human if the counterparty is a new supplier).

The AXD framework addresses this through the concept of the operational envelope - the designed boundaries within which an agent is authorised to act. For NHEAs, the operational envelope must be economic: it must specify not just what actions the agent can take, but what economic commitments it can make, what financial risks it can accept, and what market positions it can establish. The operational envelope for a non-human economic actor is, in effect, a financial constitution - a designed document that governs the agent’s economic authority.
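
A minimal sketch of what such a financial constitution might look like in code follows. The limits, hours, and escalation messages are invented for illustration; a production envelope would be far richer, but the principle - governance expressed as executable policy that travels with the agent - is the same.

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class EconomicEnvelope:
    """Hypothetical operational envelope for a single non-human economic actor."""
    per_transaction_limit: float
    daily_limit: float
    approved_counterparties: set[str]
    trading_hours: tuple[time, time] = (time(9, 0), time(17, 0))
    spent_today: float = 0.0

    def check(self, amount: float, counterparty: str, now: datetime) -> str:
        """Return 'allow' or a reason to escalate; the agent never silently exceeds its envelope."""
        if counterparty not in self.approved_counterparties:
            return "escalate: new counterparty requires human approval"
        if amount > self.per_transaction_limit:
            return "escalate: amount exceeds per-transaction limit"
        if self.spent_today + amount > self.daily_limit:
            return "escalate: daily spending limit would be exceeded"
        start, end = self.trading_hours
        if not (start <= now.time() <= end):
            return "escalate: outside permitted trading hours"
        return "allow"
```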

06 - The Legal Vacuum

Commercial law has traditionally assumed that economic activity is conducted by humans - or by legal entities acting through human agents. NHEAs test that premise. As The Fashion Law’s analysis of agentic commerce and legal readiness observes: “AI agents are not legal persons. They cannot form legal intent, owe duties, or bear liability in their own name. When harm occurs, responsibility must attach elsewhere.”

The legal vacuum has three dimensions. The first is contractual. Electronic contracting regimes - the Uniform Electronic Transactions Act (UETA) in the United States and the eIDAS regulation in the European Union - already recognise that agreements may be formed through automated systems interacting with one another. Courts are likely to analyse many AI-executed transactions through existing doctrines of delegated authority and automated contracting. But these frameworks were designed for deterministic systems that execute predetermined rules, not for probabilistic systems that exercise judgement.

The second dimension is liability. When an NHEA causes economic harm - mispricing, discriminatory purchasing, breach of contract, market manipulation - who is liable? The deploying organisation? The developer of the AI model? The platform that hosted the agent? The principal who delegated authority? Current liability frameworks distribute responsibility based on control, foreseeability, and reasonable care. But when an agent’s behaviour emerges from the interaction of training data, system prompts, and real-time context, the chain of causation becomes difficult to trace.

The third dimension is regulatory. The EU AI Act introduces requirements for high-risk AI systems around risk management, traceability, and human oversight. The emerging Know Your Agent (KYA) regulatory framework proposes that financial institutions must verify the identity, authority, and operational parameters of AI agents before allowing them to transact. But regulatory frameworks are lagging behind deployment - NHEAs are already transacting in markets that have no specific regulatory framework for non-human participants.

The design implication is profound: because NHEAs cannot bear legal responsibility, the design of the system must compensate for the absence of legal accountability. The operational envelope, the audit trail, the delegation record, and the governance architecture must be designed to a standard that enables legal responsibility to be traced back to the human or organisational principal - even when the agent has made thousands of autonomous decisions across multiple jurisdictions.
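
One way to picture that standard is an append-only, hash-chained record of every autonomous decision, each entry bound to the agent, its principal, and the delegation in force at the time. The sketch below is illustrative - hash-chaining is one familiar technique for tamper-evidence, not a mandated format, and the field names are hypothetical.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """One entry in an append-only audit trail for an NHEA."""
    agent_id: str
    principal_id: str
    delegation_ref: str       # reference to the delegation record in force
    action: str               # e.g. "purchase", "negotiate", "reprice"
    amount: float
    counterparty: str
    rationale_summary: str    # the agent's own account of why it acted
    prev_hash: str            # digest of the previous record, chaining the trail

    def digest(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()
```

Because each record carries the digest of its predecessor, the chain of autonomous decisions can be reconstructed - and traced back to a human or organisational principal - long after the fact.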

07 - The Market Participation Problem

NHEAs do not participate in markets in a single way. They occupy multiple roles, and each role creates different design challenges.

As buyers, NHEAs are the machine customers that Gartner has been tracking since 2023. They research options, evaluate trade-offs, negotiate terms, execute purchases, monitor outcomes, and trigger switching decisions - all autonomously. The design challenge for buyer-NHEAs is outcome specification: how does the human principal specify what they want in terms that the agent can optimise against, without over-constraining the agent’s ability to find unexpected value?

As sellers, NHEAs are the automated pricing systems, inventory managers, and dynamic offer engines that respond to market signals in real time. The design challenge for seller-NHEAs is signal clarity: how does the selling organisation ensure that its products and services are legible to buyer-NHEAs, and that its pricing signals are interpreted correctly by autonomous purchasing systems?

As negotiators, NHEAs engage in agent-to-agent commerce - the domain where a buyer’s agent and a seller’s agent negotiate terms without direct human involvement. The design challenge for negotiator-NHEAs is trust between agents: on what basis does one agent trust another agent’s representations? What constitutes a “handshake” between autonomous systems? How are disputes resolved when both parties are software?
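
A partial answer to the handshake question is that one agent should verify the other's attested delegation before any terms are exchanged. The sketch below uses an HMAC over a shared registration key purely to stay self-contained; a real inter-agent handshake would rely on asymmetric signatures and a credential registry, and every value shown is hypothetical.

```python
import hashlib
import hmac

def attest(credential: str, registration_key: bytes) -> str:
    """The principal (or its registrar) attests to the agent's credential."""
    return hmac.new(registration_key, credential.encode(), hashlib.sha256).hexdigest()

def verify_counterparty(credential: str, attestation: str, registration_key: bytes) -> bool:
    """A counterparty agent checks the attestation before it agrees to negotiate."""
    expected = attest(credential, registration_key)
    return hmac.compare_digest(expected, attestation)

# Handshake sketch: negotiate only if the other agent's delegation checks out.
key = b"hypothetical-registration-key"
credential = "buyer-agent-007|principal:ExampleCorp|scope:raw-materials|limit:50000 EUR"
attestation = attest(credential, key)
assert verify_counterparty(credential, attestation, key)
```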

As intermediaries, NHEAs operate matching platforms, brokerage systems, and marketplace algorithms that connect buyers and sellers. The design challenge for intermediary-NHEAs is neutrality and fairness: how do we ensure that an autonomous intermediary does not systematically favour certain participants, create information asymmetries, or manipulate market outcomes?

The market participation problem is compounded by the fact that NHEAs in different roles interact with each other. A buyer-NHEA negotiates with a seller-NHEA through an intermediary-NHEA, creating a market in which no human is directly present at the point of transaction. This is the machine economy - and its design requirements are fundamentally different from those of any market that has existed before.

08 - The Trust and Governance Problem

Traditional security models focus on whether an entity has permission to perform a particular action. In agentic systems, that question alone is almost always insufficient. As the analysis of Europe’s first AI-executed payment observes: “Risk rarely emerges from a single action. It emerges from how a sequence of decisions unfolds.”

This is the concept of behavioural sequencing as risk surface. An NHEA might legitimately retrieve market data, call a pricing API, evaluate supplier options, and execute a purchase. Each step may be authorised and technically correct. The risk emerges when the agent reuses context from earlier steps, chains tools together in an unexpected order, or interprets its delegation in a way that leads it to combine authorised actions into an unauthorised outcome.

The trust architecture for NHEAs must therefore address not just what the agent does, but how it sequences its decisions. This requires a new form of observability - what the AXD framework calls agent observability - that makes the agent’s decision paths legible to human overseers without requiring those overseers to monitor every individual action.

The governance challenge is that NHEAs operate at speeds and scales that make human-in-the-loop oversight impractical for routine transactions. A procurement agent that processes thousands of purchasing decisions per hour cannot wait for human approval on each one. The governance model must therefore shift from approval-based to exception-based: the agent operates autonomously within its operational envelope, and human oversight is triggered only when the agent encounters situations that exceed its delegated authority or when its behavioural patterns deviate from expected norms.
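
The shape of that shift is easier to see in code. The sketch below treats the expected behavioural norms as a set of known action sequences and escalates anything novel; a production system would use statistical or learned models over sequences rather than an explicit allowlist, so this is a deliberately simplified illustration.

```python
# Hypothetical norms: sequences observed and approved during supervised operation.
EXPECTED_SEQUENCES = {
    ("fetch_market_data", "evaluate_suppliers", "execute_purchase"),
    ("fetch_market_data", "call_pricing_api", "evaluate_suppliers", "execute_purchase"),
}

def review_sequence(actions: tuple[str, ...]) -> str:
    """Exception-based oversight: proceed on familiar patterns, escalate on novel ones."""
    if actions in EXPECTED_SEQUENCES:
        return "autonomous: within behavioural norms"
    return "escalate: novel decision sequence requires human review"

print(review_sequence(("fetch_market_data", "evaluate_suppliers", "execute_purchase")))
print(review_sequence(("call_pricing_api", "execute_purchase", "execute_purchase")))
```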

This is the design of trust-governed autonomy - the principle that autonomy is not the absence of governance but the presence of governance that is embedded in the system’s architecture rather than imposed through external oversight. For NHEAs, trust-governed autonomy means that the agent’s operational envelope, its escalation triggers, its audit trail, and its governance constraints are all designed as integral components of the system, not as afterthoughts bolted on after deployment.

09 - Designing for Non-Human Economic Actors

The five dimensions of non-human economic agency - identity, authority, legal status, market participation, and trust governance - converge on a single design imperative: the infrastructure of economic participation must be redesigned for participants that are not human.

This is not a call for AI rights or machine personhood. It is a recognition that economic infrastructure - identity systems, payment rails, contract frameworks, regulatory regimes, and market protocols - was designed with the assumption that all participants are human or human-controlled. That assumption is no longer valid. The design challenge is to extend this infrastructure to accommodate non-human participants while preserving the accountability, fairness, and trust that human-centred systems were designed to ensure.

The AXD framework provides the conceptual foundation for this redesign. Trust architecture provides the structural model for how trust is established, calibrated, and recovered between humans and NHEAs. Delegation design provides the grammar for how economic authority is granted, constrained, and revoked. The operational envelope provides the boundary model for autonomous economic action. Outcome specification provides the method for encoding human intent into forms that NHEAs can optimise against.

But these concepts must be extended. Trust architecture must now address trust between NHEAs, not just between humans and agents. Delegation design must now address delegation chains that cascade through organisational hierarchies and across organisational boundaries. The operational envelope must now include economic constraints - spending limits, risk tolerances, market position limits - that have no parallel in non-economic agentic systems. And outcome specification must now address the challenge of specifying economic outcomes in markets that are themselves being reshaped by the presence of NHEAs.

The emerging protocols - Mastercard’s Verifiable Intent, Google’s Universal Commerce Protocol, the x402 payment protocol, and the A2A and MCP communication standards - represent the first generation of infrastructure designed for non-human economic participation. But they are infrastructure without a design discipline. They solve the how of agent-to-agent communication and transaction execution, but they do not address the what - the design of the trust relationships, delegation structures, and governance architectures that must govern non-human economic activity.

That is what AXD provides. And that is why the emergence of non-human economic actors is not a peripheral concern for the discipline but its most consequential application.

10 - What AXD Demands

The emergence of non-human economic actors demands six responses from the AXD discipline.

First, an expanded ontology. AXD must formally recognise NHEAs as a category of actor within its framework - distinct from tools, distinct from human users, and distinct from the organisations that deploy them. The NHEA is a new kind of entity in economic systems, and the design discipline must have a vocabulary for describing its properties, behaviours, and relationships.

Second, economic delegation patterns. The AXD framework must develop specific design patterns for economic delegation - patterns that address spending authority, risk tolerance, market position limits, counterparty restrictions, and temporal constraints. These patterns must be composable, so that complex economic delegations can be constructed from simpler components, and auditable, so that the delegation record can be reconstructed after the fact.
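
A minimal sketch of composability follows: each constraint is a small, auditable predicate over a proposed transaction, and a full economic delegation is the conjunction of several of them. The helper names and values are illustrative, not an AXD-specified pattern language.

```python
from typing import Callable

# A delegation constraint is a predicate over a proposed transaction.
Constraint = Callable[[dict], bool]

def spending_cap(limit: float) -> Constraint:
    return lambda tx: tx["amount"] <= limit

def counterparty_allowlist(allowed: set[str]) -> Constraint:
    return lambda tx: tx["counterparty"] in allowed

def category_scope(categories: set[str]) -> Constraint:
    return lambda tx: tx["category"] in categories

def compose(*constraints: Constraint) -> Constraint:
    """A complex delegation is the conjunction of simpler, individually auditable parts."""
    return lambda tx: all(check(tx) for check in constraints)

# A procurement delegation assembled from reusable components (values are hypothetical).
procurement_delegation = compose(
    spending_cap(25_000.0),
    counterparty_allowlist({"supplier-a", "supplier-b"}),
    category_scope({"raw-materials", "packaging"}),
)

assert procurement_delegation(
    {"amount": 12_000.0, "counterparty": "supplier-a", "category": "raw-materials"}
)
```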

Third, inter-agent trust protocols. As NHEAs increasingly interact with each other rather than with humans, AXD must develop trust protocols for agent-to-agent relationships. On what basis does one NHEA trust another? How is reputation established, calibrated, and transferred between autonomous systems? What constitutes a breach of trust between agents, and how is trust recovered?

Fourth, economic observability standards. The AXD framework must define what economic observability means for NHEAs - what information must be captured, how decision paths must be logged, and what level of transparency is required for different categories of economic activity. The audit trail for an NHEA is not just a compliance requirement; it is the mechanism through which legal accountability is maintained in the absence of legal personhood.

Fifth, market design principles. As NHEAs become significant market participants, AXD must contribute to the design of markets themselves - the rules, protocols, and governance structures that ensure fair, transparent, and accountable economic activity when a growing proportion of participants are not human. This includes the design of anti-manipulation safeguards, fairness constraints, and market stability mechanisms for agent-populated markets.

Sixth, a human-centred anchor. Despite the focus on non-human actors, AXD must maintain its foundational commitment to human agency. NHEAs exist to serve human purposes. They act on behalf of human principals. Their authority derives from human delegation. The design of non-human economic participation must always preserve the human principal’s ability to understand, constrain, override, and revoke the agent’s economic authority. The machine economy must remain, at its foundation, a human economy - one in which non-human actors participate under human governance.

The emergence of non-human economic actors is not a future scenario. It is a present reality. Santander and Mastercard have demonstrated that AI agents can execute regulated financial transactions. Gartner projects that these agents will command trillions in purchasing decisions within two years. The machine economy is arriving - and the question is not whether it will be designed, but whether it will be designed well. That is the question AXD exists to answer.

Assess Your Readiness

Is Your Organisation Ready for Non-Human Economic Actors?

The AXD Readiness Assessment evaluates your organisation’s preparedness for the agentic transition across trust architecture, delegation design, technical infrastructure, and organisational capability. Take the 5-minute assessment to identify your gaps and priorities.

Frequently Asked Questions