In February 2026, the two largest payment networks on earth published competing answers to the same question. The question was not about processing volumes, interchange rates, or market share. It was about identity. Specifically: when an AI agent conducts a financial transaction on behalf of a human, what is the agent? Is it a new kind of entity - visible, registered, authenticated in its own right? Or is it an extension of the human - invisible, delegated, acting through borrowed credentials? Mastercard and Visa gave opposite answers. And in doing so, they revealed the deepest architectural fault line in agentic commerce.
Mastercard's answer is Agent Pay. The agent is a new participant in the payment ecosystem. It gets its own identity, its own token, its own registration. Every party in the transaction - issuer, acquirer, merchant - can see that an agent conducted the transaction. The agent is visible. It is a first-class citizen.
Visa's answer is Intelligent Commerce. The agent is an extension of the human. It acts using the human's tokenised credentials, safeguarded by authentication checks and issuer-managed permissions. The agent operates through the existing card infrastructure. It is not a new entity. It is a new capability of an existing one.
Both are technically sophisticated. Both use tokenisation to protect raw card data. Both implement scoped constraints - merchant categories, spending limits, time windows. Both require cardholder consent. And both are already processing live transactions in multiple markets. On the surface, they look like competing implementations of the same idea. They are not. They are competing philosophies of what an agent is - and that philosophical divergence has profound implications for trust architecture, delegation design, and the future of agentic commerce.
This essay examines both models through the lens of Agentic Experience Design. It argues that the identity question is not a technical detail but the foundational design decision of agentic payments - and that neither network has yet addressed the trust architecture that must sit above whichever identity model prevails.
The Question That Splits the Industry
Every payment system ever built has assumed that the entity initiating the transaction is human. The entire architecture of card payments - from the four-party model to PCI compliance, from 3D Secure to chargeback rights - was designed around a human cardholder who sees a price, decides to pay, and authenticates the decision. The card networks, the issuers, the acquirers, the merchants, the regulators - all of them built their systems on this assumption.
Agentic AI breaks this assumption. When an AI agent books a hotel, purchases cinema tickets, or orders groceries on behalf of a human, the entity initiating the transaction is not the cardholder. It is a software system acting under delegated authority. This creates a question that the existing payment infrastructure was never designed to answer: how does the payment network identify, authenticate, and govern a non-human transacting entity?
The question matters because identity determines accountability. In the current system, if a transaction goes wrong, the cardholder can dispute it. The issuer investigates. The merchant may receive a chargeback. Liability flows through a chain that starts with a known human. When an agent transacts, this chain is disrupted. Who is accountable? The human who delegated? The agent that executed? The platform that hosted the agent? The merchant that accepted the payment?
Mastercard and Visa have given fundamentally different answers to this question. And those answers are not just technical choices. They are design philosophies that will shape how billions of people experience agentic shopping in the years ahead.
Mastercard Agent Pay: The Agent as Entity
Mastercard's Agent Pay framework treats the AI agent as a visible, governed participant in the payment flow. The agent is not hidden behind the cardholder's credentials. It is registered, verified, and given its own identity within the payment ecosystem. Mastercard calls these identities Agentic Tokens - credentials uniquely tied to the AI agent itself, not merely to the cardholder whose authority it exercises.
The architecture works through a registration-first model. Before an agent can transact, it must be verified. The agent receives its own token - distinct from the cardholder's token - that identifies it as a specific agent operating under specific constraints. When the agent initiates a payment, every participant in the transaction chain can see that an agent conducted the transaction. The issuer sees it. The acquirer sees it. The merchant sees it. The agent is not invisible. It is a new entity type in a system that has only ever known humans.
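The registration-first logic above can be sketched as a small data model. This is a minimal illustration, not Mastercard's published schema: every name here (AgenticToken, the fields, the `authorise` check) is a hypothetical construction to show the key property - constraints bind to the agent's own credential, and the agent's identity travels with the transaction.

```python
# Hypothetical sketch of a registration-first token model: the agent
# carries its own credential, distinct from the cardholder's token.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgenticToken:
    agent_id: str            # verified identity of the agent itself
    cardholder_token: str    # the human's payment token it is bound to
    allowed_mcc: frozenset   # merchant category codes the agent may use
    spend_limit_cents: int   # per-transaction ceiling

def authorise(token: AgenticToken, mcc: str, amount_cents: int) -> bool:
    # Constraints are evaluated against the agent's own token; every
    # party in the chain can see agent_id on the transaction.
    return mcc in token.allowed_mcc and amount_cents <= token.spend_limit_cents

token = AgenticToken("agent-matilda-01", "tok_cardholder_9f2",
                     frozenset({"7832", "7011"}), 25_000)
assert authorise(token, "7832", 4_500)       # cinema ticket: in scope
assert not authorise(token, "5732", 4_500)   # electronics: outside scope
```

The design point is that revoking `agent_id` removes the agent from the system without touching the cardholder's own token - the consent anchor discussed later in this essay.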
On 17 February 2026, Mastercard completed New Zealand's first authenticated agentic transaction with Westpac. The transaction used a Westpac-issued debit card to purchase cinema tickets from Event Cinemas, processed through IPSI and completed using Maincode's large language model "Matilda." A second transaction booked accommodation at QT Hotels in Queenstown. These were not demonstrations. They were live, authenticated transactions where the agent was visible as a distinct participant in the payment flow.
The model has since expanded across the United States, Australia, the UAE, and Latin America. India has entered sandbox testing. Mastercard has established a Regional AI Centre of Excellence and dedicated agentic commerce teams across Asia Pacific. The pace of deployment suggests this is not an experiment. It is a strategic commitment to a specific architectural philosophy: the agent is real, the agent is visible, and the payment system must recognise it as such.
The Mastercard Trust Question
"Who is this agent, and can it be trusted?" - The trust model is built around the agent's own identity. Trust is established through registration, verification, and visibility. The agent earns trust by being known.
Visa Intelligent Commerce: The Agent as Extension
Visa's Intelligent Commerce takes the opposite architectural position. The agent is not a new entity. It is an extension of the human cardholder, acting through the human's existing credentials. Visa describes these as AI-ready credentials - secure, tokenised card details tied to a specific AI agent but derived from and governed by the human's existing card relationship.
The architecture is built on the Trusted Agent Protocol (TAP) - an open framework built on HTTP Message Signatures that enables merchants to recognise and verify trusted AI agents. But the verification is not of the agent's own identity. It is verification that the agent has been authorised by a known human to use their credentials within defined constraints. The agent's authority is borrowed, not native.
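The mechanism TAP builds on - HTTP Message Signatures (RFC 9421) - works by signing a canonical "signature base" assembled from covered request components, which the merchant recomputes and verifies. The sketch below shows that general shape only; the covered fields, key handling, and TAP's actual signature profile are simplifications (real deployments use asymmetric keys, and Visa's profile is not reproduced here).

```python
# Illustrative sketch in the style of HTTP Message Signatures (RFC 9421):
# the agent signs a canonical base built from covered request components.
import hmac, hashlib, base64

def signature_base(method: str, path: str, authority: str, keyid: str) -> str:
    # RFC 9421 canonicalises covered components into a signature base.
    return (f'"@method": {method}\n'
            f'"@path": {path}\n'
            f'"@authority": {authority}\n'
            f'"@signature-params": ("@method" "@path" "@authority");keyid="{keyid}"')

def sign(secret: bytes, base: str) -> str:
    mac = hmac.new(secret, base.encode(), hashlib.sha256).digest()
    return base64.b64encode(mac).decode()

secret = b"demo-shared-key"   # demo only; TAP-style deployments would use public-key signatures
base = signature_base("POST", "/checkout", "merchant.example", "agent-key-1")
sig = sign(secret, base)

# The merchant rebuilds the base from the received request and verifies:
assert hmac.compare_digest(sig, sign(secret, base))
```

Note what is being verified: the merchant confirms that a known key authorised this request - which, in Visa's model, attests to the delegation, not to the agent as an entity in its own right.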
Visa Intelligent Commerce provides a comprehensive suite of integrated APIs and a partner programme. It includes AI-ready credentials protected by Visa's authentication checks, intent-driven safeguards, and issuer-managed permissions. The issuer - the bank that issued the card - retains control over what the agent can do. Spending limits, merchant categories, time windows, and transaction types are all governed through the existing issuer relationship.
On 16 February 2026, DBS Bank became the first institution in Asia Pacific to pilot Visa Intelligent Commerce. DBS validated AI-ready credentials, advanced authentication, and intent-driven transaction controls through real-world food and beverage transactions using DBS/POSB credit and debit cards. The pilot is expanding to online shopping and travel bookings. PayOS and BeyondStyle also completed transactions through the framework.
The philosophical position is clear: the payment system does not need a new entity type. It needs a new capability for existing entities. The human remains the cardholder. The agent is a tool the cardholder uses. Trust flows through the existing card infrastructure - through the issuer relationship, through tokenisation, through the authentication mechanisms that already govern card payments. The agent does not need its own identity because it is not acting on its own behalf. It is acting on the human's behalf, using the human's money, under the human's authority.
The Visa Trust Question
"Has this human authorised this agent to use their card?" - The trust model is built around the delegation chain. Trust flows from the human through existing credentials to the agent. The agent is trusted because the human who owns the credentials has granted permission.
The Architectural Comparison
The surface similarities between the two approaches obscure the depth of their divergence. Both use tokenisation. Both implement scoped constraints. Both require cardholder consent. Both protect raw card data from the agent. But the structural logic beneath these shared features is fundamentally different.
| Dimension | Mastercard Agent Pay | Visa Intelligent Commerce |
|---|---|---|
| Agent ontology | New entity type in the ecosystem | Extension of existing cardholder |
| Token model | Agent gets its own Agentic Token | Agent uses human's AI-ready credentials |
| Visibility | Agent visible to all parties | Agent acts through human's identity |
| Trust anchor | Agent registration and verification | Human's existing card relationship |
| Constraint scope | Constraints on the agent's own token | Constraints on the delegated credential |
| Infrastructure | Requires new agent identity layer | Leverages existing tokenisation rails |
| Core question | "Can this agent be trusted?" | "Has the human authorised this?" |
| AXD trust type | Mechanical trust in agent identity | Relational trust through delegation chain |
In AXD terms, Mastercard is building mechanical trust - trust in the agent itself, established through verification of the agent's own credentials. Visa is building relational trust - trust in the delegation chain from human to agent, established through the existing card relationship. These are not different implementations of the same trust model. They are different trust models entirely.
The distinction maps directly onto Stripe's Five Levels of Agentic Commerce. At Levels 1-3, where agents assist, suggest, and execute under close human supervision, Visa's model is natural. The agent is an extension of the human's shopping behaviour. But at Levels 4-5, where agents negotiate, transact autonomously, and manage multi-step purchasing journeys without human involvement, Mastercard's model becomes more coherent. An agent operating autonomously is not meaningfully an "extension" of the human. It is an entity acting in the world.
What Visibility Actually Means
The visibility question is where the two models diverge most consequentially. In Mastercard's model, the agent is visible. Every party in the transaction can see that an agent - not a human - initiated the payment. This visibility creates the possibility of agent-specific risk models, agent-specific fraud detection, and agent-specific dispute resolution. If a merchant knows that a transaction was initiated by an agent, they can apply different risk thresholds, different verification requirements, and different return policies.
In Visa's model, the agent is functionally invisible at the transaction level. The payment appears to come from the cardholder's credentials. The merchant sees a tokenised card payment. The acquirer processes a standard transaction. The agent's involvement is governed upstream - through the Trusted Agent Protocol, through issuer permissions, through intent-driven safeguards - but it is not necessarily visible in the transaction itself.
From an AXD perspective, visibility is not merely a technical feature. It is a trust calibration mechanism. When the agent is visible, every participant in the transaction can calibrate their trust response to the actual entity conducting the transaction. When the agent is invisible, participants calibrate their trust response to the human cardholder - even though the human may be entirely absent from the decision.
Consider a practical scenario. An AI agent, operating under delegated authority, books a hotel room at a price the human would not have chosen. In Mastercard's model, the hotel knows an agent booked the room. It can flag agent-initiated bookings for different cancellation terms, different confirmation flows, different escalation paths. In Visa's model, the hotel sees a standard card payment. It has no mechanism to distinguish between the human choosing the room and the agent choosing the room. The trust response is identical for two fundamentally different situations.
This is not an argument that Mastercard's approach is correct and Visa's is wrong. It is an observation that visibility has design consequences. An invisible agent creates what we might call a trust attribution gap - a situation where the trust signals available to transaction participants do not reflect the actual entity making the decision. And trust attribution gaps are where failure architectures break down.
The Consent Architecture Divergence
Both models require cardholder consent. But the nature of that consent is architecturally different, and the difference matters for consent horizon design.
In Mastercard's model, the human consents to the agent's identity. The consent is: "I authorise this specific, registered agent to transact on my behalf within these constraints." The consent is tied to a known, verified entity. If the agent is updated, replaced, or compromised, the identity changes and the consent must be re-established.
In Visa's model, the human consents to credential delegation. The consent is: "I authorise the use of my card credentials by an AI agent within these constraints." The consent is tied to the credentials, not to a specific agent identity. The agent could change - could be updated, could be a different model, could be operated by a different platform - and the credential delegation could persist.
This distinction has profound implications for consent temporality. The Consent Horizon framework holds that consent in agentic systems is not a one-time event but a continuous, contextual, revocable state. In Mastercard's model, consent is anchored to a specific agent identity, which provides a natural revocation point - revoke the agent's token, and the consent is withdrawn. In Visa's model, consent is anchored to the credential delegation, which is more abstract. The human is not revoking trust in a specific agent. They are revoking a capability of their own credentials.
Neither model fully addresses what happens when consent should be partially withdrawn. The human who authorised grocery shopping may want to revoke authority for electronics purchases without revoking the entire delegation. Both models support scoped constraints, but neither provides a dynamic consent interface where the human can observe what the agent is doing and adjust permissions in real time. The consent is configured at setup and enforced at transaction time. The space between - the ongoing trust relationship - is undesigned.
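The partial-withdrawal gap described above can be made concrete with a small sketch. This is an assumption-laden illustration, not either network's API: the `ConsentHorizon` class and its methods are hypothetical names for the missing capability - a delegation expressed as a set of scoped grants that can be revoked individually rather than all-or-nothing.

```python
# Sketch of partial, revocable consent: the delegation is a set of scoped
# grants, not a single flag. All names are illustrative assumptions.
class ConsentHorizon:
    def __init__(self, grants: dict):
        self.grants = dict(grants)       # category -> spend limit (cents)

    def permits(self, category: str, amount_cents: int) -> bool:
        return self.grants.get(category, 0) >= amount_cents

    def revoke(self, category: str) -> None:
        # Withdraw one scope without tearing down the whole delegation.
        self.grants.pop(category, None)

consent = ConsentHorizon({"groceries": 20_000, "electronics": 50_000})
assert consent.permits("electronics", 30_000)
consent.revoke("electronics")                 # partial withdrawal
assert not consent.permits("electronics", 30_000)
assert consent.permits("groceries", 5_000)    # grocery delegation persists
```

In both networks' current models, the equivalent of `revoke` operates only at the level of the whole token or the whole credential delegation - which is precisely the gap.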
Neither Solves Recovery
This is where both models reveal their shared limitation. Mastercard and Visa have each built sophisticated approaches to agent identity, authentication, and payment execution. Neither has published a framework for what happens when things go wrong.
In the current card payment system, recovery works through chargebacks. The cardholder disputes a transaction. The issuer investigates. The merchant may bear the cost. The process is slow, adversarial, and expensive - but it exists. It provides a recovery path.
In agentic payments, the chargeback model becomes strained. Did the human authorise the purchase? Yes - they delegated authority to the agent. Did the agent exceed its authority? Perhaps - but the constraints were met. Was the purchase what the human wanted? Not exactly - but the agent's interpretation of the mandate was technically correct. Who is liable? The human who delegated? The agent that executed? The platform that hosted the agent? The merchant that accepted the payment?
Mastercard's visibility model offers a potential advantage here. Because the agent is visible as a distinct participant, it is at least possible to build agent-specific dispute resolution. The merchant knows an agent transacted. The issuer knows an agent transacted. This visibility could enable new dispute categories, new investigation protocols, and new liability frameworks. But Mastercard has not yet published these frameworks. The visibility exists. The recovery architecture built on that visibility does not.
Visa's model faces a harder problem. If the agent is invisible at the transaction level - if the payment looks like a standard card transaction - then the existing chargeback process applies. But the existing process was not designed for delegated agency. The cardholder authorised the delegation. The transaction met the constraints. The dispute is not about fraud or unauthorised use. It is about the agent's judgement. And the current chargeback system has no category for "the agent chose poorly."
From an AXD perspective, recovery is not an edge case. It is a core design requirement. The Failure Architecture framework holds that every agentic system must be designed for failure from the beginning - not as an afterthought. Both Mastercard and Visa have designed for successful transactions. Neither has published a design for failed ones. And in agentic commerce, where the human is absent when the decision is made, failure recovery is not less important than in traditional commerce. It is more important.
The Machine Customer Paradox
The identity schism creates a paradox for the concept of the machine customer. If the agent is a new entity - as Mastercard proposes - then the machine customer is real. It is a distinct participant in the market, with its own identity, its own credentials, and its own transaction history. Merchants can recognise it, market to it, and build relationships with it. The machine customer is not a metaphor. It is an architectural reality.
If the agent is an extension of the human - as Visa proposes - then the machine customer is a fiction. There is no new customer. There is only the existing human customer, now equipped with a more capable tool. The merchant's relationship is still with the human. The agent is invisible. The machine customer, in this model, is simply the human customer shopping through a different interface.
This matters for every organisation preparing for agentic commerce. If you are building your agentic commerce strategy on the assumption that machine customers are real - that agents will have preferences, histories, and relationships - then Mastercard's model supports your strategy and Visa's undermines it. If you are building on the assumption that the human remains the customer and the agent is merely a channel - then Visa's model supports your strategy and Mastercard's complicates it.
The truth, as AXD would suggest, is that both are partially correct - and the partial correctness is the problem. At Levels 2-3 of Stripe's framework, the agent is indeed an extension of the human. The human is present, supervising, approving. Visa's model fits. At Levels 4-5, the agent is operating autonomously - negotiating, comparing, transacting without human involvement. At that level, the agent is functionally a new entity in the market, regardless of whose credentials it carries. Mastercard's model fits.
The industry does not need one model. It needs both - and a trust architecture that governs the transition between them. The agent that starts as an extension of the human and gradually becomes an autonomous entity needs an identity model that evolves with the trust relationship. Neither network has proposed this.
The Distribution Problem Nobody Mentions
Both Mastercard and Visa have framed the agentic payments challenge as an identity and authentication problem. Build the right token model, implement the right constraints, verify the right entities, and the system works. But as the Finextra analysis published on 26 February 2026 observed, the real problem may not be orchestration at all. It may be distribution.
In the current payment system, the payment surface is a page. A checkout page, a payment terminal, a point-of-sale screen. The payment service provider (PSP) integrates with that page. The integration is known, bounded, and stable. In agentic commerce, the payment surface is no longer a page. It is a layer - an invisible, distributed, ambient layer where agents transact across platforms, APIs, and services without any fixed checkout point. The AXD Institute's analysis of the agent checkout examines how this shift from page-based checkout to protocol-based transaction is redesigning the entire purchase moment.
This means PSPs need to be present wherever agents are deployed. Not just on merchant websites. Not just in mobile apps. But inside agent platforms, inside API marketplaces, inside the protocol layer where agents discover and transact with services. The payment infrastructure must be as distributed as the agents themselves.
Neither Mastercard nor Visa has fully addressed this. Both have built frameworks that assume a relatively traditional transaction flow - cardholder authorises, agent transacts, merchant receives. But in a world where agents transact with other agents, where services are discovered and consumed programmatically, where the "merchant" may be another agent offering a service through an API, the four-party model itself begins to strain.
The x402 protocol addresses this distribution problem for machine-to-machine payments through a fundamentally different approach - native HTTP payment headers, stablecoin settlement, no accounts required. But x402 is designed for agent-to-service micropayments, not for consumer-facing delegated commerce. The gap between x402's distributed payment model and the card networks' structured transaction model is where the next generation of payment infrastructure will be built. And neither the identity-as-entity model nor the identity-as-extension model has a clear answer for how to bridge it.
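The x402 flow described above can be sketched in miniature: a service answers with HTTP 402 and its payment requirements, and the agent retries with a payment header. The header and field names below follow the x402 pattern but are simplified assumptions for illustration - no signing, verification, or stablecoin settlement is performed, and the `server` function stands in for a real HTTP service.

```python
# Simplified sketch of an x402-style exchange. Field names approximate the
# x402 pattern; settlement and signature verification are omitted.
import json, base64

def server(request_headers: dict):
    if "X-PAYMENT" not in request_headers:
        # 402 Payment Required, advertising what the service accepts.
        return 402, {"accepts": [{"scheme": "exact",
                                  "maxAmountRequired": "1000",
                                  "asset": "USDC", "payTo": "0xSERVICE"}]}
    payload = json.loads(base64.b64decode(request_headers["X-PAYMENT"]))
    return (200, {"body": "resource"}) if payload["amount"] == "1000" else (402, {})

# Agent: the first request discovers the price, the second pays it.
status, offer = server({})
assert status == 402
payment = base64.b64encode(json.dumps(
    {"amount": offer["accepts"][0]["maxAmountRequired"]}).encode()).decode()
status, body = server({"X-PAYMENT": payment})
assert status == 200
```

Notice what is absent: no cardholder, no issuer, no four-party model. That absence is exactly why x402 suits agent-to-service micropayments and exactly why it does not answer the delegated-commerce questions the card networks are wrestling with.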
Mastercard vs Visa: Implications for Practitioners
Design for both identity models simultaneously. The industry will not converge on a single model in the near term. Mastercard and Visa will coexist, and many consumers carry cards from both networks. Your trust architecture must accommodate agents that are visible entities in some transactions and invisible extensions in others. Design your trust calibration, observation, and intervention mechanisms to work regardless of whether the payment network treats the agent as an entity or an extension.
Build your own visibility layer. Do not rely on the payment network to make the agent visible. Regardless of whether Mastercard or Visa processes the payment, your system should maintain its own record of agent involvement - what the agent decided, why it decided it, what alternatives it considered, and what constraints it operated under. This visibility layer is your delegation design audit trail, and it must exist independently of the payment infrastructure.
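One minimal shape for such a visibility layer is an append-only, hash-chained log of agent decisions, maintained outside the payment rails. The sketch below is an assumption, not a product: every class and field name is illustrative, and the hash chain simply makes the trail tamper-evident.

```python
# Minimal sketch of an independent visibility layer: an append-only,
# hash-chained record of agent decisions. All names are illustrative.
import json, hashlib

class DecisionLog:
    def __init__(self):
        self.entries = []

    def record(self, agent_id: str, decision: str, rationale: str,
               alternatives: list, constraints: dict) -> str:
        entry = {"agent_id": agent_id, "decision": decision,
                 "rationale": rationale, "alternatives": alternatives,
                 "constraints": constraints,
                 "prev": self.entries[-1]["hash"] if self.entries else None}
        # Chain each entry to its predecessor so tampering is detectable.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry["hash"]

log = DecisionLog()
h1 = log.record("agent-01", "book hotel in Queenstown",
                "cheapest option within mandate",
                ["Hotel A", "Hotel B"], {"max_nightly_cents": 30_000})
log.record("agent-01", "purchase cinema tickets",
           "matched requested showtime", [], {"mcc": "7832"})
assert log.entries[1]["prev"] == h1
```

Because the log lives outside the payment network, it works identically whether the transaction rides Mastercard's visible-agent rails or Visa's delegated credentials.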
Design recovery before you design execution. Neither network has published a recovery framework for agentic transactions. This is your opportunity and your responsibility. Before implementing agentic payments, design the failure architecture: what happens when the agent chooses poorly? How does the human discover the error? How is the transaction reversed, the trust recalibrated, the constraints adjusted? Recovery design is not an afterthought. It is the primary design challenge of agentic payments.
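A failure architecture can start as something as simple as an explicit state machine for post-transaction recovery, including the dispute category the chargeback system lacks: the agent's judgement was poor. Every state and transition below is an assumption for design discussion - not any network's published process.

```python
# Illustrative recovery state machine for agent-initiated transactions.
# States and transitions are design assumptions, not a published standard.
from enum import Enum, auto

class TxState(Enum):
    EXECUTED = auto()
    FLAGGED = auto()                 # human observed an unexpected outcome
    UNDER_REVIEW = auto()
    REVERSED = auto()
    CONSTRAINTS_TIGHTENED = auto()   # trust recalibrated instead of reversal

ALLOWED = {
    TxState.EXECUTED: {TxState.FLAGGED},
    TxState.FLAGGED: {TxState.UNDER_REVIEW},
    TxState.UNDER_REVIEW: {TxState.REVERSED, TxState.CONSTRAINTS_TIGHTENED},
}

def transition(state: TxState, target: TxState) -> TxState:
    if target not in ALLOWED.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

s = transition(TxState.EXECUTED, TxState.FLAGGED)
s = transition(s, TxState.UNDER_REVIEW)
s = transition(s, TxState.CONSTRAINTS_TIGHTENED)
assert s is TxState.CONSTRAINTS_TIGHTENED
```

The useful property is the branch out of review: "the agent chose poorly" can resolve into tightened constraints rather than a reversal, which is a recovery path the fraud-centric chargeback model has no category for.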
Implement dynamic consent, not static delegation. Both models configure constraints at setup and enforce them at transaction time. The gap between setup and transaction is where trust erodes. Design systems where the human can observe agent activity in real time, adjust constraints dynamically, and withdraw consent partially - not just all-or-nothing. The Consent Horizon must be a living boundary, not a fixed configuration.
Prepare for the identity transition. As agents mature from Level 2-3 assistants to Level 4-5 autonomous actors, the appropriate identity model shifts from extension to entity. Design your systems to support this transition gracefully. An agent that begins as a Visa-style extension - acting through the human's credentials under close supervision - may need to evolve into a Mastercard-style entity as it earns trust and operates more autonomously. The trust architecture must accommodate this evolution without requiring the human to reconfigure everything from scratch.
Map the trust attribution gap. In every agentic payment flow, identify where the trust signals available to participants do not reflect the actual entity making the decision. These gaps are where failures will concentrate. In Visa's model, the gap is at the merchant - they cannot distinguish human-initiated from agent-initiated transactions. In Mastercard's model, the gap is at the delegation boundary - the agent is visible, but the quality of the delegation that authorised it is not. Map these gaps. Design for them. They are the most consequential surfaces in your trust architecture.
Recognise that the identity schism is a trust architecture problem, not a payment problem. Mastercard and Visa are payment networks. They have built payment solutions. The identity question they are answering - "what is the agent in the payment flow?" - is a subset of the larger question that Agentic Experience Design must answer: "what is the agent in the trust relationship?" The payment identity is one dimension of a multi-dimensional trust architecture that includes delegation, observation, intervention, recovery, and consent. The networks have built one dimension. The other dimensions remain unbuilt. And they will not be built by payment networks. They will be built by designers who understand that trust is the primary material of agentic commerce.
