The Prediction Revisited
Six weeks ago, in the closing section of The State of Agentic Payments Q1 2026, this Institute made five predictions for Q2 2026. The second prediction stated: "Regulators will issue their first formal guidance specifically addressing agent-initiated financial transactions. The most likely source is the EU (under the AI Act's August 2026 applicability date), but the UK FCA and US CFPB are also positioning." The fifth prediction stated that Know Your Agent would become a regulatory requirement within eighteen months.
Both predictions were late. Not early. Late. Between the publication of that essay and the writing of this one, the regulatory landscape shifted faster than any reasonable forecast anticipated. On February 5, 2026, the National Institute of Standards and Technology published its concept paper on "Accelerating the Adoption of Software and AI Agent Identity and Authorization" - the first formal US government initiative to address how autonomous software agents should be identified, authorised, and audited. On February 17, NIST's Center for AI Standards and Innovation launched the AI Agent Standards Initiative, explicitly focused on ensuring the next generation of AI agents can be "adopted with confidence, functioning securely on behalf of users." On January 20, the UK Treasury Committee published a report criticising the Financial Conduct Authority for its "wait and see" approach to AI governance, demanding published guidance by the end of 2026. On January 27, the FCA responded by launching the Mills Review - a strategic examination of AI's long-term impact on retail financial services.
Meanwhile, the private sector did not wait. Sumsub launched AI Agent Verification on January 28 - the first commercial Know Your Agent product. AgentFacts.org published its blockchain-based KYA standard on January 31. FIME released its comprehensive KYA framework on February 27. Trulioo, AstraSync AI, and Vouched.id all published KYA white papers within the same sixty-day window. The question is no longer whether KYA regulation is coming. The question is which of the competing frameworks - regulatory, standards-based, or commercial - will define what KYA actually means.
"The prediction was not early. It was late. The regulatory reckoning for agentic commerce is not a future event. It is a present one."
This essay examines that reckoning. It maps the regulatory landscape across five jurisdictions, analyses the NIST concept paper as a watershed moment, dissects the EU AI Act's classification problem, confronts the Strong Customer Authentication paradox that threatens to stall agentic payments in Europe, surveys the six private-sector organisations racing to build KYA before regulators do, identifies four fundamental gaps that no framework addresses, and closes with five predictions and specific design guidance for Agentic Experience Design practitioners. It is the most technically specific essay this Institute has published, because the regulatory detail now demands it.
Five Jurisdictions, Five Approaches
The first thing to understand about KYA regulation is that it does not exist as a single framework. It is emerging simultaneously from five major regulatory jurisdictions, each approaching agent identity from fundamentally different starting points, with fundamentally different assumptions about what the problem actually is.
| Jurisdiction | Body | Approach | Status | Timeline |
|---|---|---|---|---|
| United States | NIST NCCoE / CAISI | Standards-based, voluntary | Soliciting comments | Comment period closes April 2, 2026 |
| European Union | EU AI Act + PSD2 + DORA | Horizontal regulation, risk-based | Approaching full applicability | August 2, 2026 |
| United Kingdom | FCA (Mills Review) + Treasury Committee | Principles-based, adaptive | Under review | Guidance expected by end of 2026 |
| Singapore | MAS | FEAT principles, governance-focused | Active | Ongoing |
| China | PBOC / SAMR | Platform governance, de facto | Operational | Already in force |
These are not five variations on the same theme. They are five fundamentally different theories of how agent identity should work. The United States trusts industry to build standards voluntarily, with government setting the framework through NIST's collaborative process. The European Union trusts horizontal regulation to classify and manage risk, layering the AI Act on top of PSD2 and DORA to create overlapping compliance requirements. The United Kingdom trusts existing frameworks to adapt - testing whether Consumer Duty, the Senior Managers and Certification Regime, and operational resilience requirements are sufficient without AI-specific rules. Singapore trusts governance principles - Fairness, Ethics, Accountability, Transparency - to guide behaviour without prescriptive mandates. China has bypassed the question entirely by embedding agent governance within existing platform regulation, where Alipay's 120 million agent-initiated transactions operate under the same mobile payment frameworks that govern human transactions.
The implications for agentic commerce are profound. An organisation building an agentic payment system that operates across these five jurisdictions must simultaneously satisfy standards-based voluntary compliance (US), risk-based mandatory classification (EU), principles-based adaptive guidance (UK), governance-principle alignment (Singapore), and platform-embedded de facto rules (China). There is no unified KYA framework that spans all five. There is no mutual recognition agreement that allows an agent authenticated in one jurisdiction to be accepted in another. There is not even a shared vocabulary for what "agent identity" means across these regulatory traditions.
"Each jurisdiction is solving a different problem. None of them is building a comprehensive KYA framework. All of them will need one."
This fragmentation is not a temporary condition that will resolve as regulatory thinking matures. It reflects genuinely different political philosophies about the relationship between technology, markets, and the state. The US approach assumes that innovation should lead and standards should follow. The EU approach assumes that rights should lead and compliance should follow. The UK approach assumes that principles should lead and interpretation should follow. These are not convergent trajectories. They are parallel ones, and the organisations building agentic AI systems must navigate all of them simultaneously.
The NIST Moment
On February 5, 2026, the National Cybersecurity Center of Excellence at NIST published a concept paper titled "Accelerating the Adoption of Software and AI Agent Identity and Authorization." The paper is seventeen pages long. It asks for public comment by April 2, 2026. It proposes a potential demonstration project. In the ordinary course of standards development, it would be unremarkable - one of dozens of concept papers NIST publishes each year.
It is not unremarkable. It is a watershed. The paper represents the first time the US government has formally acknowledged that AI agents require their own identity and authorisation frameworks - distinct from the human users they represent and distinct from the software systems they inhabit. The paper identifies four focus areas: identification of AI agents, authorisation of agent actions, auditing of agent behaviour, and non-repudiation of agent transactions. It also addresses controls to prevent and mitigate prompt injection and other agent-specific attacks. These are precisely the questions that the AXD Institute's original KYA framework identified in Issue 026 - authentication, mandate verification, behavioural fingerprinting, and principal traceability - now expressed in the vocabulary of federal standards development.
The significance lies not in the content but in the mechanism. NIST does not write laws. It writes standards. But NIST standards have a history of becoming de facto mandatory through market adoption, insurance requirements, and procurement mandates. The parallel to PCI DSS is instructive: the Payment Card Industry Data Security Standard was never enacted as legislation. It was developed by the card networks as a voluntary industry standard. Within five years, it was effectively mandatory for any organisation processing card payments - enforced not through regulatory penalties but through network rules, insurance premiums, and contractual requirements. If NIST's concept paper evolves into a formal standard - and the launch of the AI Agent Standards Initiative twelve days later suggests it will - the same dynamic will apply to KYA.
"NIST does not write laws. It writes standards. But NIST standards have a history of becoming de facto mandatory - enforced not through regulatory penalties but through market adoption, insurance requirements, and procurement mandates."
For Agentic Experience Design practitioners, the NIST paper validates a core thesis of this Institute: that agent identity is not a technical implementation detail to be solved by engineers. It is a design challenge that requires the same rigour applied to trust architecture, delegation design, and human-agent interaction. The paper's emphasis on "authorization" as a distinct focus area - separate from identification - mirrors the AXD distinction between authentication (who is this agent?) and mandate verification (what is it allowed to do?). The paper's emphasis on "auditing" mirrors the AXD concept of agent observability. The paper's emphasis on "non-repudiation" mirrors the AXD concept of principal traceability. The vocabulary is different. The architecture is the same.
The EU AI Act and the Classification Problem
The European Union's AI Act becomes fully applicable on August 2, 2026. It is the world's first comprehensive horizontal regulation of artificial intelligence, and it applies a risk-based classification system that determines compliance requirements based on the risk category of the AI system. The framework appears straightforward: minimal-risk systems face minimal obligations, limited-risk systems face transparency requirements, high-risk systems face extensive compliance obligations, and unacceptable-risk systems are prohibited.
For AI payment agents, the classification is anything but straightforward. As the law firm Taylor Wessing documented in their February 2026 analysis, a payment service provider using an AI agent to initiate payments on behalf of customers is a "user of an AI system" - a category that carries relatively light obligations. But if that same agent assesses the customer's creditworthiness as part of the payment process, it becomes a "high-risk AI system" subject to extensive requirements including risk management systems, data governance, technical documentation, human oversight, accuracy and robustness standards, and post-market monitoring.
The problem is that sophisticated agentic commerce systems will routinely do both. An agent that manages a consumer's finances does not simply initiate payments. It evaluates whether a purchase is financially prudent, compares credit options, assesses whether to use available funds or a credit facility, and makes decisions that are functionally equivalent to creditworthiness assessment - even if they are not labelled as such. The classification boundary between "payment initiation" (low-risk) and "creditworthiness assessment" (high-risk) is not a bright line. It is a grey zone that most real-world agentic payment systems will inhabit.
"You cannot regulate what you cannot classify. And the current classification frameworks were not designed for systems that dynamically shift between risk categories based on context."
This is the classification problem: the EU AI Act assumes that an AI system has a stable risk classification that can be determined in advance and maintained throughout its lifecycle. Agentic commerce systems do not work this way. They shift between risk categories dynamically, based on the specific task they are performing at any given moment. An agent that is low-risk when ordering groceries becomes high-risk when it decides to use a buy-now-pay-later facility to finance the purchase. The same agent, the same transaction context, two different risk classifications - determined not by the agent's identity or architecture but by the specific decision it makes in the moment.
The implications for KYA are direct. If agent identity verification must include risk classification - and the EU AI Act requires it - then KYA cannot be a one-time verification event. It must be a continuous process that reassesses the agent's risk classification with every transaction, because the classification changes with the nature of the action. This is a fundamentally different model from traditional KYC, where a customer's risk classification is assessed periodically and assumed to be stable between assessments. For agents, stability is the exception. Dynamic classification is the norm. The AXD Institute's analysis of agentic KYC examines how financial institutions must redesign their verification processes, while the agentic banking pillar maps the broader transformation.
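The continuous reclassification described above can be sketched in a few lines. This is an illustrative toy, not a compliance tool: the action labels and the mapping to risk tiers are hypothetical stand-ins, assuming a system that inspects each proposed agent action rather than classifying the agent once at onboarding.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

# Hypothetical action labels. A real system would derive these from the
# agent's planned tool calls, not from a self-reported string.
HIGH_RISK_ACTIONS = {
    "creditworthiness_assessment",
    "credit_facility_selection",  # e.g. choosing BNPL to finance a purchase
}

def classify_action(action: str) -> RiskTier:
    """Reclassify on every action: under the EU AI Act's logic, the tier
    follows what the agent does in the moment, not what it is."""
    if action in HIGH_RISK_ACTIONS:
        return RiskTier.HIGH
    if action == "payment_initiation":
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# The same agent, the same shopping session, two different classifications:
assert classify_action("payment_initiation") is RiskTier.LIMITED
assert classify_action("credit_facility_selection") is RiskTier.HIGH
```

The design point is the call site: classification happens inside the transaction loop, per action, rather than once in a periodic review - the inversion of the KYC model described above.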
Layered on top of the AI Act are PSD2's requirements for payment initiation services and DORA's requirements for third-party risk management. PSD2 applies to AI agents initiating payments because the regulation is technologically neutral - it governs the function (payment initiation) regardless of whether the initiator is human or machine. DORA applies because AI agent providers are third-party technology providers to financial institutions. And by 2027, the European Digital Identity framework will mandate acceptance of the EU Digital Identity Wallet for identity verification, adding yet another layer of compliance. The result is not a single regulatory framework but a stack of overlapping requirements, each designed for a different purpose, none designed specifically for agentic commerce.
The SCA Paradox
Of all the regulatory challenges facing agentic payments, the Strong Customer Authentication paradox is the most technically specific and the most immediately consequential. SCA, mandated under PSD2, requires that electronic payments be authenticated using at least two of three factors: something the customer knows (a password or PIN), something the customer has (a phone or hardware token), and something the customer is (a biometric). The requirement applies to all electronic payment transactions in the European Economic Area above certain thresholds.
Every word of that requirement assumes the customer is human. "Something the customer knows" presupposes a cognitive entity with memory. "Something the customer has" presupposes a physical entity that possesses objects. "Something the customer is" presupposes a biological entity with biometric characteristics. When the customer is an AI agent, none of these categories apply in their current form. An agent does not "know" a password in the way a human does - it stores a credential. An agent does not "have" a phone - it has access to an API. An agent does not "have" biometrics - it has computational signatures that can be replicated, spoofed, or transferred.
This is not a minor implementation detail. It is a fundamental architectural problem that reveals the deeper assumption embedded in all existing financial regulation: the transacting entity is human. The SCA paradox is the point at which that assumption becomes operationally untenable. An agentic payment system operating in Europe must satisfy SCA requirements, but the authentication factors that SCA mandates are designed for entities with bodies, memories, and physical possessions. Agents have none of these.
The payments industry is attempting to solve this through delegation mechanisms. Mastercard's Agentic Tokens authenticate the human principal at the point of delegation, then issue a token that the agent presents for subsequent transactions - effectively treating the human's initial SCA authentication as covering all agent actions within the delegated scope. Visa's Delegated Credentials take a similar approach but embed the delegation within the credential itself. Stripe's x402 protocol sidesteps SCA entirely by operating in the cryptocurrency domain, where PSD2 does not apply.
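The shape of these delegation mechanisms can be sketched as follows. The field names and checks here are hypothetical illustrations of the general pattern, not Mastercard's or Visa's actual credential format: the human passes SCA once at delegation time, a scoped token is issued, and every subsequent agent transaction is checked only against that scope.

```python
import time
from dataclasses import dataclass, field

@dataclass
class DelegationToken:
    """Illustrative stand-in for a network delegation credential."""
    principal_id: str        # the human who passed SCA at delegation time
    agent_id: str
    issued_at: float
    expires_at: float
    max_amount: float        # per-transaction ceiling, in account currency
    allowed_categories: set[str] = field(default_factory=set)

def authorise(token: DelegationToken, agent_id: str,
              amount: float, category: str) -> bool:
    """Accept a transaction only if it falls inside the delegated scope.
    Note what this does NOT do: re-verify the human, who authenticated
    once, possibly weeks before this call."""
    now = time.time()
    return (token.agent_id == agent_id
            and token.issued_at <= now < token.expires_at
            and amount <= token.max_amount
            and category in token.allowed_categories)

token = DelegationToken("alice", "shop-agent-1", time.time(),
                        time.time() + 30 * 86400, 200.0, {"groceries"})
assert authorise(token, "shop-agent-1", 45.0, "groceries")
assert not authorise(token, "shop-agent-1", 45.0, "electronics")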
"The SCA paradox is not a compliance inconvenience. It is the moment at which financial regulation's foundational assumption - that the transacting entity is human - becomes operationally untenable."
None of these solutions addresses the deeper question: if SCA is designed to prevent unauthorised transactions by verifying the transactor's identity, what is the equivalent verification when the transactor is an agent? The human authenticated once, at the point of delegation. But the agent may transact thousands of times over weeks or months, in contexts the human never anticipated, with counterparties the human never approved. The initial SCA authentication becomes increasingly disconnected from the agent's actual transactions over time. The authentication was valid at the moment of delegation. Is it still valid on the agent's five-hundredth transaction, three weeks later, with a merchant the human has never heard of?
The AXD Institute's position is that the SCA paradox cannot be solved within the existing SCA framework. It requires a new authentication paradigm - one that verifies not the identity of the transactor (which SCA does for humans) but the authority of the transactor (which nothing currently does for agents). This is the distinction between authentication and mandate verification that the original KYA framework drew in Issue 026, and it is the distinction that regulators will eventually need to codify. The question is whether they will codify it before or after the first major consumer harm event involving an agent that was technically SCA-compliant but practically unauthorised.
The Private-Sector Race
While regulators deliberate, the private sector is building. In the month between late January and late February 2026, six organisations published KYA frameworks, products, or standards - more concrete KYA activity than the preceding twelve months combined. The race is not to build the best framework. It is to build the framework that becomes the de facto standard before formal regulation arrives.
| Organisation | Product / Framework | Approach | Key Innovation |
|---|---|---|---|
| Sumsub | AI Agent Verification | Human binding - links each agent to a verified human identity | First commercial KYA product (January 28, 2026) |
| FIME | KYA Framework | Three-component: verifiable digital identity + smart wallet + reputation record | "Corporate intern with a prepaid card" governance model |
| AgentFacts.org | KYA Standard | Blockchain-based immutable agent identity verification | Regulatory framework alignment (EU AI Act, NIST AI RMF) |
| Trulioo | KYA White Paper | Enterprise-focused digital identity framework | Identity framework for trusted agentic commerce |
| AstraSync AI | KYA Standard | Collaborative framework for responsible AI agent deployment | Multi-stakeholder governance model |
| Vouched.id | KYA Guide | Verification and accountability for automated transactions | Fraud prevention focus |
The most substantive frameworks are Sumsub's and FIME's, and they reveal a fundamental philosophical divide. Sumsub's approach is human-binding: every agent must be linked to a verified human identity, and the human bears ultimate responsibility for the agent's actions. This is KYA as an extension of KYC - the agent is treated as an instrument of the human, and the human's identity is the anchor of trust. FIME's approach is agent-native: the agent has its own verifiable digital identity, its own smart wallet with enforced spending limits, and its own reputation record that accumulates over time. This is KYA as a new category of identity - the agent is treated as an entity in its own right, with its own credentials and its own track record.
This divide mirrors the three philosophies of agent identity identified in Issue 042: the delegated model (agent as extension of human), the autonomous model (agent as independent entity), and the hybrid model (agent as entity with human oversight). Sumsub is firmly in the delegated camp. FIME leans toward the hybrid. AgentFacts.org, with its blockchain-based immutable identity, is closest to the autonomous model. The private sector is not converging on a single approach. It is diverging into the same philosophical camps that the regulatory jurisdictions are diverging into.
The historical parallel is instructive. When the payments industry needed a security standard in the early 2000s, the card networks - Visa, Mastercard, American Express, Discover, JCB - formed the PCI Security Standards Council and developed PCI DSS collaboratively. The standard succeeded because it was developed by the organisations that controlled the payment networks, giving them both the authority and the incentive to enforce it. No equivalent coordination exists for KYA. The six organisations listed above are competitors, not collaborators. They are building proprietary frameworks, not shared standards. The result will be fragmentation - multiple incompatible KYA implementations that merchants, payment processors, and consumers must navigate simultaneously.
"The private sector is not waiting for regulators. It is building KYA frameworks now, and these frameworks will shape what regulation eventually looks like - just as the payments industry's self-regulation shaped PSD2."
The Four Gaps
Every KYA framework proposed to date - whether regulatory, standards-based, or commercial - addresses the same core questions: Is this agent authentic? Is it authorised? Can we trace it back to a responsible human? These are necessary questions. They are not sufficient ones. Four fundamental gaps persist across every framework, and until they are closed, KYA will remain an identity verification exercise rather than the trust governance architecture that agentic commerce requires.
Gap 1: Agent Competence Assessment. Every KYA framework verifies identity and authority. None assesses competence. This is the equivalent of checking a surgeon's medical licence without ever asking whether they can actually perform surgery. An agent can be perfectly authenticated, properly authorised, and still make catastrophic decisions because it is poorly trained, operating outside its competence boundary, or experiencing model drift - the gradual degradation of performance as the data distribution it encounters in production diverges from its training data. The AXD Institute's Trust Architecture identifies competence trust as the foundational layer of the four-layer trust model - the layer without which no other form of trust is possible. KYA asks: is this agent permitted to act? AXD asks: should this agent be trusted to act well? These are fundamentally different questions, and the second is the one that determines whether agentic commerce succeeds or fails in practice.
Gap 2: Dynamic Authority. KYA frameworks assume static delegation - an agent is authorised to do X within constraints Y, and those constraints remain fixed until explicitly changed. But real-world agentic commerce requires dynamic authority that adjusts based on context, performance history, and environmental conditions. A consumer grants their shopping agent authority to spend £200 per week. Mid-month, an unexpected expense reduces the household budget. Under static delegation, the consumer must manually reconfigure the agent. Under dynamic authority - as described in the AXD Institute's Delegation Design Framework and the Autonomy Gradient Design System - the agent's operational envelope contracts automatically. No KYA framework maps to this reality. They verify that the agent has authority. They do not govern how that authority should behave in context.
Gap 3: Multi-Agent Lineage. When Agent A delegates to Agent B, which delegates to Agent C, which initiates a transaction - who is the principal? KYA frameworks handle single-agent-to-human lineage. They do not handle multi-agent delegation chains, which are rapidly becoming the norm as Google's Agent-to-Agent protocol, Anthropic's Model Context Protocol, and other inter-agent communication standards enable agents to orchestrate other agents across organisational boundaries. A consumer's personal assistant delegates grocery shopping to a specialist agent, which delegates a marketplace purchase to a negotiation agent, which negotiates with three seller agents. The delegation chain crosses four organisational boundaries and potentially multiple legal jurisdictions. The Decentralized Identity Foundation has identified this as a core challenge: "Without distinct agent identities, delegation chains aren't traceable, auditable, or debuggable." No current KYA framework provides chain-of-delegation recording, authority attenuation tracking, or distributed accountability mapping for multi-agent systems.
Gap 4: Cross-Jurisdictional Identity. An agent authenticated under the EU AI Act's framework may not be recognised under NIST standards. Neither framework maps to China's platform-based governance model. There is no mutual recognition framework for agent identity across jurisdictions - the equivalent of passport treaties for AI agents. But the passport analogy has a fundamental limitation: passports verify a stable attribute (human identity) against a universal standard (biometric data). Agent identity is not stable - it changes with every model update, every fine-tuning run, every system prompt modification. The AXD Institute's original contribution to this gap is the concept of trust portability - the ability for a trust relationship built between a human and an agent in one jurisdiction to be recognised and honoured in another. This is not identity portability (a technical challenge) or regulatory equivalence (a political challenge). It is a design challenge: how do you architect an agent's trust record so that the trust it has earned in one context can be meaningfully communicated to a new one?
"The four gaps are not four separate issues. They are four dimensions of a single systemic failure: KYA frameworks verify identity without governing trust."
These gaps compound. An agent that cannot be assessed for competence (Gap 1) cannot have its authority dynamically adjusted based on performance (Gap 2). A multi-agent chain (Gap 3) that crosses jurisdictional boundaries (Gap 4) creates a lineage problem that is simultaneously a competence problem, an authority problem, and a governance problem. Identity verification is a necessary condition for agentic commerce. It is not a sufficient one. The sufficient condition is trust governance - the designed, dynamic, contextual management of the relationship between human intent and agent action. KYA frameworks that address only identity will create a false sense of security. KYA frameworks that address identity and trust will create the foundation for agentic commerce that actually works.
The FIME Roadmap
Of all the private-sector contributions to the KYA conversation, FIME's three-horizon roadmap for agentic payments governance is the most concrete timeline yet proposed. Published by Raphaël Guilley, FIME's SVP of Strategic Portfolio and Growth, on February 27, 2026, it maps the evolution of agent payment governance across short, medium, and long-term horizons. It is worth examining in detail - both for what it gets right and for what it misses.
| Horizon | Timeframe | Key Developments |
|---|---|---|
| Short-term | 1-2 years | Agent payment sandboxes, KYA protocol standardisation, virtual payment credentials ("agent cards"), ISO 20022 message standard updates with agent flag metadata |
| Medium-term | 3-5 years | Programmable CBDCs for agent transactions, new legal definitions for agent delegation and algorithmic liability, agent licensing requirements |
| Long-term | 5+ years | Bot-native payment infrastructure, cross-border agent payment standards ("IBAN for AI"), global agent registries coordinated by BIS or IMF |
The short-term timeline is realistic. Agent payment sandboxes are already emerging - Mastercard's agentic transaction testing environment, Stripe's x402 developer programme, and various fintech sandbox initiatives provide the infrastructure for controlled experimentation. KYA protocol standardisation is underway, driven by the six private-sector frameworks already in market. Virtual payment credentials are a natural extension of existing tokenisation infrastructure. And ISO 20022 message standard updates are a technical exercise that the payments industry has the institutional capacity to execute.
The medium-term timeline is optimistic. Programmable CBDCs remain experimental in most jurisdictions - the Bank of England's digital pound consultation is ongoing, the European Central Bank's digital euro project is in its preparation phase, and the US has no active CBDC programme. New legal definitions for agent delegation and algorithmic liability require legislative action, which operates on political timescales, not technology timescales - a challenge the AXD Institute examines in depth in Liability and the Agent. Agent licensing requirements would need regulatory frameworks that do not yet exist. Three to five years is ambitious for any of these developments; all three within the same window is aspirational.
The long-term timeline is visionary. A global agent registry coordinated by the Bank for International Settlements or the International Monetary Fund would require unprecedented international coordination - the kind of coordination that took decades to achieve for human financial identity through the FATF (Financial Action Task Force) framework. An "IBAN for AI" is a compelling metaphor but a formidable governance challenge.
The critical missing element across all three horizons is trust design. The FIME roadmap addresses infrastructure (payment sandboxes, virtual credentials, CBDCs), governance (KYA protocols, agent licensing, legal definitions), and coordination (cross-border standards, global registries). It does not address the human-agent interaction that determines whether consumers will actually delegate financial authority to agents operating within these systems. Infrastructure without trust is plumbing without water. The pipes are necessary. But the value flows through the trust relationship, not through the pipes.
What KYA Means for Agentic Commerce
The regulatory analysis above is necessary context. But for the practitioners, product leaders, and design teams reading this essay, the question is practical: what does KYA regulation mean for the systems you are building? The answer differs by stakeholder, and in each case, the implications are more far-reaching than a simple compliance exercise.
For payment networks. KYA will require Mastercard, Visa, and Stripe to move beyond authentication tokens toward comprehensive agent identity frameworks. The current approaches - Agentic Tokens, Delegated Credentials, x402 protocol - are necessary but insufficient. They solve the authentication problem (is this agent who it claims to be?) without solving the competence problem (is this agent good at what it does?), the dynamic authority problem (is this agent's authority appropriate for this specific transaction?), or the lineage problem (can we trace this agent's actions through a multi-agent chain?). Networks that build KYA-compliant infrastructure first - infrastructure that addresses all four gaps, not just authentication - will have a competitive advantage as regulation crystallises.
For merchants. KYA will create a new compliance burden. Merchants will need to verify not just the human customer but the agent acting on their behalf. This is the AXD Readiness challenge expressed in regulatory language. Merchants who have already invested in machine-readable product data, agent-compatible transaction flows, and the signal clarity that agents need to make informed purchasing decisions will be ahead. Merchants who have optimised exclusively for human browsing behaviour will face a double transformation: adapting their systems for agent interaction and satisfying KYA compliance requirements simultaneously.
For consumers. KYA regulation will paradoxically both enable and constrain agentic shopping. It will enable it by creating the trust infrastructure that makes consumers willing to delegate financial authority to agents - the research from Issue 042 showed that 67% of consumers cite "lack of trust" as the primary barrier to agentic commerce adoption. Regulation creates trust by creating accountability. But KYA will also constrain agentic shopping by adding friction - verification steps, consent confirmations, authority limitations - that works against the frictionless ideal the industry is pursuing. The design challenge is to make this friction productive rather than obstructive: friction that builds trust rather than destroying convenience.
For AXD practitioners. KYA regulation validates the central thesis of Agentic Experience Design: that trust is the primary material of the agentic economy, and that trust must be designed, not assumed. Every KYA requirement maps to an AXD design challenge. Agent identity maps to trust architecture. Authority verification maps to delegation design. Behavioural monitoring maps to the Autonomy Gradient. Dispute resolution maps to the Failure Architecture Blueprint. The organisations that have invested in AXD frameworks are not just better designed. They are better prepared for regulation - because the design discipline and the regulatory requirements are converging on the same set of problems.
Five Predictions
One. NIST's concept paper will evolve into a formal standard by Q4 2026, becoming the de facto KYA framework for the US market - voluntary in name, mandatory in practice through insurance requirements, enterprise procurement mandates, and payment network rules. The comment period closes April 2. The AI Agent Standards Initiative launched February 17. The institutional momentum is unmistakable.
Two. The EU AI Act's August 2026 applicability date will trigger the first enforcement actions against AI payment agents operating without adequate risk classification, forcing the industry to confront the classification problem described in Section 4. These enforcement actions will not target the agents themselves but the organisations deploying them - payment service providers, fintechs, and banks that have classified their agentic systems as low-risk when the systems' actual behaviour crosses into high-risk territory.
Three. The UK FCA will publish AI-specific guidance by end of 2026, as the Treasury Committee demanded. But it will be principles-based and insufficient for the specific challenges of agentic payments - creating a regulatory gap that private-sector KYA frameworks will fill. The Mills Review is examining AI's impact on retail financial services broadly, not agentic payments specifically. The resulting guidance will be directionally correct but operationally vague.
Four. At least one major payment network will announce a formal KYA compliance programme by Q3 2026, requiring all agents transacting on its network to meet minimum identity, authority, and monitoring standards. Mastercard is the most likely candidate, given its advanced position with Agentic Tokens and its history of leading on security standards (it was a founding member of the PCI Security Standards Council). This programme will become the PCI DSS of agentic commerce - not legislation, but an industry standard that is effectively mandatory.
Five. The first cross-jurisdictional agent identity mutual recognition agreement will be proposed - though not ratified - by end of 2026, most likely between the US and EU, modelled on existing data protection adequacy frameworks. The GDPR adequacy decision process provides the template: the EU assesses whether a third country's data protection regime provides "essentially equivalent" protection. A KYA adequacy framework would assess whether a third country's agent identity regime provides essentially equivalent verification. The political will exists. The technical framework does not - yet.
"The question is not whether KYA regulation is coming. The question is which of the competing frameworks - regulatory, standards-based, or commercial - will define what KYA actually means."
KYA Regulation: Implications for Practitioners
This essay has been deliberately regulatory in focus. The design implications are the reason for that focus. KYA regulation is not something that happens to AXD practitioners. It is something that validates AXD practice and creates new design requirements that only AXD frameworks can address. Four specific design imperatives emerge from the analysis above.
Design for auditability now. Every agent interaction should generate an audit trail that can satisfy regulatory inquiry. This is not a future requirement - it is a present one under existing financial regulation. The NIST concept paper's emphasis on "auditing" and "non-repudiation" makes this explicit. Design your agent systems so that every delegation event, every transaction, every authority change, and every human re-engagement is recorded in a format that regulators can interpret. The organisations that build this infrastructure now will not need to retrofit it when regulation arrives.
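To make the auditability requirement concrete, here is a minimal sketch in Python of what a regulator-legible audit trail might look like. The hash-chaining makes tampering with earlier entries detectable, which is one plausible way to approach the non-repudiation property the NIST paper emphasises. The class and field names are illustrative assumptions, not a published schema:

```python
import hashlib
import json
import time
from dataclasses import dataclass, field

@dataclass
class AuditLog:
    """Append-only audit trail: every record carries the hash of its
    predecessor, so altering any earlier entry breaks the chain."""
    records: list = field(default_factory=list)

    def append(self, event_type: str, agent_id: str, detail: dict) -> dict:
        # event_type might be: delegation, transaction, authority_change,
        # or human_reengagement - the four event classes named above.
        prev_hash = self.records[-1]["hash"] if self.records else "genesis"
        body = {
            "event_type": event_type,
            "agent_id": agent_id,
            "detail": detail,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash link; False means the log was altered."""
        prev = "genesis"
        for rec in self.records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if expected != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

The design point is that auditability is cheap to build in at the event layer and expensive to retrofit from transaction logs after the fact.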
Build dynamic delegation. Static authority models will not survive KYA regulation. The EU AI Act's dynamic risk classification, the SCA paradox's challenge to one-time authentication, and the competence assessment gap all point in the same direction: agent authority must be dynamic, contextual, and continuously governed. The Delegation Design Framework provides the architectural foundation. Its seven dimensions of delegation - scope, duration, spending authority, category restrictions, vendor preferences, escalation triggers, and revocation conditions - are not just good design practice. They are the design vocabulary that KYA regulation will eventually require.
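The seven dimensions above can be expressed directly as a data structure plus an evaluation function that is consulted per transaction rather than once at delegation time. The sketch below is a simplified illustration of that idea - the field names and the three-way allow/escalate/deny outcome are assumptions, not part of the Delegation Design Framework's published vocabulary:

```python
import time
from dataclasses import dataclass

@dataclass
class DelegationMandate:
    """One delegation grant, expressed along the seven dimensions:
    scope, duration, spending authority, category restrictions,
    vendor preferences, escalation triggers, revocation conditions."""
    scope: str                  # what the agent may do
    expires_at: float           # duration, as an absolute expiry timestamp
    spending_limit: float       # per-transaction spending authority
    blocked_categories: set     # category restrictions
    preferred_vendors: set      # vendor preferences (advisory, not enforced here)
    escalation_threshold: float # amounts above this need human sign-off
    revoked: bool = False       # revocation condition

def evaluate(mandate: DelegationMandate, amount: float,
             category: str, now: float = None) -> str:
    """Return 'deny', 'escalate', or 'allow' for a proposed transaction.
    Checked on every transaction - authority is dynamic, not one-time."""
    now = time.time() if now is None else now
    if mandate.revoked or now > mandate.expires_at:
        return "deny"
    if category in mandate.blocked_categories:
        return "deny"
    if amount > mandate.spending_limit:
        return "deny"
    if amount > mandate.escalation_threshold:
        return "escalate"  # hand the decision back to the human principal
    return "allow"
```

Because the mandate is evaluated contextually per transaction, revocation and expiry take effect immediately - the behaviour static, delegate-once authority models cannot provide.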
Implement the hesitation layer. KYA regulation will require human confirmation for high-value or high-risk agent transactions. The hesitation layer - the designed moment of human re-engagement before irreversible action - is not just good AXD practice. It is about to become a regulatory requirement. The SCA paradox makes this inevitable: if one-time authentication at the point of delegation is insufficient (and it is), then periodic re-authentication during the agent's operational lifecycle becomes necessary. The design challenge is to make these re-engagement moments feel natural rather than obstructive - trust-building rather than trust-breaking.
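A hesitation layer can be reduced to one gating question: does this action have a sufficiently recent human confirmation? The sketch below illustrates that gate under assumed thresholds - the class name, the value threshold, and the fifteen-minute confirmation window are all illustrative choices, not regulatory figures:

```python
import time

class HesitationLayer:
    """Pause-before-irreversible-action gate. High-stakes actions
    require a *recent* human confirmation, not the one-time
    authentication granted at delegation (the SCA paradox)."""

    def __init__(self, value_threshold=100.0, max_confirmation_age=900):
        self.value_threshold = value_threshold            # currency units (assumed)
        self.max_confirmation_age = max_confirmation_age  # seconds (assumed: 15 min)
        self.last_confirmed_at = None

    def record_human_confirmation(self, now=None):
        """Called when the human re-engages and approves."""
        self.last_confirmed_at = time.time() if now is None else now

    def may_proceed(self, amount, irreversible, now=None):
        """True if the action is low-stakes, or if a fresh human
        confirmation covers a high-stakes or irreversible one."""
        now = time.time() if now is None else now
        high_stakes = irreversible or amount >= self.value_threshold
        if not high_stakes:
            return True
        if self.last_confirmed_at is None:
            return False  # trigger a re-engagement moment
        return (now - self.last_confirmed_at) <= self.max_confirmation_age
```

The design work is in what happens when `may_proceed` returns False: that re-engagement moment must feel like the agent being careful with your money, not like the system breaking.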
Prepare for multi-agent lineage. As agent-to-agent commerce scales - enabled by Google's A2A protocol, Anthropic's MCP, and the emerging agentic protocol stack - regulators will require principal traceability through delegation chains. Design systems that maintain clear lineage from every agent action back to a responsible human - the fourth pillar of the original KYA framework. This means implementing chain-of-delegation recording, authority attenuation tracking, and distributed accountability mapping before regulation mandates them. The organisations that treat multi-agent lineage as a design requirement today will not need to treat it as a compliance emergency tomorrow.
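Two of the properties named above - chain-of-delegation recording and authority attenuation - can be sketched as a linked structure in which every hop points at its delegator and can never hold more authority than it. This is an illustrative model, not a representation of A2A or MCP, and the names are hypothetical:

```python
class DelegationLink:
    """One hop in a delegation chain. Authority only attenuates:
    a sub-agent's spending limit can never exceed its delegator's."""

    def __init__(self, principal, delegate, spending_limit, parent=None):
        if parent is not None and spending_limit > parent.spending_limit:
            raise ValueError("authority must attenuate down the chain")
        self.principal = principal          # who granted this authority
        self.delegate = delegate            # who received it
        self.spending_limit = spending_limit
        self.parent = parent                # None at the root (human) grant

    def lineage(self):
        """Trace any action back to the responsible human principal:
        returns the chain from the root human to this delegate."""
        names = [self.delegate]
        link = self
        while link.parent is not None:
            link = link.parent
            names.append(link.delegate)
        names.append(link.principal)        # the human at the root
        return list(reversed(names))
```

A usage sketch: a human delegates to a shopping agent, which sub-delegates to a price-comparison agent with a smaller limit; any transaction by the sub-agent traces back through both hops to the human, and an attempt to sub-delegate with a *larger* limit is rejected at construction time rather than discovered at audit time.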
"KYA regulation is not something that happens to AXD practitioners. It is something that validates AXD practice - because the design discipline and the regulatory requirements are converging on the same set of problems."
The regulatory reckoning is here. Not approaching. Here. The NIST concept paper is published. The EU AI Act applicability date is five months away. The FCA is under parliamentary pressure to act. Six private-sector organisations have published KYA frameworks in sixty days. The question for every organisation building agentic commerce systems is not whether to prepare for KYA regulation. It is whether to prepare now - on your own terms, with design integrity - or later, under regulatory pressure, with compliance as the ceiling rather than trust as the floor.
