AXD Brief 043

The Regulatory Reckoning

Know Your Agent and the Coming Governance of Agentic Commerce

3 min read · From Observatory Issue 043 · Full essay: 30 min

The Argument

Know Your Agent (KYA) is a regulatory framework that extends identity verification principles to autonomous AI agents, establishing the foundation for accountable agentic commerce. The current global push for KYA, however, is dangerously incomplete. While regulators and private firms are racing to define standards for agent identification and authorisation, they are fundamentally mistaking identity verification for trust governance. This narrow focus on *who* an agent is, rather than on its competence, its dynamic authority, and the integrity of its decision-making, creates a false sense of security. Without a deeper architecture of trust, KYA will fail to make agentic commerce safe and reliable, leaving the global economy vulnerable to systemic risks.

The Evidence

The drive for KYA is unfolding across at least five major jurisdictions, but this is not a coordinated effort. It is a fractured and divergent one. The United States is pursuing a voluntary, standards-based approach through NIST, while the European Union is implementing a horizontal, risk-based framework via its AI Act. The United Kingdom is adapting existing principles-based financial regulations, Singapore is relying on governance-focused principles, and China has embedded agent governance within its existing platform rules. This fragmentation means an agent verified in one jurisdiction will not be recognised in another, creating a complex and costly compliance maze for any organisation operating globally and preventing the emergence of a universal trust layer for agentic commerce.

This regulatory chaos is compounded by a fundamental architectural flaw in existing financial rules, exemplified by the Strong Customer Authentication (SCA) paradox in Europe. SCA mandates that transactions be authenticated by factors proving what a customer *knows* (password), *has* (phone), or *is* (biometric) - all categories that presuppose a human transactor. An AI agent has no body, no physical possessions, and no biological identity. This paradox reveals that our entire financial regulatory system is built on the assumption that the transacting entity is human, an assumption that becomes operationally untenable in the age of agentic commerce. Current workarounds, like authenticating the human once at the point of delegation, fail to address the ongoing risk of an agent transacting thousands of times without direct oversight.

Furthermore, all current KYA proposals - whether from regulators like NIST or private firms like Sumsub and FIME - suffer from four systemic gaps. They fail to address competence assessment (is the agent actually good at its job?), dynamic authority (can the agent’s permissions adapt to changing contexts?), multi-agent lineage (who is responsible when agents delegate to other agents?), and cross-jurisdictional identity (how can trust be portable across borders?). These are not minor details; they are the core components of a functional trust architecture. By focusing only on initial identity verification, these frameworks ignore the complex, dynamic, and often multi-layered nature of agentic systems, thereby failing to govern the trust relationship between human intent and agent action.

The Implication

If the current trajectory of KYA regulation continues, organisations will be forced to invest in compliance solutions that provide a veneer of security while failing to address the underlying risks of agentic systems. The result will be a brittle and untrustworthy ecosystem. Product leaders and designers must therefore look beyond mere compliance and treat KYA as a design challenge centred on trust architecture. This requires building systems that don't just verify an agent's identity but continuously govern its behaviour. For example, instead of static permissions, designers should implement delegation design frameworks where an agent's authority dynamically adjusts based on its performance, the context of a transaction, and the principal's real-time circumstances.
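To make the contrast with static permissions concrete, here is a minimal sketch of dynamic authority. It is purely illustrative: the names (`AgentProfile`, `effective_limit`), the scoring inputs, and the scaling rule are assumptions for this brief, not part of any published KYA standard.

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    competence_score: float  # rolling measure of demonstrated reliability, 0.0-1.0
    base_limit: float        # spending limit granted at the point of delegation

def effective_limit(profile: AgentProfile,
                    context_risk: float,
                    principal_flagged: bool) -> float:
    """Authority adjusts continuously rather than being fixed once at delegation.

    context_risk: 0.0 (routine transaction) up to 1.0 (novel merchant, high value)
    principal_flagged: e.g. the principal has reported unusual activity
    """
    if principal_flagged:
        return 0.0  # authority suspended until the human re-confirms intent
    # Scale the static grant by demonstrated competence and current context risk.
    return profile.base_limit * profile.competence_score * (1.0 - context_risk)

agent = AgentProfile(competence_score=0.75, base_limit=400.0)
print(effective_limit(agent, context_risk=0.25, principal_flagged=False))  # 225.0
```

The point of the sketch is structural: the agent's spending authority is a function recomputed per transaction, not a credential checked once.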

Practitioners in Agentic Experience Design (AXD) must champion a shift from identity verification to trust governance. This means creating mechanisms for agent observability and competence scoring, allowing both users and systems to assess an agent’s reliability over time. It also requires designing for multi-agent accountability, creating clear and auditable records of delegation chains. For payment networks and financial institutions, this is an opportunity to build a competitive advantage by developing infrastructure that offers genuine trust, not just regulatory theatre. For merchants and consumer-facing businesses, it is a mandate to build agent-ready systems that provide the signal clarity necessary for reliable autonomous operation. Ultimately, the success of agentic commerce hinges not on whether we can identify agents, but on whether we can trust them.
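One way to make delegation chains auditable is to have each delegation record commit cryptographically to the record that authorised it, so lineage can be replayed and verified. The sketch below is an assumption-laden illustration (the field names and hashing scheme are invented for this brief, not drawn from any KYA proposal):

```python
import hashlib
import json

def delegation_record(principal: str, delegate: str,
                      scope: str, parent_hash: str = "") -> dict:
    """One link in a delegation chain: the hash covers the grant and its parent,
    so any later tampering with scope or lineage is detectable."""
    body = {"principal": principal, "delegate": delegate,
            "scope": scope, "parent": parent_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(chain: list) -> bool:
    """Recompute each record's hash and check every link points at its parent."""
    prev = ""
    for rec in chain:
        body = {k: rec[k] for k in ("principal", "delegate", "scope", "parent")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["parent"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

# A human delegates to a shopping agent, which sub-delegates to a payment agent.
root = delegation_record("principal@example", "shopping-agent", "buy <= 200")
sub = delegation_record("shopping-agent", "payment-agent", "pay <= 200", root["hash"])
print(verify_chain([root, sub]))  # True
```

A record like this answers the multi-agent lineage question directly: when the payment agent acts, the chain shows who authorised it, under what scope, and on whose original behalf.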

Tony Wood

Founder, AXD Institute · Manchester, UK