
The Observatory · Issue 026 · February 2026

Know Your Agent (KYA) | AXD Institute

The Four Pillars of Agent Identity in Agentic Commerce

By Tony Wood · 26 min read


Know Your Customer has been the bedrock of regulated commerce for half a century. The principle is deceptively simple: before you transact with someone, verify who they are. Banks check passports. Exchanges confirm addresses. Payment processors validate identities. The entire architecture of financial trust rests on the assumption that the entity initiating a transaction is a human being whose identity can be established through documentary evidence and biometric confirmation.

That assumption is now breaking. When a machine customer initiates a purchase, requests a credit line, or executes a portfolio rebalance, there is no passport to check, no face to scan, no signature to verify. The agent is not a person. It is a software entity acting on behalf of a person - or, increasingly, on behalf of another agent acting on behalf of a person. The chain of delegation can extend several layers deep, and at each layer, the question of identity becomes more complex.

Know Your Agent (KYA) is the AXD Institute's framework for addressing this challenge. It does not replace KYC - it extends it into the agentic domain, as explored in the Institute's analysis of agentic KYC. Where KYC asks "who is this person?", KYA asks four additional questions that together constitute the minimum viable identity architecture for agentic commerce: Is this agent what it claims to be? Is it authorised to act? Is it behaving consistently? And can we trace its actions back to a responsible human? The companion framework - Know Your Human (KYH) - inverts this lens, asking how agents continuously validate the humans they serve.


01

The Identity Crisis

The fundamental problem is not that we lack identity verification technology. It is that every identity verification system ever built assumes the entity being verified is human. Biometric authentication requires a body. Document verification requires a government-issued credential tied to a natural person. Even behavioural analytics - keystroke dynamics, mouse movement patterns, typing cadence - are calibrated against human behavioural baselines.

AI agents violate every one of these assumptions. They have no biometrics. They carry no documents. Their behavioural patterns are determined by their training data and system prompts, not by the neurological characteristics of a human operator. When an agent authenticates to a banking API, it presents cryptographic credentials - tokens, certificates, API keys - that prove it has been granted access, but reveal nothing about the human principal whose intent it claims to represent.

This creates what Sumsub's research team calls the "identity gap" - the space between technical authentication (proving the agent has valid credentials) and meaningful identity (establishing who is responsible for the agent's actions and what authority has been delegated to it). Traditional KYC closes this gap for humans through documentary evidence and biometric confirmation. For agents, the gap remains wide open.

"KYC asks who is this person. KYA asks four harder questions: is this agent authentic, is it authorised, is it behaving normally, and can we trace its actions back to a human who bears responsibility?"

The consequences of this gap are not theoretical. As Birch and Hoffart documented in the Journal of Digital Banking (2025), financial institutions are already encountering agent-initiated transactions that pass all existing fraud detection systems because those systems were designed to detect anomalous human behaviour. An agent that executes thirty transactions per second is not exhibiting suspicious behaviour - it is exhibiting agent behaviour. The detection models need to be rebuilt from first principles.

KYA provides those first principles. It is structured around four pillars, each addressing a distinct dimension of the identity challenge. Together, they form the minimum viable architecture for admitting autonomous agents into regulated commercial relationships.


02

Agent Authentication

The first pillar answers the most basic question: is this agent what it claims to be? Authentication in the agentic context means establishing that the software entity presenting itself to a service is genuinely the agent it purports to be, deployed by the organisation it claims to represent, and running the version it advertises.

This is more complex than it appears. In human authentication, the entity and the identity are inseparable - a person is their biometric signature. In agent authentication, the entity (the running software) and the identity (the registered agent profile) are fundamentally separable. An agent can be cloned, spoofed, or impersonated. A compromised agent can continue presenting valid credentials while executing malicious instructions. The authentication challenge is therefore not merely "does this agent have valid credentials?" but "is this the genuine, uncompromised instance of the agent it claims to be?"

Current best practice, as documented by Sumsub and the emerging AgentFacts standard, involves a layered authentication model. At the base layer, cryptographic credentials - OAuth 2.1 client credentials, mutual TLS certificates, or signed JSON Web Tokens - establish that the agent possesses secrets known only to the legitimate deployer. Above this, agent metadata verification confirms the agent's declared capabilities, version, and deployment context against a registered profile. At the highest layer, runtime attestation mechanisms can verify that the agent's execution environment has not been tampered with.
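The first two layers of this model can be sketched in a few lines. The following is a minimal illustration, not a reference implementation: the registry, agent IDs, and shared-secret scheme are hypothetical, and an HMAC over the claims stands in for what would in practice be an OAuth 2.1 credential, mTLS certificate, or signed JWT.

```python
import hashlib
import hmac
import json

# Hypothetical registry of agent profiles, keyed by agent ID.
REGISTRY = {
    "shop-agent-01": {
        "secret": b"deployer-shared-secret",  # in practice: OAuth client secret or key pair
        "version": "2.3.1",
        "capabilities": {"browse", "purchase"},
    }
}

def authenticate_agent(agent_id: str, claims: dict, signature: str) -> bool:
    """Layer 1: cryptographic credential check. Layer 2: metadata check
    against the registered profile. Both must pass."""
    profile = REGISTRY.get(agent_id)
    if profile is None:
        return False
    # Layer 1: the claims must be signed with the secret known only to
    # the legitimate deployer (HMAC stands in for JWT / mTLS here).
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(profile["secret"], payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False
    # Layer 2: declared version and capabilities must match the profile.
    return (claims.get("version") == profile["version"]
            and set(claims.get("capabilities", [])) <= profile["capabilities"])
```

Note what this sketch deliberately omits: the third layer, runtime attestation, cannot be simulated with a signature check at all - it requires hardware or platform support to prove the execution environment is untampered.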

"In human authentication, the entity and the identity are inseparable. In agent authentication, they are fundamentally separable - and that separation is where the attack surface lives."

The MCP (Model Context Protocol) specification, developed by Anthropic, has begun addressing this through its OAuth 2.1 integration with dynamic client registration. This allows agents to authenticate to tool servers in a standardised way, but it addresses only the machine-to-machine layer. It does not, by itself, establish who deployed the agent or on whose behalf it acts. That requires the remaining three pillars.

For identic AI agents - those acting as cognitive extensions of a specific individual - authentication must also establish the binding between the agent instance and the human principal. This is what Sumsub calls "human binding": the cryptographic and procedural chain that connects a specific agent deployment to a verified human identity. Without this binding, authentication proves only that the agent is technically valid, not that it is legitimately authorised to act on anyone's behalf.


03

Mandate Verification

Authentication establishes that an agent is genuine. Mandate verification - the second pillar - establishes that the agent is authorised to perform the specific action it is attempting. This is the distinction between identity and authority, and in the agentic context, it is the distinction that matters most.

A human customer who authenticates to their bank can, in principle, perform any action their account permits. The authentication is the authorisation. For agents, this conflation is dangerous. An agent authenticated to act on behalf of a customer should not automatically inherit the full scope of that customer's authority. The agent's mandate - the specific set of actions, value thresholds, temporal boundaries, and contextual constraints within which it is authorised to operate - must be verified independently of its identity.

This is where the AXD concept of delegation design intersects directly with KYA. The operational envelope - the bounded authority space within which an agent is permitted to act - must be cryptographically encoded, machine-readable, and verifiable at the point of transaction. The emerging standard for this is Verifiable Digital Credentials (VDCs), which allow a human principal to issue a signed, tamper-evident mandate specifying exactly what the agent may do.
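The shape of such a mandate can be illustrated with a short sketch. This is an assumption-laden stand-in for a real Verifiable Digital Credential: the signing key, field names, and HMAC scheme are all hypothetical, chosen only to show the principle that scope, value threshold, and temporal boundary are signed together and checked at the point of transaction.

```python
import hashlib
import hmac
import json
import time

PRINCIPAL_KEY = b"principal-signing-key"  # hypothetical: the human principal's key

def issue_mandate(actions, max_value, expires_at):
    """The principal signs a tamper-evident mandate (VDC stand-in)."""
    body = {"actions": sorted(actions), "max_value": max_value, "expires_at": expires_at}
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(PRINCIPAL_KEY, payload, hashlib.sha256).hexdigest()
    return body

def verify_action(mandate, action, value, now=None):
    """Check signature, scope, value threshold, and temporal boundary."""
    now = time.time() if now is None else now
    body = {k: v for k, v in mandate.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(PRINCIPAL_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, mandate["sig"]):
        return False  # mandate has been tampered with
    return (action in mandate["actions"]
            and value <= mandate["max_value"]
            and now < mandate["expires_at"])
```

The point of the sketch is the failure modes: an action outside the listed scope, a value above the threshold, an expired mandate, or an edited field all fail verification, because the mandate is checked independently of the agent's authentication.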

"Authentication proves the agent is who it claims to be. Mandate verification proves it is allowed to do what it is attempting. Conflating the two is the most dangerous design error in agentic commerce."

South et al.'s influential 2025 paper on authenticated delegation established the theoretical framework for this separation. Their model distinguishes between "identity credentials" (who the agent is) and "authority credentials" (what the agent may do), and argues that both must be presented and verified at every transaction boundary. The Agent Payments Protocol (AP2) builds on this foundation, requiring that every payment-initiating agent present a verifiable mandate alongside its authentication credentials.

Mandate verification also introduces a temporal dimension that has no parallel in human KYC. Human identity is relatively stable - a passport remains valid for ten years. Agent mandates are inherently dynamic. A shopping agent's spending authority might change hourly based on budget consumption. A trading agent's mandate might narrow automatically during periods of market volatility. The verification infrastructure must support real-time mandate queries, not just point-in-time checks.

The design challenge is significant: mandate verification must be fast enough not to impede transaction flow (agents operate at machine speed), comprehensive enough to prevent scope creep (agents will test boundaries), and flexible enough to accommodate the dynamic nature of delegated authority. This is not a solved problem. It is, however, a defined one - and defining the problem precisely is the first step toward solving it.


04

Behavioural Fingerprinting

The first two pillars are static checks: is the agent authentic, and is it authorised? The third pillar introduces a dynamic dimension: is the agent behaving consistently with its established patterns? Behavioural fingerprinting is the continuous monitoring of an agent's operational characteristics to detect anomalies that might indicate compromise, drift, or misuse.

Every agent develops a behavioural signature over time. A shopping agent that typically compares prices across five retailers before purchasing, operates within a consistent price range, and transacts during specific hours has a fingerprint as distinctive as a human's typing cadence. When that fingerprint changes - the agent suddenly purchases without comparison, at unusual price points, at unusual times - the deviation is a signal that something has changed. The agent may have been compromised. Its instructions may have been altered. Or it may simply be operating in a new context that its monitoring systems have not yet learned to recognise.

This is directly analogous to the concept of agent observability in the AXD framework, but applied specifically to the identity verification context. Where agent observability asks "can we understand what the agent is doing and why?", behavioural fingerprinting asks "is what the agent is doing consistent with what we expect it to do?" The two are complementary: observability provides the data, fingerprinting provides the analysis.

"Every agent develops a behavioural signature as distinctive as a human fingerprint. When that signature changes, the deviation is a signal - not necessarily of malice, but always of something that demands investigation."

Palo Alto Networks' research on agentic AI governance identifies three categories of behavioural anomaly that fingerprinting systems must detect. Velocity anomalies occur when an agent's transaction rate, data access frequency, or API call volume deviates significantly from its baseline. Pattern anomalies occur when the sequence or structure of an agent's actions changes - for example, a procurement agent that begins accessing customer data it has never previously requested. Contextual anomalies occur when an agent's behaviour is normal in isolation but inappropriate for its current context - a trading agent executing a standard rebalance during a market halt.
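The first two of these categories lend themselves to simple statistical sketches. The following illustration, under obvious simplifying assumptions (a small numeric baseline, a fixed z-score threshold, set-membership as a proxy for pattern structure), shows the basic shape of a velocity check and a pattern check; production systems would use per-agent, adaptive models rather than universal thresholds.

```python
import statistics

def velocity_anomaly(baseline_rates, current_rate, threshold=3.0):
    """Flag a velocity anomaly when the current rate deviates more than
    `threshold` standard deviations from the agent's own baseline."""
    mean = statistics.mean(baseline_rates)
    stdev = statistics.stdev(baseline_rates)
    if stdev == 0:
        return current_rate != mean
    return abs(current_rate - mean) / stdev > threshold

def pattern_anomaly(baseline_resources, accessed_resources):
    """Flag a pattern anomaly when the agent touches a resource it has
    never previously accessed (e.g. a procurement agent reading customer data)."""
    return not set(accessed_resources) <= set(baseline_resources)
```

Contextual anomalies - normal behaviour in the wrong context, such as a standard rebalance during a market halt - are the hard case precisely because they cannot be caught by comparing an agent against its own history; they require an external model of the operating context.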

The challenge is calibration. Human behavioural analytics benefit from decades of baseline data and well-understood statistical distributions. Agent behavioural baselines are new, rapidly evolving, and highly variable across agent types - a challenge the AXD Institute's agent taxonomy addresses by classifying agents along dimensions of autonomy, domain, and risk profile. A financial trading agent and a customer service agent have entirely different normal behavioural profiles. The fingerprinting system must be agent-type-aware, context-sensitive, and adaptive - learning what "normal" means for each specific agent deployment rather than applying universal thresholds.

Okta's emerging agent identity platform addresses this through what they call "policy-based access with behavioural monitoring" - a Zero Trust model adapted for autonomous systems. Rather than granting persistent access based on initial authentication, the system continuously evaluates the agent's behaviour against its policy profile and can revoke or restrict access in real time when anomalies are detected. This is the operational expression of the AXD principle that trust architecture must be dynamic, not static.


05

Principal Traceability

The fourth pillar addresses the question that regulators care about most: when an agent acts, who is responsible? Principal traceability is the ability to maintain an auditable, unbroken chain from any agent action back to the human being who authorised it - not merely the human who deployed the agent, but the human whose intent the agent's action is supposed to serve.

This is straightforward in simple delegation chains. When a consumer instructs their personal shopping agent to purchase a specific product within a specific budget, the chain is one link long: agent action traces directly to consumer intent. But agentic commerce is rapidly moving beyond simple chains. A consumer's personal agent might delegate a subtask to a specialist comparison agent, which in turn queries multiple retailer agents, each of which may invoke payment processing agents. The delegation chain can extend four or five layers deep, and at each layer, the connection to the original human intent becomes more attenuated.

Chaffer's 2025 research on governing AI identity on the agentic web proposes a decentralised traceability model using distributed ledger technology. Each delegation event - each point at which one agent grants authority to another - is recorded as an immutable, timestamped entry that includes the delegating entity, the receiving entity, the scope of delegated authority, and the temporal constraints. The full chain can be reconstructed at any point by traversing these entries back to the originating human principal.
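The traversal itself is mechanically simple once the entries exist. The sketch below assumes a hypothetical in-memory ledger (a real implementation would use an append-only or distributed ledger, with signatures on each entry) and walks delegation events backwards until it reaches an entity that was never itself a delegate - the originating human principal.

```python
# Each delegation event records who delegated to whom, and with what scope.
LEDGER = [
    {"delegator": "alice", "delegate": "personal-agent", "scope": "shop, budget <= 200"},
    {"delegator": "personal-agent", "delegate": "comparison-agent", "scope": "quote-only"},
]

def trace_to_principal(entity, ledger):
    """Walk the delegation chain backwards from `entity` to the human
    principal at its root. Guards against cycles in malformed ledgers."""
    delegators = {e["delegate"]: e["delegator"] for e in ledger}
    chain, seen = [entity], set()
    while chain[-1] in delegators and chain[-1] not in seen:
        seen.add(chain[-1])
        chain.append(delegators[chain[-1]])
    return chain
```

Tracing "comparison-agent" yields the chain comparison-agent → personal-agent → alice. The interesting design questions live in what the sketch elides: what happens when an entry is missing, when two delegators claim the same delegate, or when the scope recorded at one layer exceeds the scope granted at the layer above.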

"The question is not whether an agent can be traced back to a human. The question is whether that trace can survive a five-layer delegation chain, a disputed transaction, and a regulatory audit - simultaneously."

The consent horizon concept from the AXD framework is directly relevant here. Consent given to a first-layer agent does not automatically propagate to agents further down the delegation chain. Each delegation boundary is a consent boundary, and principal traceability must record not just the chain of delegation but the chain of consent - which human decisions authorised which agent actions at which points in the chain.

For financial services, principal traceability has immediate regulatory implications. Anti-money laundering regulations require that the beneficial owner of every transaction be identifiable. When an agent executes a transaction, the beneficial owner is the human principal - but proving that connection requires the traceability infrastructure to be in place before the transaction occurs. Retrospective reconstruction is insufficient; the chain must be established and verified in real time.

The design pattern that emerges is what the AXD framework calls "delegation receipts" - cryptographically signed records that accompany every agent action, documenting the complete authority chain from human intent through each delegation layer to the final action. These receipts serve triple duty: they satisfy regulatory audit requirements, they enable dispute resolution (which human authorised this specific action?), and they provide the evidentiary foundation for liability attribution when things go wrong.
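A delegation receipt can be sketched as a signed envelope around the action and its authority chain. The key, field names, and HMAC scheme below are hypothetical stand-ins for whatever signing infrastructure an institution actually deploys; the point is that tampering with any element of the chain invalidates the receipt.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"receipt-signing-key"  # hypothetical institutional signing key

def issue_receipt(action, delegation_chain):
    """Sign a record binding an action to its complete authority chain."""
    receipt = {
        "action": action,
        "chain": delegation_chain,   # e.g. ["alice", "personal-agent", "comparison-agent"]
        "timestamp": time.time(),
    }
    payload = json.dumps(receipt, sort_keys=True).encode()
    receipt["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return receipt

def verify_receipt(receipt):
    """A receipt is valid only if no field has changed since signing."""
    body = {k: v for k, v in receipt.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["sig"])
```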


06

The Implementation Architecture

The four pillars do not operate in isolation. They form an integrated verification architecture in which each pillar reinforces the others. Authentication establishes identity, which provides the baseline for behavioural fingerprinting. Mandate verification defines the boundaries within which behaviour is evaluated. Principal traceability connects the entire chain back to human accountability. Remove any one pillar and the architecture becomes vulnerable.

In practice, the implementation follows a verification cascade. When an agent initiates a transaction, the receiving system first authenticates the agent (Pillar 1), confirming its cryptographic credentials and metadata against its registered profile. It then verifies the agent's mandate (Pillar 2), confirming that the specific action falls within the agent's delegated authority. Simultaneously, the behavioural fingerprinting system (Pillar 3) evaluates whether the action is consistent with the agent's established patterns. Finally, the principal traceability system (Pillar 4) records the complete delegation chain for the transaction, creating the audit trail that connects the action to human intent.
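The cascade's control flow can be shown in miniature. The pillar checks below are trivial stand-ins (real verifiers would be the mechanisms described throughout this essay); the sketch shows only the composition: run the pillars in order, halt on the first failure, and name the pillar that flagged the request so human exception handling knows where to look.

```python
def verification_cascade(request, checks):
    """Run the pillar checks in order; any failure halts the cascade and
    names the pillar that flagged the request."""
    for name, check in checks:
        if not check(request):
            return {"allowed": False, "flagged_by": name}
    return {"allowed": True, "flagged_by": None}

# Illustrative pillar checks - stand-ins for the real verifiers.
CHECKS = [
    ("authentication", lambda r: r.get("credentials_valid", False)),
    ("mandate",        lambda r: r.get("amount", 0) <= r.get("mandate_limit", 0)),
    ("behaviour",      lambda r: not r.get("anomalous", False)),
    ("traceability",   lambda r: bool(r.get("delegation_chain"))),
]
```

One design choice worth noting: in a real deployment the behavioural check runs concurrently with the mandate check rather than strictly after it, since both depend only on the authenticated identity - the sequential loop here trades that parallelism for legibility.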

This cascade must execute at machine speed. Agents do not wait patiently while their credentials are checked - they operate in milliseconds. The verification infrastructure must therefore be designed for sub-second response times, which rules out any approach that requires synchronous human review. The architecture must be fully automated, with human intervention reserved for exception handling when one or more pillars flag an anomaly.

The AgentFacts standard, emerging from the open-source community in 2025, provides a practical foundation for this architecture. AgentFacts defines a universal metadata schema for AI agents - a machine-readable "identity card" that includes the agent's capabilities, deployment context, version history, and authority chain. When combined with W3C Verifiable Credentials for mandate encoding and OAuth 2.1 for authentication, the technical building blocks for a complete KYA implementation are already available. What is missing is the integration layer that brings them together into a coherent verification cascade.

The AXD Institute's position is that this integration layer is fundamentally a design problem, not merely an engineering problem. The verification cascade must be designed with the same intentionality that we bring to any other aspect of the agentic experience - considering not just whether it works, but how it fails, how it recovers, and how it communicates its state to the humans who depend on it. Failure architecture applies to KYA infrastructure as much as it applies to the agents themselves.


07

The Regulatory Imperative

Regulation is catching up. NIST's AI Agent Standards Initiative, launched in early 2026, explicitly addresses agent identity and accountability as core safety requirements. The EU AI Act, while primarily focused on AI system classification, creates obligations around transparency and traceability that map directly onto the KYA framework. The UK's FCA has signalled through its AI and Digital Innovation strategy that agent-mediated financial transactions will require identity verification frameworks that go beyond existing KYC requirements.

The regulatory landscape is converging on a principle that the AXD framework has articulated from the beginning: autonomous action requires proportionate accountability. The more authority an agent exercises, the more robust the identity verification must be. A chatbot that answers customer queries requires minimal KYA. A trading agent that executes million-pound transactions requires the full four-pillar architecture. The verification burden scales with the risk.

This proportionality principle has important implications for implementation. Not every agent interaction requires the full verification cascade. The AXD concept of interrupt frequency applies here: the KYA system must be designed to apply the right level of verification at the right moment, avoiding both under-verification (which creates risk) and over-verification (which creates friction that defeats the purpose of autonomous operation).
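One way to make the proportionality principle concrete is a risk-tiering function. The tiers, thresholds, and autonomy labels below are entirely illustrative assumptions - every institution would calibrate its own - but the shape matches the essay's examples: a low-stakes chatbot gets minimal verification, a high-value trading agent gets the full four-pillar cascade.

```python
def verification_tier(action_value, autonomy):
    """Map transaction value and agent autonomy to the set of pillar
    checks applied. Thresholds are illustrative, not prescriptive."""
    if action_value < 50 and autonomy == "low":
        return ["authentication"]                          # e.g. query-answering chatbot
    if action_value < 10_000:
        return ["authentication", "mandate", "behaviour"]
    return ["authentication", "mandate", "behaviour", "traceability"]  # full architecture
```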

The regulatory trajectory is clear: KYA will become mandatory for agent-mediated financial transactions within the next regulatory cycle. Institutions that build the infrastructure now will have a competitive advantage. Those that wait for regulatory mandates will find themselves scrambling to retrofit identity verification into systems that were designed without it - a far more expensive and disruptive proposition.


08

The Banking Frontline

Financial services will be the first sector where KYA becomes operationally essential. Banks are already encountering agent-initiated transactions - from personal finance management agents that aggregate account data across institutions, to corporate treasury agents that execute foreign exchange transactions, to investment agents that rebalance portfolios based on market conditions. Each of these interactions requires the bank to answer the KYA questions: is this agent authentic, is it authorised, is it behaving normally, and can we trace its actions to a responsible human?

Santander's announcement in January 2026 of a dedicated agentic commerce infrastructure - including Getnet's payment processing for autonomous agents - signals that major financial institutions are beginning to build the KYA infrastructure. But the industry remains in the early stages. Most banks' fraud detection systems still assume human transaction patterns. Most API gateways authenticate applications, not agents. Most compliance frameworks have no category for machine customers.

The identic AI essay explored what happens when banks' most valuable customers begin sending agents instead of appearing in person. KYA provides the operational answer: banks must build agent onboarding processes that mirror - but fundamentally differ from - their customer onboarding processes. Where customer onboarding verifies a human identity once and grants persistent access, agent onboarding must verify agent identity continuously, validate mandates dynamically, monitor behaviour perpetually, and maintain traceability permanently.

"The bank that masters KYA will not be the one with the best app. It will be the one with the most trustworthy agent infrastructure - the institution that machine customers consistently choose because it verifies without impeding, monitors without surveilling, and traces without constraining."

The competitive implications are significant. In a world where agents choose service providers based on API quality, verification speed, and mandate flexibility, the bank with the most sophisticated KYA infrastructure becomes the preferred partner for autonomous commerce. KYA is not a compliance cost - it is a competitive differentiator. The institution that can verify an agent's identity, validate its mandate, and process its transaction in milliseconds will capture the agent-mediated market. The institution that requires manual review, imposes rigid authentication ceremonies, or cannot support dynamic mandates will be bypassed.


09

Know Your Agent: Design Implications

KYA is not merely a technical specification. It is a design challenge that touches every aspect of the agentic experience. How do we design agent onboarding flows that are rigorous without being prohibitive? How do we communicate mandate boundaries to agents in ways they can interpret and respect? How do we surface behavioural anomalies to human operators without creating alert fatigue? How do we make delegation chains legible to regulators, auditors, and the humans whose intent they represent?

These are AXD practice questions. They require the same design thinking that we apply to consent horizons, operational envelopes, and relational arcs - but applied to the specific domain of identity verification. The KYA designer must balance security against usability, thoroughness against speed, and accountability against autonomy.

The AXD framework positions KYA as one of the twelve core practices - specifically, as the identity layer that underpins all other agentic interactions. Without KYA, trust architecture has no foundation. Without KYA, delegation design has no enforcement mechanism. Without KYA, agent observability has no identity context. KYA is the practice that makes all other practices possible in regulated environments.

"Know Your Agent is not a compliance checkbox. It is the identity layer that makes every other agentic design practice possible in regulated commerce."

The discipline is young. The standards are emerging. The regulatory frameworks are forming. But the design principles are clear: agent identity must be verifiable, agent authority must be bounded and auditable, agent behaviour must be monitored continuously, and the chain from agent action to human intent must be unbroken and legible. These are the four pillars of Know Your Agent. They are the minimum viable architecture for trust in the age of machine customers.

The organisations that build this architecture now - not as a regulatory response but as a design commitment - will define the standards that the rest of the industry eventually adopts. KYA is not a problem to be solved later. It is the foundation upon which agentic commerce will be built, or upon which it will fail. And as the Institute's Know Your Human framework demonstrates, the verification obligation runs in both directions - agents must also continuously validate the humans whose authority they exercise, detecting authority drift, context shifts, and the moment when graceful suspension becomes the only responsible design response.



About the Author

Tony Wood

Tony Wood is an AI Transformation Consultant at the UK's leading retail bank and founder of AgenticCommerce.design, where he writes about the intersection of agentic AI, customer experience, and the future of financial services.

