
The Observatory · Issue 007 · August 2026

The Consent Horizon

Designing Permission for Systems That Never Stop Acting

By Tony Wood · 25 min read

There is a corridor in every permission system. It begins at the moment of consent - the click, the signature, the spoken "yes" - and it extends forward in time toward a horizon that the person giving consent cannot see. In screen-based design, this corridor was short. You consented to a transaction, and the transaction completed. You agreed to terms of service, and the terms governed a relationship you could exit at any time by closing the app. The corridor had walls you could see, a floor you could feel, and a door at the end you could walk through.

In agentic systems, the corridor stretches toward infinity. When you delegate authority to an autonomous agent - "manage my energy costs," "optimise my insurance," "handle my routine financial transactions" - you are giving consent that extends across time in ways that are fundamentally different from any previous form of digital permission. The agent will act tomorrow, next week, next year, in contexts you cannot predict, making decisions you cannot foresee, on your behalf but without your presence.

This is the consent horizon: the temporal boundary beyond which the person who gave consent can no longer meaningfully predict or control what their consent authorises. And it is the most underexamined design challenge in agentic experience design.


Beyond the Checkbox

The consent mechanisms we have inherited from web-era design are catastrophically inadequate for agentic systems. The checkbox, the cookie banner, the terms-of-service scroll - these are consent theatre. They create a legal record of agreement without creating genuine understanding, genuine choice, or genuine control. They are designed to protect the organisation, not to empower the individual.

In screen-based design, this inadequacy was tolerable because the consequences of uninformed consent were relatively contained. You agreed to cookies you did not understand, and the result was targeted advertising you found mildly annoying. You accepted terms of service you did not read, and the result was a data-sharing arrangement you would have objected to if you had known about it but that did not materially harm you.

In agentic systems, uninformed consent has teeth. When you delegate financial authority to an autonomous agent without fully understanding the scope of that delegation, the agent can make decisions that cost you money, commit you to contracts, or expose you to risks you did not intend to accept.

The stakes of consent in agentic systems are not theoretical. They are financial, legal, and in some domains, physical. A healthcare agent acting on poorly understood consent could make treatment decisions. A legal agent could commit you to obligations. A financial agent could execute transactions that alter your economic position. The checkbox is not just inadequate - it is dangerous.


The first dimension of the consent horizon is temporal. Consent in agentic systems is not a moment - it is a duration. And the design of that duration is a challenge that has no precedent in digital design.

I propose a framework I call Temporal Consent Architecture, which distinguishes three modes of consent duration. Transactional consent is the simplest: consent for a single action. "Buy this item." "Transfer this amount." The consent is consumed by the action and does not persist. This is the mode that screen-based design handles well, and it remains appropriate for discrete, bounded agent actions.

Standing consent is open-ended delegation without a defined expiry. "Manage my savings." "Optimise my insurance." "Handle my routine banking." This is the most powerful and the most dangerous form of consent, because it extends across the consent horizon into contexts the human cannot predict. Standing consent requires the most sophisticated design: regular reaffirmation mechanisms, scope monitoring, and what I call consent health checks - periodic moments where the system surfaces the current state of the delegation and invites the human to confirm, modify, or revoke it.

Emergent consent is the most complex: consent that evolves as the agent's capabilities and the human's trust develop over time. An agent that begins with transactional consent for individual purchases might, through demonstrated competence and the human's growing confidence, accumulate broader authority. Emergent consent is not given at a single moment - it accretes through a series of successful interactions, each of which slightly expands the agent's operational envelope. Designing for emergent consent requires a trust calibration system that makes the expansion of authority visible, reversible, and always under the human's ultimate control.
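To make the three modes concrete, here is one way they might be represented in an agent's permission layer. This is an illustrative sketch only - every type and field name is hypothetical, and a real system would need far richer structure - but it shows the useful property of modelling the modes explicitly: the system can ask a different question of each one.

```typescript
// Illustrative sketch of Temporal Consent Architecture as a typed model.
// All names and fields are hypothetical; this is one way to make the three
// modes explicit in an agent's permission layer, not a reference design.

type TransactionalConsent = {
  mode: "transactional";
  action: string;             // e.g. "transfer £200 to savings"
  grantedAt: Date;
  consumed: boolean;          // consent is spent once the action completes
};

type StandingConsent = {
  mode: "standing";
  scope: string;              // e.g. "handle my routine banking"
  grantedAt: Date;
  lastReaffirmedAt: Date;     // updated by consent health checks
  reviewIntervalDays: number;
};

type EmergentConsent = {
  mode: "emergent";
  baseScope: string;
  expansions: Array<{         // authority accretes through successful interactions
    scope: string;
    earnedAt: Date;
    reversible: true;         // every expansion must remain revocable
  }>;
};

type ConsentGrant = TransactionalConsent | StandingConsent | EmergentConsent;

// Each mode answers a different question before the agent may act on it.
function isExercisable(grant: ConsentGrant, now: Date): boolean {
  switch (grant.mode) {
    case "transactional":
      return !grant.consumed;
    case "standing": {
      const ageDays =
        (now.getTime() - grant.lastReaffirmedAt.getTime()) / 86_400_000;
      return ageDays <= grant.reviewIntervalDays;
    }
    case "emergent":
      return true; // gated elsewhere by per-expansion checks
  }
}
```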


The Horizon Problem

The fundamental challenge of temporal consent is what I call the horizon problem: the further consent extends into the future, the less meaningful it becomes. A person who consents to an agent managing their energy costs today is consenting based on their current understanding of energy markets, their current financial situation, their current living arrangements, and their current priorities. Six months from now, any or all of these may have changed. The consent they gave was genuine at the moment of giving, but its validity degrades over time as the context in which it was given diverges from the context in which it is being exercised.

This is not a hypothetical concern. It is the central design challenge of long-duration agentic relationships. And it requires a design response that goes far beyond the current practice of burying a "you can revoke consent at any time" clause in the terms of service.

Consent is not a switch that is either on or off. It is a living agreement that must be maintained, refreshed, and adapted as circumstances change. The design of consent maintenance is as important as the design of consent acquisition.

The design response I propose is Consent Maintenance Architecture: a systematic approach to keeping consent alive, meaningful, and aligned with the human's evolving context. This includes scheduled consent reviews (not buried in settings, but surfaced proactively), context-triggered consent checks (when the agent detects that the operating context has changed significantly), and consent health indicators (visible signals that show the human how their consent is being exercised and whether it remains aligned with their current situation).
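Read as a mechanism, the architecture amounts to three cooperating triggers, each of which can surface a review. The sketch below is one hypothetical shape for it; the names and the deliberately crude context comparison are placeholders, not a recommendation.

```typescript
// Hypothetical sketch of Consent Maintenance Architecture: three triggers
// that can each surface a consent review. All names are illustrative.

interface ConsentRecord {
  scope: string;
  lastReviewedAt: Date;
  reviewIntervalDays: number;
  grantedContext: Record<string, unknown>; // snapshot of circumstances at grant time
}

type MaintenanceEvent =
  | { kind: "scheduled-review"; due: Date }
  | { kind: "context-check"; changedKeys: string[] }
  | { kind: "health-indicator"; status: "healthy" | "stale" | "misaligned" };

function maintenanceEvents(
  consent: ConsentRecord,
  currentContext: Record<string, unknown>,
  now: Date
): MaintenanceEvent[] {
  const events: MaintenanceEvent[] = [];

  // 1. Scheduled review: surfaced proactively, not buried in settings.
  const nextReview = new Date(
    consent.lastReviewedAt.getTime() + consent.reviewIntervalDays * 86_400_000
  );
  if (now >= nextReview) events.push({ kind: "scheduled-review", due: nextReview });

  // 2. Context-triggered check: the operating context has drifted from the grant context.
  const changedKeys = Object.keys(consent.grantedContext).filter(
    (k) => consent.grantedContext[k] !== currentContext[k]
  );
  if (changedKeys.length > 0) events.push({ kind: "context-check", changedKeys });

  // 3. Health indicator: a visible signal of how current the consent is.
  const status =
    changedKeys.length > 0 ? "misaligned" : now >= nextReview ? "stale" : "healthy";
  events.push({ kind: "health-indicator", status });

  return events;
}
```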


Consent decays. This is not a metaphor - it is a design principle. The validity of consent diminishes over time as the gap between the context of giving and the context of exercising widens. A consent given yesterday for a routine transaction is highly valid. A consent given a year ago for an ongoing delegation is less valid. A consent given three years ago for a broad, open-ended authority is potentially invalid - not legally, perhaps, but ethically and experientially.

I propose that agentic systems should implement what I call a consent decay model: a designed degradation of authority over time that requires periodic renewal. The rate of decay should be proportional to the scope and consequence of the delegation. A narrow delegation with low financial consequence might decay slowly - annual renewal is sufficient. A broad delegation with high financial consequence should decay rapidly - quarterly or even monthly renewal may be appropriate.
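To make the proportionality concrete, a decay model might derive the renewal interval from the scope and consequence of the delegation, roughly as follows. The factors and thresholds are placeholders for illustration; the point is the shape of the relationship, not the specific numbers.

```typescript
// Illustrative consent decay model: renewal cadence derived from the scope
// and consequence of a delegation. The factors below are placeholders.

type Scope = "narrow" | "moderate" | "broad";
type Consequence = "low" | "medium" | "high";

function renewalIntervalDays(scope: Scope, consequence: Consequence): number {
  // Broader, higher-consequence delegations decay faster and need more
  // frequent renewal; narrow, low-consequence ones decay slowly.
  const scopeFactor = { narrow: 1.0, moderate: 0.5, broad: 0.25 }[scope];
  const consequenceFactor = { low: 1.0, medium: 0.5, high: 0.25 }[consequence];
  const maxIntervalDays = 365; // annual renewal as the slowest cadence
  return Math.max(30, Math.round(maxIntervalDays * scopeFactor * consequenceFactor));
}

// A simple validity gradient: 1.0 immediately after renewal, falling toward 0
// as the interval elapses. Validity below a threshold blocks autonomous action.
function consentValidity(lastRenewedAt: Date, intervalDays: number, now: Date): number {
  const elapsedDays = (now.getTime() - lastRenewedAt.getTime()) / 86_400_000;
  return Math.max(0, 1 - elapsedDays / intervalDays);
}
```

With these placeholder factors, a narrow, low-consequence delegation renews annually, while a broad, high-consequence one renews roughly monthly - the cadences suggested above.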

The consent decay model is not about creating friction. It is about creating integrity. It ensures that the authority an agent exercises is always grounded in a consent that is reasonably current, reasonably informed, and reasonably aligned with the human's present circumstances. It treats consent not as a binary state (given or not given) but as a gradient that requires active maintenance.

The design of consent renewal is itself a significant challenge. A renewal that is too burdensome will train humans to click through it mindlessly - recreating the checkbox problem at a different cadence. A renewal that is too lightweight will fail to surface the information the human needs to make a genuinely informed decision. The optimal consent renewal is brief, contextual, and informative: it tells the human what the agent has been doing, what has changed since the last renewal, and what the agent plans to do next - and it asks for a genuine, considered reaffirmation.
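One hypothetical shape for such a renewal is a small structured prompt rather than a wall of legal text. The field names below are illustrative only; what matters is that the three questions - what happened, what changed, what comes next - are answered before a decision is asked for.

```typescript
// Hypothetical shape of a consent renewal prompt: brief, contextual, informative.

interface ConsentRenewalPrompt {
  delegationScope: string;            // what the human originally delegated
  sinceLastRenewal: {
    actionsTaken: string[];           // what the agent has been doing
    contextChanges: string[];         // what has changed since the last renewal
  };
  plannedActions: string[];           // what the agent intends to do next
  decision: "reaffirm" | "modify" | "revoke" | null; // a considered choice, not a click-through
}
```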


The second dimension of the consent horizon is contextual. Consent is not just about time - it is about circumstances. A person who consents to an agent managing their investments during a stable market may not have consented to the same agent making the same kinds of decisions during a market crisis. The consent was given in one context and is being exercised in another.

Contextual consent requires what I call consent boundaries: defined conditions under which the agent must pause, notify, or seek reaffirmation before proceeding. These boundaries are not the same as operational constraints (which limit what the agent can do). They are consent constraints (which limit the contexts in which existing consent remains valid).

A well-designed consent boundary for a financial agent might specify: "If market volatility exceeds a defined threshold, pause autonomous trading and seek reaffirmation." The boundary does not revoke the consent - it recognises that the context has changed sufficiently that the original consent may no longer reflect the human's current wishes. It creates a moment of re-engagement that is triggered not by time but by circumstance.
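Expressed in design terms, a boundary of this kind can be declarative data rather than logic buried inside the agent: a named condition, a trigger, and a defined response. The sketch below is illustrative; the volatility measure and the threshold are placeholders, not advice.

```typescript
// Illustrative declarative consent boundary: a condition under which existing
// consent stops being exercisable until the human reaffirms it.

interface MarketContext {
  volatilityIndex: number; // e.g. an annualised volatility measure (placeholder)
}

interface ConsentBoundary {
  description: string;
  isTriggered: (context: MarketContext) => boolean;
  onTrigger: "pause-and-reaffirm" | "notify-only";
}

const volatilityBoundary: ConsentBoundary = {
  description: "Pause autonomous trading when market volatility exceeds threshold",
  isTriggered: (ctx) => ctx.volatilityIndex > 30, // placeholder threshold
  onTrigger: "pause-and-reaffirm",
};

// Before each autonomous action, the agent checks its boundaries. A triggered
// boundary does not revoke consent; it suspends it pending reaffirmation.
function mayActAutonomously(boundaries: ConsentBoundary[], ctx: MarketContext): boolean {
  return !boundaries.some((b) => b.isTriggered(ctx) && b.onTrigger === "pause-and-reaffirm");
}
```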


The Regulatory Gap

Current regulatory frameworks for consent - GDPR, the Consumer Duty, financial services conduct regulations - were designed for a world in which consent was a moment, not a duration. They require that consent be informed, specific, and freely given. But they do not adequately address the temporal dimension: how consent maintains its validity over time, how it should be renewed, or how it should respond to changing contexts.

This regulatory gap creates both risk and opportunity. The risk is that organisations will exploit the gap - relying on a single moment of consent to justify years of autonomous action that the human may no longer endorse. The opportunity is that organisations that design consent well - that treat consent as a living relationship rather than a legal checkbox - will build deeper trust, stronger relationships, and more defensible regulatory positions.

The FCA's Consumer Duty, with its emphasis on good outcomes and fair treatment, provides a philosophical foundation for temporal consent design even if it does not yet provide specific guidance. An organisation that can demonstrate it actively maintains, refreshes, and respects the consent of its customers - even when the law does not yet require it - is an organisation that is ahead of the regulatory curve rather than behind it.


The deepest insight of the consent horizon is that consent in agentic systems is not a legal mechanism - it is a relationship. It is the ongoing negotiation between a human and an autonomous system about what the system is permitted to do, under what conditions, and with what degree of independence. Like all relationships, it requires communication, adjustment, and mutual respect.

This is a profound shift from how consent has been understood in digital design. In the web era, consent was transactional: you gave it, the system took it, and the exchange was complete. In the agentic era, consent is relational: it must be given, maintained, adapted, and sometimes renegotiated as the human-agent relationship evolves.

The organisations that understand consent as a relationship - not a transaction - will build the deepest trust, the strongest customer loyalty, and the most resilient competitive positions in the age of autonomous agents.

Designing consent as a relationship means designing for dialogue, not declaration. It means creating systems that listen to the human's evolving needs, that surface relevant information at appropriate moments, that make it easy to adjust the terms of the relationship, and that never assume that yesterday's consent is today's permission. It is harder than designing a checkbox. It is also more honest, more respectful, and ultimately more valuable - for both the human and the organisation.


Designing the Boundary

The practical design challenge of the consent horizon is boundary design: creating the mechanisms through which consent is scoped, maintained, and renewed. I propose four design principles for consent boundaries in agentic systems.

Granularity. Consent must be decomposable. A human should be able to consent to some agent actions while withholding consent for others. "Manage my savings but do not touch my pension." "Optimise my energy costs but do not switch providers without asking." Granular consent gives the human meaningful control without requiring them to understand every technical detail of the agent's operation.

Reversibility. Consent must be revocable at any time, with immediate effect, and without penalty. This is not just a legal requirement - it is a design requirement. A human who feels trapped by their consent will not trust the system. A human who knows they can withdraw consent at any moment will grant it more freely and more genuinely.

Proportionality. The rigour of consent mechanisms should be proportional to the consequence of the actions they authorise. Low-consequence actions require lightweight consent. High-consequence actions require heavyweight consent - more information, more explicit confirmation, more frequent renewal.

Adaptability. Consent mechanisms must adapt to the human's demonstrated preferences and behaviour. A human who consistently reviews and confirms their consent can be offered lighter-touch renewal. A human who rarely engages with consent mechanisms should receive more prominent, more informative renewal prompts. The system should meet the human where they are, not where the designer assumed they would be.
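Taken together, these principles suggest that a delegation is better modelled as a set of separately grantable, separately revocable scopes than as a single permission. The configuration below is an entirely illustrative sketch of what that might look like; the scopes, cadences, and labels are hypothetical.

```typescript
// Hypothetical granular consent configuration reflecting the principles above:
// decomposable scopes, revocable at any time, with rigour proportional to consequence
// and renewal style adapted to the human's engagement history.

interface ConsentScopeEntry {
  scope: string;                              // granularity: one decomposable permission
  granted: boolean;
  revocable: true;                            // reversibility: always revocable, no penalty
  consequence: "low" | "medium" | "high";
  renewalIntervalDays: number;                // proportionality: heavier consent for higher consequence
  renewalStyle: "light-touch" | "prominent";  // adaptability: tuned to how the human engages
}

const savingsDelegation: ConsentScopeEntry[] = [
  { scope: "manage savings account", granted: true,  revocable: true,
    consequence: "medium", renewalIntervalDays: 90, renewalStyle: "light-touch" },
  { scope: "access pension",         granted: false, revocable: true,
    consequence: "high",   renewalIntervalDays: 30, renewalStyle: "prominent" },
  { scope: "switch energy provider", granted: false, revocable: true,
    consequence: "medium", renewalIntervalDays: 90, renewalStyle: "prominent" },
];
```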


The Infinite Corridor

The consent horizon is not a problem to be solved. It is a condition to be designed for. As autonomous agents become more capable, more pervasive, and more deeply integrated into our financial, legal, and personal lives, the corridor of consent will stretch further and further into the future. The challenge for designers is not to shorten the corridor - that would mean limiting the capability of agents, which would limit their value. The challenge is to illuminate the corridor: to make the journey of consent visible, navigable, and always under the human's ultimate control.

This is perhaps the most important design challenge of the agentic era. Not because consent is the most technically complex problem - it is not - but because consent is the foundation on which every other aspect of the human-agent relationship is built. Trust requires consent. Delegation requires consent. Autonomy requires consent. If the consent architecture is weak, everything built on top of it is unstable.

The corridor stretches toward the horizon. The light at the end is warm but distant. The walls are solid - they can be touched, tested, trusted. The floor is firm. And at every point along the corridor, there is a door that leads back to the beginning, back to the moment of choice, back to the human who said "yes" and who retains, always, the right to say "no."

Design the corridor. Illuminate the horizon. And never forget who is walking through it.


Tony Wood
Emerging Technologies and Innovation Consultant & Agentic AI Product Specialist, Lloyds Banking Group
Founder, Agentic Experience Design Institute
Manchester