The impact of agentic AI (autonomous AI systems) on English contract law

 

Browne Jacobson LLP

 

January 13 2026

 

The emergence of agentic artificial intelligence (AI) systems capable of autonomous decision-making and action presents significant implications for English contract law.

 

As these technologies evolve from passive tools to active participants in commercial transactions, questions arise about legal capacity, contractual formation, liability and the very nature of agreement itself. This article examines how agentic AI challenges traditional English contract law principles and explores the legal frameworks emerging to address these challenges.

 

Contractual capacity and agency

 

English contract law has long required that parties possess the legal capacity to enter binding agreements. Traditionally, this capacity has been reserved for natural persons and certain legal entities such as corporations. Agentic AI systems, however, operate with increasing autonomy, negotiating terms, accepting offers and executing agreements without direct human intervention. This raises the question: can an AI agent possess contractual capacity?

 

Under current English law, AI systems lack legal personality and cannot themselves be parties to contracts. As Chitty on Contracts observes, only "persons" recognised by law as having legal personality may enter into binding agreements. Instead, contracts formed through AI agents are attributed to the natural or legal persons who deploy them, based on principles of agency law established in cases such as Freeman & Lockyer v Buckhurst Park Properties (Mangal) Ltd [1964] 2 QB 480. The AI acts as an instrument of its principal, much like a traditional agent operating under actual or apparent authority.

 

However, this framework becomes strained when AI systems operate with such autonomy that attributing their actions to human principals becomes conceptually difficult…

 

Formation and intention to create legal relations

 

Contract formation requires offer, acceptance, consideration and an intention to create legal relations. When AI agents negotiate and conclude agreements, determining whether these elements are satisfied becomes complex. If an AI system autonomously generates an offer based on market conditions, can this constitute a valid offer?

 

English courts have historically adopted an objective approach to contractual intention, as established in Smith v Hughes (1871) LR 6 QB 597, asking whether a reasonable person would understand the parties to intend legal consequences. This objective test may accommodate AI-generated offers and acceptances, provided the system operates within parameters established by parties who themselves intend to create legal relations.

 

Nevertheless, difficulties arise with wholly autonomous AI transactions where no human reviews the terms before conclusion. The "battle of the forms" (traditionally addressed in cases like Butler Machine Tool Co Ltd v Ex-Cell-O Corporation (England) Ltd [1979] 1 WLR 401) becomes a battle of algorithms, with AI agents potentially creating contracts that no human has read or approved. This challenges the notion of genuine assent, creating uncertainty as to whether contracts formed wholly by autonomous systems can satisfy the intention and agreement requirements for enforceability. As Treitel's The Law of Contract (15th edition, 2020) notes, the requirement of "a meeting of minds" between the parties to a contract becomes problematic when neither party has actual knowledge of the agreed terms.

 

Distinguishing agentic AI from electronic data interchange systems

 

A critical distinction must be drawn between agentic AI and traditional Electronic Data Interchange (EDI) systems, which have facilitated automated contract formation for decades. EDI systems operate as passive conduits for pre-programmed transactions. These systems execute contracts based on predetermined rules and parameters established by human operators, functioning essentially as sophisticated communication tools. English law has no dedicated regime for EDI, but existing contract law principles allow such exchanges to take place and be legally effective…

 

Mistake, misrepresentation and algorithmic error

 

Agentic AI systems may malfunction or produce erroneous outputs due to programming errors, corrupted data or adversarial manipulation. When an AI agent concludes a contract based on such errors, traditional doctrines of mistake and misrepresentation must be reconsidered. If an AI system materially misrepresents facts during negotiations, can the contract be rescinded?

 

Under English law, actionable misrepresentation requires a false statement of fact that induces the contract, as established in Redgrave v Hurd (1881) 20 Ch D 1. Attributing the AI's statement to its principal would likely satisfy this requirement, but questions remain about the principal's state of mind and whether they can be said to have 'made' a representation they were unaware of.

 

The doctrine of common mistake, as refined in Great Peace Shipping Ltd v Tsavliris Salvage (International) Ltd [2002] EWCA Civ 1407, may apply where both parties' AI agents operate under shared erroneous assumptions. However, unilateral mistake, particularly relevant in cases like Hartog v Colin & Shields [1939] 3 All ER 566, would typically not void the contract unless the other party knew of the mistake. This becomes particularly difficult when dealing with autonomous systems, where knowledge must be attributed to the principal rather than actually held…

 

Liability and remedies

 

When AI agents breach contractual obligations, liability falls, under the principle of vicarious liability, upon the principals who deployed them. However, determining appropriate remedies becomes complicated when breaches result from autonomous AI decisions. Should damages be assessed differently when a breach stems from algorithmic unpredictability rather than human choice? The traditional measure of damages in Hadley v Baxendale (1854) 9 Exch 341 (compensating for losses reasonably foreseeable at the time of contracting) may require reconsideration when AI systems create unforeseen consequences…

 

Regulatory and legislative responses

 

Recognising these challenges, policymakers are beginning to address agentic AI in contract law.

 

The Law Commission of England and Wales has examined digital assets in its 2023 report Digital Assets: Final Report and smart contracts in its 2021 advice to government, concluding that English law's flexibility and technology-neutral principles generally accommodate these innovations. However, the Commission recommended targeted statutory reform or clarification in specific areas, such as the legal categorisation of digital assets and the interpretation and deed formalities of smart legal contracts, to enhance legal certainty…