South Africa's Electronic Communications and Transactions Act 25 of 2002 ("ECTA") is one of those rare pieces of legislation that has, for the most part, aged exceptionally well. More than two decades later, it still speaks with surprising relevance to modern digital commerce. That longevity is not accidental: the drafters of ECTA deliberately formulated technology-neutral language aimed at supporting electronic commerce without tying legal recognition to any specific platform, format, or system architecture.
At the time, the drafters were likely dealing with a world where electronic data interchange ("EDI") was the burgeoning "next big thing" in business-to-business transactions. Large corporates were already exchanging structured messages such as purchase orders, invoices, shipping notices and confirmations. The technology looked modern then, but the underlying commercial need was timeless: businesses needed legal certainty that "machine-made" messages could create real obligations.
The same provisions designed to give legal effect to automated EDI workflows may have a renewed role to play in the era of transactional agentic AI, where systems can act with a degree of autonomy, interact with other systems, and execute tasks that look increasingly like commercial "agency".
Transactional agentic AI
Agentic AI is often described as "AI that can act". In a transactional context, this means something more specific, namely that an agentic AI system does not merely generate suggestions or draft messages, but can take steps that move a commercial process forward, including steps that may result in a binding agreement.
In practical terms, a transactional AI agent can be configured to:
- gather information (for example, by sourcing quotes from multiple suppliers);
- apply rules and constraints (such as preferred supplier lists, price ceilings, or delivery requirements);
- negotiate within parameters (for example, by requesting a better price or faster delivery); and
- execute the next step in a specific process (such as by placing an order, confirming a booking, or accepting terms).
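Purely by way of illustration (none of this is prescribed by ECTA), the selection step described above can be sketched as a constrained choice over supplier quotes. All supplier names, fields, and thresholds below are hypothetical:

```python
# Illustrative sketch only: a transactional agent's selection step.
# Supplier names, fields, and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Quote:
    supplier: str
    price: float          # total price in rand
    delivery_days: int

def select_quote(quotes, approved_suppliers, price_ceiling, max_delivery_days):
    """Apply the deployer's rules and constraints, then pick the cheapest
    compliant quote. Returns None if no quote satisfies the mandate."""
    compliant = [
        q for q in quotes
        if q.supplier in approved_suppliers
        and q.price <= price_ceiling
        and q.delivery_days <= max_delivery_days
    ]
    return min(compliant, key=lambda q: q.price, default=None)

quotes = [
    Quote("SupplierA", 9500.0, 5),
    Quote("SupplierB", 8700.0, 12),   # cheapest, but delivery too slow
    Quote("SupplierC", 9100.0, 4),
]
best = select_quote(quotes, {"SupplierA", "SupplierC"}, 10000.0, 7)
# best is the SupplierC quote: cheapest option within the mandate
```

The point of the sketch is the structure, not the code: the deploying business sets the mandate (approved suppliers, ceilings, delivery limits), and the agent exercises a bounded discretion within it.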
What makes these systems "agentic" is not simply that they are automated. It is that they can operate with a degree of independence, make selections between options, and trigger real-world actions without a human's direct involvement.
Transactional agentic AI is therefore not merely software that helps. It is increasingly software that acts, and that is precisely why ECTA's framework for automated transactions and electronic agents becomes relevant again in this context.
General approach followed by ECTA
A key theme in ECTA is that electronic communications should not be dismissed merely because they are electronic. This principle is often referred to as functional equivalence: if an electronic process achieves what the law requires (writing, signature, originality, retention, communication, attribution), then the law should recognise it.
ECTA uses the concept of a data message to cover electronic communications broadly. In effect, the law treats electronic messages as capable of carrying legal meaning, whether that message is an email, a web form submission, an EDI transmission, or an API call between systems.
This matters because once the "data message" is recognised as a legally meaningful object, the rest of the framework can operate around it in relation to attribution, timing, formation, and enforceability.
Provisions in ECTA that deal with automated transactions
Key definitions
Section 1 of ECTA defines an "automated transaction" as "…an electronic transaction conducted or performed, in whole or in part, by means of data messages in which the conduct or data messages of one or both parties are not reviewed by a natural person in the ordinary course of such natural person's business or employment". The latter part of the definition sets a human-in-the-loop threshold, seeking to exclude cases where a data message is in fact reviewed by a human as part of the ordinary process (such as a procurement officer).
The same section defines an "electronic agent" as "…a computer program or an electronic or other automated means used independently to initiate an action or respond to data messages or performances in whole or in part, in an automated transaction".
Following the theme of technology-neutral drafting, both definitions are broad and inclusive.
Stepping through section 20
In respect of "automated transactions", the section essentially provides that:
- a contract can be formed automatically even if the step that creates the agreement is performed by a computer system ("electronic agent"), rather than a person;
- a contract can be formed where one party, or both parties, use electronic agents to make or accept the deal; and
- if a person uses an electronic agent to contract, that person is generally bound by what the agent agrees to, even if the person did not review the terms.
The aforegoing is qualified by two exceptions, namely that a contract is not formed if:
- the contractual terms were not available for human review before the contract was concluded; or
- in the case of one of the parties being a natural person, that person makes a material mistake when dealing directly with someone else's electronic agent.
The latter exception pertaining to a material mistake is only available if all of the following apply, namely that the natural person:
- was not given a reasonable chance to prevent or correct the mistake;
- notifies the other party of the mistake as soon as reasonably possible after discovering it;
- takes reasonable steps to return (or destroy, if instructed) anything they received; and
- has not used or gained any material benefit from what they received.
Interaction with default position at common law
Section 20 proceeds on the basis that the principal deploying the electronic agent is identifiable. This allows ordinary agency principles to continue to apply and means a principal cannot generally avoid being bound by blaming a "rogue" electronic agent, except in the limited instances where section 20 itself provides relief.
Unlike the common law emphasis on receipt of acceptance, section 20 permits contracts to be formed immediately through automated acts without the offeror having actual knowledge of or notice of acceptance at the time.
The common law ability to rely on mistake is narrowed significantly. A party interacting with an electronic agent may escape contract formation on the basis of an error only if the strict requirements of the material error exception are satisfied.
Taken together, these changes reflect a deliberate legislative choice to adapt contract law to system-driven commerce, shifting risk toward the party that deploys automation while preserving sufficient safeguards to prevent obvious unfairness.
Is section 20 adequate to govern transactional agentic AI, or does it introduce risk?
As far as we can see, the provisions of section 20 have not been applied by our courts in this context to date.
Does transactional agentic AI fit into section 20?
In many practical deployments, transactional agentic AI will fit comfortably within the definition of "electronic agent" because it is:
- a computer program or automated means
- acting independently at the point of transaction, and
- initiating actions or responding to data messages.
Importantly, ECTA does not require that the system be deterministic or "rules-based". The definition is functional. This means that even where an AI system operates with a degree of discretion (for example, choosing between suppliers or selecting delivery options), it can still be understood as an electronic agent for the purposes of section 20.
The "human-in-the-loop" threshold is both helpful and risky
The definition of "automated transaction" hinges on whether the relevant conduct or data messages are reviewed by a natural person in the ordinary course of business. That is a useful threshold because it distinguishes true automation (where systems transact without routine human review) from ordinary electronic processes (where a human still approves each step).
However, it also creates a practical risk: many businesses are moving toward "partial automation", where AI proposes actions and a human only sometimes approves. This can create grey areas as to whether a particular transaction is truly "automated" or not.
From a risk perspective, the more a business wants to rely on section 20's automated transaction legal framework, the more it should ensure that its processes are designed so that automated transactions clearly fall within the ambit of section 20. This means any human review step that could exclude the operation of section 20 should be deliberate, recorded, and consistently applied.
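One practical way to make that human review step "deliberate, recorded, and consistently applied" is to log, for every transaction, whether a natural person reviewed it and who that person was. The sketch below assumes a simple JSON audit entry; all field names are hypothetical and in practice the entry would be written to durable, tamper-evident storage:

```python
# Illustrative sketch only: recording whether a human reviewed a transaction,
# so the "automated transaction" threshold can later be evidenced.
# All field names are hypothetical.

import json
from datetime import datetime, timezone

def log_review_decision(order_id, human_reviewed, reviewer=None):
    """Return an audit-log entry recording whether this transaction
    passed through human review in the ordinary course."""
    entry = {
        "order_id": order_id,
        "human_reviewed": human_reviewed,
        "reviewer": reviewer,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(entry)  # in practice, write to durable storage

automated = log_review_decision("PO-1042", human_reviewed=False)
reviewed = log_review_decision("PO-1043", human_reviewed=True,
                               reviewer="procurement_officer_7")
```

A consistent record of this kind helps answer, after the fact, whether a given transaction fell inside or outside the "reviewed by a natural person in the ordinary course" threshold.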
New risks arise due to the nature of agentic AI
A major difference between EDI and agentic AI is not that contracts can be formed (section 20 covers that), but that agentic AI can sometimes operate beyond what a business expected.
Given that section 20 effectively deems an electronic agent to have the authority to bind its principal, the agent's conduct can result in the principal making a binding offer or acceptance. From a policy perspective, this is fair: the person who chooses to deploy an electronic agent must ensure that it operates within the intended parameters (i.e. its mandate). For the purposes of legal certainty and commercial efficiency, the counterparty must be able to rely on this fact.
The risks that arise in this context must likely be addressed through contractual and common-law principles (including mandate and representation), as well as the terms and controls the deploying business puts in place.
Requirement for terms to be available for review before contract formation
It is important to note that the terms of the proposed agreement do not have to be reviewed; they are simply required to be available for review.
There is perhaps some tension in the language of section 20 in that on one hand, to qualify as an automated transaction, a natural person must not in the ordinary course review the terms of the proposed transaction, but on the other hand, the terms must be available for review.
Whoever alleges a contract was formed would need to allege and prove the terms. Having the terms available for review before contracting, and stored thereafter, therefore does not seem like an insurmountable obstacle.
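Operationally, satisfying both limbs (terms available for review before contracting, and provable afterwards) comes down to retaining a verifiable copy of exactly what was presented. The sketch below is one illustrative approach, keying each archived copy by a content hash; the storage backend and field names are hypothetical:

```python
# Illustrative sketch only: making contract terms "available for review"
# and retaining a verifiable copy before the agent concludes the deal.
# Storage backend and field names are hypothetical.

import hashlib

TERMS_ARCHIVE = {}  # stand-in for durable, timestamped storage

def present_and_retain_terms(order_id, terms_text):
    """Record the exact terms presented for this transaction, keyed by a
    content hash so the version relied on can later be proved."""
    digest = hashlib.sha256(terms_text.encode("utf-8")).hexdigest()
    TERMS_ARCHIVE[order_id] = {"sha256": digest, "terms": terms_text}
    return digest

d = present_and_retain_terms("PO-1042", "Standard supply terms v3 ...")
```

Hashing the retained copy makes it straightforward to later prove which version of the terms was available for review when the contract was concluded.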
The material error exception available to natural persons
The material error exception is likely to become more relevant as businesses increasingly field transactional AI to deal with people.
In conclusion
As businesses increasingly deploy AI systems that do more than assist and instead act, the legal question will not be whether AI is a legal person. The real question will be whether the law recognises the reality that systems can bind the organisations that deploy them. South African businesses may already have more legal footing for that future than they realise.
Agentic AI can create "autonomous momentum" where multiple steps occur rapidly and with limited human visibility. This can magnify commercial risk arising from incorrect orders at scale, unintended acceptances, and transacting outside procurement policy, while disputes can become evidence-heavy because the "decision" was embedded in an AI model.
Businesses using agentic AI in transactional settings should treat the following as minimum hygiene:
- define what the AI is authorised to do (and what it may not do);
- impose value limits, supplier lists, and approval thresholds;
- ensure audit logs exist and can be retrieved;
- ensure the counterparty terms are reviewable and retainable; and
- ensure humans can intervene, reverse, or halt transactions when necessary.
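The first two items on that list (authorisation scope, value limits, supplier lists, and approval thresholds) can be enforced as a guardrail check that runs before the agent places any order. The sketch below is illustrative only; the limits and supplier names are hypothetical:

```python
# Illustrative sketch only: mandate guardrails checked before an agent
# may place an order. Limits and supplier names are hypothetical.

APPROVED_SUPPLIERS = {"SupplierA", "SupplierC"}
VALUE_LIMIT = 50_000.0          # absolute per-order ceiling in rand
APPROVAL_THRESHOLD = 20_000.0   # above this, route to a human approver

def check_mandate(supplier, amount):
    """Return the action for a proposed order: 'reject' (outside mandate),
    'escalate' (human approval required), or 'proceed' (within the
    agent's autonomous mandate)."""
    if supplier not in APPROVED_SUPPLIERS or amount > VALUE_LIMIT:
        return "reject"
    if amount > APPROVAL_THRESHOLD:
        return "escalate"
    return "proceed"
```

Note that routing a transaction to a human approver via "escalate" may itself affect whether that transaction still qualifies as an "automated transaction" under section 20, which is a further reason to record such interventions consistently.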
In our view, where section 20 applies, it is best understood as a legal foundation that supports automated contracting in principle. It does not make it safe by default, and businesses should not treat it as a substitute for proper governance and risk management.
Wherever possible, an overarching master agreement or binding terms of use could be suitable to remove the uncertainties that arise from section 20 as it applies to the new realities of transactional agentic AI. Within such a contractual framework, the deal parameters negotiated by transactional agentic AI can perhaps be limited to the commercial terms, with the legal terms derived from the overarching master agreement or binding terms of use.