The legal implications of transparency related to artificial intelligence

By Dr Dirk Brand on 21 September 2023

Transparency, as an important principle in constitutional law, serves as a tool to support accountability and good governance. Citizens want to see and understand the decisions of public officials so that they can hold those officials, and government, accountable. Transparency is also a key principle found in most policy documents on ethical and responsible AI. While the notion of 'making visible and understandable' seems similar, transparency is a much more complex matter when dealing with AI.

The nature of AI, in particular machine-learning systems that adapt and change over time, is opaque and complex. Merely making some part of an algorithm visible therefore does little to give effect to transparency. Transparency in this context has had to be redefined to include 'explainability', that is, the ability to explain how algorithmic decisions are made, and 'traceability', which provides information about the data sources and how they are managed.

Transparency is not only a key principle in AI policy documents but is now also included in legislative frameworks. In the US Blueprint for an AI Bill of Rights, transparency, labelled 'notice and explanation', is included as one of the five key principles that must guide the development and deployment of AI. Transparency also features as an important requirement in the US Senate debates on new AI legislation.

In the EU AI Act (June 2023), transparency is included in Art. 4a as one of the six key ethical principles that should apply to all AI systems:

‘transparency’ means that AI systems shall be developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system as well as duly informing users of the capabilities and limitations of that AI system and affected persons about their rights.

The European lawmakers clearly wanted to ensure that transparency is given effect in practice, and thus included it as a substantive legal requirement applicable to high-risk AI systems. Art. 13 stipulates in detail what is required of AI developers to ensure the transparency of an AI system. This requirement starts with the design of the AI system and extends to its development and operation.

High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable providers and users to reasonably understand the system’s functioning.

Before the AI system is placed on the market, all available technical means must be used to ensure that the AI system's output is interpretable by the provider and the user. High-risk AI systems must also be accompanied by intelligible instructions for their use. Some of the other requirements for high-risk AI systems, such as detailed technical documentation (Art. 11) and record-keeping (Art. 12), further support the notion of transparency. In the case of foundation models, providers must also prepare extensive technical documentation and intelligible instructions so that downstream providers can comply with these requirements when they use the foundation model to develop a new AI system.

This is not the end of the transparency obligations in the EU AI Act.  A specific obligation to inform natural persons that they are interacting with an AI system is provided in Art. 52:

Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself or the user informs the natural person exposed to an AI system that they are interacting with an AI system in a timely, clear and intelligible manner, unless this is obvious from the circumstances and the context of use.

These transparency requirements must be met either before or at the first interaction with, or exposure of natural persons to, the AI system. In practice this means, for example, that a humanoid robot does not have to say 'I am a robot and not a human', since this would be obvious. A chatbot on a company's website, on the other hand, should state clearly that it is not a natural person communicating with the company's clients, but in fact an AI system.
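Purely by way of illustration (the Act does not prescribe any particular technical implementation, and the class name, reply logic and disclosure wording below are hypothetical), a disclosure of this kind could be built into a chatbot so that the very first reply carries the notice:

# Illustrative sketch only: an Art. 52-style disclosure delivered at the
# first interaction. Names and wording are hypothetical; the Act requires
# timely, clear and intelligible disclosure, not this particular code.

AI_DISCLOSURE = (
    "Please note: you are chatting with an automated AI assistant, "
    "not a human agent."
)

class DisclosingChatbot:
    def __init__(self, generate_reply):
        # generate_reply: any function mapping a user message to a reply
        self.generate_reply = generate_reply
        self.disclosed = False

    def respond(self, user_message: str) -> str:
        reply = self.generate_reply(user_message)
        if not self.disclosed:
            # Prepend the disclosure to the first reply, so the user is
            # informed at (or before) the first interaction.
            self.disclosed = True
            return f"{AI_DISCLOSURE}\n\n{reply}"
        return reply

if __name__ == "__main__":
    bot = DisclosingChatbot(lambda msg: f"You said: {msg}")
    print(bot.respond("Hello"))   # first reply includes the disclosure
    print(bot.respond("Thanks"))  # later replies do not repeat it

The point of the sketch is simply that the disclosure is made at the first interaction, in a clear and intelligible form, and need not be repeated thereafter.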

When AI systems are used to generate or manipulate text, audio or visual content that could falsely appear to be authentic, there must be a clear and visible disclosure that the content was created by AI.
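Again purely as an illustration (the function and the label wording are hypothetical, and the Act does not mandate any specific technical format), such a disclosure could be attached to generated content as simply as:

# Illustrative sketch only: attaching a visible disclosure to
# AI-generated text. The label wording and function are hypothetical.

def label_generated_text(generated_text: str) -> str:
    disclosure = "[This content was generated by an AI system.]"
    return f"{generated_text}\n\n{disclosure}"

# e.g. label_generated_text("A photo-realistic news item ...") returns the
# text with the visible AI-content label appended.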

The transparency requirements in the EU AI Act are aimed at addressing the opacity of AI systems, so that users can interpret the output of a system and use it appropriately (Recital 47). The information provided to give effect to these requirements should also make clear the relevant potential risks to the fundamental rights of users.

Transparency is thus much more than an important principle underpinning ethical AI: it also entails adherence to specific legal requirements when designing, developing and deploying high-risk AI systems.


Please note that our blog posts are informal commentaries on developments in the law as at the time of publication and not legal advice. You should place no reliance on our blog posts; we look forward to discussing your particular matter with you.