Regulating the AI value chain

By Dr Dirk Brand on 4 September 2023

Regulating artificial intelligence is very different from regulating tangible products such as motor vehicles or consumer goods. This is simply due to the nature of the technology. The various attempts to define AI are an indication of how complex the technology is to regulate. The definition in the EU AI Act is one such example: “a machine-based system designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations or decisions, which influence physical or virtual environments”.

AI is also part of a family of technologies that includes augmented reality and the Internet of Things. Considering the possible applications of AI systems, it is evident that an AI system could be used as stand-alone software, be embedded in a physical product or be included as a component of a larger system. It is therefore not possible to focus only on the final product, as one would when regulating the use of a motor vehicle. The whole value chain should be the focus of regulators.

The drafters of the EU AI Act identified the need to define the various role players and to regulate the whole AI value chain. Since the developer (provider) of an AI system is not necessarily its deployer, both of these role players have specific obligations regarding the AI system. Distributors and importers of AI systems are the other key role players in the AI value chain, and they also have specified responsibilities. The EU AI Act refers to them collectively as ‘operators’. Since the legislation is aimed at promoting human-centric and trustworthy AI and at ensuring the protection of health, safety, fundamental rights, democracy, the rule of law and the environment, it is necessary to determine the responsibilities of all the role players in the AI value chain, each of whom must contribute to achieving this overall objective.

A provider is defined as a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trade mark, whether for payment or free of charge.

A deployer is a natural or legal person, public authority, agency or other body using an AI system under its authority.

The responsibilities of providers of high-risk AI systems under the EU AI Act focus on the design and development of the AI system. They include, for example, establishing and maintaining a risk management system that applies to the whole AI life cycle, drafting detailed technical documentation that describes the AI system, and providing intelligible instructions for use to ensure sufficient transparency. If the AI system is intended to interact with natural persons, the provider must ensure that the design enables the AI system, the provider or the deployer to inform the natural person exposed to the AI system that they are interacting with an AI system, unless this is obvious from the circumstances and context of use.

Since deployers are instrumental in the operation of a high-risk AI system, their obligations include ensuring that the AI system is used in accordance with its instructions for use, conducting a fundamental rights impact assessment of the AI system in the specific context of use and, where applicable, conducting a data protection impact assessment.

The following example illustrates the roles of the different actors in the AI value chain. Company A (a provider) develops an AI system to be used in a delivery robot and includes on the robot a second AI system, for person recognition, developed by company B (also a provider). Company A is now also a deployer, since it puts the third-party AI system into use on its robot. A hospital group (a deployer) uses the robot for medicine deliveries on its large hospital campuses. If the robot encounters natural persons on its route, it should be evident from the context that it is a robot, but the medical staff who receive packages through this form of delivery should be informed by the deployer that they can expect deliveries made by a robot rather than by a human. The provider also has an obligation to provide sufficient information to inform people that they will be interacting with a robot.

This limited overview of the responsibilities of the various role players in the AI value chain should not be read as exhaustive: the EU AI Act provides for various other obligations, in particular for providers and deployers. Implementing a post-market monitoring system and undergoing conformity assessments are just two of the further obligations. It should also be noted that harmonised standards and other sectoral EU legislation, e.g. on machinery, are important to consider in order to ensure legal compliance.

Ethical principles such as transparency, privacy and fairness provide an important basis for the design and development of all AI systems, and all operators should apply these principles, which strengthen ethical and trustworthy AI.

