AI, human rights, democracy and the rule of law

By Dr Dirk Brand on 6 June 2024

One of the pillars of the European Union’s Artificial Intelligence Act (AI Act, 2024) is the protection of fundamental human rights, democracy and the rule of law.  In 2022 the White House Office of Science and Technology Policy in the USA published a Blueprint for an AI Bill of Rights.  At an international level, the protection of human rights in the development and use of AI is confirmed in the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (2024).  These are only some of the policy and legal initiatives relating to AI and human rights.  It is thus fair to conclude that the protection of human rights, democracy and the rule of law in the context of AI is not only acknowledged in various policy and legal frameworks, but also given specific meaning.  A brief overview of the extent of human rights protection in these documents is provided next.

The Framework Convention on AI, Human Rights, Democracy and the Rule of Law is the first legally binding international treaty on this topic.  It stipulates that parties to the Convention shall adopt and maintain measures to ensure that:

  • the activities within the lifecycle of artificial intelligence systems are consistent with obligations to protect human rights; and 
  • AI systems are not used to undermine the integrity, independence and effectiveness of democratic institutions and processes, including individuals’ fair access to and participation in public debate.

It also contains key principles applicable to the lifecycle of AI systems, namely:

  • Human dignity and individual autonomy;
  • Transparency and oversight;
  • Accountability and responsibility;
  • Equality and non-discrimination;
  • Privacy and personal data protection;
  • Reliability; and
  • Safe innovation.

These principles all have a direct or indirect relation to the protection of human rights during the lifecycle of an AI system and should be included in appropriate measures adopted by the parties to the Convention.

The Blueprint for an AI Bill of Rights is not a legal document, but a framework for the development of policies and practices that protect human rights and democratic values in the deployment and governance of AI.  The name is somewhat misleading, since it does not propose an actual AI Bill of Rights; rather, it provides guidelines based on five key principles that should support policy and regulatory developments on AI to protect the public against harm.  These five principles are: safe and effective systems; protection against algorithmic discrimination; data privacy; notice and explanation; and human alternatives, consideration and fallback.  It nevertheless states clearly that this framework must be applied to all automated systems that could potentially impact individuals’ or communities’ exercise of:

  • Civil rights, civil liberties and privacy;
  • Equal opportunities; or
  • Access to critical resources or services, such as healthcare.

For a clear stipulation of how human rights should be protected in the development and deployment of AI, it is necessary to turn to the EU AI Act, the first comprehensive law on AI in the world. It has an extensive focus on the protection of human rights, democracy and the rule of law, in line with the Joint European Declaration on Digital Rights and Principles for the Digital Decade (2023). The seven non-binding ethical AI principles on which the EU AI Act is based should guide the development and deployment of AI systems, namely:

  • Human agency and oversight;
  • Technical robustness and safety;
  • Privacy and data governance;
  • Transparency;
  • Diversity, non-discrimination and fairness;
  • Societal and environmental well-being; and
  • Accountability.

While there is no doubt that the continuously expanding scope of AI development brings huge benefits to society that contribute to human well-being, the risks and potential harms related to AI are also well known.  It is because of these risks, and in support of safe and trustworthy AI, that clear rules protecting human rights are stipulated and given effect.  The fundamental approach of the EU AI Act is the protection of health, safety and fundamental human rights, including democracy, the rule of law and environmental protection.  Specific human rights protection is provided by stipulating that a deployer of a high-risk AI system must conduct a fundamental rights impact assessment (FRIA) and, where applicable, a data protection impact assessment (DPIA) in accordance with the General Data Protection Regulation (GDPR), prior to putting the system into use.  A fundamental rights impact assessment in accordance with Art. 27 of the EU AI Act must inter alia indicate:

  • The categories of natural persons and groups likely to be affected by the use of the high-risk AI system in the specific context.
  • The specific risks of harm likely to have an impact on the identified categories of individual natural persons or groups of persons.
  • A description of the implementation of human oversight measures.
  • A description of the risk mitigation measures to be taken should the identified risks materialise.

Under the data governance requirements for developers of high-risk AI systems in Art. 10 of the EU AI Act, appropriate data governance practices shall include an examination of possible biases and of any potential negative impact on fundamental rights.  These specific requirements relating to high-risk AI systems are clearly aimed at the protection of fundamental human rights.

Having regard to these AI policy developments and the specific provisions of the EU AI Act, the question is whether they provide adequate protection of human rights in the context of AI.  It is argued that, although the above developments are to be applauded, they are not enough.  Some existing human rights have been given more or clearer content in the context of AI, for example the right to privacy.  It is also possible that new digital rights could be developed and thus also warrant protection, for example a right to protect your online identity, a right to cybersecurity, or a right to the protection of children against online manipulation and abuse.  Technological developments require a fresh look at the scope and content of human rights relating to the development and deployment of AI.  There should be an international initiative, comparable to the Universal Declaration of Human Rights (1948), that provides clear recognition and protection of human rights in the context of AI and that has global application.

