Framework Convention on Artificial Intelligence

From Wikipedia, the free encyclopedia

The Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (also called the Framework Convention on Artificial Intelligence or AI convention) is an international treaty on artificial intelligence. It was adopted under the auspices of the Council of Europe (CoE) and opened for signature on 5 September 2024.[1] The treaty aims to ensure that the development and use of AI technologies align with fundamental human rights, democratic values, and the rule of law, addressing risks such as misinformation, algorithmic discrimination, and threats to public institutions.[2]

Background

The development of the Framework Convention on AI emerged in response to growing concerns over the ethical, legal, and societal impacts of artificial intelligence. The Council of Europe, which has historically played a key role in setting human rights standards across Europe, initiated discussions on AI governance in 2020, leading to the drafting of a binding legal framework. The treaty is designed to complement existing international human rights instruments, including the European Convention on Human Rights and the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data.

Structure and Content

The Convention establishes fundamental principles for AI governance, including transparency, accountability, non-discrimination, and human rights protection. It mandates risk and impact assessments to mitigate potential harms and provides safeguards such as the right to challenge AI-driven decisions. It applies to public authorities and private entities acting on their behalf but excludes national security and defense activities. Implementation is overseen by a Conference of the Parties, ensuring compliance and international cooperation.

Competing Approaches

While the CoE's AI Convention represents a multilateral effort to regulate AI through a human rights-based approach, alternative frameworks have also been proposed. One notable example is the Munich Draft for a Convention on AI, Data and Human Rights, an initiative led by legal scholars and policymakers in Germany.[3] The Munich Draft advocates for stronger safeguards against AI-related risks, emphasizing stricter data protection measures, accountability for AI developers, and explicit prohibitions on high-risk AI applications such as mass surveillance and lethal autonomous weapons. Unlike the CoE convention, which seeks to balance innovation with regulation, the Munich Draft takes a more precautionary stance, calling for tighter controls on AI deployment in sensitive domains.

Other international efforts include the OECD's AI Principles, the Global Partnership on AI (GPAI), and the European Union's AI Act, each of which offers a different regulatory strategy for governing AI at the regional or global level.

References

  1. ^ "US, Britain, EU to sign first international AI treaty". Reuters. 5 September 2024. Retrieved 5 September 2024.
  2. ^ Murphy, Neil. "US, Britain and EU sign international treaty to tackle AI threats". The National. Retrieved 6 September 2024.
  3. ^ "Munich Draft for a Convention on AI, Data and Human Rights". ResearchGate. 2024. Retrieved 6 September 2024.