First Legally Binding AI Treaty
Kartavya Desk Staff
Syllabus: Governance: AI Governance
Source: IE
Context: The first legally binding AI treaty, signed by the US, EU, and UK, prioritizes human rights and democracy, adopting a risk-based approach to AI regulation across public and private sectors.
The Framework Convention on AI, adopted by the Council of Europe, aims to mitigate AI threats to human rights, democracy, and the rule of law. Unlike the EU AI Act, which binds only EU members as internal-market regulation, the Convention is open to non-European states and requires that activities across the entire AI lifecycle align with these principles.
Key Features of the AI Treaty:
• Risk-Based Approach: Regulates AI systems based on risk levels throughout their lifecycle.
• Human Rights Focus: Ensures AI systems uphold human rights, democracy, and the rule of law.
• Public and Private Sector Applicability: Covers AI use in both public and private sectors globally.
• Accountability: Signatories are responsible for harmful and discriminatory AI outcomes.
• Legal Recourse: Victims of AI-related rights violations are guaranteed legal protection.
• Democratic Safeguards: AI systems must not undermine democratic institutions or judicial independence.
• Global Reach: Involves commitments from nations beyond Europe for a united global AI standard.
• No Punitive Sanctions: Relies on monitoring for compliance rather than fines or penalties.
Significance of the AI treaty:
| Area | Significance of the Treaty |
|---|---|
| Human Rights | Ensures AI systems respect human dignity and privacy, addressing risks like discrimination and bias. |
| Democracy | Protects democratic institutions by regulating AI's role in political discourse and decision-making. |
| Rule of Law | Ensures legal safeguards against AI misuse, promoting accountability and equality before the law. |
| Global Governance | Establishes a global framework for responsible AI use, harmonizing international AI regulations. |
| Accountability | Holds AI developers and operators responsible for harmful outcomes, ensuring legal recourse. |
Key Components of the EU Framework for AI Regulation:
Safeguards in Legislation:
• Empowerment of consumers: individuals can lodge complaints against perceived AI violations.
• Clear boundaries on AI use by law enforcement agencies.
• Strong restrictions on facial recognition technology and AI manipulation of human behaviour.
• Tough penalties for companies found breaking the rules.
• Limits on governments' use of real-time biometric surveillance in public areas (allowed only in cases of serious threats, such as terrorist attacks).
Categorization of AI Applications: AI applications are classified into four categories based on their level of risk and invasiveness.
• Banned applications: mass-scale facial recognition and behavioural-control AI are largely banned, with exemptions for law enforcement.
• High-risk applications: allowed subject to certification and transparency about backend techniques, e.g., AI tools for self-driving cars.
• Medium-risk applications: deployable subject to explicit disclosure to users that they are interacting with AI, transparency obligations, and detailed technical documentation, e.g., generative AI chatbots.
• Low-risk applications: subject only to transparency obligations.
Other EU Regulatory Achievements:
• General Data Protection Regulation (GDPR): enforced since May 2018, focusing on privacy and explicit consent for data processing.
• Digital Services Act (DSA): regulates illegal content such as hate speech and counterfeit goods.
• Digital Markets Act (DMA): identifies "dominant gatekeeper" platforms and addresses anti-competitive practices and abuse of dominance.
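For study purposes, the four-tier risk categorization above can be modelled as a simple lookup. This is purely an illustrative sketch, not any official implementation; the example applications (including "spam filter") and obligation wordings are our own shorthand for the tiers described in the text.

```python
# Illustrative sketch only: a hypothetical mapping of the EU AI Act's
# four risk tiers to example applications and their obligations.
from enum import Enum


class RiskTier(Enum):
    BANNED = "banned"    # e.g. mass-scale facial recognition
    HIGH = "high"        # e.g. AI tools for self-driving cars
    MEDIUM = "medium"    # e.g. generative AI chatbots
    LOW = "low"          # minimal-risk tools

# Hypothetical example applications assigned to tiers per the text above.
EXAMPLES = {
    "mass-scale facial recognition": RiskTier.BANNED,
    "self-driving car AI": RiskTier.HIGH,
    "generative AI chatbot": RiskTier.MEDIUM,
    "spam filter": RiskTier.LOW,  # assumed low-risk example, not from the text
}

# Shorthand summaries of the obligations attached to each tier.
OBLIGATIONS = {
    RiskTier.BANNED: "prohibited (narrow law-enforcement exemptions)",
    RiskTier.HIGH: "certification + backend-technique transparency",
    RiskTier.MEDIUM: "disclosure to users + technical documentation",
    RiskTier.LOW: "transparency obligations only",
}


def obligations_for(application: str) -> str:
    """Return the obligation summary for a known example application."""
    return OBLIGATIONS[EXAMPLES[application]]


print(obligations_for("generative AI chatbot"))
```

The point of the sketch is that obligations attach to the tier, not to the individual application, which is the essence of a risk-based approach.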
Concerns related to AI use:
| Concern | Explanation |
|---|---|
| Privacy | Risk of personal and sensitive information being used unethically, for example for targeted advertising or political manipulation. |
| Responsibility | When AI models generate new content, it can be hard to identify who is responsible for it, raising ethical dilemmas over the content's origin. |
| Automation and Job Displacement | AI's potential to automate processes raises concerns about job displacement, particularly for workers skilled in the automated areas. |
| Bias and Discrimination | AI trained on biased data may make decisions that unfairly disadvantage certain groups, perpetuating societal inequalities and discrimination. |
| Lack of Transparency | Opaque AI decision-making makes it hard to assign accountability for AI actions, whether to creators, deploying companies, or regulating governments. |
India’s Stance on AI Regulation:
India aims to develop sovereign AI and AI computing infrastructure, focusing on real-life applications in healthcare, agriculture, governance, language translation, etc., to catalyse economic development.
Currently, India is transitioning from a no-regulation stance on AI to actively formulating regulations based on a risk-based, user-harm approach. India's AI penetration factor is 3.09, the highest among G20 and OECD countries. Some of the initiatives are:
• Awareness Initiatives like #AIFORALL
• NITI Aayog’s National Strategy for AI (2018) focused on inclusive and responsible AI
• NITI Aayog's 'Principles for Responsible AI' outlines seven key principles for responsible AI use
• MeitY launched IndiaAI in 2023, covering all AI-related research.
• TRAI proposed a domestic statutory authority for AI regulation in July 2023, emphasizing a risk-based framework.
• Ethical guidelines for AI in healthcare by the Indian Council of Medical Research
• SEBI’s circular guiding AI policies in the capital market
• National Education Policy 2020 recommends integrating AI awareness into school courses.
Initiatives by other countries:
| Country | Initiative |
|---|---|
| G7 | Proposed a "risk-based" regulation for AI tools, a possible first step towards a template for regulating generative AI such as OpenAI's ChatGPT and Google's Bard. |
| Italy | Became the first major Western country to ban OpenAI's ChatGPT, citing privacy concerns. |
| UK | Adopts a 'light-touch' approach to encourage innovation in the AI industry. |
| Japan | Takes an accommodative approach towards AI developers. |
| China | Drafted a 20-point plan to regulate generative AI services, expected to be enforced later this year. |
| US | Proposed a non-binding roadmap, the Blueprint for an AI Bill of Rights, outlining five core principles for the responsible use of AI. |
Conclusion:
Although the risks of AI are widely known, it remains unclear how AI regulations will address these risks and how grievances will be remedied. Earlier, Elon Musk, Steve Wozniak, and over 15,000 others had called for a six-month pause in AI development and for shared safety protocols to be implemented by labs and independent experts.
Insta Links:
• What are hallucinating chatbots?
Prelims Links:
Q1. With the present state of development, Artificial Intelligence can effectively do which of the following? (UPSC 2020)
1. Bring down electricity consumption in industrial units
2. Create meaningful short stories and songs
3. Disease diagnosis
4. Text-to-Speech conversion
5. Wireless transmission of electrical energy
Select the correct answer using the code given below:
(a) 1, 2, 3 and 5 only (b) 1, 3 and 4 only (c) 2, 4 and 5 only (d) 1, 2, 3, 4 and 5
Ans: B