As artificial intelligence (AI) continues to advance rapidly, governments around the world are grappling with the challenge of regulating its development and deployment to protect citizens while fostering innovation. India, the European Union (EU), and the United States (US) have each taken steps to address the risks posed by certain AI practices, though their approaches vary in scope and implementation.
EUROPEAN UNION’S AI ACT
The EU has taken the most comprehensive approach with the Artificial Intelligence Act (AI Act), adopted in 2024. The AI Act categorizes AI systems based on the level of risk they pose, with certain “unacceptable risk” practices explicitly prohibited.
The AI Act bans AI systems designed to deceive, coerce, or manipulate human behavior in harmful ways, as well as those that exploit individuals’ vulnerabilities. It also prohibits social scoring systems that lead to detrimental or unfavourable treatment, and the use of real-time biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions. The AI Act imposes strict requirements on providers of “high-risk” AI systems[1], including conducting risk assessments, implementing risk management systems, and ensuring transparency and human oversight. Violations of the prohibitions can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher.
One of the key elements of this legislation is Article 5[2], which outlines prohibited AI practices. These prohibitions aim to protect individuals and society from the potential harms of certain AI applications, ensuring that the deployment and use of AI systems adhere to ethical and safety standards.
Prohibited AI Practices (Chapter II, Art. 5)[3]
The AI Act prohibits the following types of AI systems and practices (summarized in a short sketch after the list):
- deploying subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making, causing significant harm.
- exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behaviour, causing significant harm.
- biometric categorization systems inferring sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation), except for the labelling or filtering of lawfully acquired biometric datasets or the categorization of biometric data by law enforcement.
- social scoring, i.e., evaluating or classifying individuals or groups based on social behaviour or personal traits, causing detrimental or unfavourable treatment of those people.
- assessing the risk of an individual committing criminal offenses solely based on profiling or personality traits, except when used to augment human assessments based on objective, verifiable facts directly linked to criminal activity.
- compiling facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage.
- inferring emotions in workplaces or educational institutions, except for medical or safety reasons.
- ‘real-time’ remote biometric identification (RBI) in publicly accessible spaces for law enforcement, except when: searching for missing persons, abduction victims, or victims of human trafficking or sexual exploitation; preventing a substantial and imminent threat to life, or a foreseeable terrorist attack; or identifying suspects in serious crimes (e.g., murder, rape, armed robbery, trafficking in narcotics or illegal weapons, organised crime, and environmental crime).
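To make these categories easier to reason about, the following minimal Python sketch encodes the list above as a lookup table for first-pass screening; the tag names and one-line descriptions are illustrative assumptions rather than terminology from the Act, and each category carries statutory exceptions that a real legal assessment would have to evaluate case by case.

```python
# Illustrative encoding of the Article 5 categories listed above.
# Keys are hypothetical tags; values paraphrase the prohibition.
PROHIBITED_PRACTICES = {
    "subliminal_manipulation": "manipulative or deceptive techniques causing significant harm",
    "vulnerability_exploitation": "exploiting age, disability, or socio-economic vulnerability",
    "biometric_categorisation": "inferring sensitive attributes from biometric data",
    "social_scoring": "scoring on behaviour or traits, causing unfavourable treatment",
    "criminal_risk_profiling": "predicting offending solely from profiling or personality traits",
    "facial_database_scraping": "untargeted scraping of facial images from the internet or CCTV",
    "emotion_inference": "inferring emotions in workplaces or schools (non-medical, non-safety)",
    "realtime_rbi": "real-time remote biometric identification in public for law enforcement",
}

def screen_use_case(tags: set[str]) -> list[str]:
    """Return the Article 5 categories a tagged use case may fall under."""
    return sorted(tag for tag in tags if tag in PROHIBITED_PRACTICES)

# Example: a workplace tool tagged with emotion inference gets flagged.
print(screen_use_case({"emotion_inference", "hr_chatbot"}))  # ['emotion_inference']
```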
INDIA’S APPROACH
India’s approach to artificial intelligence (AI) regulation is currently characterized by the absence of dedicated legislation: advisories, guidelines, and existing laws are used to promote responsible AI development while balancing innovation and public safety. This section compares India’s current framework with the EU’s AI Act, particularly the prohibitions and obligations of Article 5.
India’s Current AI Regulatory Framework
In March 2024, the Ministry of Electronics and Information Technology (MeitY) issued an advisory[4] aimed at regulating AI technologies. This advisory includes several mandates for platforms and intermediaries, such as:
- Permission for Unreliable AI Models: Platforms must obtain permission before deploying “unreliable AI models” and must inform users about the potential fallibility of these models.
- Labeling of AI-Generated Content: All AI-generated content must be labeled to ensure transparency for users (a minimal labeling sketch follows this list).
- User Awareness: Users must be informed about the consequences of engaging with unlawful information, including potential legal repercussions.
- Bias and Electoral Integrity: Intermediaries must ensure that AI models do not propagate bias or discrimination and do not threaten electoral integrity.
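The labeling mandate lends itself to a concrete illustration. Below is a minimal Python sketch that attaches a disclosure record to generated output; the field names and notice text are assumptions made for illustration, since the advisory does not prescribe a schema.

```python
import json
from datetime import datetime, timezone

def label_ai_output(text: str, model_name: str) -> str:
    """Attach a disclosure record to AI-generated content.

    A hypothetical illustration of the advisory's labeling mandate;
    the schema is assumed, not prescribed by MeitY.
    """
    record = {
        "content": text,
        "ai_generated": True,  # explicit AI-origin flag
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "notice": "This content was generated by an AI system and may be unreliable.",
    }
    return json.dumps(record, indent=2)

print(label_ai_output("Sample generated answer.", "example-model-v1"))
```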
Digital Personal Data Protection Act (DPDPA)[5]
The DPDPA, enacted in August 2023, provides a framework for personal data protection that indirectly regulates AI practices.
Key provisions include:
- Consent for Processing Personal Data: Data fiduciaries must obtain valid consent from individuals before processing their personal data, which is essential for AI systems that rely on such data (illustrated in the sketch after this list).
- Rights of Data Principals: Individuals have rights to access, correct, and delete their data, which AI systems must respect.
- Security Safeguards: Data fiduciaries must implement reasonable security measures to protect personal data from breaches.
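As a rough illustration of how these provisions constrain an AI data pipeline, the sketch below gates processing on recorded consent and honours a deletion request; the registry structure and function names are assumptions for illustration, not anything the Act specifies.

```python
# Hypothetical consent registry: data-principal ID -> consent granted?
consent_registry: dict[str, bool] = {"user-001": True, "user-002": False}

def can_process(principal_id: str) -> bool:
    """DPDPA-style gate: process personal data only with valid consent."""
    return consent_registry.get(principal_id, False)

def handle_deletion_request(principal_id: str, data_store: dict[str, str]) -> None:
    """Honour a data principal's right to erasure by removing their record."""
    data_store.pop(principal_id, None)
    consent_registry.pop(principal_id, None)

records = {"user-001": "profile data"}
if can_process("user-001"):
    print("processing allowed for user-001")
handle_deletion_request("user-001", records)
print(records)  # {} -- record and consent entry removed
```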
THE UNITED STATES’ APPROACH
The United States currently does not have a comprehensive federal law regulating artificial intelligence (AI). Instead, the regulatory landscape is characterized by individual state initiatives aimed at addressing the challenges posed by AI technologies. This situation has emerged as Congress has struggled to enact a unified federal framework, leading to a patchwork of state-level regulations.
Present State of AI Regulation in the U.S.
- Lack of Federal Legislation:
Despite discussions about the need for federal AI legislation, progress has been slow. The absence of a comprehensive federal law has created a regulatory vacuum that states are beginning to fill with their own laws and guidelines.
- State Initiatives:
States like Colorado and Utah have taken proactive steps to establish their own AI regulations. For instance, Colorado recently enacted a law focused on “Consumer Protections in Interactions with Artificial Intelligence Systems,” which aims to mitigate consumer harm and discrimination by AI systems. This law provides a risk-based regulatory framework that parallels the EU’s approach in its AI Act.
Utah has also introduced legislation that includes disclosure obligations for generative AI interactions, ensuring that users are aware when they are engaging with AI rather than a human.
Key features of the US regulatory landscape include:
- Sector-Specific Regulations: Various sectors, such as healthcare and finance, have specific regulations that govern AI applications, but there is no overarching federal AI law akin to the EU AI Act.
- Focus on Innovation: The U.S. emphasizes fostering innovation and economic growth, often prioritizing these over stringent regulatory measures.
- Voluntary Guidelines: The U.S. government has issued voluntary guidelines for AI development, focusing on ethical considerations and transparency but lacking enforceable prohibitions like those in the EU AI Act.
At the federal level, various agencies have introduced guidelines and regulations for specific sectors. For example, the Federal Trade Commission (FTC) has issued guidance[6] on the use of AI in advertising, warning against deceptive practices and biased algorithms. Several states, including California and Illinois, have passed laws restricting the use of facial recognition technology by law enforcement and private entities.
CHALLENGES AND CONCERNS
Regulating AI poses several challenges, including the rapidly evolving nature of the technology, the difficulty in defining “harm,” and the need to balance innovation with public safety. Experts emphasize the importance of a flexible, adaptable regulatory framework that can keep pace with technological advancements.
Another key consideration is the potential for unintended consequences. Overly restrictive regulations could stifle innovation and drive AI development overseas, while lax regulations may expose citizens to unacceptable risks.
Collaboration between governments, industry, and civil society is crucial in developing effective AI regulations. Stakeholders must work together to identify and mitigate risks, promote transparency and accountability, and ensure that AI systems respect fundamental rights and values.
State legislators have expressed concerns about the potential for premature regulation to stifle innovation in a rapidly evolving field. There is a recognition that while regulation is necessary to protect consumers, it must be balanced with the need to foster technological advancement.
COMPLIANCE WITH GLOBAL STANDARDS
U.S. companies operating in the EU must comply with the EU AI Act if their AI outputs are used within the EU. This extraterritorial application means that U.S. firms need to align their practices with EU standards to access the lucrative European market. In contrast to the EU, which has established a comprehensive regulatory framework through the AI Act, the U.S. remains relatively unsettled in its approach to AI regulation. The EU’s AI Act includes specific prohibitions on harmful AI practices, while U.S. regulations vary significantly from state to state, leading to inconsistencies and potential confusion for businesses operating across state lines.
Overview of the EU AI Act and Article 5
The EU AI Act introduces a risk-based regulatory framework that categorizes AI systems by their potential risk to health, safety, and fundamental rights. Article 5 specifically prohibits certain “unacceptable risk” AI practices, including:
- Manipulative Practices: banning AI systems that manipulate human behavior in harmful ways, as well as social scoring that causes detrimental treatment.
- Real-Time Biometric Identification: prohibiting the use of real-time biometric identification in publicly accessible spaces for law enforcement, subject to narrow exceptions, to protect privacy and civil liberties.
- Risk Classification: more broadly, the Act classifies AI systems into four tiers (unacceptable, high-risk, limited risk, and minimal risk), with corresponding regulatory obligations (sketched below).
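The four tiers map naturally onto a small lookup. The sketch below pairs each tier with a one-line summary of its obligations; the summaries are simplified paraphrases for illustration, not legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, with simplified obligation summaries."""
    UNACCEPTABLE = "prohibited outright under Article 5"
    HIGH = "conformity assessment, risk management, transparency, human oversight"
    LIMITED = "transparency duties, e.g. disclosing that the user is interacting with AI"
    MINIMAL = "no mandatory obligations; voluntary codes of conduct"

def obligations(tier: RiskTier) -> str:
    """Look up the summarized obligations for a given tier."""
    return tier.value

print(obligations(RiskTier.HIGH))
```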
COMPLIANCE AND PENALTIES
The EU AI Act imposes significant penalties for non-compliance, with fines for prohibited practices reaching up to €35 million or 7% of global annual turnover, whichever is higher. This strict enforcement mechanism contrasts sharply with the more lenient regulatory approaches in India and the U.S.
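Because the cap is the higher of a fixed sum and a turnover percentage, the effective ceiling scales with company size. Here is a minimal sketch of the arithmetic, using the figures above:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of a fine for a prohibited-practice violation:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# At EUR 200 million turnover the fixed sum dominates; at EUR 2 billion,
# the 7% prong raises the cap to EUR 140 million.
print(f"{max_fine_eur(200_000_000):,.0f}")    # 35,000,000
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
```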
COMPARATIVE ANALYSIS OF REGULATORY SCOPE
India: Currently lacks a comprehensive AI-specific law, relying on advisories and existing data protection regulations. The focus is on fostering innovation while ensuring responsible AI use.
U.S.: Employs a decentralized approach with sector-specific regulations and voluntary guidelines, emphasizing innovation over strict regulatory measures.
EU: Implements a comprehensive, risk-based framework with clear prohibitions and penalties, aiming to ensure safe and ethical AI deployment.
COMPLIANCE BURDENS
India: Faces challenges in aligning its emerging regulations with global standards, particularly as it seeks to attract foreign investment and technology partnerships.
U.S.: Must navigate compliance with the EU AI Act for companies operating in the EU, potentially increasing operational costs and regulatory burdens.
EU: Sets a high standard for compliance that may influence global AI practices, with the potential to create a “Brussels Effect” where companies worldwide adapt to EU regulations to access its market.
INNOVATION VS. REGULATION
India: Strives to balance innovation with regulatory oversight, recognizing the need for a nuanced approach to avoid stifling growth.
U.S.: Prioritizes innovation, often at the expense of comprehensive regulation, which may lead to challenges in addressing ethical concerns.
EU: Aims for a balanced approach that fosters innovation while ensuring public safety and ethical standards, setting a precedent for global AI governance.
CONCLUSION
As AI continues to transform industries and reshape society, the need for effective regulation becomes increasingly urgent. India, the EU, and the US have each moved to address the risks posed by AI, but their approaches differ markedly in scope, enforcement, and maturity.
India’s focus on innovation and a light-touch regulatory approach aims to foster the growth of the AI industry, while the EU’s AI Act takes a more comprehensive and restrictive stance. The US has adopted a decentralized approach, with various federal agencies and states introducing guidelines and regulations for specific sectors.
Ultimately, effective AI regulation requires a balanced approach that protects citizens from harm while enabling innovation and economic growth. Governments must work closely with industry and civil society to develop flexible, adaptable frameworks that can keep pace with the rapid evolution of AI technology.
[1] http://data.europa.eu/eli/reg/2024/1689/oj
[2] http://data.europa.eu/eli/reg/2024/1689/oj
[3] https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202401689
[4] https://www.meity.gov.in/writereaddata/files/Advisory%2015March%202024.pdf
[5] https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf