EU Standards: Obligations for Third-Country AI Providers and Deployers (Series-4)

The EU Artificial Intelligence Act (EU AI Act) represents a significant regulatory effort by the European Union to govern the development, deployment, and use of artificial intelligence (AI) systems within its jurisdiction. The Act is designed to ensure that AI technologies are used responsibly, with an emphasis on safeguarding fundamental rights, promoting transparency, and fostering trust in AI systems. It imposes detailed obligations on entities both within and outside the EU, with specific articles addressing different aspects of these obligations.

1. Applicability to Third-Country Entities (Article 2)

Article 2 of the EU AI Act[1] is crucial because it extends the reach of the regulation beyond the borders of the European Union. Under Article 2, the Act applies to any provider or deployer of AI systems whose output is used within the EU, regardless of where that provider or deployer is located. Companies based outside the EU must therefore comply with the Act if their AI systems affect individuals or entities within the EU. This extraterritorial application ensures that the protective measures and obligations set out in the regulation cannot be avoided simply by operating from outside the EU.

Obligations for Providers, Deployers, and Other Entities

Article 2 also sets out the categories of operators covered by the Act, including:

Providers: These are entities that place AI systems on the EU market or put them into service within the EU. This includes entities established in third countries (non-EU countries).

Deployers: Entities that use AI systems within the EU are also subject to the Act’s requirements, ensuring that the use of AI technologies adheres to EU standards.

Importers and Distributors: Entities that make AI systems from third countries available on the EU market, whether as importers or as distributors, must likewise comply with the Act’s provisions.

Authorized Representatives: Non-EU providers must appoint an authorized representative within the EU who will be responsible for ensuring compliance with the Act. This representative serves as a point of contact for regulatory authorities within the EU.

2. Obligations of Providers of High-Risk AI Systems (Article 16)

Article 16 sets out the obligations of providers of AI systems classified as “high-risk” because of their potential impact on safety and fundamental rights. The EU AI Act places stringent obligations on these providers to ensure that high-risk systems are developed and deployed responsibly. Key obligations include:

Identification Requirements: Providers must indicate their name, registered trade name, and contact address on the high-risk AI system or its packaging. This requirement ensures transparency and accountability, making it easier to trace the origin of the AI system.
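Where a system is distributed digitally, the same identification details can also be carried in machine-readable form. The sketch below merely illustrates that idea in Python; the field names and values are hypothetical, and the Act does not prescribe any such format:

```python
import json

# Illustrative machine-readable equivalent of the label a provider must
# affix to the system or its packaging; all field names are assumptions.
provider_identification = {
    "provider_name": "Example Analytics Ltd.",        # hypothetical provider
    "registered_trade_name": "ExampleAI",             # hypothetical trade name
    "contact_address": "1 Sample Street, Dublin, Ireland",
}
print(json.dumps(provider_identification, indent=2))
```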

Compliance with Accessibility Standards: High-risk AI systems must comply with accessibility requirements under existing EU law, including Directive (EU) 2019/882 (the European Accessibility Act) and Directive (EU) 2016/2102 (on the accessibility of the websites and mobile applications of public sector bodies).[2] This ensures that AI technologies are accessible to all users, including persons with disabilities.

Risk Management System: Providers are required to implement a robust risk management system that identifies, assesses, and mitigates risks associated with their AI systems throughout the system’s lifecycle. This system should be dynamic and adaptable to new risks as they arise.
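As a concrete, if simplified, illustration for engineering teams, the sketch below models a lifecycle risk register with a severity-times-likelihood rating; the Act does not mandate this or any particular methodology, and all names here are illustrative:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Risk:
    description: str
    severity: int         # 1 (negligible) .. 5 (critical)
    likelihood: int       # 1 (rare) .. 5 (frequent)
    mitigation: str
    identified_on: date
    status: str = "open"  # open / mitigated / accepted

    @property
    def rating(self) -> int:
        # Simple severity-times-likelihood score used to rank risks.
        return self.severity * self.likelihood

register = [
    Risk("Misclassification of edge-case inputs", 4, 3,
         "Route low-confidence outputs to human review", date(2025, 1, 10)),
]
# Review and re-score the register at every lifecycle stage.
print(max(register, key=lambda r: r.rating).description)
```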

Data Governance: High-risk AI systems must adhere to strict data governance standards. This includes ensuring that the data used for training, validation, and testing is relevant, representative, and free from bias. Effective data governance is crucial to prevent discriminatory outcomes and ensure the fairness of AI systems.
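One small, concrete input to such a process is checking how groups are represented in a training set. The sketch below, using hypothetical data and attribute names, computes group shares as a first signal of skew; a real bias audit under the Act would go much further:

```python
from collections import Counter

def representation_report(records: list[dict], attribute: str) -> dict:
    """Share of each group for one attribute -- a crude first signal
    of skew, not a substitute for a full bias audit."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical training records with a single demographic attribute.
training_data = [{"age_band": "18-34"}] * 700 + [{"age_band": "65+"}] * 300
print(representation_report(training_data, "age_band"))
# {'18-34': 0.7, '65+': 0.3} -> the older group may be under-represented
```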

Human Oversight: There must be mechanisms in place for human oversight of high-risk AI systems. This oversight allows for human intervention when necessary to prevent harm or mitigate risks, ensuring that AI systems do not operate autonomously in critical scenarios without adequate safeguards.
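The Act does not mandate a particular oversight mechanism. One common engineering pattern is a confidence gate that routes uncertain outputs to a human reviewer; the threshold and labels below are illustrative:

```python
def decide_with_oversight(confidence: float, threshold: float = 0.90) -> str:
    """Confidence gate: act automatically only above the threshold,
    otherwise hand the case to a human reviewer."""
    if confidence >= threshold:
        return "auto-approve"
    return "escalate-to-human-reviewer"

print(decide_with_oversight(0.97))  # auto-approve
print(decide_with_oversight(0.55))  # escalate-to-human-reviewer
```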

Transparency and Information Provision: Providers are obligated to provide clear and comprehensive information about the AI system’s capabilities, limitations, and intended use. This information must be accessible to users and other affected parties to ensure they understand how the AI system works and its potential impact.

3. Regulatory Framework and Compliance (Articles 73, 74, and 99)

The EU AI Act[3] establishes a regulatory framework aimed at ensuring compliance with the obligations it sets out, particularly for high-risk AI systems, through market surveillance (Article 74), serious-incident reporting (Article 73), and penalties (Article 99). Its key aspects are the following:

Market Surveillance: The Act mandates the creation of mechanisms for market surveillance to monitor compliance with the Act. This involves regular assessments and audits of AI systems to ensure they meet the established standards. Market surveillance authorities in each EU Member State will be responsible for enforcing these provisions.

Enforcement Authorities: Designated authorities within each Member State will be responsible for enforcing compliance with the EU AI Act. These authorities will have the power to investigate AI systems, assess compliance, and impose penalties for non-compliance. The enforcement framework ensures that providers adhere to their obligations and that non-compliant entities are held accountable.

Collaboration with Member States: The Act emphasizes the importance of cooperation between EU institutions and Member States to ensure a consistent application and enforcement of the AI Act across the EU. This collaboration is essential for maintaining a unified approach to AI regulation and ensuring that AI systems are subject to the same standards throughout the EU.

Reporting Obligations: Providers of high-risk AI systems must report serious incidents, including malfunctions that could pose risks to health and safety, to the relevant authorities (Article 73). This obligation ensures that issues are promptly identified and addressed, minimizing potential harm to users and society.
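On the provider’s side, a structured record of each incident is a natural starting point. The sketch below serializes a hypothetical report; the actual schema and submission channel are defined by the authorities, not by this code:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class IncidentReport:
    system_id: str
    occurred_at: str
    description: str
    risk_to_health_or_safety: bool

def serialize_report(report: IncidentReport) -> str:
    """Prepare the report for submission; the real channel and schema
    are set by the competent authority, not by this sketch."""
    return json.dumps(asdict(report), indent=2)

print(serialize_report(IncidentReport(
    system_id="hr-screening-v2",                       # hypothetical system
    occurred_at=datetime.now(timezone.utc).isoformat(),
    description="Outage caused unreviewed automatic rejections",
    risk_to_health_or_safety=False,
)))
```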

Penalties for Non-Compliance: The Act also prescribes penalties for non-compliance (Article 99), which can include significant fines. For breaches of the obligations attached to high-risk AI systems, fines can reach up to 3% of the provider’s total worldwide annual turnover or €15 million, whichever is higher. These penalties serve as a strong deterrent against non-compliance and emphasize the importance of adhering to the regulations.
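The “whichever is higher” rule is straightforward arithmetic. The following Python sketch (illustrative only, and not legal advice) shows how the two bases compare:

```python
def fine_ceiling_eur(worldwide_annual_turnover_eur: float) -> float:
    """Maximum fine for most high-risk obligations: the higher of
    EUR 15 million or 3% of worldwide annual turnover."""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)

# For a provider with EUR 2 billion turnover, 3% (EUR 60 million)
# exceeds the EUR 15 million floor, so the ceiling is EUR 60 million.
print(fine_ceiling_eur(2_000_000_000))  # 60000000.0
```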

4. Transparency Obligations for Limited-Risk AI Systems (Article 50)[4]

Article 50 addresses the transparency obligations for AI systems that are classified as limited-risk. These obligations are designed to ensure that users are informed about the nature and capabilities of AI systems, particularly when they interact with them. Key provisions include:

User Awareness: Providers of AI systems that interact directly with users, such as chatbots, must ensure that users are informed that they are engaging with an AI system. This requirement is intended to prevent confusion and ensure that users can make informed decisions about their interactions with AI systems.
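In engineering terms, this can be as simple as making sure every session begins with an explicit disclosure. A minimal sketch follows, with illustrative wording the Act does not prescribe:

```python
AI_DISCLOSURE = (
    "You are interacting with an AI system, not a human. "
    "Responses are generated automatically."
)

def start_chat_session() -> list[str]:
    """Every session transcript begins with the disclosure message."""
    return [f"system: {AI_DISCLOSURE}"]

transcript = start_chat_session()
print(transcript[0])
```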

Marking AI-Generated Content: For AI systems that generate synthetic content (e.g., text, audio, images, or videos), providers must ensure that the outputs are marked as artificially generated or manipulated. This marking should be in a machine-readable format, making it detectable and identifiable as AI-generated content. This provision helps prevent misinformation and ensures that users are aware when they are viewing or interacting with content created by AI.
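Article 50 does not prescribe a particular marking technology; in practice, providers look to provenance standards such as C2PA and the IPTC digital-source-type vocabulary. As a minimal sketch, the following example embeds an illustrative tag in a PNG’s text chunks using the Pillow library; the key names are assumptions, not a mandated schema:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (512, 512))  # stand-in for a generated image

# Embed machine-readable provenance tags in the PNG's text chunks.
# "trainedAlgorithmicMedia" is an IPTC vocabulary term for content
# created by a trained algorithm; the key names here are illustrative.
metadata = PngInfo()
metadata.add_text(
    "DigitalSourceType",
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
)
metadata.add_text("AIGenerated", "true")
image.save("output.png", pnginfo=metadata)

# A verifier can read the tag back from the saved file:
print(Image.open("output.png").text.get("AIGenerated"))  # "true"
```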

Disclosure for Public Interest: Deployers of AI systems that generate or manipulate content intended for public dissemination must disclose that the content has been artificially generated or altered. However, this obligation does not apply when the content is used for law enforcement purposes or has undergone significant human intervention. This ensures transparency in public communications and prevents the misuse of AI-generated content.

Implementation Timeline: The transparency requirements outlined in Article 50 apply from 2 August 2026, two years after the EU AI Act entered into force on 1 August 2024. This timeline gives providers and deployers time to adjust their practices and ensure compliance.

Compliance and Penalties: National authorities will oversee compliance with these transparency obligations. Non-compliance can result in fines of up to €15 million or 3% of the operator’s total worldwide annual turnover, whichever is higher. This enforcement mechanism gives providers a strong incentive to take their transparency obligations seriously.

5. Obligations for General-Purpose AI Models (Article 53)[5]

Article 53 specifically addresses the obligations of providers of general-purpose AI models. These models, due to their wide applicability and potential impact, are subject to specific requirements to ensure they are developed and deployed responsibly. Key aspects include:

Risk Assessment: Providers of general-purpose AI models must assess and mitigate the potential risks associated with their models. This assessment should consider the potential impact on users and society, ensuring that any risks are managed effectively.

Documentation Requirements: Providers are required to maintain comprehensive technical documentation that outlines the design, functioning, and intended use of the model. This documentation must be made available to downstream providers and relevant authorities to ensure transparency and accountability, and it helps users understand how to use the model safely and effectively.

User Instructions: Providers must supply clear instructions on how to use the model safely and effectively, including guidance on its potential risks, limitations, and appropriate use cases. This helps prevent misuse and ensures the model is used as intended.

Monitoring and Reporting: Providers are obligated to monitor the performance of their models continuously and to report incidents or malfunctions that could pose risks to users or society. This ongoing monitoring ensures that issues are quickly identified and addressed, maintaining the safety and reliability of the model.
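A minimal monitoring sketch, assuming a purely illustrative error-rate threshold, might look like the following, with an alert standing in for the hand-off to the incident-reporting workflow:

```python
ERROR_RATE_THRESHOLD = 0.05  # illustrative alert level, not from the Act

def check_batch(outcomes: list[bool]) -> None:
    """outcomes[i] is True when prediction i was later found erroneous."""
    error_rate = sum(outcomes) / len(outcomes)
    if error_rate > ERROR_RATE_THRESHOLD:
        # In production, this would trigger the incident-reporting workflow.
        print(f"ALERT: error rate {error_rate:.1%} exceeds threshold")

check_batch([False] * 90 + [True] * 10)  # ALERT: error rate 10.0% ...
```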

6. Compliance with Transparency Obligations[6]

The Act reinforces compliance with the transparency obligations outlined in Articles 50 and 53 through the following:

Oversight and Enforcement: National competent authorities are responsible for ensuring compliance with the transparency obligations. They will conduct regular assessments and audits to monitor adherence to these requirements, ensuring that providers meet their obligations under the EU AI Act.

Penalties for Non-Compliance: Similar to other articles in the EU AI Act, non-compliance with the transparency obligations can lead to substantial fines. This reinforces the importance of adhering to these regulations and ensures that providers take their transparency obligations seriously.

Guidance and Support

The European Commission and other relevant bodies will issue guidance to help providers comply with the transparency obligations, including best practices and resources to assist providers in meeting their regulatory requirements.

Conclusion

The EU Artificial Intelligence Act imposes comprehensive obligations on a wide range of entities, including those based outside the EU. By establishing a robust regulatory framework and setting clear compliance requirements, the Act ensures that AI systems used within the EU are developed and deployed responsibly. The Act’s extraterritorial application and stringent enforcement mechanisms, including significant penalties for non-compliance, underscore the EU’s commitment to safeguarding fundamental rights and promoting transparency in AI technologies.

For non-EU entities, aligning their AI systems with the EU AI Act’s requirements is crucial to maintaining access to the EU market. The Act’s far-reaching impact is likely to influence global AI governance, encouraging entities worldwide to adopt similar standards and practices. This regulatory framework positions the EU as a leader in AI regulation, setting a precedent for responsible AI development and deployment across the globe.


[1] https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf

[2] https://www.undp.org/sites/g/files/zskgke326/files/2024-03/undp_ukraine_-_digital_accessibility_legislation._european_best_practices_-_eng_0.pdf

[3] https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf

[4] https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf

[5] https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf

[6] https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf

Article by: Rahul Bagga (Founder & Managing Director), Vidhi Agrawal (Associate), and Jyoti Verma (Intern)
Aumirah Insights
