In an era where data drives industries, regulations governing its use are essential. With AI's expansion and growing global concerns over privacy, the call for regulation like the EU AI Act is clear. This landmark initiative sets a legal framework addressing AI risks and applies to various entities, both within and outside the EU. Preparing for this regulation isn't merely about compliance; it's about leadership and innovation in the AI world. This article will explore the EU AI Act, the necessity for AI regulation, the new framework's risk-based approach, urgent preparations, and insights on achieving compliance with Apheris.
What is the EU AI Act?
The EU AI Act, introduced in April 2021, aims to regulate AI within the EU, aligning AI systems with safety standards and fundamental rights and values. Focused on legal certainty to spur investment and innovation, it also emphasizes governance of high-risk AI systems. The Act lays out obligations for AI stakeholders across all sectors, building trust and mitigating risks. In June 2023, the European Parliament adopted its negotiating position on the Act, and lawmakers are now working to finalize details, including definitions, prohibitions, and obligations for various AI models.
Why it’s needed
The past year's surge in AI adoption has delivered extensive benefits across various sectors but has also sparked concerns about safety and ethics. The EU AI Act is a response to this, aiming to balance rapid AI development with safety, ethical considerations, privacy protection, and accountability. Key areas the Act intends to address include safety standards, ethical fairness, personal data handling, responsibility assignment, global alignment, and possible environmental concerns. By providing a legal framework, the Act seeks to control risks while fostering AI investment and innovation within a clear legal context.
A risk-based approach
A hallmark of the EU’s AI Act is its risk-based framework, acknowledging the varied applications and impacts of AI across disparate sectors. This approach emphasizes that AI systems, depending on their use and potential effects, present different levels of risk, and consequently should be met with proportional regulation.
The tiers explained
Unacceptable risk: This category bans AI practices that fundamentally contravene EU values and rights. Examples include government-initiated social scoring, real-time biometric identification in public areas, or voice assistants promoting dangerous or harmful conduct.
High risk: Encompassing AI applications in critical fields such as healthcare, transportation, and law enforcement, this category demands stringent assessments and adherence to legal requirements to fortify public safety. This includes systems vital to aviation safety, automotive transport, public conveyance, medical devices, or those utilized in any of the Act's eight identified areas.
Limited risk: AI systems with user interaction, such as chatbots, emotion recognition, or image manipulation, fall under this category. They are subject to transparency obligations to notify users of their non-human interaction, thus promoting clarity and trust.
Minimal risk: AI systems where risk to individual rights or safety is considered negligible are largely unregulated. Voluntary codes of conduct may guide providers of these systems to align with requirements for high-risk counterparts.
The risk-based approach is an attempt to create a responsive and tailored regulatory strategy for the complex landscape of AI technologies.
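To make the tiered structure concrete, the classification logic can be sketched as a simple lookup. This is a purely illustrative Python sketch: the use-case names and the mapping are hypothetical, and actual classification under the Act requires case-by-case legal analysis of its annexes.

```python
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict pre-market requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical mapping from use cases to tiers; real classification
# depends on the Act's annexes and legal review, not a static table.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "realtime_public_biometric_id": RiskTier.UNACCEPTABLE,
    "medical_device": RiskTier.HIGH,
    "law_enforcement": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> Optional[RiskTier]:
    """Return the tier for a known use case, or None to flag it for legal review."""
    return USE_CASE_TIERS.get(use_case)
```

The key design point the sketch captures is that unknown use cases are never silently assigned a low tier; anything unmapped returns `None` and escalates to human review.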
Requirements for high-risk systems
High-risk systems will be subject to rigorous requirements before gaining access to the EU market, including:
Conducting an adequate risk assessment and implementing mitigation measures
Utilizing high-quality datasets to minimize risks and reduce discriminatory outcomes
Maintaining logging activity to ensure full traceability
Providing detailed documentation regarding the system and its intended purpose
Supplying clear and sufficient user information
Ensuring appropriate human oversight
Maintaining a high level of robustness, security, and accuracy
Providers of high-risk systems will be obligated to register them in an EU database before they are placed on the market and will be required to sign a declaration of conformity. This approach emphasizes the EU's commitment to maintaining stringent standards for systems that could have significant impacts on public safety and individual rights.
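The logging and traceability obligation above can be illustrated with a minimal sketch of structured prediction logging. The field names here are hypothetical, not prescribed by the Act, which mandates traceability rather than a specific schema.

```python
import hashlib
import json
import time
import uuid

def log_prediction(log_stream, model_id: str, model_version: str,
                   inputs: dict, output) -> str:
    """Append one traceable prediction record as a JSON line and
    return its unique event id for later cross-referencing."""
    record = {
        "event_id": str(uuid.uuid4()),   # unique id for audit trails
        "timestamp": time.time(),        # when the prediction was made
        "model_id": model_id,
        "model_version": model_version,
        # Hash the inputs rather than storing them raw, so the log stays
        # traceable without duplicating potentially sensitive data.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
    }
    log_stream.write(json.dumps(record) + "\n")
    return record["event_id"]
```

In practice such records would feed the detailed documentation and human-oversight requirements as well: an auditor can reconstruct which model version produced which decision, and when.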
Let’s look at a few of the specific requirements that high-risk AI systems must meet to comply with the Act. This is only a selection but begins to provide a sense of the complexity of this regulation and the importance of being prepared.
|Requirement|Focus|Key obligations|
|---|---|---|
|Risk mitigation|Assess data, robustness, safety|The model must achieve appropriate levels of performance, predictability, interpretability, corrigibility, safety, and cybersecurity. The datasets the model uses must be subject to appropriate data governance, including measures to examine and mitigate biases. The company must make best efforts to abide by the Act's general principles and implement standards for reducing energy consumption.|
|Transparency|Data sources, capability, and limitations|Models must be registered in an official public registry along with required information about the model. If training data is protected under copyright law, the company must provide a public summary of that data.|
|Demonstrate compliance|Quality control, documentation, & upkeep|The company must establish a quality management system to ensure and document compliance with this law, ensure compliance before putting the model on the market, prepare the documentation that downstream providers will need, and keep relevant technical documentation for at least 10 years.|
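The transparency obligations above map naturally onto structured model documentation. As a purely illustrative sketch (the field names are hypothetical, not prescribed by the Act), a registry entry might be modeled like this:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelRegistryEntry:
    """Hypothetical registry record capturing transparency information
    of the kind the Act requires; the schema itself is illustrative."""
    model_name: str
    provider: str
    intended_purpose: str
    capabilities: list = field(default_factory=list)
    limitations: list = field(default_factory=list)
    training_data_summary: str = ""           # public summary if data is copyrighted
    copyrighted_training_data: bool = False
    documentation_retention_years: int = 10   # the Act requires at least 10 years

    def to_json_dict(self) -> dict:
        """Serialize the entry for submission to a registry or audit trail."""
        return asdict(self)
```

Keeping this information in a structured, machine-readable form makes it straightforward to generate both the public registry submission and the technical documentation that must be retained.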
Why you shouldn’t wait to be ready for the EU AI Act
The proposed EU AI Act, expected to be approved by year's end with a two-year implementation period, introduces fines for non-compliance. Aligning with these standards now minimizes financial risk and demonstrates a commitment to responsible AI development and use.
Early preparation for this complex regulation can streamline processes, reduce future costs, and smooth entry into new markets. Simultaneously, it requires navigating existing laws like GDPR and HIPAA and poses questions about data preparation for AI. Acting promptly will position organizations as forward-thinking, ensuring compliance and innovation in alignment with an evolving legal landscape.
Where Apheris fits in your journey to EU AI Act compliance
Apheris enables governed, private, and secure computational access to data for ML and analytics. Recognizing data as a vital differentiator for organizations, Apheris ensures it can be leveraged for ML without sacrificing security or compliance. The Apheris Compute Gateway permits only approved computations on data, enabling collaboration across organizational or geographical boundaries without sharing data. This approach aligns with various obligations, including those under the EU AI Act, ensuring compliance with data privacy, security, and broader governance requirements. For example:
Apheris Trust Centre:
Inbuilt guidance: offers direction on model properties for computational data access and advice on data preparation and governance to align with privacy and AI regulations.
Demonstrates compliant practices with examples from the Model Registry.
Apheris Governance Portal:
Asset policies: allows users to define access control at the computational level, dictating who can access specific data and for what purpose.
Human oversight: enables connection and description of data properties, creation and revocation of asset policies, and review and approval of compute jobs.
Demonstrate compliance: provides audit, quality management, and pre-market compliance documentation, as well as access to logs and visualization of data and model usage. It also offers reporting and control over computing costs and resources while ensuring the security of submitted jobs and the privacy of results.
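The idea of access control at the computational level can be sketched conceptually. The following is not the actual Apheris configuration format or API, just a hypothetical illustration of an asset policy that approves or rejects compute requests without ever exposing the underlying data:

```python
# Hypothetical asset policy; a conceptual sketch, not a real product schema.
ASSET_POLICY = {
    "dataset": "hospital_a_imaging",
    "allowed_users": ["researcher@pharma.example"],
    "allowed_computations": ["train_segmentation_model", "evaluate_dice_score"],
    "results_must_be_aggregated": True,  # raw records never leave the gateway
}

def is_job_allowed(policy: dict, user: str, computation: str) -> bool:
    """Approve a compute request only if both the requesting user and the
    specific computation are covered by the asset policy."""
    return (user in policy["allowed_users"]
            and computation in policy["allowed_computations"])
```

The point the sketch makes is that governance is attached to computations rather than to data copies: an unapproved job (say, a raw export) is rejected outright, while approved analyses run where the data lives.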
Apheris Model Registry:
A collection of approved models categorized by properties including:
Privacy of models – susceptibility to privacy attacks such as reconstruction attacks
Safety of models – capabilities, limitations, robustness, and susceptibility to security attacks (e.g., adversarial attacks)
Fairness of models – explainability, trustworthiness, reliability
Other properties required for transparency, such as risks, how the model was tested, and what data the model was trained on.
Apheris's product is designed to simplify the complexities of aligning AI models with various regulatory requirements, including those under the EU AI Act. It offers a comprehensive and streamlined solution that integrates data governance and compliance management, ensuring that organizations can leverage AI technology without sacrificing security or broader compliance obligations.
This seamless integration of functionality reflects Apheris's commitment to enabling responsible AI development while addressing broader regulatory and governance challenges.
The EU AI Act is a significant advancement in technology regulation, emphasizing a balance between innovation, safety, and ethics. It presents both challenges and opportunities for global AI development, extending beyond just a regional directive. Understanding and adapting to these new norms can be intricate but is pivotal in the rapidly evolving landscape. Early readiness and swift adaptation to the Act's requirements are essential. Now is the time to act, positioning your organization for a seamless transition towards responsible AI development and use.