Revolutionizing AI Governance - The New Horizon of the European Union's Artificial Intelligence Act

Josef Bergt

2023

Introduction

The dawn of artificial intelligence (AI) has brought forth a paradigm shift in technological advancements and their intersection with legal frameworks. The European Union's Artificial Intelligence Act stands as a testament to the EU's commitment to shaping a future where AI is not only innovative but also trustworthy and aligned with fundamental human rights and democratic values.

Comprehensive Regulation of AI: A New Era

The European Union has embarked on a journey to meticulously regulate AI, ensuring it aligns with the core values of safety, fundamental rights, and democracy, while simultaneously fostering a conducive environment for businesses to flourish. The provisional agreement on the Artificial Intelligence Act marks a historic milestone, setting the stage for a balanced approach to harnessing AI's potential while safeguarding against its risks.

Key Provisions and Their Implications

  • Safeguards for General Purpose AI: The Act introduces robust safeguards for general purpose AI systems, recognizing their broad capabilities and rapid evolution. These include mandatory transparency requirements, technical documentation, adherence to EU copyright law, and detailed summaries of the content used for training.
  • Restrictions on Biometric Identification: The Act imposes stringent limitations on law enforcement's use of biometric identification systems, recognizing the delicate balance between security needs and individual privacy rights.
  • Prohibition of High-risk Applications: Certain AI applications, deemed to pose significant threats to citizens' rights and democracy, are expressly prohibited. These include AI systems that categorize individuals based on sensitive characteristics, untargeted scraping of facial images, emotion recognition in workplaces, social scoring, and systems designed to manipulate or exploit vulnerabilities.
  • Obligations for High-risk AI Systems: AI systems identified as high-risk must undergo rigorous assessments, including mandatory fundamental rights impact assessments, particularly in sectors like banking and insurance.
  • Guardrails for General Purpose AI Models: All general purpose AI models must meet transparency requirements such as technical documentation and compliance with EU copyright law, while high-impact models posing systemic risks face additional obligations, including reporting on serious incidents, systemic risks, and energy efficiency.
  • Support for Innovation and SMEs: Recognizing the critical role of small and medium-sized enterprises (SMEs) in innovation, the Act promotes regulatory sandboxes and real-world testing environments, established by national authorities.
  • Sanctions for Non-compliance: The Act stipulates substantial fines for non-compliance, ranging from 35 million euros or 7% of global turnover down to 7.5 million euros or 1.5% of turnover, depending on the infringement and the size of the company; this underscores how seriously breaches of the new rules will be treated (see the illustrative calculation after this list).
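
To make the order of magnitude tangible, the following is a minimal sketch, assuming the "fixed amount or share of global annual turnover, whichever is higher" logic for the most severe infringements; the function name is illustrative, and the Act foresees more proportionate caps for SMEs and start-ups.

```python
# Illustrative only: the tier shown is the upper one (35 million euros or 7% of
# global annual turnover); which cap applies depends on the infringement and on
# the size of the undertaking.

def illustrative_fine_cap(global_turnover_eur: float,
                          fixed_cap_eur: float = 35_000_000,
                          turnover_share: float = 0.07) -> float:
    """Assumed upper bound: the higher of the fixed amount and the turnover share."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

# Hypothetical undertaking with 2 billion euros of global annual turnover
print(illustrative_fine_cap(2_000_000_000))  # 140000000.0, i.e. 7% of turnover
```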


The Balance between Innovation and Regulation

The European Union's AI Act seeks to balance the need for innovation in AI with the need for regulation. By establishing clear guidelines and prohibitions, the Act provides a framework for the ethical development and deployment of AI, ensuring that innovation does not come at the cost of fundamental rights and democratic values.

Implications for Businesses and Legal Practitioners

For businesses, especially those operating in the AI space, this Act signifies a shift towards more accountability and transparency. The Act's emphasis on safeguarding fundamental rights necessitates a reevaluation of how AI systems are designed, developed, and deployed.

Setting a Global Precedent

The European Union's AI Act is poised to set a global precedent. Its comprehensive approach to regulating AI is likely to influence other jurisdictions, potentially leading to a more harmonized global framework for AI governance. This Act could serve as a model for other countries, highlighting the importance of aligning AI development with ethical and democratic principles.

Analysis of the EU AI Act's Provisions

  • Broad Scope and Applicability: The EU AI Act extends its reach to encompass a wide array of stakeholders in the AI arena, including developers, deployers, importers, and distributors. Significantly, it also holds accountable entities outside the EU, such as those in Switzerland, if the output of their AI systems is intended for use within the EU. This expansive scope underlines the EU's commitment to ensuring that AI systems used within its borders adhere to its standards, regardless of their origin.
  • Comprehensive Risk Categorization: The Act introduces a nuanced risk-based framework for AI systems. At one end of the spectrum are AI applications posing 'unacceptable risks', which are prohibited outright. These include real-time remote biometric identification in public spaces and systems designed for social scoring or subliminal manipulation. At the other end are minimal-risk systems, which are merely required to be transparent about their AI-driven nature. The critical focus, however, lies on 'high-risk' AI models. These are subject to rigorous compliance requirements, including a conformity assessment before market release and mandatory registration in an EU database.
  • High-Risk AI in the Financial Sector: The financial services sector, identified as a key area of AI impact, faces significant implications under the Act. AI systems used for creditworthiness assessments, risk premium evaluations, and the operation of critical financial infrastructure are classified as high-risk. This classification mandates stringent adherence to the Act's provisions, underscoring the need for robust risk management, data governance, and human oversight.
  • Preparatory Steps for Compliance: Organizations must take proactive steps to align with the AI Act. This involves conducting a thorough assessment of existing and potential AI models, classifying them based on risk, and developing comprehensive governance structures to ensure compliance. Particularly for organizations new to AI model management, establishing a model inventory and repository is a critical first step (a minimal illustrative sketch follows this list).
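
As a starting point for such an inventory, the following is a minimal sketch of how AI models could be recorded together with their risk category under the Act's risk-based framework; the class names, field names, and risk labels are illustrative assumptions made for this sketch, not terminology prescribed by the Act.

```python
# Minimal, illustrative model inventory; all names are assumptions made for
# this sketch, not official terminology from the AI Act.
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices, e.g. social scoring
    HIGH = "high"                   # e.g. creditworthiness assessment
    LIMITED = "limited"             # transparency obligations only
    MINIMAL = "minimal"             # no specific obligations

@dataclass
class AIModelRecord:
    name: str
    intended_purpose: str
    risk_category: RiskCategory
    deployed_in_eu: bool
    conformity_assessment_done: bool = False
    fundamental_rights_impact_assessed: bool = False
    registered_in_eu_database: bool = False

# Hypothetical entry for a credit-scoring model used in the EU
inventory = [
    AIModelRecord(
        name="credit-scoring-v2",
        intended_purpose="creditworthiness assessment of retail customers",
        risk_category=RiskCategory.HIGH,
        deployed_in_eu=True,
    ),
]
```

Such a catalogue makes it easier to answer the first questions supervisors or auditors are likely to ask: which models exist, what they are used for, and into which risk category they fall.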

Strategic Compliance: A Path Forward for Businesses

  • Conduct a Status Quo Assessment: Businesses should begin by evaluating their current use and development of AI models. This involves identifying AI models in use or in development and cataloging them in a model repository.
  • Implement Model Management: For entities previously unengaged in model management, establishing a comprehensive system is crucial. This includes risk classification of models, ensuring appropriate data governance, and integrating AI into existing governance frameworks (a simplified screening sketch follows this list).
  • Raise Awareness and Assign Responsibility: It is vital to foster awareness about the AI Act's provisions among stakeholders and designate responsibility for ensuring compliance. This includes ongoing training and development for personnel involved in AI development and deployment.
  • Stay Informed and Adapt: Given the dynamic nature of AI and its regulation, staying abreast of developments and adapting practices accordingly is essential. This includes monitoring legislative changes and emerging best practices in AI governance.
  • Establish Ethical AI Frameworks: Beyond legal compliance, businesses are encouraged to adopt ethical AI practices. This involves developing a Code of Conduct around ethical AI and ensuring AI systems are designed with fairness, accountability, and transparency in mind.
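
Building on the inventory sketched above, the following shows one way such a catalogue could be screened for obvious gaps, for instance high-risk models that still lack a conformity assessment or a fundamental rights impact assessment; the checks are simplified assumptions and are no substitute for a legal assessment.

```python
# Illustrative gap screening over the inventory sketched above; the rules are
# deliberately simplified and do not replace a legal assessment.
def compliance_gaps(records):
    gaps = []
    for record in records:
        if record.risk_category is RiskCategory.UNACCEPTABLE and record.deployed_in_eu:
            gaps.append((record.name, "prohibited practice: may not be placed on the EU market"))
        if record.risk_category is RiskCategory.HIGH and record.deployed_in_eu:
            if not record.conformity_assessment_done:
                gaps.append((record.name, "missing conformity assessment"))
            if not record.fundamental_rights_impact_assessed:
                gaps.append((record.name, "missing fundamental rights impact assessment"))
            if not record.registered_in_eu_database:
                gaps.append((record.name, "not registered in the EU database"))
    return gaps

for model_name, issue in compliance_gaps(inventory):
    print(f"{model_name}: {issue}")
```

Run periodically, for example as part of an internal audit cycle, such a check turns the Act's abstract obligations into a concrete to-do list per model.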

Source: European Parliament Press Release (2023), Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI.

Executive Summary:

  • The European Union's Artificial Intelligence Act seeks to balance the need for innovation with the protection of fundamental rights and democracy.
  • The Act introduces stringent regulations and prohibitions on certain AI applications, especially those posing risks to individual rights and democratic values.
  • It emphasizes transparency, accountability, and the protection of fundamental rights in AI development and deployment.
  • Businesses and legal professionals must adapt to this new regulatory landscape, which could set a global standard for AI governance.
  • The Act's focus on supporting innovation and SMEs highlights the EU's commitment to fostering a competitive and ethical AI ecosystem.
  • It introduces a risk-based classification system for AI models, with stringent requirements for high-risk AI systems.
  • The financial services sector, in particular, faces significant impacts under the Act, especially in areas like credit scoring and critical infrastructure operation.
  • Proactive steps toward compliance include conducting AI model assessments, implementing robust model management systems, and fostering organizational awareness.
  • Severe penalties for non-compliance underscore the criticality of understanding and adhering to the Act's provisions.

Contact


Address

Rechtsanwaltskanzlei Bergt & Partner AG
Buchenweg 6
Postfach 743
9490 Vaduz
Liechtenstein

Phone

+423 235 40 15

E-Mail

office@bergt.law