
This article was automatically translated from the original Turkish version.


European Artificial Intelligence Act (AI Act)


The European Artificial Intelligence Act (AI Act) is Regulation (EU) 2024/1689, recognized as the first comprehensive legal framework on artificial intelligence worldwide. The law aims to promote the development of trustworthy artificial intelligence in the European Union by providing a set of risk-based rules for AI developers and deployers.


The AI Act is a legally binding instrument that establishes rules for the development, placing on the market, putting into service and use of AI systems within the European Union. The law categorizes the potential risks of AI technologies and prescribes different obligations and requirements for each risk level. This regulation was designed to ensure that artificial intelligence is developed safely, transparently and sustainably, and constitutes a key component of Europe’s digital transformation strategy.

History

Development Process

Efforts by the European Union to develop an AI regulatory framework grew out of the recognition that the rise of digital economies and artificial intelligence had outpaced political assessment and public debate. While AI advanced rapidly in innovation and applications, companies across diverse sectors began using ever larger datasets to develop products and drive new innovations.


The EU’s work on developing an AI regulatory framework commenced in 2018, during which the views of numerous stakeholders were gathered. In 2019, the “Ethics Guidelines for Trustworthy AI” was published, followed in 2020 by the “White Paper on Artificial Intelligence,” which shaped the regulatory approach. In April 2021, the European Commission presented the first draft proposal, initiating a comprehensive negotiation process.

Adoption and Entry into Force

The AI Act was approved by the EU Council on 21 May 2024, published in the Official Journal of the European Union on 12 July 2024, and entered into force 20 days later, on 1 August 2024.


With the AI Act, the EU adopted the world’s first comprehensive rules on artificial intelligence. The Act becomes fully applicable 24 months after its entry into force. However, certain provisions have already begun to apply: the ban on AI systems posing unacceptable risks took effect on 2 February 2025. Full compliance requirements for high-risk AI systems will enter into force on 2 August 2026.

Risk-Based Approach

Risk Categorization

The EU’s AI policy treats the technology as both an opportunity and a threat, and the AI Act’s risk-based categorization is its central policy response: the obligations imposed on an AI system are proportionate to the risk it poses.


The AI Act classifies AI systems into four main categories based on risk levels: unacceptable risk, high risk, limited risk and minimal risk. This classification prescribes different obligations and requirements for each risk level.
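The four-tier classification above can be sketched as a simple lookup. This is a hypothetical illustration condensed from the categories described in this article, not a legal classification tool: the names `RiskTier`, `USE_CASE_TIERS` and `tier_for` are invented here, and real classification requires legal analysis of the Act’s Articles and Annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers (illustrative labels, not legal text)."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict conformity requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical mapping from a use case to its tier, based on the
# examples given in this article.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "recruitment_screening": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the illustrative risk tier for a named use case."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)

print(tier_for("credit_scoring").value)  # high
```

The design point the sketch captures is that obligations attach to the use case, not to the underlying model: the same model powering a spam filter and a recruitment screener would face very different requirements.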

Prohibited Applications

AI applications classified under the unacceptable risk category are entirely banned. This category covers practices such as subliminal or manipulative techniques likely to cause significant harm, exploitation of vulnerabilities, social scoring, biometric categorization systems that infer sensitive characteristics, and emotion recognition in workplaces and educational institutions.

High-Risk Systems

AI systems classified as high risk are those used in critical infrastructure, education, employment, credit scoring, justice and law enforcement. For these systems, comprehensive compliance requirements are established, including quality management systems, risk assessment and mitigation procedures, data and data management requirements, and obligations for transparency and user information.

Implementation Mechanisms

Conformity Assessment

The AI Act offers two distinct conformity assessment routes for high-risk AI systems: internal assessment by the provider or assessment through a notified third-party body. For high-risk AI systems, conformity with officially recognized harmonized standards provides a presumption of conformity.


The conformity assessment process includes stages such as preparing technical documentation, establishing a quality management system, conducting risk assessments and implementing risk mitigation measures. Providers must complete the conformity assessment procedure before placing high-risk AI systems on the European market or putting them into service in the EU.
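The stages listed above amount to a pre-market checklist. The following is a hypothetical sketch of that idea; the class name and fields are invented here to condense the stages into code, and the AI Act defines these duties in legal text, not as a schema.

```python
from dataclasses import dataclass

# Hypothetical checklist condensing the conformity assessment stages
# described in this article into four boolean gates.
@dataclass
class ConformityChecklist:
    technical_documentation: bool = False
    quality_management_system: bool = False
    risk_assessment: bool = False
    risk_mitigation: bool = False

    def ready_for_market(self) -> bool:
        """All stages must be complete before the system may be placed
        on the EU market or put into service."""
        return all(vars(self).values())

check = ConformityChecklist(
    technical_documentation=True,
    quality_management_system=True,
    risk_assessment=True,
    risk_mitigation=True,
)
print(check.ready_for_market())  # True
```

The gate-like structure mirrors the Act’s sequencing: conformity assessment is a precondition for market access, not a post-hoc audit.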

Regulatory Sandboxes

AI regulatory sandboxes, a key component of the AI Act’s implementation, require each Member State to establish at least one national AI regulatory sandbox by 2 August 2026, as stipulated in Article 57 of the AI Act. These sandboxes enable the testing of innovative AI systems in a controlled environment and help regulators gain practical experience in ensuring compliance.

General-Purpose AI Models

The AI Act specifically regulates general-purpose AI models (GPAI). These are models trained on large volumes of data and capable of performing a wide range of tasks. Additional obligations and evaluation criteria apply to GPAI models classified as posing systemic risk.

Deepfake Regulations

Under the AI Act, deployers using AI systems to generate deepfakes must clearly disclose that the content has been artificially created or manipulated, and AI outputs must be labeled to reveal their synthetic origin. This regulation is critically important for digital content security and combating disinformation.
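The labeling duty above can be illustrated with a minimal sketch. This is a hypothetical format: the Act requires disclosure that content is artificially generated but does not mandate any particular encoding, and the function name and JSON fields here are invented for illustration.

```python
import json

def label_synthetic(content: str, generator: str) -> str:
    """Wrap AI-generated content with a machine-readable disclosure.

    Hypothetical sketch of the transparency duty: the disclosure flag
    and provenance field are assumptions, not a format from the Act.
    """
    return json.dumps({
        "content": content,
        "ai_generated": True,   # the disclosure itself
        "generator": generator, # provenance hint
    })

record = label_synthetic("A photorealistic scene...", "example-model-v1")
print(json.loads(record)["ai_generated"])  # True
```

In practice such disclosures are more likely to be embedded as media metadata or watermarks than as a JSON wrapper; the sketch only shows that the label travels with the content.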

Governance Structure

European Artificial Intelligence Office

The European Artificial Intelligence Office and national market surveillance authorities are responsible for implementing, monitoring and enforcing the AI Act. The Office enforces the rules for general-purpose AI models, exercising powers granted to the Commission under the AI Act, including the power to conduct evaluations.

Advisory Bodies

The governance of the AI Act is guided by three advisory bodies: the European Artificial Intelligence Board, composed of representatives of EU Member States; the Scientific Panel, made up of independent experts in AI; and the Advisory Forum, representing a diverse selection of commercial and non-commercial stakeholders. This multi-stakeholder governance ensures a balanced approach to implementing the AI Act.

Member State Responsibilities

At the national level, the AI Act requires each Member State to establish at least one notifying authority and at least one market surveillance authority to implement the Act. Each Member State must also establish at least one national AI regulatory sandbox.

Sectoral Impacts

Healthcare Sector

The EU AI Act, which entered into force in August 2024, has significant implications for healthcare services. Many AI systems used in healthcare are classified as high risk, creating specific compliance requirements for companies in the sector. Areas such as medical imaging, diagnostic support systems and patient monitoring technologies fall within this scope.

Financial Services

Applications in the financial sector such as credit scoring, risk assessment and algorithmic trading are regulated under the AI Act. Transparency and explainability requirements are of particular importance for AI systems used in banking and insurance.

Human Resources and Employment

AI systems used in recruitment processes, performance evaluation and employee monitoring technologies are classified as high-risk systems under the AI Act. Companies operating in this area face comprehensive compliance requirements.


Advantages

The key advantages of the AI Act include ensuring the safe and responsible development of AI technologies, protecting consumer rights and contributing to the establishment of a trustworthy AI ecosystem in Europe. This regulation assesses AI applications to ensure their ethical and responsible use and promotes safe and lawful AI development within the EU’s single market.


The Act also has the potential to influence global AI norms and serves as a model for other countries developing similar regulations. The risk-based approach focuses on ensuring safety without stifling innovation and provides a flexible framework adaptable to technological advancements.


Through harmonized standards, a single regulatory framework for companies operating in the EU market reduces compliance costs and facilitates market access. Additionally, by increasing consumer confidence in AI systems, it supports broader public acceptance of these technologies.

Disadvantages

Potential disadvantages of the AI Act include high compliance costs, which create a financial burden especially for small and medium-sized enterprises. Additionally, complexity in regulatory processes and difficulties in understanding technical requirements may arise. Conformity assessment procedures can be time-consuming and delay the time-to-market for innovative products.


Regulations do not apply to systems exclusively used for military, defense or national security purposes, nor do they apply to research, testing or development activities conducted before placing systems on the market or putting them into service. The existence of these exemptions raises concerns about potential gaps in the law’s scope.


An increased bureaucratic burden for technology companies may create a competitive disadvantage, particularly for startups and innovative businesses. Furthermore, there is a risk that the legal framework may not keep pace with rapidly evolving AI technologies. Differences in implementation across EU Member States could threaten the integrity of the single market.

Global Impacts

Evaluating the EU’s efforts to develop AI regulations raises important questions about how they will influence global norms. The AI Act is widely regarded as a pioneering regulation expected to play a leading role in shaping AI regulations worldwide.


The EU’s so-called “Brussels Effect” in regulatory influence is expected to manifest in the field of AI as well. Large technology companies’ need to comply with AI Act requirements to access the EU market may lead to the global adoption of these standards. Many countries are referencing the AI Act when developing their own AI regulations and adopting similar approaches.


In terms of international trade, the exclusion of non-compliant systems from the EU market could affect global supply chains and collaboration models. This creates both opportunities and challenges for technology companies, particularly in developing countries. At the same time, as part of its digital diplomacy strategy, the AI Act reinforces the EU’s leadership role in global AI governance.

Author Information

Author: Ebrar Sıla Peri, December 3, 2025, 12:42 PM


