
This article was automatically translated from the original Turkish version.


Artificial Intelligence Regulations


AI Regulation (image generated with artificial intelligence)

Basic Approach: Risk-based classification (Unacceptable, High, Limited, Minimal)
Pioneering Regulation: European Union Artificial Intelligence Act (EU AI Act)
Primary Objectives: Protection of fundamental rights; security; transparency; promotion of innovation
International Standard: ISO/IEC 42001 (Artificial Intelligence Management System)

Artificial intelligence (AI) regulations refer to the legal, ethical, and technical standards governing the development, deployment, and oversight of these technologies. AI systems hold significant potential for enhancing efficiency and innovation across diverse fields such as healthcare, transportation, finance, and education. However, they also pose substantial risks including the violation of personal data protection, infringement of fundamental rights, discrimination, security vulnerabilities, and ethical dilemmas. Consequently, policymakers have turned to national and international regulatory initiatives to manage the societal impacts of these technologies. The primary objective of regulation is to support technological advancement and innovation while safeguarding human rights, democratic values, and the rule of law. Thus, the aim is to maximize the societal benefits of AI-based solutions while limiting their harmful effects.

Definition of Artificial Intelligence and Its Regulatable Characteristics

Artificial intelligence is a discipline concerned with developing systems capable of learning from data, drawing inferences, making decisions, and acting autonomously toward specific goals. Unlike traditional software, AI systems possess a dynamic structure; their outcomes are not always certain and their behavior may be unpredictable. This characteristic is a decisive factor in the emergence of regulatory needs.


Since AI systems do not possess human-like consciousness, moral values, or emotions, they may generate decisions during task execution that conflict with societal values or produce unforeseen negative consequences. This situation brings with it particular risks such as the reinforcement of biases, the proliferation of discriminatory practices, and increased security vulnerabilities.


Moreover, the operational mechanisms of these systems are often complex and difficult to understand from the outside. This issue is described in the literature as the “black box” or “opacity” problem. In most cases, it is difficult to explain which data a given AI model used to reach a specific decision. This feature complicates the application of principles such as transparency, accountability, and traceability. Therefore, regulatory frameworks aim not only to establish technical standards but also to ensure adherence to principles of transparency, ethical compliance, and legal responsibility.

Reasons for Regulation and Ethical Dimensions

The need to regulate artificial intelligence technologies arises from both the opportunities they offer and the risks they entail. While AI can significantly contribute to economic growth, service efficiency, and scientific progress, it can also lead to serious problems such as violations of individual rights, erosion of democratic processes, unethical practices, and security threats. Therefore, regulations are designed not merely to monitor technical functioning but also to control ethical and societal impacts.

Ethical Issues and Social Impacts

Failing to design AI systems in accordance with ethical principles can lead to widespread social consequences. For instance, algorithms used in hiring, credit scoring, or insurance assessments may learn biases present in historical data and discriminate against specific genders, ethnic backgrounds, or socioeconomic groups. This undermines principles of equality and justice.


Additionally, the manipulation of individuals’ behaviors and preferences through big data analysis constitutes a serious ethical concern. As seen in the Cambridge Analytica scandal, the targeted use of personal data for political advertising undermines the transparency and reliability of democratic processes. Such incidents reveal that AI must be regulated not only for economic but also for political and social dimensions.


Ethical compliance is not merely a legal requirement but a critical factor in establishing public trust in technology. An AI ecosystem that is fair, transparent, and respectful of human rights facilitates public adoption of these technologies and contributes to building a digital future aligned with democratic values.

Security Risks

Lethal Autonomous Weapon Systems (LAWS)

Defined by the United Nations (UN) as systems capable of identifying, selecting, and engaging targets without human intervention, lethal autonomous weapon systems are among the most controversial applications of artificial intelligence. It remains uncertain whether such weapons can operate in accordance with the principles of international humanitarian law, particularly distinction between combatants and civilians and proportionality. Since current technology lacks the capacity for the contextual judgment required to replace human decision-making, debates continue over whether these weapons should be banned internationally or placed under strict oversight.

Cybersecurity

Artificial intelligence has a dual impact in the field of cybersecurity, serving both defensive and offensive purposes. Defensively, it is a powerful tool for detecting anomalous network traffic, preventing phishing attempts, and blocking fraudulent activities. However, it can also be exploited by attackers. AI algorithms can scan for software vulnerabilities, conduct sophisticated phishing campaigns, or develop advanced ransomware, all enabled by automation. The digitalization of critical infrastructure—such as energy grids, water supply systems, and transportation networks—further amplifies the potential impact of such attacks.
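The defensive use described above, detecting anomalous network traffic, can be illustrated with a minimal statistical baseline. The sketch below is purely hypothetical (the traffic trace and threshold are invented for illustration); real intrusion-detection systems use far richer models than a z-score test:

```python
import statistics

def flag_anomalies(requests_per_minute: list[int], z_threshold: float = 2.0) -> list[int]:
    """Return indices of minutes whose request rate deviates from the
    trace's mean by more than z_threshold standard deviations."""
    mean = statistics.mean(requests_per_minute)
    stdev = statistics.pstdev(requests_per_minute) or 1.0  # avoid division by zero
    return [
        i for i, rate in enumerate(requests_per_minute)
        if abs(rate - mean) / stdev > z_threshold
    ]

# Hypothetical traffic trace: steady load with one burst (e.g. a scan or flood).
trace = [100, 98, 103, 101, 99, 950, 102, 100]
print(flag_anomalies(trace))  # → [5]
```

The same logic cuts both ways, mirroring the dual-use point in the text: an attacker can use statistical profiling of a target's traffic just as a defender can.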

Disinformation

Generative artificial intelligence possesses the capacity to produce large volumes of highly convincing synthetic content. Text, images, audio, and video generated through such technologies—including deepfake systems—can be used to mislead public opinion, deepen societal polarization, and negatively affect democratic processes. This poses a serious threat to the credibility of the information ecosystem and heightens the importance of media literacy, transparency, and verification mechanisms.

Global Approaches to AI Regulation

The rapid advancement of artificial intelligence has prompted different countries and international organizations to develop regulatory frameworks in this domain. Globally adopted strategies can generally be categorized into two main approaches: comprehensive and fragmented. This divergence is closely linked to countries’ legal systems, political priorities, economic goals, and technological capacities.

Comprehensive Approach

The comprehensive approach seeks to address artificial intelligence in all its dimensions by establishing a single, overarching legal framework. In this model, elements such as ethics, security, transparency, accountability, and human rights are consolidated under one central regulatory authority. The European Union’s Artificial Intelligence Act represents the most prominent example of this approach. The law adopts a risk-based model, classifying AI systems into four tiers — “unacceptable,” “high,” “limited,” and “minimal” risk — and prescribing specific oversight mechanisms for each category. Thus, it aims to foster innovation while protecting citizens’ fundamental rights.

China has also taken steps toward embedding its AI regulations within a comprehensive framework. China’s approach emphasizes social stability and state control, imposing strict oversight mechanisms on content generation and data usage. Canada, meanwhile, has pursued a comprehensive regulatory path with its 2022 Artificial Intelligence and Data Act (AIDA), placing particular emphasis on safeguarding individuals’ safety and rights.

Fragmented Approach

In the fragmented approach, rather than enacting a single law specific to AI, relevant provisions are integrated into existing sectoral regulations. Under this model, risks posed by AI systems are addressed indirectly through existing legislation in areas such as data protection, product safety, consumer rights, competition law, and cybersecurity.


The advantage of this approach is that it provides a flexible and adaptable legal framework in response to rapidly evolving technologies. However, the dispersion of regulations across different sectors may lead to inconsistencies in implementation and fragmented oversight. The United Kingdom initially adopted this model, distributing AI-related rules among the jurisdictions of various agencies. Switzerland and Australia have similarly followed a fragmented approach by integrating AI-related provisions into their existing legal infrastructures.

Global Trends

When examining overall trends, the European Union’s comprehensive model is beginning to emerge as a global standard-setting reference point. In contrast, some countries prefer a more flexible and fragmented approach to avoid stifling innovation. In the coming years, it is likely that hybrid regulatory frameworks will emerge from the interaction of these two models.

European Union Artificial Intelligence Act (EU AI Act)

The European Union has taken a pioneering global step by introducing the Artificial Intelligence Act, the first comprehensive regulation targeting artificial intelligence. The law’s primary objective is to ensure that AI systems within the EU market are used in a safe, transparent, traceable, non-discriminatory, and rights-respecting manner. Simultaneously, the law seeks to uphold the rule of law and democratic values while supporting innovation.


The law adopts a risk-based approach and classifies AI systems into four main categories:

  1. Unacceptable Risk: Applications that pose a clear threat to human rights fall into this category and are entirely prohibited. Examples include social scoring systems used by governments, manipulative technologies exploiting human vulnerabilities, and emotion recognition systems deployed in workplaces or educational institutions. Real-time biometric identification systems used by law enforcement in publicly accessible areas are also prohibited, except for narrowly defined exceptions.
  2. High Risk: Systems that may directly affect human safety, health, or fundamental rights are classified as high-risk. These include AI used in critical infrastructure management, medical devices, hiring processes, and systems managing justice and democratic processes. Providers developing or deploying these systems are subject to stringent obligations, including establishing a risk management system, using high-quality and unbiased datasets, preparing technical documentation, ensuring human oversight, and implementing robust cybersecurity measures.
  3. Limited / Specific Transparency Risk: Applications in which users must be informed that they are interacting with an AI system fall into this category. For example, chatbots must clearly disclose their AI nature. Deepfake content or artificially generated images must also be explicitly labeled.
  4. Minimal Risk: The majority of AI applications fall into this category. Systems with low risk in everyday contexts—such as video games or email spam filters—are not subject to additional obligations.
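The four tiers above can be summarized as a simple lookup, shown here as a hypothetical sketch: the use-case names and obligation summaries are illustrative paraphrases of the Act's categories, not an official taxonomy.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations (risk management, documentation, human oversight)"
    LIMITED = "transparency duties (disclose AI interaction, label synthetic content)"
    MINIMAL = "no additional obligations"

# Illustrative mapping of example use cases drawn from the list above.
EXAMPLE_USE_CASES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "workplace emotion recognition": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "medical device diagnostics": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Return the tier name and headline obligation for a known example."""
    tier = EXAMPLE_USE_CASES[use_case]
    return f"{tier.name}: {tier.value}"

print(obligations_for("email spam filter"))  # → MINIMAL: no additional obligations
```

In practice, classification under the Act depends on the system's intended purpose and deployment context, so a static table like this is only a first approximation.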


To oversee implementation of the law, the European AI Office has been established. Companies failing to comply may face fines of up to 35 million euros or 7% of their global annual turnover, whichever is higher. The law is being implemented gradually, with most of its obligations becoming applicable in 2026. Due to its extraterritorial effects, the EU’s regulation constitutes a binding framework not only for member states but for all companies offering products or services in the EU market.

AI Regulation in Türkiye

Türkiye is among the countries closely monitoring global developments in AI regulation. To date, AI-related provisions have been addressed within various existing legal frameworks, following what is termed a fragmented approach. In recent years, however, significant steps have been taken toward a comprehensive regulatory framework.


National Artificial Intelligence Strategy (2021–2025): The strategy document published in 2021 outlines Türkiye’s roadmap in the field of artificial intelligence. The strategy aims to develop a trustworthy, transparent, and responsible AI ecosystem. It provides a broad framework covering areas from human resource development and strengthening the data ecosystem to international cooperation and the establishment of ethical standards.


Legal Developments: The Artificial Intelligence Bill, submitted to the Grand National Assembly of Türkiye in June 2024, represents the most concrete step toward Türkiye’s transition to a comprehensive regulatory model. The bill adopts a risk-based approach similar to the European Union’s AI Act and imposes strict regulations on high-risk applications. However, criticisms have been raised regarding shortcomings and ambiguities in adapting the bill to local needs. Additionally, the Artificial Intelligence Research Commission established within the Turkish Parliament is conducting studies to monitor technological developments and contribute to the legal framework.


Future Plans: Türkiye’s future goals include the labeling of AI-based products, protection of intellectual property rights, and the implementation of a “Safe AI Seal.” Due to the extraterritorial impact of the EU AI Act, compliance with these regulations is considered one of the most critical challenges for Turkish companies offering products or services in the EU market in the coming period.

International Standards and Other Regulatory Concepts

In addition to legal regulations, international standards, guiding principles, and innovative regulatory approaches play a vital role in ensuring the responsible and reliable development of artificial intelligence. Such mechanisms help harmonize practices across countries and provide global frameworks for companies.

ISO Standards

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) are developing various technical and governance standards in the field of artificial intelligence. These standards serve not only as references for regulators but also as binding guidelines for organizations developing AI solutions.

  • ISO/IEC 42001: Provides a framework for an Artificial Intelligence Management System (AIMS). This standard defines organizational processes for the responsible, safe, and ethical development, deployment, and oversight of AI systems.
  • ISO/IEC 22989: Defines AI terminology. This standard prevents conceptual confusion by establishing a common international language and facilitates communication among diverse stakeholders.
  • ISO/IEC 23894: Offers guidance on AI risk management. This standard includes methodologies for identifying, assessing, and mitigating risks throughout the AI system lifecycle.

The development of these standards supports not only technical compliance but also ethical alignment and transparency. Thus, the ISO framework serves a complementary function to national regulations.

Algorithmic Regulation

An innovative approach to AI regulation is known as algorithmic regulation. This model aims not only to define legal and ethical rules at the legislative level but also to embed them directly into the design, algorithms, and data processing procedures of AI systems.

  • Differential Privacy: Mathematical techniques used to protect the privacy of individuals in datasets. This approach enables learning from aggregated data while preventing the exposure of individual records.
  • Explainable AI (XAI): Aims to make the decision-making processes of AI systems understandable to humans. This ensures transparency and accountability when systems generate bias or make erroneous decisions.
  • Ethics by Design: An approach that embeds ethical principles directly into system design. This method ensures that legal and societal values are not confined to paper but are directly reflected in the technological infrastructure.
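Of the techniques above, differential privacy is the most directly algorithmic: a rule ("no individual's record may be exposed") is enforced inside the computation itself rather than by an external policy. The sketch below is a minimal, assumption-laden illustration of the standard Laplace mechanism for a counting query; the dataset and epsilon value are invented for the example:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records: list[bool], epsilon: float) -> float:
    """Epsilon-differentially private count of True records.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices for epsilon-differential privacy.
    """
    return sum(records) + laplace_noise(1.0 / epsilon)

# Hypothetical survey: how many of 1000 users opted in?
data = [random.random() < 0.3 for _ in range(1000)]
noisy = private_count(data, epsilon=0.5)
print(round(noisy))  # close to the true count; individual records stay protected
```

The design choice here embodies "ethics by design" as well: the privacy guarantee holds mathematically regardless of who runs the query, instead of relying on an analyst's good intentions.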

Algorithmic regulation enables the integration of regulatory mechanisms directly into the functioning of technology, particularly where traditional legislation proves insufficient. This strengthens both user trust and legal compliance.

Author Information

Author: Ömer Said Aydın, December 3, 2025


