This article was created with the support of artificial intelligence.


Explainable Artificial Intelligence (XAI)


Explainable Artificial Intelligence (XAI) is a set of methods and techniques aimed at making the results and decisions produced by artificial intelligence (AI) systems understandable and interpretable by humans. In the case of complex algorithms such as deep learning, the decision-making processes of AI systems are often described as a "black box". When these systems receive an input and produce an output, they do not clearly show an external observer how they arrived at that conclusion. XAI aims to make the functioning of these black box models transparent, thereby increasing the reliability, accountability, and user acceptance of the systems.

Historical Development and Necessity

The need for Explainable AI has evolved in parallel with the increasing complexity and scope of AI systems. The adoption of AI in high-stakes fields that directly impact human life, such as healthcare, finance, law, cybersecurity, and autonomous systems, has heightened the demand for transparency and understandability in these systems' decision-making processes.


This necessity has become more apparent in cases where AI systems have produced unfair or erroneous results. For instance, predictive models for criminal recidivism in the U.S. generating biased scores against certain races, Amazon's same-day delivery service excluding neighborhoods with certain ethnic demographics, and AI-based medical imaging systems overlooking findings and delaying treatment have all highlighted the need for explainability. This growing need is also reflected in the rapid rise of academic publications on XAI in recent years.

Related Concepts

Explainable Artificial Intelligence is defined as the ability to present a model's decisions and predictions in a way that is understandable and auditable for humans. This field reveals not only what decision an AI system made, but also why and how it made that decision. In the literature, there are several fundamental concepts associated with XAI, which are sometimes used interchangeably:

  • Understandability: The quality of a model that allows its functioning to be made clear to humans without needing to explain its internal structure or algorithmic processes. It aims to ensure that users can grasp how the model generally works.
  • Interpretability: The ability to explain or present the meaning or decision mechanism of a model in terms understandable to humans. This concept is often associated with the transparency of a model and indicates the degree to which a model is comprehensible on its own.
  • Explainability: Providing details, reasons, or steps that make a model's internal workings and decisions clear or easy to understand.
  • Transparency: A model is considered transparent when it is understandable on its own. This represents the opposite of a "black box" model.

Theoretical Approaches and Techniques

XAI methods can be classified based on their application time (ante-hoc or post-hoc), model specificity (model-specific or model-agnostic), and the scope of the explanation (local for a single decision or global for the entire model).

Transparent Models (Ante-hoc Explainability)

These models are interpretable by design. Decision trees, rule-based systems, and coefficient-based models like linear regression fall into this category. In these models, it is possible to directly observe which rules or feature coefficients a decision is based on.
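As a minimal illustration of ante-hoc explainability (the rules and thresholds below are invented for the example, not taken from any real system), a rule-based classifier can return the exact rule that produced each decision:

```python
# Sketch of an ante-hoc interpretable model: a rule-based classifier
# whose output can be traced to the specific rule that fired.
# The rules and thresholds are illustrative assumptions only.

RULES = [
    ("income >= 50000 and debt_ratio < 0.4",
     lambda a: a["income"] >= 50000 and a["debt_ratio"] < 0.4, "approve"),
    ("debt_ratio >= 0.4",
     lambda a: a["debt_ratio"] >= 0.4, "reject"),
]
DEFAULT = "review"

def classify(applicant):
    """Return (decision, explanation): the rule text IS the explanation."""
    for text, predicate, decision in RULES:
        if predicate(applicant):
            return decision, f"matched rule: {text}"
    return DEFAULT, "no rule matched; routed to manual review"

decision, why = classify({"income": 62000, "debt_ratio": 0.25})
# decision == "approve"; `why` names the exact rule that produced it
```

Because the model's entire decision logic is enumerable, no separate post-hoc explanation step is needed; the explanation is the model itself.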

Explanation Techniques for Black Box Models (Post-hoc Explainability)

These techniques are used to make the decisions of complex models, often referred to as "black boxes," understandable after they have been trained.

Local Explanations

Focus on how the model makes a single prediction.

  • LIME (Local Interpretable Model-agnostic Explanations): This is a model-agnostic technique. To explain any single prediction of a complex model, it trains a simple, interpretable model (e.g., a linear model) locally around the data point for which the prediction was made. This simple model helps in understanding the behavior of the complex model at that specific point.
  • SHAP (SHapley Additive exPlanations): Based on the Shapley values concept from game theory. It fairly allocates the contribution of each feature (input variable) to the final prediction, revealing which features influenced the prediction, in what direction, and by how much. Variants include Kernel SHAP, a model-agnostic approximation, and Deep SHAP, optimized for deep learning models.
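The Shapley idea behind SHAP can be sketched with a brute-force exact computation (a toy example with an assumed all-zeros baseline; the SHAP library itself uses optimized estimators rather than this exponential enumeration):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for prediction f(x) relative to a baseline input.

    For each feature i, average f's marginal gain from adding feature i
    over all subsets S of the remaining features, using the standard
    Shapley weight |S|! (n-|S|-1)! / n!. Exponential cost: toy inputs only.
    """
    n = len(x)

    def value(subset):
        # Features in `subset` take their real value, the rest the baseline.
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# For a linear model, Shapley values reduce to coef_i * (x_i - baseline_i).
f = lambda z: 2 * z[0] + 3 * z[1] - 1 * z[2]
phi = shapley_values(f, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
# phi ≈ [2.0, 6.0, -3.0]; the contributions sum to f(x) - f(baseline)
```

The additivity property shown in the last comment (contributions summing exactly to the difference between the prediction and the baseline prediction) is what makes Shapley-based attributions "fair" in the game-theoretic sense.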

Global Explanations

Aim to understand the overall behavior of the model.

  • Feature Importance: Some models, like random forests, provide metrics that measure the overall contribution of each feature to the model's accuracy.
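A related, model-agnostic way to measure global importance (permutation importance, a standard technique not tied to any one model family; the data and "model" below are invented for the example) is to shuffle one feature's values and observe how much the model's accuracy drops:

```python
import random

def permutation_importance(model, X, y, feature, n_repeats=30, seed=0):
    """Global importance of one feature: the average accuracy drop when
    that feature's column is shuffled, breaking its link to the labels."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(model(r) == t for r, t in zip(rows, y)) / len(y)

    base = accuracy(X)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        shuffled = [row[:feature] + [v] + row[feature + 1:]
                    for row, v in zip(X, col)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / n_repeats

# Toy "model" that only looks at feature 0; feature 1 is irrelevant.
model = lambda row: int(row[0] > 0.5)
rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [int(row[0] > 0.5) for row in X]

imp0 = permutation_importance(model, X, y, feature=0)  # large drop
imp1 = permutation_importance(model, X, y, feature=1)  # no drop at all
```

Because the technique only needs predictions, it works identically for a random forest, a neural network, or any other black box.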

Visual Explanations

  • Explanation/Saliency Maps: Particularly in image data, these produce heatmaps that show which parts of an image the model focused on when making a decision. These maps visualize which pixels led to the recognition of a detected defect or object.
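The intuition behind a saliency map can be sketched with a toy, hypothetical scoring function over a 2x2 "image", using finite differences in place of the backpropagated gradients a real implementation would use:

```python
# Hypothetical sketch of a gradient-style saliency map: estimate
# |d score / d pixel| for every pixel via finite differences.
# Real saliency methods backpropagate through the network instead.

def saliency_map(score, image, eps=1e-6):
    """Return a heatmap of the score's sensitivity to each pixel;
    large values mark the pixels the model's output depends on most."""
    sal = []
    for r, row in enumerate(image):
        sal_row = []
        for c, _ in enumerate(row):
            bumped = [list(rw) for rw in image]  # copy, then nudge one pixel
            bumped[r][c] += eps
            sal_row.append(abs(score(bumped) - score(image)) / eps)
        sal.append(sal_row)
    return sal

# Toy "model": responds strongly to the top-left pixel, weakly elsewhere.
score = lambda img: 5.0 * img[0][0] + 0.1 * img[1][1]
sal = saliency_map(score, [[0.2, 0.7], [0.4, 0.9]])
# sal[0][0] dominates the map: that pixel drives the model's decision
```

Overlaying such a map on the input image is what produces the familiar heatmaps showing where the model "looked" when detecting a defect or object.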

The Relationship Between Explainability and Performance

It is widely held that a trade-off exists between a model's performance (accuracy) and its interpretability: simple models (e.g., decision trees) are highly interpretable but often less accurate, while complex models (e.g., deep learning) achieve high performance at the cost of interpretability. However, some researchers argue that this generalization does not always hold, noting that simpler, more interpretable models can match the performance of complex ones in some cases, and that using unnecessarily complex models can be actively detrimental.

Application Areas

XAI technologies find applications in a wide variety of fields where transparency in decision-making is required.

  • Healthcare: It is used in a broad spectrum from medical image analysis to disease diagnosis and treatment planning. For example, it provides decision support to doctors and increases patient trust by explaining how an AI system detects cancerous cells, diagnoses a meniscus tear from an MRI based on specific findings, or scores spinal deformities.
  • Autonomous Systems and Transportation: It plays a critical role in increasing the safety, reliability, and accountability of autonomous vehicles and railway systems. Explaining why an autonomous vehicle performs a specific maneuver or why a defect on railways is labeled as "defective" by an autonomous system is necessary for the acceptance of these systems.
  • Finance: Used in areas such as credit application assessments, risk analyses, and investment recommendations. Explaining why a credit application was rejected is both a legal requirement and an element of customer satisfaction.
  • Cybersecurity: Employed for purposes such as detecting anomalies in network traffic, analyzing malware, strengthening intrusion detection systems (IDS), and discovering web application vulnerabilities. An XAI model can explain to a security expert why a network activity was flagged as a potential attack, enabling a faster and more effective response.
  • Law and Legal Processes: Contributes to fair processes by ensuring the transparency of AI models in legal document analysis and decision support.
  • Other Areas: It also has applications in different sectors, such as product recommendation systems in e-commerce and recruitment processes in human resources.

Legal and Ethical Regulations

The proliferation of AI systems has brought about the need for regulating these technologies within legal and ethical frameworks. XAI plays a central role in complying with these regulations.

European Union (GDPR)

The European Union's General Data Protection Regulation (GDPR) grants individuals the right to obtain meaningful information about automated decision-making processes that have legal or similarly significant effects on them. The GDPR's principle of transparency requires that clear information be provided on how personal data is processed in AI applications. The High-Level Expert Group on AI (AI HLEG), established by the European Commission, has also set principles for "Trustworthy AI," including human oversight, transparency, and accountability.

Türkiye (KVKK)

Türkiye's Law on the Protection of Personal Data (KVKK) No. 6698 does not contain specific provisions directly related to XAI. However, the law's general principles for data processors, such as transparency, accountability, and lawfulness, indirectly support the explainability of AI systems.

Other Institutions

Institutions like the U.S. National Transportation Safety Board (NTSB) have highlighted the necessity of data recording and systems that can provide explanations for events to facilitate post-accident investigations in autonomous vehicles.



Author Information

Main author: Yunus Emre Yüce, June 22, 2025