
This article was automatically translated from the original Turkish version.


Artificial Intelligence-Assisted Intelligence

Artificial intelligence-supported intelligence is defined as the systematic use of artificial intelligence methods in the processes of collecting, processing, analyzing, and presenting interpreted outputs to decision-makers from multi-source data. This approach does not reduce the entire intelligence production cycle to a single automated mechanism; rather, it creates decision-support layers that work alongside human expertise to alleviate the pressure imposed by increasing data volume and velocity on analysts and command structures, enhance situational awareness in time-sensitive scenarios, and reinforce evaluation quality through transparent processes.

Conceptual Framework and Relationship with the Intelligence Cycle

Artificial intelligence-supported intelligence touches not only analysis but every stage of the intelligence cycle: requirement definition, collection planning, preprocessing, fusion, analytical production, and dissemination. Some assessments emphasize that one of the primary motivations for early AI adoption was the strain placed on analysts' capacity for selective attention and filtering within what has been termed a "data smog"; over time, the focus has shifted toward improving the entire cycle. Within this framework, artificial intelligence functions as an accelerator that transforms raw data into meaningful indicators, while aspects of intelligence requiring accuracy, context, justification, and accountability remain under human oversight.

Figure: Artificial Intelligence-Supported Intelligence (AI-generated image).

Data Sources, Collection, and Preprocessing

Intelligence production relies on the integrated analysis of heterogeneous sources such as remote sensing, autonomous sensor networks, communications and signals data, cyber domain observations, open-source streams, and human intelligence inputs. In multi-domain and coalition operations, the distributed nature of these sources across different institutions and partners introduces additional complexity in terms of data formats, access permissions, security constraints, and temporal synchronization. Consequently, in AI-supported intelligence architectures, preprocessing functions such as data quality control, noise reduction, handling missing data, and inference under uncertainty are considered critical. Moreover, adversaries’ attempts to deceive algorithms through manipulation techniques necessitate a broader assurance approach than mere optimization for “accuracy.”
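The preprocessing functions named above can be sketched in miniature. The following Python fragment is an illustrative toy, not a reference implementation: the `Reading` record shape, the quality threshold, and the median-based filter are all assumptions chosen only to make quality control, missing-data imputation, and noise rejection concrete.

```python
from dataclasses import dataclass
from statistics import median
from typing import Optional

# Hypothetical sensor reading; field names are illustrative, not from any real system.
@dataclass
class Reading:
    source: str
    value: Optional[float]   # None models a missing measurement
    quality: float           # 0.0-1.0 self-reported source confidence

def preprocess(readings: list[Reading], min_quality: float = 0.5) -> list[Reading]:
    """Quality control, missing-data imputation, and crude outlier rejection."""
    # 1. Quality control: drop low-confidence sources.
    kept = [r for r in readings if r.quality >= min_quality]
    # 2. Missing data: impute None values with the median of observed values.
    observed = [r.value for r in kept if r.value is not None]
    if not observed:
        return []
    med = median(observed)
    for r in kept:
        if r.value is None:
            r.value = med
    # 3. Noise reduction: reject readings far from the median (simple robust filter).
    spread = median(abs(r.value - med) for r in kept) or 1.0
    return [r for r in kept if abs(r.value - med) <= 3 * spread]
```

Even in this toy form, the design choice matters: each step is explicit and inspectable, so an analyst can see which readings were dropped and why, rather than receiving a silently cleaned stream.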

Data Fusion and Interpretation

Value generation in a multi-source environment is achieved not through individual signals but through the integration of disparate sources in a complementary manner. AI techniques support the data fusion layer through mechanisms such as entity resolution, pattern recognition, anomaly detection, event clustering, and contextual linking. At this stage, the fundamental challenge is not merely to construct a unified picture but to make visible which assumptions underlie which data, where uncertainty increases, and where human judgment must intervene. Explainable AI approaches, particularly those employing layered explanation frameworks, are used to simultaneously address operational speed requirements and oversight needs by providing justifications at varying levels of detail for the same assessment.
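A minimal sketch of the fusion idea, assuming invented record fields and a deliberately crude normalization rule: records from different feeds are resolved to a common entity key, provenance is retained per entity, and disagreement between sources is surfaced as a flag rather than averaged away.

```python
from collections import defaultdict

def normalize(name: str) -> str:
    """Crude entity key: case- and whitespace-insensitive. Real entity
    resolution is far harder; this stands in for the concept only."""
    return " ".join(name.lower().split())

def fuse(records: list[dict]) -> dict[str, dict]:
    """Group multi-source records by entity, keeping provenance and conflicts."""
    entities: dict[str, dict] = defaultdict(lambda: {"sources": set(), "locations": []})
    for rec in records:
        ent = entities[normalize(rec["entity"])]
        ent["sources"].add(rec["source"])          # provenance: which feeds saw it
        ent["locations"].append(rec["location"])   # keep all claims, not one "truth"
    for ent in entities.values():
        # Disagreement between sources is made visible, not resolved silently.
        ent["contested"] = len(set(ent["locations"])) > 1
    return dict(entities)
```

The point of the sketch is the paragraph's central claim: a fused picture should expose which data underlies which conclusion and where uncertainty concentrates, so a `contested` entity is exactly where human judgment is asked to intervene.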

Decision Support in Command and Control Contexts

Artificial intelligence-supported intelligence directly intersects with command and control processes, as intelligence outputs frequently feed into decision cycles at stages such as perception, processing, comprehension, and decision support. The integration of AI into command and control systems is pursued with the goal of enhancing situational awareness and supporting time-critical decisions, while data quality, cyber security vulnerabilities, and the need for explainability emerge as key risk areas. Such assessments underscore that AI use must be balanced to preserve human command authority and that implementation typically progresses as a gradual transformation through doctrinal, procedural, and educational adaptations rather than through abrupt disruption.

Human Expertise, Analytical Production, and Automation

In artificial intelligence-supported intelligence, automation is viewed not as a replacement for analysts but as a component that reshapes their workflow. Automated extraction and summarization tools, thematic clustering, and alternative hypothesis generation systems can redirect analysts' attention toward critical deviations, sources of uncertainty, and verification needs. However, the outputs of analytical production, particularly when expressed in textual assessments, require additional validation and traceability mechanisms because generative AI systems can produce fluent but erroneous or unjustified content. Within this framework, the value of automation lies less in accelerating output generation and more in organizing evidence chains, making alternatives visible, and enhancing the effectiveness of human oversight.

Explainability, Assurance, and Trust

In operational intelligence environments, “trust” depends not only on model performance but also on resilience to changing data conditions, approaches to managing uncertainty, resistance to deception, and the ability to justify outcomes. In multi-domain operations, rapidly evolving situations, restricted access to real data, noisy and incomplete inputs during operations, and adversary deception widen the gap between training and operational conditions. Therefore, explainability is used not only to answer the “why” question for users but also to define the boundaries of model reliability, identify high-risk use cases, and clarify the human-machine division of labor. Neuro-symbolic approaches, which combine neural networks with symbolic reasoning, are discussed in this context as a direction that integrates flexible pattern learning with rule-based justification to meet the needs for interpretability and knowledge representation in decision support processes.
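The "layered explanation" idea mentioned earlier can be illustrated with a toy rule-based assessor. Everything here is invented for illustration (the indicator names, weights, and threshold); the point is that a single assessment carries justifications at several levels of detail, from a bare verdict to the full chain of fired rules.

```python
# Illustrative rules: (indicator name, weight, human-readable justification).
RULES = [
    ("unusual_route",  0.4, "vessel deviated from declared route"),
    ("ais_gap",        0.3, "AIS transponder silent for an extended period"),
    ("night_transfer", 0.3, "rendezvous with another vessel at night"),
]

def assess(indicators: set[str], threshold: float = 0.5) -> dict:
    """One assessment, explained at three levels of detail."""
    fired = [(name, w, reason) for name, w, reason in RULES if name in indicators]
    score = sum(w for _, w, _ in fired)
    verdict = "flag for analyst review" if score >= threshold else "no action"
    return {
        "verdict": verdict,                                             # layer 0: decision only
        "summary": f"score {score:.1f} from {len(fired)} indicator(s)", # layer 1: brief rationale
        "detail": [reason for _, _, reason in fired],                   # layer 2: full justification
    }
```

A time-pressed operator reads layer 0, while an oversight review reads layer 2; this is the sense in which layered explanations serve speed and accountability at the same time. A neuro-symbolic variant would replace the hand-set weights with learned scores while keeping the symbolic justifications.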

Coalition Environment and Technology Adoption Challenges

The success of artificial intelligence-supported intelligence in alliances and coalition structures depends not only on technical capacity but also on adoption processes. Areas such as iterative development, access to qualified human resources, data accessibility, and interaction with the civilian technology ecosystem directly influence joint decision-making and policy alignment. Differing rates of technology adoption among partners can increase interoperability risks, while data sharing and data retention regimes are constrained by both national laws and international regulations. Therefore, artificial intelligence-supported intelligence must be evaluated not merely as a technical modernization but as an integration of institutional trust relationships, standards, and governance design.

Organizational Strategies and Responsible Use Principles

Organizational strategy documents and policy analyses do not limit the use of artificial intelligence in defense and security to goals of speed and efficiency; they also incorporate principles of accountability, responsibility, security, and risk management as integral components of implementation. This approach aims to ensure that AI-supported intelligence does not function as a mechanism forcing decision-makers toward a “single correct answer,” but instead makes evidence-based options and uncertainties visible while preserving decision-making responsibility within human authority. This perspective reinforces the need for common standards, training, oversight, and evaluation processes at the alliance level.

Cyber Domain and Autonomous Agents

A significant dimension of intelligence relies on cyber indicators observable through networks and information systems. Proposed reference architectures for autonomous cyber defense agents integrate functions such as detection and situation determination, planning, action selection and execution, impact monitoring, learning, and cooperation when necessary. From an intelligence perspective, such architectures are important in two ways: first, they provide automated observation and preliminary assessment capabilities in the face of the continuity and volume of cyber data; second, they establish tighter feedback loops between intelligence’s warning and early detection functions and action-oriented cyber defense processes. However, as autonomy increases, so do risks of misidentification, inappropriate responses, and adversary manipulation; therefore, the operational boundaries, control points, and logging mechanisms of such agents must be explicitly designed.
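The explicit operational boundaries, control points, and logging called for above can be sketched as follows. The event format, action names, severity scale, and authorization list are all invented for illustration; the essential pattern is that the agent's authority is an explicit allowlist and every decision, including escalation to a human, lands in an audit log.

```python
# Actions the agent may execute on its own; anything else requires a human.
AUTHORIZED_ACTIONS = {"alert", "rate_limit"}

def select_action(event: dict) -> str:
    """Toy planning step: map event severity to a proposed response."""
    severity = event.get("severity", 0)
    if severity >= 8:
        return "isolate_host"   # disruptive action, outside the agent's authority
    if severity >= 5:
        return "rate_limit"
    return "alert"

def step(event: dict, audit: list[str]) -> str:
    """One sense-assess-act cycle with an explicit control point and logging."""
    action = select_action(event)
    if action not in AUTHORIZED_ACTIONS:
        audit.append(f"ESCALATE {action} for {event['id']} (human approval required)")
        return "escalated"
    audit.append(f"EXECUTE {action} for {event['id']}")
    return action
```

The design choice here mirrors the risk argument in the text: as proposed actions become more disruptive, the loop routes around autonomy and back to human authority, and the audit trail makes both executed and escalated decisions reviewable after the fact.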

Limits and Risks

Artificial intelligence-supported intelligence gains meaning not through the promise of “fully automated intelligence” but through high-impact support functions in limited domains. Rules governing data collection and storage, privacy and authority boundaries, data-sharing practices across institutions, and relationships with the civilian technology ecosystem are fundamental factors determining the actual transition of technical capacity into operational use. Furthermore, data quality issues, cyber security vulnerabilities, insufficient explainability, and deception attempts can generate false confidence and negatively affect decision-making processes. Therefore, artificial intelligence-supported intelligence is understood as a socio-technical system that continuously re-negotiates the balance between the advantages of speed and scale and the requirements of assurance, oversight, and preservation of human authority.

Author Information

Author: Ömer Said Aydın, February 20, 2026 at 12:14 PM


