
This article was automatically translated from the original Turkish version.


Sensor fusion is the process of combining data from multiple sensors to create a more consistent, reliable, and meaningful information framework. In this process, sensor data that are limited or uncertain when evaluated individually are integrated using appropriate mathematical and statistical methods to produce state estimates that individual sensors cannot provide. Sensor fusion encompasses not only simultaneous measurements from different types of sensors but also the integration of data collected from the same sensor at different times. In this regard, sensor fusion is an information processing technique aimed at reducing measurement uncertainty, enhancing system perceptual awareness, and generating results with higher accuracy. Conceptually, this approach resembles the way biological systems integrate multiple sensory inputs to understand their environment.


The primary goal of sensor fusion is to overcome the limitations of individual sensors and obtain a more accurate, reliable, and holistic representation of the observed system or environment. Measurements from a single sensor are often insufficient due to limited spatial and temporal coverage, low accuracy, sensor biases, or random noise. In this context, sensor fusion aims to reduce uncertainty by combining multiple measurements, improve measurement reliability, enhance system tolerance, and establish decision-making processes on a more robust foundation. Moreover, sensor fusion provides detailed situational awareness that cannot be achieved with a single sensor by expanding spatial and temporal coverage. For example, relying solely on GPS data for vehicle positioning can generate errors during signal interruptions or noise; however, combining GPS data with inertial measurement unit (IMU) data ensures higher accuracy and continuity in position estimation. Therefore, sensor fusion emerges as a fundamental approach to enhancing perceptual accuracy, reliability, and decision-making capacity.

Implementation of Sensor Fusion

The implementation of sensor fusion is addressed as a multi-layered processing pipeline and generally consists of several stages: data collection, preprocessing, alignment, modeling, and fusion algorithms. The first stage involves collecting raw data from different types of sensors. These data may have distinct characteristics in terms of measurement frequency, resolution, and error profiles. In the second stage, preprocessing, the acquired data are cleaned of noise, missing or corrupted data are corrected, and measurement discrepancies between sensors are resolved using calibration techniques. In particular, when heterogeneous sensors are used together, it is critical to transform each sensor’s data into the same physical and mathematical representation.


The next step is the alignment (registration) process. Ensuring temporal and spatial consistency of data from different sensors is a fundamental condition for successful fusion. Techniques such as timestamping【1】, interpolation【2】, or extrapolation【3】 are used to correct data delays between sensors; for spatial alignment, data are transformed into a common coordinate system. Following alignment, mathematical models are employed that define the relationship between the quantities measured by the sensors and the actual state of the system being observed. These models may be deterministic or may incorporate stochastic representations of uncertainty【4】.
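The interpolation step described above can be sketched in a few lines; the sample rates and readings below are invented for illustration:

```python
import numpy as np

# Timestamps (s) and readings from two sensors sampled at different rates.
t_a = np.array([0.00, 0.10, 0.20, 0.30, 0.40])   # sensor A at 10 Hz
x_a = np.array([1.00, 1.20, 1.35, 1.55, 1.70])

t_b = np.array([0.00, 0.25, 0.50])               # sensor B at 4 Hz
x_b = np.array([0.95, 1.42, 1.90])

# Resample sensor B onto sensor A's time base by linear interpolation,
# so both streams share a common temporal reference before fusion.
x_b_aligned = np.interp(t_a, t_b, x_b)
```

After this step, `x_a` and `x_b_aligned` are index-aligned and can be passed to a fusion algorithm as simultaneous measurements.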


In the final stage, processed and aligned data are integrated through a fusion algorithm. The choice of algorithm at this point depends on the application’s requirements. For instance, the Kalman filter is commonly used when linear and Gaussian distribution assumptions hold; for nonlinear and complex systems, the Extended Kalman Filter (EKF), Unscented Kalman Filter (UKF), or particle filters are preferred. Alternatively, Bayesian methods, Dempster-Shafer theory, or machine learning-based approaches may also be applied in different scenarios. Thus, multiple data sources from sensors are optimally combined while considering error covariances and reliability levels.
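As a minimal illustration of covariance-weighted fusion, a scalar Kalman measurement update can sequentially absorb readings from two sensors; all numbers here are illustrative, not from a real system:

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: prior estimate x with variance P
    absorbs measurement z with variance R."""
    K = P / (P + R)        # Kalman gain: how much to trust the new measurement
    x = x + K * (z - x)    # corrected state estimate
    P = (1 - K) * P        # uncertainty shrinks after each update
    return x, P

# A predicted position, then two sensors observing the same quantity.
x, P = 10.0, 4.0
x, P = kalman_update(x, P, z=12.0, R=2.0)  # sensor 1, variance 2
x, P = kalman_update(x, P, z=11.0, R=1.0)  # sensor 2, variance 1
```

Note how the final variance `P` is smaller than any single sensor's variance: each update weights the measurement by its reliability, which is the covariance-aware behavior described above.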


Types of Sensor Fusion

Sensor fusion is classified into different types based on the relationships between the sensors used and the nature of the quantities they measure. The most fundamental classification divides sensor fusion into three main categories: competitive, complementary, and cooperative fusion.


Competitive fusion involves combining data from multiple sensors that measure the same physical quantity. In this approach, each sensor provides a measurement of the same information, and the resulting data are integrated for cross-validation and error reduction. The goal is to compensate for faulty or noisy measurements from any one sensor using data from other sensors. For example, using two different GPS receivers to determine a vehicle’s position enhances measurement reliability and improves system fault tolerance.
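Competitive fusion of redundant measurements is often implemented as an inverse-variance weighted average; the receiver readings and variances below are assumed values for illustration:

```python
def inverse_variance_fusion(measurements, variances):
    """Combine redundant measurements of the same quantity, weighting
    each by the inverse of its variance (more precise sensors count more)."""
    weights = [1.0 / v for v in variances]
    fused = sum(w * m for w, m in zip(weights, measurements)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# Two GPS receivers reporting the same position coordinate (metres).
pos, var = inverse_variance_fusion([100.4, 100.9], [4.0, 1.0])
```

The fused variance (0.8) is lower than either receiver's own variance, which is precisely the error-reduction effect competitive fusion targets.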


Complementary fusion, on the other hand, is based on combining measurements from different types of sensors that capture distinct aspects of the environment. In this type, each sensor measures a different and independent dimension of the observed system. For instance, a LiDAR sensor provides high resolution at short distances, while radar is more effective at longer ranges; due to their complementary properties, the system can perceive both nearby and distant surroundings holistically. Thus, complementary fusion enables the system to achieve comprehensive environmental awareness.


Cooperative fusion relies on deriving information that cannot be obtained by a single sensor alone through the combined use of multiple sensors. In this case, sensors measure different physical quantities, but when these measurements are combined, they generate new and more complex information. Stereo camera systems exemplify this scenario; images captured by two separate cameras enable the extraction of depth information. Similarly, combining GNSS data with an inertial measurement unit (IMU) enables high-accuracy position and orientation estimates that cannot be achieved individually.


Architectural Approaches

Architectural approaches in sensor fusion systems are categorized into three main types—centralized, distributed, and hybrid—based on when, where, and how sensor data are processed. The selection of these architectures is directly related to criteria such as bandwidth requirements, computational capacity, fault tolerance, and real-time performance needs of the application.


The centralized architecture is based on collecting all raw data from sensors at a single central processing unit for processing. In this approach, sensors perform only data acquisition, and the entire fusion process occurs on a central processor. The advantage is the ability to perform the most comprehensive and theoretically most accurate fusion due to access to all raw data. However, its limitations include high bandwidth requirements, risks of transmission delays, and the potential for complete system failure in case of a single-point fault.


The distributed architecture enables sensors or sensor nodes to perform preprocessing and partial fusion locally. In this structure, each sensor processes raw data locally instead of sending it directly to the center, generating summarized information and transmitting only this information to other nodes or the central system. The most significant advantage of the distributed architecture is increased system scalability and elimination of dependency on a single central node. It also reduces communication load and eases bandwidth requirements. However, ensuring consistency among sensors and harmonizing locally generated information for the entire system requires additional algorithmic complexity.


The hybrid architecture represents a combination of centralized and distributed approaches. In this structure, specific groups of sensors process their data locally and generate summarized results, which are then recombined at a higher level by a central fusion unit. Thus, both the scalability and fault tolerance advantages of the distributed architecture and the global optimization power of the centralized architecture are utilized together. Hybrid architectures are preferred in large-scale sensor networks and autonomous systems to achieve a balance between efficiency and accuracy.

Levels of Sensor Fusion

The sensor fusion process is also classified according to the level of abstraction at which data are combined, not only based on sensor types or relationships. In the literature, the most common classification distinguishes between data (pixel) level, feature level, and decision level fusion.


Data level fusion occurs through the direct combination of raw data from sensors. In this approach, data are fused without processing or after only basic preprocessing steps. For example, pixel-level merging of images from two different cameras or direct integration of radar and LiDAR measurements as time series are examples of this type of fusion. Data level fusion is often preferred to reduce measurement noise and obtain higher-resolution information. However, its limitations include high bandwidth requirements and increased sensitivity to sensor errors.
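A minimal sketch of data-level fusion, assuming two already-registered camera images represented as small arrays:

```python
import numpy as np

# Two hypothetical, spatially registered grayscale images (tiny for clarity).
img_a = np.array([[10.0, 20.0], [30.0, 40.0]])
img_b = np.array([[12.0, 18.0], [28.0, 44.0]])

# Pixel-level fusion by simple averaging: random noise in each image is
# partially cancelled, at the cost of transmitting full raw frames.
fused = (img_a + img_b) / 2.0
```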


Feature level fusion is based on combining meaningful attributes extracted from raw data. At this level, each sensor extracts characteristic features (e.g., shape, color, speed, frequency component) from its own data, and these features are aggregated into a common feature vector. These features are then processed using statistical methods or machine learning algorithms. For example, in a security system, object shapes detected by a camera and sound frequency features captured by a microphone can be fed into a single classifier to produce more accurate decisions. Feature level fusion provides a more abstract representation than raw data, thereby reducing computational load while preserving information integrity.
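Feature-level fusion can be sketched as concatenation into a joint feature vector; the extractors and values below are hypothetical placeholders, not a real pipeline:

```python
import numpy as np

# Hypothetical per-sensor feature extractors (placeholders for illustration).
def camera_features(frame):
    # e.g. an object's aspect ratio and fill ratio from the image
    return np.array([1.8, 0.62])

def microphone_features(audio):
    # e.g. dominant frequency (kHz) and normalised spectral energy
    return np.array([2.4, 0.31])

# Feature-level fusion: concatenate per-sensor features into one joint
# vector that a single downstream classifier consumes.
fused = np.concatenate([camera_features(None), microphone_features(None)])
```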


Decision level fusion is based on combining decisions independently generated by each sensor or group of sensors. In this approach, sensors reach a specific decision (e.g., “object present” or “threat detected”) through their internal processing mechanisms; these decisions are then fused at a higher level to produce a final outcome. Methods used in decision level fusion include majority voting, weighted averaging, and ensemble algorithms. This method is particularly useful when heterogeneous sensors operate together and independent decision mechanisms must be synthesized at a higher level. The advantage of decision level fusion is that each sensor provides its optimal output based on its expertise, and the collective information derived from these outputs yields more reliable results.
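A decision-level majority vote, the simplest of the methods listed above, can be sketched as:

```python
from collections import Counter

def majority_vote(decisions):
    """Decision-level fusion: return the label most sensors agree on."""
    return Counter(decisions).most_common(1)[0][0]

# Independent decisions from three detectors observing the same scene.
result = majority_vote(["threat", "no_threat", "threat"])
```

A single noisy detector is outvoted by the other two, so the fused decision is more robust than any individual one.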


In addition to these fundamental levels, conceptual models have been developed to define fusion processes more comprehensively. The JDL (Joint Directors of Laboratories) model is one of the best-known frameworks that hierarchically defines sensor fusion. According to this model, the fusion process is structured as follows: Level 0 for raw signal processing and source preprocessing, Level 1 for object refinement (position, identity, or classification), Level 2 for situation assessment (analysis of relationships between objects), Level 3 for threat assessment (deriving risks and predictions), and Level 4 for process monitoring and optimization. Although originally developed for military and security applications, this structure is now widely applied in civilian domains as well.


Alternatively, the DIKW (Data–Information–Knowledge–Wisdom) hierarchy emphasizes the levels of abstraction in the data processing chain. According to this model, raw data from sensors are first transformed into information; this information is then processed in relational contexts to form knowledge, and ultimately reaches the level of wisdom, which guides decision-making processes. The DIKW hierarchy reveals that sensor fusion is not merely a technical process but also a value chain extending from data to action.

Sensor Fusion Algorithms

Sensor fusion algorithms are core mechanisms that integrate data from diverse sources using mathematical and statistical methods. Their fundamental purpose is to combine the uncertain and limited information produced by individual sensors to generate more reliable and accurate state estimates. The literature contains various methods developed for this purpose, generally categorized as probabilistic approaches, rule-based methods, and artificial intelligence-based techniques.


Probabilistic methods represent uncertainty in sensor data using probability distributions. Among the most widely used algorithms in this framework are the Kalman filter and its derivatives. The Kalman filter produces an optimal state estimate by considering error covariances of measurements from different sensors under assumptions of linearity and Gaussian distribution. For nonlinear systems, the Extended Kalman Filter (EKF) or Unscented Kalman Filter (UKF) are preferred. Similarly, particle filters provide a powerful alternative in dynamic environments with high measurement noise by representing complex and multimodal distributions. Bayesian methods, meanwhile, combine prior knowledge with measurement likelihoods to process sensor information through an updated probability distribution.
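The Bayesian update described above can be illustrated with a discrete two-hypothesis example; the priors and likelihoods are invented for illustration:

```python
def bayes_update(prior, likelihoods):
    """Fuse one sensor's likelihoods into the current belief over hypotheses."""
    posterior = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Start from an uninformative prior, then fold in two sensors' evidence.
belief = {"clear": 0.5, "obstacle": 0.5}
belief = bayes_update(belief, {"clear": 0.2, "obstacle": 0.9})  # radar reading
belief = bayes_update(belief, {"clear": 0.3, "obstacle": 0.8})  # lidar reading
```

Each sensor's likelihoods sharpen the posterior, so agreement between independent sensors rapidly increases confidence in one hypothesis.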


Rule-based methods are used to integrate data with uncertain or partially reliable confidence. Dempster–Shafer evidence theory offers a non-probabilistic fusion mechanism by modeling different confidence levels of information from sensors. Similarly, fuzzy logic methods allow measurements to be expressed as linguistic variables such as “high,” “medium,” or “low” rather than precise numerical values. This enables the logical integration of information from different sensors within a set of rules.
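Dempster's rule of combination for two simple mass assignments can be sketched as follows; the sensors, hypotheses, and mass values are assumptions for illustration:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments.
    Focal elements are frozensets of hypotheses; mass on the full frame
    of discernment represents ignorance."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to contradictory sets
    # Normalise by the non-conflicting mass.
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

THETA = frozenset({"friend", "hostile"})          # full frame of discernment
radar = {frozenset({"hostile"}): 0.6, THETA: 0.4}
irst  = {frozenset({"hostile"}): 0.7, frozenset({"friend"}): 0.1, THETA: 0.2}
belief = dempster_combine(radar, irst)
```

Unlike a Bayesian prior, the mass left on `THETA` explicitly models each sensor's ignorance rather than forcing it into a probability.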


In recent years, artificial intelligence-based algorithms have gained increasing importance in sensor fusion. In particular, deep learning methods can automatically extract high-level features from heterogeneous sensor data to perform fusion. For example, processing camera and LiDAR data in separate network layers and combining them in a higher-level network has achieved success in 3D object detection and environmental awareness. Additionally, ensemble methods are evaluated within the scope of decision level fusion algorithms, where outputs from multiple classifiers are combined to achieve higher accuracy.

Error Sources

The effectiveness of sensor fusion depends on the proper management of errors and uncertainties in sensor data. Measurement noise is a fundamental error source that introduces random uncertainty into nearly all sensor readings; this uncertainty is typically modeled as zero-mean Gaussian noise, and sensor fusion algorithms statistically combine multiple measurements to reduce this uncertainty. Measurement bias, on the other hand, refers to systematic deviations caused by calibration errors or environmental effects; for example, a distance sensor may consistently overstate measurements by 5 cm. Such error terms must be corrected either through pre-fusion calibration or by incorporating them into the fusion model. Additionally, each sensor has limited resolution (granularity); the inability of measurements to exceed a certain level of precision creates uncertainty, particularly in fusion tasks requiring high sensitivity.
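Bias correction followed by averaging, as described above, can be sketched with invented readings and an assumed calibration offset:

```python
import statistics

BIAS_CM = 5.0  # systematic offset found during calibration (sensor reads long)

# Repeated readings (cm) from a distance sensor: random noise around 105,
# i.e. a true distance of 100 plus the 5 cm bias.
readings = [104.2, 105.9, 105.3, 104.8, 106.1, 104.6]

# Correct the systematic bias first, then average to suppress random noise.
corrected = [r - BIAS_CM for r in readings]
estimate = statistics.mean(corrected)
```

Averaging alone would not have removed the bias; the two error types require the two distinct treatments shown here.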


Another issue arising from the use of multiple sensors concerns data consistency and inconsistency. Different sensors may sometimes produce conflicting information while observing the same event; this leads to data inconsistency or outlier problems. Fusion algorithms must incorporate techniques such as reliability weighting, outlier detection, or robust statistical methods to mitigate the impact of inconsistent data. Furthermore, it must not be forgotten that correlation may exist among multiple sensor readings; for instance, errors from two sensors measuring the same physical phenomenon may be correlated. If this is ignored, the fusion algorithm may overestimate the actual level of confidence. Therefore, the assumption of sensor independence is carefully evaluated, and if necessary, inter-sensor correlation information is incorporated into the fusion algorithm.


Time synchronization issues require special attention in sensor fusion. Different sensors may generate data at different rates or exhibit fixed or transient delays. If not corrected, this causes temporal misalignment of fused data, leading to incorrect results. For example, one sensor may report “an object is currently 10 units away,” while another sensor’s data, delayed by several milliseconds, reflects a previous state; this causes serious errors in applications tracking rapid motion. To overcome this, various methods are employed: external triggering mechanisms that synchronize all sensors to measure simultaneously, timestamping data upon reception at the center, or having each sensor timestamp its data at the moment of measurement. Additionally, interpolation/extrapolation techniques can convert data from different times to a common time reference; for instance, a sensor’s data can be shifted forward or backward using a small model to align with another sensor’s timing. Temporal disruptions also pose challenges for fusion algorithms; in such cases, algorithms must be designed to either correct the system state based on delayed measurements or disregard outdated measurements entirely.
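Shifting a delayed measurement to a common time reference with a small motion model, as mentioned above, can be sketched like this; all numbers are illustrative:

```python
def extrapolate(t_query, t_last, x_last, v_last):
    """Shift a delayed measurement to the fusion timestamp using a
    constant-velocity motion model (a deliberately simple example)."""
    return x_last + v_last * (t_query - t_last)

# Sensor B's range reading arrived 8 ms late while the target closes at
# 50 units/s; align it with the current fusion timestamp.
x_b_now = extrapolate(t_query=1.000, t_last=0.992, x_last=10.0, v_last=-50.0)
```

Without this shift, the fused estimate would mix states from two different instants, which is exactly the rapid-motion error described above.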

Application Examples

Autonomous Vehicles

One of the most common and visible applications of sensor fusion is autonomous vehicle technology. A typical autonomous vehicle is equipped with multiple cameras, radar, LiDAR, ultrasonic sensors, and positioning sensors such as GPS/IMU. Through real-time fusion of data from these diverse sensors, the vehicle perceives its environment with 360° awareness. Sensor fusion is critical in autonomous systems because blind spots or erroneous measurements from a single sensor can be compensated by data from other sensors. For instance, a camera on the vehicle excels at identifying traffic light colors or pedestrian shapes, while a LiDAR sensor provides millimeter-precision distance and depth information. The combination of these two sensors exemplifies complementary fusion.


Meanwhile, if both radar and LiDAR sensors measure the distance to an object ahead, their data are combined using competitive fusion to enhance the accuracy and reliability of distance estimation; since both sensors measure the same physical quantity in different ways, the combined result reduces error margins. Autonomous vehicle sensor fusion is used in tasks such as 3D object detection, lane tracking, environmental mapping, and tracking obstacles and other vehicles, with each additional sensor improving the system’s overall performance and safety. For example, joint evaluation of camera and radar data enables detection of a pedestrian and estimation of their speed; fusion of LiDAR and GPS/IMU data enables precise vehicle positioning and mapping. Thus, sensor fusion forms the backbone of the perception module in autonomous vehicles, and the combined information is transferred to planning and control systems to enable safe driving.

Battery Management Systems

Modern electric vehicles and energy storage systems require advanced battery management systems (BMS) to ensure safe and efficient battery operation. Sensor fusion plays an effective role in estimating battery state in this domain. For example, the state of charge (SoC) and state of health (SoH) of a lithium-ion battery cannot be reliably determined from a single voltage or current measurement. BMSs typically process voltage, current, and temperature sensor data from battery cells together. A fusion algorithm uses this multi-source data to estimate the battery’s current capacity and performance in real time.
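One simple way to blend a drift-prone estimate with a drift-free one can be sketched as a first-order complementary blend; the constants and readings are invented, and real BMSs typically use Kalman-type observers instead:

```python
def fuse_soc(soc_coulomb, soc_voltage, alpha=0.98):
    """Blend coulomb-counting SoC (smooth but drifts over time) with a
    voltage-model SoC (noisy but drift-free). alpha is an assumed tuning
    constant: high alpha trusts the smooth estimate in the short term."""
    return alpha * soc_coulomb + (1 - alpha) * soc_voltage

soc = 0.80                      # initial (drifted) state-of-charge estimate
for _ in range(100):
    soc_cc = soc - 0.0005       # coulomb counting: integrate current draw
    soc_v = 0.74                # voltage model keeps pulling toward truth
    soc = fuse_soc(soc_cc, soc_v)
```

Over many steps the small voltage-model correction cancels the coulomb-counting drift, illustrating how multi-source data stabilizes an estimate that no single sensor provides reliably.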


Through sensor fusion, measurement errors and noise are filtered, and the internal state of the battery—though not directly observable—is estimated using models. As a result, sensor fusion has become a critical technology in BMS for battery cell balancing, faulty sensor detection, fault tolerance, and lifespan prediction. For example, if a voltage sensor in one cell of an electric vehicle’s battery fails, a fusion algorithm supported by current and temperature data can continue estimating the cell’s condition by observing the behavior of other cells, ensuring the system continues to operate safely.

Spacecraft and Aviation

Sensor fusion has long been used in satellite and spacecraft navigation, target tracking, and state estimation. In space applications, limited measurement opportunities and high noise ratios make combining data from multiple sources critical. For example, in a space mission, radar or telescope data from ground-based tracking stations can be integrated with data from onboard star trackers and accelerometers through a fusion algorithm. This enables simultaneous and more accurate estimation of the spacecraft’s and nearby celestial bodies’ positions and velocities.


A similar need exists in aviation. As in space missions, sensor fusion plays a critical role in reliable navigation for aircraft. For instance, an aircraft’s inertial navigation system (INS) accumulates errors over time, while satellite positioning system (GPS) signals may be intermittent or noisy. Sensor fusion combines these two sources to provide continuous, accurate, and reliable position and velocity information. Similarly, in fighter jets, fusion of radar and infrared seeker data provides advantages unattainable with either sensor alone; radar tracks targets at long ranges, while infrared systems enable lock-on capability at close ranges.
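The INS/GPS correction described above can be sketched as a simple complementary blend; the gain and position values are assumptions for illustration, not a real avionics parameterization:

```python
def complementary_position(pos_ins, pos_gps, gps_available, k=0.05):
    """Trust the smooth INS estimate short-term, and nudge it toward GPS
    (when available) to cancel accumulated INS drift. k is an assumed
    correction gain."""
    if gps_available:
        return pos_ins + k * (pos_gps - pos_ins)
    return pos_ins  # coast on INS alone during GPS outages

pos_ins = 105.0   # INS estimate after drift has accumulated
pos_gps = 100.2   # fresh GPS fix
pos = complementary_position(pos_ins, pos_gps, gps_available=True, k=0.2)
```

During an outage the function simply returns the INS estimate, giving the continuity that GPS alone cannot guarantee.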


In modern fighter jets, such integrations are centered on avionics systems. Avionics encompass all electronic subsystems necessary for the pilot to manage flight, execute missions, and survive. These include display, control, and recording systems; navigation systems; communication systems; sensors; and mission systems. All these components are coordinated by a central mission computer to function as a unified whole.


For example, Türkiye’s National Combat Aircraft (MMU) is being designed to meet fifth-generation standards, which gives sensor fusion a critical role on this platform. In previous-generation aircraft, sensors largely operated independently; in the MMU, AESA radar, passive infrared search and track (IRST) systems, electro-optical targeting systems (EOTS), electronic warfare components, distributed aperture systems (DAS), and data links are integrated under a single mission computer.


The primary goal of fusion among these systems is to align data from different sources in time and space to reduce pilot workload. For example, while radar detects an airborne target at long range, IRST can record the target’s infrared signature; EOTS provides detailed imagery to facilitate identification; DAS provides situational awareness and missile approach warnings; and electronic warfare systems detect electromagnetic emissions that radar cannot capture. Thus, the pilot does not have to manage complex data sets; the fusion system simplifies all information into a single coherent output.


The sensor fusion system in the MMU is not designed to be limited to its own systems. The aircraft is intended to share data with unmanned aerial vehicles such as ANKA-2 and KIZILELMA and even to direct their sensors. In this way, the MMU will transcend its role as an individual combat aircraft and become a command node in network-centric operations.

Citations

  • [1]

    Timestamping is the attachment of a time label to every piece of data a sensor collects, indicating the instant the measurement was taken.

  • [2]

    Interpolation is the estimation, using mathematical methods, of missing values between two known measurements.

  • [3]

    Extrapolation is the estimation of values outside the known measurement range, using the existing trend or model.

  • [4]

    In deterministic models, the relationship between the input (sensor measurements) and the output (the system state) is defined by a single, exact formula; the same input values always produce the same result. Stochastic models, by contrast, accept that real-world measurements always contain errors and uncertainty, so the input–output relationship is treated probabilistically: even when the same input is repeated, the result may differ slightly each time due to sensor noise or environmental factors.

Author Information

Author: Beyza Nur Türkü, December 1, 2025 at 9:08 AM


