Sensor fusion
Sensor fusion is the combining of sensory data, or data derived from sensory data, from disparate sources such that the resulting information is in some sense better than would be possible if these sources were used individually. The term better in this case can mean more accurate, more complete, or more dependable, or can refer to the result of an emerging view, such as stereoscopic vision (the calculation of depth information by combining two-dimensional images from two cameras at slightly different viewpoints).[1]
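As an illustration of the stereoscopic-vision example, the sketch below triangulates depth from the disparity between a matched pixel pair in two rectified views. The focal length and baseline values are hypothetical assumptions, and a real pipeline would first rectify the images and match features.

```python
# Minimal sketch of the stereo-depth idea above: two 2D views fused into
# depth. Assumes an idealized, already-rectified camera pair; the focal
# length and baseline defaults are hypothetical.

def depth_from_disparity(disparity_px: float,
                         focal_length_px: float = 700.0,
                         baseline_m: float = 0.12) -> float:
    """Triangulate depth Z = f * B / d for one matched pixel pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_length_px * baseline_m / disparity_px

# A point that shifts 35 px between the left and right images:
print(depth_from_disparity(35.0))  # -> 2.4 (metres)
```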
The data sources for a fusion process are not required to originate from identical sensors. One can distinguish direct fusion, indirect fusion, and fusion of the outputs of the former two. Direct fusion is the fusion of sensor data from a set of heterogeneous or homogeneous sensors, soft sensors, and history values of sensor data, while indirect fusion uses information sources such as a priori knowledge about the environment and human input.
Sensor fusion is also known as (multi-sensor) data fusion and is a subset of information fusion.
Examples of sensors
- Radar
- Sonar and other acoustic
- Infra-red / thermal imaging camera
- TV cameras
- Sonobuoys
- Seismic sensors
- Magnetic sensors
- Electronic Support Measures (ESM)
- Phased array
- MEMS
- Accelerometers
Sensor fusion algorithms
Sensor fusion is a term that covers a number of methods and algorithms, including:
- Central limit theorem
- Kalman filter
- Bayesian networks
- Dempster–Shafer theory
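As a concrete illustration of one entry in this list, the sketch below runs a scalar Kalman filter that fuses readings of the same quantity from two sensors of differing quality. All state, noise, and measurement values are illustrative assumptions, not part of any standard.

```python
# Hedged sketch of scalar Kalman-filter cycles, the fusion workhorse from
# the list above. A random-walk state model is assumed for simplicity.

def kalman_step(x_est: float, p_est: float,
                z: float, r: float,
                q: float = 1e-3) -> tuple[float, float]:
    """One predict/update cycle for a scalar random-walk state.

    x_est, p_est : prior state estimate and its variance
    z, r         : sensor measurement and its noise variance
    q            : process-noise variance (random-walk drift)
    """
    # Predict: the assumed state model is x_k = x_{k-1} + noise(q)
    p_pred = p_est + q
    # Update: blend prediction and measurement in proportion to their variances
    k = p_pred / (p_pred + r)            # Kalman gain in [0, 1]
    x_new = x_est + k * (z - x_est)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Fuse readings of the same quantity from two sensors of differing quality:
x, p = 0.0, 1.0                          # vague prior
x, p = kalman_step(x, p, z=10.3, r=0.5)  # noisy sensor
x, p = kalman_step(x, p, z=10.1, r=0.1)  # precise sensor
print(round(x, 2), round(p, 4))          # -> 9.36 0.077
```

The gain `k` is what makes this "fusion" rather than averaging: each measurement is weighted by how trustworthy it is relative to the current estimate, so the precise sensor moves the estimate more than the noisy one.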
Centralized versus decentralized
In sensor fusion, centralized versus decentralized refers to where the fusion of the data occurs. In centralized fusion, the clients simply forward all of the data to a central location, where some entity is responsible for correlating and fusing the data. In decentralized fusion, the clients take full responsibility for fusing the data themselves. "In this case, every sensor or platform can be viewed as an intelligent asset having some degree of autonomy in decision-making."[2]
Multiple combinations of centralized and decentralized systems exist.
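The contrast can be sketched with a simple fusion rule, inverse-variance weighting. In the hypothetical example below, a centralized node fuses all raw readings at once, while in the decentralized variant each platform fuses locally and shares only its already-fused estimate; the sensor readings and variances are invented for illustration.

```python
# Illustrative contrast of the two topologies described above, using
# inverse-variance weighting as the fusion rule. Readings and variances
# are hypothetical.

def fuse(readings):
    """Fuse (value, variance) pairs into a single (value, variance) estimate."""
    weights = [1.0 / var for _, var in readings]
    value = sum(w * v for w, (v, _) in zip(weights, readings)) / sum(weights)
    return value, 1.0 / sum(weights)

# Centralized: every client forwards its raw data to one fusion node.
raw = [(20.1, 0.4), (19.8, 0.2), (20.5, 0.8)]   # (reading, variance)
central_estimate = fuse(raw)

# Decentralized: each platform fuses locally (e.g. its own sensors) and
# shares only its fused estimate with its peers.
platform_a = fuse([(20.1, 0.4), (19.8, 0.2)])
platform_b = fuse([(20.5, 0.8)])
shared_estimate = fuse([platform_a, platform_b])

# Agree up to floating-point rounding: this particular rule is associative,
# so the topology changes communication cost, not the final estimate.
print(central_estimate, shared_estimate)
```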
Levels
There are several categories or levels of sensor fusion that are commonly used.[citation needed]
- Level 0 – Data alignment
- Level 1 – Entity assessment (e.g. signal/feature/object)
  - Tracking and object detection/recognition/identification
- Level 2 – Situation assessment
- Level 3 – Impact assessment
- Level 4 – Process refinement (i.e. sensor management)
- Level 5 – User refinement
See also
- Information integration
- Data mining
- Data fusion
- Image fusion
- Information (information is not data)
- Data (computing)
- Multimodal integration
- Fisher's method for combining independent tests of significance
- Transducer Markup Language (TML), an XML-based markup language which enables sensor fusion
- Brooks–Iyengar algorithm
- Inertial navigation system
- Sensor Grid
References
- ^ Elmenreich, W. (2002). Sensor Fusion in Time-Triggered Systems, PhD Thesis (PDF). Vienna, Austria: Vienna University of Technology. p. 173.
- ^ N. Xiong (2002). "Multi-sensor management for information fusion: issues and approaches". Information Fusion. 3(2): 163–186.
- International Society of Information Fusion
- Rethinking JDL Data Fusion Levels
- E. P. Blasch and S. Plano, "Level 5: User Refinement to aid the Fusion Process", Proc. of the SPIE, Vol. 5099, 2003.
- J. Llinas, C. Bowman, G. Rogova, A. Steinberg, E. Waltz, and F. White, "Revisiting the JDL data fusion model II", Int. Conf. on Information Fusion, 2004.
- E. Blasch, "Sensor, user, mission (SUM) resource management and their interaction with level 2/3 fusion" Int. Conf. on Information Fusion, 2006.
- http://www-prima.inrialpes.fr/Prima/Homepages/jlc/papers/SigProc-Fusion.pdf