Sensor fusion is "the combining of sensory data or data derived from sensory data such that the resulting information is in some sense better than would be possible when these sources were used individually" (W. Elmenreich, Sensor Fusion in Time-Triggered Systems, p. 8).
But not all sensor fusion applications are of the same kind or achieve the same benefits. There are three basic models of how the data from multiple sensors can be fused:
- A sensor configuration is called complementary if the sensors do not directly depend on each other but can be combined to give a more complete image of the phenomenon under observation. This resolves the incompleteness of sensor data. An example of a complementary configuration is the use of multiple cameras, each observing disjoint parts of a room. Fusing complementary data is generally easy, since the data from the independent sensors can simply be appended to each other.
- In a competitive configuration, each sensor delivers an independent measurement of the same property. Competitive configurations are used to build fault-tolerant and robust systems. An example is the reduction of noise by combining two overlapping camera images.
- A cooperative sensor network uses the information provided by two independent sensors to derive information that would not be available from either sensor alone. An example of a cooperative configuration is stereoscopic vision: by combining two-dimensional images from two cameras at slightly different viewpoints, a three-dimensional image of the observed scene is derived.
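The three configurations can be sketched in a few lines of Python. This is a minimal illustration, not an implementation from the cited works; all sensor values and camera parameters below are invented for the example.

```python
import statistics

# Complementary: cameras observing disjoint parts of a room.
# Fusion is simply appending the independent views to each other.
left_half_of_room = ["person at (1, 2)"]
right_half_of_room = ["chair at (7, 3)"]
full_room_view = left_half_of_room + right_half_of_room  # more complete image

# Competitive: redundant sensors measure the same property.
# Averaging reduces noise; the median additionally tolerates an outlier.
redundant_readings = [20.1, 19.9, 20.3]          # e.g. three thermometers
fused_mean = sum(redundant_readings) / len(redundant_readings)
fused_median = statistics.median(redundant_readings)

# Cooperative: stereo vision. Depth is only derivable from BOTH
# cameras together, via the disparity of a feature seen in each image
# (pinhole model: depth = focal_length * baseline / disparity).
focal_length_px = 700.0   # assumed camera focal length, in pixels
baseline_m = 0.12         # assumed distance between the two cameras
x_left, x_right = 310.0, 268.0   # pixel column of the same feature
disparity = x_left - x_right
depth_m = focal_length_px * baseline_m / disparity
```

The competitive case shows why redundancy helps robustness: any single reading may be noisy, but the fused value is closer to the true quantity. The cooperative case shows the opposite dependency: neither camera alone carries any depth information at all.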
- W. Elmenreich. Sensor Fusion in Time-Triggered Systems. PhD thesis, Institut für Technische Informatik, 2002.
- H. F. Durrant-Whyte. Sensor Models and Multisensor Integration. International Journal of Robotics Research, 7(6):97–113, Dec. 1988.