Simultaneous localization and mapping
Simultaneous localization and mapping (SLAM) is a technique used by robots and autonomous vehicles to build a map of an unknown environment while simultaneously keeping track of their own position within it. This is not as straightforward as it might sound, owing to the inherent uncertainty in estimating the robot's relative movement from its various sensors.
If, at the next iteration of map building, the measured distance and direction travelled have a slight inaccuracy, then any features added to the map will contain corresponding errors. If unchecked, these positional errors build cumulatively, grossly distorting the map and therefore the robot's ability to know its precise location. There are various techniques to compensate for this, such as recognising features that the robot has come across previously and re-skewing recent parts of the map to make sure the two instances of that feature become one. Some of the statistical techniques used in SLAM include Kalman filters, particle filters (also known as sequential Monte Carlo methods) and scan matching of range data. Pioneering work in this field was conducted by the research group of Hugh F. Durrant-Whyte.
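The role of a Kalman filter in this process can be illustrated with a one-dimensional sketch: odometry uncertainty grows with every dead-reckoning step, and re-observing a known feature shrinks it again. All numbers (step noise, landmark position, measurement variance) are illustrative, not taken from any particular SLAM system.

```python
def kalman_predict(x, P, u, Q):
    """Apply odometry motion u with process-noise variance Q;
    uncertainty P grows with every step of dead reckoning."""
    return x + u, P + Q

def kalman_update(x, P, z, R):
    """Fuse the estimate (mean x, variance P) with a landmark
    re-observation z of variance R; returns the corrected estimate."""
    K = P / (P + R)          # Kalman gain: how much to trust the measurement
    x_new = x + K * (z - x)  # pull the estimate toward the observation
    P_new = (1 - K) * P      # fused uncertainty is always smaller than P
    return x_new, P_new

# Dead-reckon 10 steps of 1.0 m each; variance grows by 0.04 per step.
x, P = 0.0, 0.0
for _ in range(10):
    x, P = kalman_predict(x, P, 1.0, 0.04)
# Accumulated variance is now 0.4. Re-sighting a known landmark at
# 10.3 m with measurement variance 0.01 collapses the uncertainty.
x, P = kalman_update(x, P, 10.3, 0.01)
```

The key behaviour is that the corrected variance depends only on how the two uncertainties compare, which is why a single confident loop-closing observation can undo the drift of many odometry steps.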
Much of the SLAM work is based on concepts imported from computer vision, where an important goal is to track moving targets in visually cluttered environments. For example, the particle filter-based Condensation algorithm (ECCV, 1996) by Michael Isard and vision pioneer Andrew Blake uses a probabilistic (Bayesian) framework to achieve this goal. It can readily be adapted to mobile robot applications, where the closely related goal is to track a moving robot.
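The predict-weight-resample cycle shared by Condensation and robot-tracking particle filters can be sketched in a few lines. This is a toy one-dimensional stand-in, not Isard and Blake's implementation; the motion model, noise level, and particle count are all assumptions chosen for illustration.

```python
import math
import random

def particle_filter_step(particles, motion, measurement, noise=0.2):
    """One predict/weight/resample cycle tracking a 1-D position."""
    # Predict: propagate each particle through a noisy motion model.
    moved = [p + motion + random.gauss(0, noise) for p in particles]
    # Weight: score each particle by the likelihood of the measurement
    # under a Gaussian sensor model.
    weights = [math.exp(-((p - measurement) ** 2) / (2 * noise ** 2))
               for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0, 5) for _ in range(500)]
# The robot moves 1.0 per step and is measured near its true position.
for step in range(1, 11):
    particles = particle_filter_step(particles, motion=1.0,
                                     measurement=float(step))
estimate = sum(particles) / len(particles)  # posterior mean, near 10
```

Because the posterior is represented by the particle cloud itself, the same machinery handles the multi-modal, cluttered distributions that defeat a single-Gaussian Kalman filter.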
SLAM in the mobile robotics community generally refers to the process of creating geometrically accurate maps of the environment. Topological maps are another method of environment representation, capturing the connectivity (i.e. topology) of the environment rather than its precise geometry. As a result, algorithms that create topological maps are not referred to as SLAM.
SLAM has not yet been fully perfected, but it is starting to be employed in unmanned aerial vehicles, autonomous underwater vehicles, planetary rovers and newly emerging domestic robots.
SLAM implementations usually use laser range finders or sonar sensors to build the map. However, VSLAM (visual simultaneous localization and mapping) uses entirely visual means.
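How range readings become a map can be sketched with a toy one-dimensional occupancy grid: cells a beam passes through gather evidence of being free, and the cell where it terminates gathers evidence of being occupied. The log-odds increments and grid size here are illustrative assumptions, not values from any real sensor model.

```python
# Toy log-odds occupancy grid updated from simulated range readings.
GRID_SIZE = 20
log_odds = [0.0] * GRID_SIZE   # 0 = unknown; > 0 occupied; < 0 free

def integrate_range(robot_cell, measured_range):
    """Mark cells along the beam as free and its endpoint as occupied."""
    hit = robot_cell + measured_range
    for c in range(robot_cell, min(hit, GRID_SIZE)):
        log_odds[c] -= 0.4         # evidence of free space along the beam
    if hit < GRID_SIZE:
        log_odds[hit] += 0.9       # evidence of an obstacle at the endpoint

# A robot at cell 0 repeatedly measures a wall 12 cells away.
for _ in range(5):
    integrate_range(0, 12)
occupied = [c for c, v in enumerate(log_odds) if v > 0]  # → [12]
```

Accumulating evidence additively in log-odds is what lets repeated noisy scans reinforce each other instead of the map flipping with every reading.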
See also
- Kalman filter
- Particle filter
- Registration of range images
- vSLAM