User:Bazuz/sandbox/Simultaneous localization and mapping

In navigation, robotic mapping, and odometry for virtual reality or augmented reality, simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it.[1][2][3][4] While this initially appears to be a chicken-and-egg problem, there are several algorithms known for solving it, at least approximately, in tractable time for certain environments. Popular approximate solution methods include the particle filter, extended Kalman filter, covariance intersection, and GraphSLAM.
SLAM algorithms are tailored to the available resources and hence aim not at perfection but at operational compliance. Published approaches are employed in self-driving cars, unmanned aerial vehicles, autonomous underwater vehicles, planetary rovers, newer domestic robots and even inside the human body.[5]
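As a rough illustration of how the approximate filters named above operate, the sketch below runs a toy particle filter that jointly tracks a robot's one-dimensional position and a single landmark from noisy odometry and range readings. All models, noise levels, and variable names are assumptions made for this example, not part of any published SLAM system.

```python
# A minimal, illustrative particle-filter sketch of the SLAM predict/update loop.
# Every parameter value here is an assumption chosen for the example.
import numpy as np

rng = np.random.default_rng(0)

N = 500                                          # number of particles
# Each particle jointly hypothesises the robot pose (1-D) and one landmark position.
particles = np.zeros((N, 2))                     # column 0: robot x, column 1: landmark x
particles[:, 1] = rng.uniform(0.0, 10.0, N)      # unknown landmark, broad prior
weights = np.full(N, 1.0 / N)

true_robot, true_landmark = 0.0, 7.0
motion_noise, range_noise = 0.1, 0.3

for step in range(20):
    u = 0.5                                      # control: move 0.5 units forward
    true_robot += u
    # Predict: propagate every particle through the (noisy) motion model.
    particles[:, 0] += u + rng.normal(0.0, motion_noise, N)
    # Observe: noisy range from the robot to the landmark.
    z = (true_landmark - true_robot) + rng.normal(0.0, range_noise)
    # Update: re-weight particles by the observation likelihood P(z | particle).
    expected = particles[:, 1] - particles[:, 0]
    weights *= np.exp(-0.5 * ((z - expected) / range_noise) ** 2)
    weights /= weights.sum()
    # Resample to avoid weight degeneracy.
    idx = rng.choice(N, N, p=weights)
    particles, weights = particles[idx], np.full(N, 1.0 / N)

print("robot estimate:", particles[:, 0].mean(), "landmark estimate:", particles[:, 1].mean())
```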
See also
- Computational photography
- Visual odometry
- Kalman filter
- Inverse depth parametrization
- List of SLAM Methods
- The Mobile Robot Programming Toolkit (MRPT) project: A set of open-source, cross-platform libraries covering SLAM through particle filtering and Kalman Filtering.
- Monte Carlo localization
- Multi Autonomous Ground-robotic International Challenge: A $1.6 million international challenge requiring multiple vehicles to collaboratively map a large area
- Neato Robotics
- Particle filter
- Project Tango
- Robotic mapping
- Stanley, a DARPA Grand Challenge vehicle winner using SLAM techniques
- Stereophotogrammetry
- Structure from motion
- 3D Scanner
History
A seminal work in SLAM is the research of R.C. Smith and P. Cheeseman on the representation and estimation of spatial uncertainty in 1986.[6][7] Other pioneering work in this field was conducted by the research group of Hugh F. Durrant-Whyte in the early 1990s,[8] which showed that solutions to SLAM exist in the infinite data limit. This finding motivates the search for algorithms which are computationally tractable and approximate the solution.
The self-driving cars STANLEY and JUNIOR, led by Sebastian Thrun, won the DARPA Grand Challenge and came second in the DARPA Urban Challenge in the 2000s; both included SLAM systems, bringing SLAM to worldwide attention. Mass-market SLAM implementations can now be found in consumer robot vacuum cleaners.[9]
Sensors
SLAM will always use several different types of sensors, and the powers and limits of various sensor types have been a major driver of new algorithms.[11] Statistical independence is the mandatory requirement to cope with metric bias and with noise in measurements. Different types of sensors give rise to different SLAM algorithms whose assumptions are most appropriate to the sensors. At one extreme, laser scans or visual features provide details of many points within an area, sometimes rendering SLAM inference unnecessary because shapes in these point clouds can be easily and unambiguously aligned at each step via image registration. At the opposite extreme, tactile sensors are extremely sparse as they contain only information about points very close to the agent, so they require strong prior models to compensate in purely tactile SLAM. Most practical SLAM tasks fall somewhere between these visual and tactile extremes.
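As an example of the dense end of this spectrum, the following sketch aligns two overlapping 2D scans with a basic point-to-point ICP step (nearest-neighbour matching followed by a Kabsch rotation/translation fit). The scan data, iteration count, and function name are made up for illustration; this is not a production scan-matcher.

```python
# A minimal 2-D scan-alignment (point-to-point ICP) sketch, assuming the two scans
# already overlap closely enough for nearest-neighbour correspondences to be usable.
import numpy as np

def icp_step(source, target):
    """One rigid-alignment step: match nearest points, then solve for R, t via SVD."""
    # Nearest-neighbour correspondences (brute force, fine for small scans).
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[d.argmin(axis=1)]
    # Optimal rotation/translation between matched sets (Kabsch algorithm).
    src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_c).T @ (matched - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return source @ R.T + t, R, t

# Example: a scan rotated by 10 degrees and shifted, re-aligned over a few iterations.
rng = np.random.default_rng(1)
target = rng.uniform(-5, 5, (100, 2))
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
source = target @ R_true.T + np.array([0.3, -0.2])
for _ in range(20):
    source, R, t = icp_step(source, target)
print("residual:", np.linalg.norm(source - target))
```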
Sensor models divide broadly into landmark-based and raw-data approaches. Landmarks are uniquely identifiable objects in the world whose location can be estimated by a sensor, such as wifi access points or radio beacons. Raw-data approaches make no assumption that landmarks can be identified, and instead model $P(o_t \mid x_t)$ directly as a function of the location.
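A minimal sketch of what such an observation model might look like for the landmark-based case is given below: a Gaussian range-bearing likelihood P(o_t | x_t) for one identified landmark. The noise parameters, function name, and example values are illustrative assumptions.

```python
# An illustrative observation model P(o_t | x_t) for a landmark-based sensor:
# the likelihood of a range-bearing measurement to a known, identified landmark,
# with independent Gaussian noise on range and bearing.
import numpy as np

def landmark_likelihood(obs, pose, landmark, sigma_r=0.2, sigma_b=0.05):
    """P(o_t | x_t) for one identified landmark.

    obs      : (range, bearing) measurement
    pose     : robot state (x, y, heading)
    landmark : known landmark position (x, y)
    """
    dx, dy = landmark[0] - pose[0], landmark[1] - pose[1]
    expected_r = np.hypot(dx, dy)
    expected_b = np.arctan2(dy, dx) - pose[2]
    err_r = obs[0] - expected_r
    err_b = (obs[1] - expected_b + np.pi) % (2 * np.pi) - np.pi   # wrap angle error
    return (np.exp(-0.5 * (err_r / sigma_r) ** 2) / (sigma_r * np.sqrt(2 * np.pi)) *
            np.exp(-0.5 * (err_b / sigma_b) ** 2) / (sigma_b * np.sqrt(2 * np.pi)))

# A raw-data model would instead score the whole measurement (e.g. a scan or an
# image patch) against the map at the hypothesised pose, with no landmark identity.
print(landmark_likelihood((5.0, 0.1), (0.0, 0.0, 0.0), (5.0, 0.5)))
```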
Optical sensors may be one-dimensional (single beam) or 2D (sweeping) laser rangefinders, 3D high-definition LiDAR, 3D flash LiDAR, 2D or 3D sonar sensors, and one or more 2D cameras.[11] Since 2005, there has been intense research into VSLAM (visual SLAM) using primarily visual (camera) sensors, because of the increasing ubiquity of cameras such as those in mobile devices.[12] Visual and LiDAR sensors are informative enough to allow for landmark extraction in many cases. Other recent forms of SLAM include tactile SLAM[13] (sensing by local touch only), radar SLAM,[14] acoustic SLAM,[15] and wifi-SLAM (sensing by strengths of nearby wifi access points).[16] Recent approaches apply quasi-optical wireless ranging for multi-lateration (RTLS) or multi-angulation in conjunction with SLAM, to cope with erratic wireless measurements. A kind of SLAM for human pedestrians uses a shoe-mounted inertial measurement unit as the main sensor and relies on the fact that pedestrians are able to avoid walls to automatically build floor plans of buildings, which can then be used by an indoor positioning system.[17]
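For instance, the multi-lateration step mentioned above can be sketched as a linear least-squares fit of a position to measured distances from beacons at known locations. The beacon layout, noise, and function name below are invented for the example.

```python
# A hedged sketch of range-based multi-lateration (RTLS-style): estimate a 2-D
# position from distances to beacons at known locations via linear least squares.
import numpy as np

def multilaterate(anchors, ranges):
    """Least-squares position from ranges to >= 3 non-collinear anchors."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0 = anchors[0]
    # Subtracting the first range equation from the others linearises the problem.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:] ** 2, axis=1) - x0 ** 2 - y0 ** 2
         + ranges[0] ** 2 - ranges[1:] ** 2)
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = np.array([3.0, 4.0])
ranges = [np.linalg.norm(true_pos - np.array(a)) + 0.05 for a in anchors]  # noisy ranges
print(multilaterate(anchors, ranges))   # close to (3, 4)
```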
For some outdoor applications, the need for SLAM has been almost entirely removed due to high-precision differential GPS sensors. From a SLAM perspective, these may be viewed as location sensors whose likelihoods are so sharp that they completely dominate the inference. However, GPS sensors may occasionally fail entirely or degrade in performance, especially during times of military conflict, which are of particular interest to some robotics applications.
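A small worked example of this dominating-likelihood effect, under simple one-dimensional Gaussian assumptions with illustrative variances:

```python
# Fusing a broad SLAM position belief with a sharp GPS-like measurement
# (a standard 1-D Kalman measurement update). Variances are illustrative.
prior_mean, prior_var = 12.0, 4.0      # position belief from SLAM (metres, metres^2)
gps_mean, gps_var = 10.0, 0.01         # differential-GPS fix, centimetre-level variance

k = prior_var / (prior_var + gps_var)          # Kalman gain, roughly 0.9975
post_mean = prior_mean + k * (gps_mean - prior_mean)
post_var = (1.0 - k) * prior_var

print(post_mean, post_var)   # ~10.005, ~0.01: essentially the GPS value
```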
Localization and mapping as separate problems
The two main approaches
Filtering
Graph-based
The impact of deep learning
Optional: mathematical formulation
Given a series of controls $u_t$ and sensor observations $o_t$ over discrete time steps $t$, the SLAM problem is to compute an estimate of the agent's state $x_t$ and a map of the environment $m_t$. All quantities are usually probabilistic, so the objective is to compute
$$P(m_{t+1},\,x_{t+1} \mid o_{1:t+1},\,u_{1:t})$$
Applying Bayes' rule gives a framework for sequentially updating the location posteriors, given a map and a transition function $P(x_t \mid x_{t-1})$,
$$P(x_t \mid o_{1:t},u_{1:t},m_t) = \sum_{m_{t-1}} P(o_t \mid x_t,m_t,u_{1:t}) \sum_{x_{t-1}} P(x_t \mid x_{t-1}) \, P(x_{t-1} \mid m_t,o_{1:t-1},u_{1:t}) / Z$$
Similarly the map can be updated sequentially by
$$P(m_t \mid x_t,o_{1:t},u_{1:t}) = \sum_{x_t} \sum_{m_t} P(m_t \mid x_t,m_{t-1},o_t,u_{1:t}) \, P(m_{t-1},x_t \mid o_{1:t-1},m_{t-1},u_{1:t})$$
Like many inference problems, the solution for the two variables jointly can be found, to a local optimum, by alternating updates of the two beliefs in a form of the expectation-maximization (EM) algorithm, as sketched below.
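A hedged sketch of this alternating scheme on a toy one-dimensional grid world is given below. The world size, motion model, and sensor error rate are arbitrary assumptions, and the update rules are only a simple approximation of the posteriors above, not a faithful implementation of any published SLAM algorithm.

```python
# Alternating (EM-style) updates: refine the pose belief given the current map,
# then refine the map given the pose belief, on a toy 1-D grid world.
import numpy as np

rng = np.random.default_rng(2)
n_cells = 20
true_map = rng.random(n_cells) < 0.4             # binary feature in each cell
true_pose, p_hit = 0, 0.9                        # sensor is correct 90% of the time

pose_belief = np.zeros(n_cells); pose_belief[0] = 1.0
map_belief = np.full(n_cells, 0.5)               # P(feature present) per cell

for step in range(40):
    # Move one cell right (wrapping), with a 10% chance of slipping in place.
    slipped = rng.random() < 0.1
    true_pose = (true_pose + (0 if slipped else 1)) % n_cells
    pose_belief = 0.9 * np.roll(pose_belief, 1) + 0.1 * pose_belief   # motion update

    # Observe the feature at the current cell, flipped with probability 1 - p_hit.
    correct = rng.random() < p_hit
    z = bool(true_map[true_pose]) == correct

    # Localisation step: re-weight the pose belief by P(z | cell) under the current map.
    p_z_given_cell = map_belief if z else (1.0 - map_belief)
    likelihood = p_hit * p_z_given_cell + (1 - p_hit) * (1 - p_z_given_cell)
    pose_belief = pose_belief * likelihood
    pose_belief /= pose_belief.sum()

    # Mapping step: per-cell Bayes update, weighted by the probability the robot is there.
    obs_term = p_hit if z else (1 - p_hit)
    map_belief = (1 - pose_belief) * map_belief + pose_belief * (
        obs_term * map_belief / (obs_term * map_belief + (1 - obs_term) * (1 - map_belief)))

print("map error:", np.abs(map_belief - true_map).mean())
```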
References
- ^ Durrant-Whyte, H.; Bailey, T. (2006). "Simultaneous localization and mapping: part I". IEEE Robotics & Automation Magazine. 13 (2): 99–110. CiteSeerX 10.1.1.135.9810. doi:10.1109/mra.2006.1638022. ISSN 1070-9932.
- ^ Bailey, T.; Durrant-Whyte, H. (2006). "Simultaneous localization and mapping (SLAM): part II". IEEE Robotics & Automation Magazine. 13 (3): 108–117. doi:10.1109/mra.2006.1678144. ISSN 1070-9932.
- ^ Cadena, Cesar; Carlone, Luca; Carrillo, Henry; Latif, Yasir; Scaramuzza, Davide; Neira, Jose; Reid, Ian; Leonard, John J. (2016). "Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age". IEEE Transactions on Robotics. 32 (6): 1309–1332. arXiv:1606.05830. Bibcode:2016arXiv160605830C. doi:10.1109/tro.2016.2624754. hdl:2440/107554. ISSN 1552-3098.
- ^ Perera, Samunda; Barnes, Nick; Zelinsky, Alexander (2014), Ikeuchi, Katsushi (ed.), "Exploration: Simultaneous Localization and Mapping (SLAM)", Computer Vision: A Reference Guide, Springer US, pp. 268–275, doi:10.1007/978-0-387-31439-6_280, ISBN 9780387314396
- ^ Mountney, P.; Stoyanov, D.; Davison, A.; Yang, G-Z. (2006). "Simultaneous Stereoscope Localization and Soft-Tissue Mapping for Minimal Invasive Surgery". MICCAI. Lecture Notes in Computer Science. 1 (Pt 1): 347–354. doi:10.1007/11866565_43. ISBN 978-3-540-44707-8. PMID 17354909. Retrieved 2010-07-30.
- ^ Smith, R.C.; Cheeseman, P. (1986). "On the Representation and Estimation of Spatial Uncertainty" (PDF). The International Journal of Robotics Research. 5 (4): 56–68. doi:10.1177/027836498600500404. Retrieved 2008-04-08.
- ^ Smith, R.C.; Self, M.; Cheeseman, P. (1986). "Estimating Uncertain Spatial Relationships in Robotics" (PDF). Proceedings of the Second Annual Conference on Uncertainty in Artificial Intelligence. UAI '86. University of Pennsylvania, Philadelphia, PA, USA: Elsevier. pp. 435–461. Archived from the original (PDF) on 2010-07-02.
- ^ Leonard, J.J.; Durrant-Whyte, H.F. (1991). "Simultaneous map building and localization for an autonomous mobile robot". Proceedings IROS '91: IEEE/RSJ International Workshop on Intelligent Robots and Systems. pp. 1442–1447. doi:10.1109/IROS.1991.174711. ISBN 978-0-7803-0067-5. Retrieved 2008-04-08.
- ^ Knight, Will. "With a Roomba Capable of Navigation, iRobot Eyes Advanced Home Robots". MIT Technology Review. Retrieved 2018-04-25.
External links
- Probabilistic Robotics by Sebastian Thrun, Wolfram Burgard and Dieter Fox with a clear overview of SLAM.
- SLAM For Dummies (A Tutorial Approach to Simultaneous Localization and Mapping).
- Andrew Davison's research page at the Department of Computing, Imperial College London, about SLAM using vision.
- openslam.org: A good collection of open-source code and explanations of SLAM.
- Matlab Toolbox of Kalman Filtering applied to Simultaneous Localization and Mapping: vehicle moving in 1D, 2D and 3D.
- FootSLAM research page at DLR, including the related Wifi SLAM and PlaceSLAM approaches.
- SLAM lecture: Online SLAM lecture based on Python.