Today I read a paper titled “Accurate 3D maps from depth images and motion sensors via nonlinear Kalman filtering”.
The abstract is:
This paper investigates the use of depth images as localization sensors for 3D map building.
The localization information is derived from the 3D data via the ICP (Iterative Closest Point) algorithm.
The covariance of the ICP estimate, and thus of the localization error, is analysed and described by a Fisher Information Matrix.
It is argued that this error can be greatly reduced if the data are fused with measurements from other motion sensors, or even with prior knowledge of the motion.
The data fusion is performed by a recently introduced variant of the extended Kalman filter, the so-called Invariant EKF, and is based directly on the estimated covariance of the ICP.
The resulting filter is very natural, and is proved to possess strong properties.
Experiments with a Kinect sensor and a three-axis gyroscope show a clear improvement in the accuracy of the localization, and thus in the accuracy of the built 3D map.
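The Fisher-information view of ICP uncertainty can be made concrete with a small sketch. Assuming a point-to-point cost and isotropic Gaussian measurement noise of standard deviation `sigma`, linearising the residuals about the converged pose (small-angle approximation) gives a Jacobian per point of `[-[p]×, I]`, and the pose covariance is approximated by the inverse of the resulting information matrix. The function name and setup below are my own illustration, not code from the paper:

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix such that skew(v) @ u = cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def icp_pose_covariance(points, sigma):
    """Approximate 6x6 covariance of an ICP pose estimate
    (rotation first, then translation) from the Fisher Information
    Matrix of point-to-point residuals, linearised at convergence."""
    n = len(points)
    J = np.zeros((3 * n, 6))
    for i, p in enumerate(points):
        J[3 * i:3 * i + 3, 0:3] = -skew(p)   # sensitivity to small rotations
        J[3 * i:3 * i + 3, 3:6] = np.eye(3)  # sensitivity to translations
    fim = J.T @ J / sigma ** 2               # Fisher Information Matrix
    return np.linalg.inv(fim)

# Example: 100 scene points, 1 cm isotropic measurement noise
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(100, 3))
cov = icp_pose_covariance(pts, sigma=0.01)
```

Note how the geometry of the point cloud enters through the skew terms: degenerate scenes (e.g. all points on a line) make the information matrix nearly singular, which is exactly the situation where fusing other sensors pays off.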
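The fusion step can be illustrated in miniature. The paper runs a full 6-DoF Invariant EKF on SE(3); the sketch below deliberately shrinks this to a scalar Kalman filter on the yaw angle, predicting with the gyroscope rate and correcting with an ICP-derived yaw weighted by its estimated variance `r`. All names and the scalar reduction are my own simplification:

```python
import numpy as np

def fuse_yaw(gyro_rates, icp_yaws, dt, q, r, yaw0=0.0, p0=1.0):
    """Scalar Kalman filter on yaw: predict by integrating the gyro
    rate (process noise q), then correct with the ICP yaw measurement
    (measurement noise r, standing in for the ICP covariance)."""
    yaw, p = yaw0, p0
    out = []
    for w, z in zip(gyro_rates, icp_yaws):
        # Predict: integrate the gyro rate, inflate the uncertainty.
        yaw += w * dt
        p += q
        # Update: blend in the ICP yaw via the Kalman gain.
        k = p / (p + r)
        yaw += k * (z - yaw)
        p *= (1.0 - k)
        out.append(yaw)
    return np.array(out)

# Noise-free sanity check: with perfect inputs the fused
# trajectory tracks the true yaw.
dt = 0.1
rates = np.full(50, 0.2)          # constant 0.2 rad/s turn
truth = np.cumsum(rates) * dt     # ground-truth yaw at each step
fused = fuse_yaw(rates, truth, dt, q=1e-4, r=1e-2)
```

The design point the abstract makes survives even in this toy: the gain `k` is governed by the ICP measurement variance, so a well-characterised ICP covariance directly controls how much the filter trusts the depth data versus the gyroscope.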