Today I read a paper titled “Noise in Structured-Light Stereo Depth Cameras: Modeling and its Applications”
My initial thoughts: I’ve done extensive experiments with the Kinect, and one of the things that has always annoyed me is just how darn noisy the RGBD data coming off the device is. I’ve developed some algorithms that denoise it, but how well your software works still depends heavily on environmental conditions. Light bloom from an outside window can totally throw off your application and render it all but useless during a critical demo (trust me, I know this first hand).
I think I can make use of some of the denoising techniques covered in this paper, combined with a 3D plane segmentation algorithm.
The abstract is:
Depth maps obtained from commercially available structured-light stereo based depth cameras, such as the Kinect, are easy to use but are affected by significant amounts of noise.
This paper is devoted to a study of the intrinsic noise characteristics of such depth maps, i.e. the standard deviation of noise in estimated depth varies quadratically with the distance of the object from the depth camera.
We validate this theoretical model against empirical observations and demonstrate the utility of this noise model in three popular applications: depth map denoising, volumetric scan merging for 3D modeling, and identification of 3D planes in depth maps.
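To get a feel for how that quadratic noise model could be put to work, here’s a minimal sketch of one of the applications the abstract mentions: merging multiple aligned depth frames with inverse-variance weighting, so that noisier far-range measurements count for less. The coefficients `a` and `b` are illustrative placeholders I made up, not values from the paper, and `fuse_depth_frames` is my own hypothetical helper, not the authors’ implementation.

```python
import numpy as np

def depth_noise_std(z_m, a=1.5e-5, b=3.0e-3):
    """Quadratic noise model: sigma(z) = a + b * z^2.

    z_m is depth in meters; a and b are illustrative placeholder
    coefficients, not the ones fitted in the paper.
    """
    return a + b * np.square(z_m)

def fuse_depth_frames(frames):
    """Fuse aligned depth frames by inverse-variance weighting.

    Each measurement is weighted by 1 / sigma(z)^2, so near-range
    (low-noise) samples dominate the fused estimate.
    """
    frames = np.asarray(frames, dtype=float)
    weights = 1.0 / np.square(depth_noise_std(frames))
    return (weights * frames).sum(axis=0) / weights.sum(axis=0)
```

For example, fusing a 1 m reading with a 3 m reading of the same voxel pulls the result strongly toward the 1 m measurement, since noise at 3 m is roughly nine times larger under the quadratic model.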