Today I read a paper titled “Creating Simplified 3D Models with High Quality Textures”.
Initial thoughts: More Kinect research. It’s a fun little toy, and hooking up multiple Kinects gives me a full 360-degree scanning system with very little occlusion. I have been integrating the Kinect scanners into some HoloLens AR experiments where I process the Kinect data off-board on a workstation, then feed it to the HoloLens via a dedicated data server so that I can overlay a transformational image onto the user’s body. Think of it like being able to see yourself wearing an Iron Man suit, or that scene in Blade Runner 2049 where the digital waifu overlays herself onto the human sex worker for a weird Replicant-hologram-human three-way.
Combining the ideas from this paper with the observer-motion-prediction ideas from another Kinect paper I am working through opens up a huge opportunity to “body match” the hologram that the HoloLens is displaying with the human user. It is going to take some work, but I think I am on to something here.
This paper is only tangentially related, because what I am trying to do is scan the user (without a lot of noise), get high-resolution, non-occluded data, process it on a “lots of cores” workstation with a couple of high-end GPUs, and then feed that pre-processed model data into the HoloLens for real-time holographic overlay and real-time model distortion of the user. Basically, I am making a “digital fat suit/fun house mirror” that mimics the human as they move around.
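To make the pipeline shape concrete, here is a minimal sketch of the off-board half, assuming a hypothetical `process_frame` stub standing in for the real GPU fusion work and a length-prefixed wire format of my own invention (the HoloLens side would need a matching client; none of this is from the paper):

```python
# Sketch of the workstation side of the pipeline, NOT the paper's code.
# process_frame, PORT, and the wire format are all placeholders.
import socket
import struct

import numpy as np

PORT = 9000  # arbitrary choice


def process_frame() -> np.ndarray:
    """Stand-in for the real work: fuse Kinect depth frames on the
    workstation GPUs and return a decimated, textured mesh. Here it
    just fabricates a single triangle as an (N, 3) float32 array."""
    return np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=np.float32)


def serve():
    with socket.create_server(("0.0.0.0", PORT)) as srv:
        conn, _ = srv.accept()  # HoloLens client connects here
        with conn:
            verts = process_frame()
            payload = verts.tobytes()
            # Length-prefix each mesh so the client knows where it ends.
            conn.sendall(struct.pack("!I", len(payload)) + payload)


if __name__ == "__main__":
    serve()
```

The point of the length prefix is that the overlay loop on the headset can pull complete meshes off the socket without any parsing ambiguity, which matters once I am pushing a new distorted model every few frames.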
The abstract is:
This paper presents an extension to the KinectFusion algorithm which allows creating simplified 3D models with high quality RGB textures.
This is achieved through (i) creating model textures using images from an HD RGB camera that is calibrated with the Kinect depth camera, (ii) using a modified scheme to update model textures in an asymmetrical colour volume that contains a higher number of voxels than that of the geometry volume, (iii) simplifying the dense polygon mesh model using a quadric-based mesh decimation algorithm, and (iv) creating and mapping 2D textures to every polygon in the output 3D model.
The proposed method is implemented in real-time by means of GPU parallel processing.
Visualization via ray casting of both geometry and colour volumes provides users with real-time feedback of the currently scanned 3D model.
Experimental results show that the proposed method is capable of keeping the model texture quality even for a heavily decimated model and that, when reconstructing small objects, photorealistic RGB textures can still be reconstructed.
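To unpack point (i): once the HD RGB camera is calibrated against the depth camera, every depth sample can be back-projected into 3D and re-projected into the RGB image to fetch a texture sample. A minimal sketch of that registration step, with made-up intrinsics and extrinsics (real values would come from a stereo calibration of the two cameras):

```python
import numpy as np

# Made-up intrinsics/extrinsics purely for illustration; real values
# come from calibrating the depth and HD RGB cameras against each other.
K_depth = np.array([[365.0, 0, 256.0],
                    [0, 365.0, 212.0],
                    [0, 0, 1.0]])
K_rgb = np.array([[1050.0, 0, 960.0],
                  [0, 1050.0, 540.0],
                  [0, 0, 1.0]])
R = np.eye(3)                    # depth-to-RGB rotation
t = np.array([0.052, 0.0, 0.0])  # depth-to-RGB translation (metres)


def depth_pixel_to_rgb(u, v, z):
    """Back-project depth pixel (u, v) at depth z (metres), transform
    into the RGB camera frame, and project into the HD RGB image."""
    p_depth = z * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    p_rgb = R @ p_depth + t
    uv = K_rgb @ p_rgb
    return uv[:2] / uv[2]  # RGB pixel coordinates to sample texture from


print(depth_pixel_to_rgb(256, 212, 1.5))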
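Point (ii)’s asymmetrical volumes read, to me, as two grids covering the same physical cube but subdivided differently, so colour detail survives even where geometry is coarse. A sketch of that indexing idea, with illustrative resolutions I picked myself:

```python
import numpy as np

# Geometry (TSDF) and colour volumes cover the same physical cube,
# but the colour grid is subdivided more finely so texture detail
# outlives geometric simplification. Numbers below are my guesses.
VOLUME_SIZE_M = 3.0  # cube edge length in metres
GEOM_RES = 256       # geometry voxels per edge
COLOR_RES = 512      # colour voxels per edge (higher)


def voxel_index(p_world, res):
    """World point (metres, inside the cube) -> integer voxel index."""
    idx = np.floor(np.asarray(p_world) / VOLUME_SIZE_M * res).astype(int)
    return np.clip(idx, 0, res - 1)


p = (1.0, 0.5, 2.0)
print(voxel_index(p, GEOM_RES))   # coarse geometry cell
print(voxel_index(p, COLOR_RES))  # finer colour cell, same location
```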
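And for point (iii), the standard quadric error metric (Garland and Heckbert’s construction) is the textbook core of quadric-based decimation: each triangle contributes a plane quadric to its vertices, and the cost of collapsing a vertex to a position v is vᵀQv. This is the textbook version, not code from the paper:

```python
import numpy as np

# Core of quadric-based decimation (Garland & Heckbert style): each
# triangle contributes a plane quadric K = p p^T to its vertices; the
# cost of placing a merged vertex at v is v^T Q v.
def plane_quadric(a, b, c):
    """4x4 quadric for the plane through triangle (a, b, c)."""
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    d = -n @ a
    p = np.append(n, d)  # plane as (nx, ny, nz, d)
    return np.outer(p, p)


def collapse_cost(Q, v):
    """Squared plane-distance error of placing a vertex at v."""
    vh = np.append(v, 1.0)
    return vh @ Q @ vh


a, b, c = (np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0]))
Q = plane_quadric(a, b, c)
print(collapse_cost(Q, np.array([0.2, 0.2, 0.0])))  # on the plane: ~0
print(collapse_cost(Q, np.array([0.2, 0.2, 0.5])))  # off the plane: 0.25
```

The decimator accumulates these quadrics per vertex and repeatedly collapses the cheapest edge, which is why the paper can throw away most of the polygons while the per-polygon 2D textures from point (iv) keep the surface looking detailed.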