Today I read a paper titled “To Know Where We Are: Vision-Based Positioning in Outdoor Environments”
The abstract is:
Augmented reality (AR) displays have become increasingly popular in recent years, owing to their intuitiveness for humans and the rapid development of high-quality head-mounted displays.
To overlay augmented information accurately, such displays require highly accurate image registration or ego-positioning, but little attention has been paid to outdoor environments.
This paper presents a method for ego-positioning in outdoor environments with low-cost monocular cameras.
To reduce the computational and memory requirements as well as the communication overhead, we formulate model compression as a weighted k-cover problem, which better preserves model structure.
For real-world vision-based positioning applications, we address the issue of large scene changes and propose a model update algorithm to handle them.
A long-term positioning dataset spanning more than one month, with 106 sessions and 14,275 images, is constructed.
Based on both the local and up-to-date models constructed by our approach, extensive experimental results show that high positioning accuracy (mean ~30.9 cm, stdev ~15.4 cm) can be achieved, outperforming existing vision-based algorithms.
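The weighted k-cover idea for model compression is the part I found most interesting. The abstract doesn't give the exact formulation, but the general shape is: keep a subset of 3D model points such that every image (camera view) is still covered by at least k of the kept points, preferring high-weight points. Here is a minimal greedy sketch of that idea; the data layout (each point as a weight plus a set of image IDs) and the scoring rule are my own assumptions, not the paper's method.

```python
def compress_model(points, k):
    """Greedy sketch of weighted k-cover model compression.

    points: list of (weight, visible_image_ids) pairs -- a hypothetical
            layout, where weight might be e.g. an observation count.
    k:      required number of kept points covering each image.

    Returns a subset of points such that, where possible, every image
    is covered by at least k selected points.
    """
    # Remaining coverage each image still needs.
    need = {}
    for _, imgs in points:
        for i in imgs:
            need[i] = k

    selected = []
    remaining = list(points)
    while any(v > 0 for v in need.values()) and remaining:
        # Score = weight times the number of still-uncovered image slots
        # this point would fill (a simple weighted marginal-gain rule).
        best = max(remaining,
                   key=lambda p: p[0] * sum(1 for i in p[1] if need[i] > 0))
        remaining.remove(best)
        gain = sum(1 for i in best[1] if need[i] > 0)
        if gain == 0:
            continue  # covers nothing new; discard and keep going
        selected.append(best)
        for i in best[1]:
            if need[i] > 0:
                need[i] -= 1
    return selected
```

For example, with `points = [(3, {0, 1}), (1, {0}), (2, {1})]` and `k = 1`, the single high-weight point covering both images suffices, so the model shrinks from three points to one. Greedy selection like this doesn't guarantee the optimal subset (set cover is NP-hard), but it is the standard practical approach.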