Today I read a paper titled “Look-ahead before you leap: end-to-end active recognition by forecasting the effect of motion”
My initial thoughts: A lot of vision recognition systems, especially the simpler ones, are based on static imagery. When recognition is deployed on a moving scene, it is rarely predictive and rarely (if ever) takes into account the motion of the observer. Having demonstrable theories about how to deploy visual recognition on an observer system that is itself moving would be hugely beneficial in all sorts of field applications.
Update: I’ve read about this topic before, in some other papers. Hmmm… when I get home this evening I need to go back to the cited works to see who these guys are referencing.
The abstract is:
Visual recognition systems mounted on autonomous moving agents face the challenge of unconstrained data, but simultaneously have the opportunity to improve their performance by moving to acquire new views of test data.
In this work, we first show how a recurrent neural network-based system may be trained to perform end-to-end learning of motion policies suited for the “active recognition” setting.
Further, we hypothesize that active vision requires an agent to have the capacity to reason about the effects of its motions on its view of the world.
To verify this hypothesis, we attempt to induce this capacity in our active recognition pipeline, by simultaneously learning to forecast the effects of the agent’s motions on its internal representation of its cumulative knowledge obtained from all past views.
Results across two challenging datasets confirm both that our end-to-end system successfully learns meaningful policies for active recognition, and that “learning to look ahead” further boosts recognition performance.
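To make the abstract concrete for myself: the system maintains a recurrent “aggregate” state over the views seen so far, a policy head picks the next motion from that state, and a look-ahead head tries to forecast what the aggregate state will become after that motion, trained as an auxiliary objective. Here is a minimal NumPy sketch of that loop. Everything in it is my own guess at the structure — the dimensions, weight names, and the forward-pass-only “episode” are hypothetical, not the paper’s actual architecture or training code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- chosen for illustration, not from the paper.
VIEW_DIM, HID_DIM, N_MOTIONS, N_CLASSES = 8, 16, 4, 5

# Randomly initialised weights stand in for learned parameters.
W_in  = rng.normal(scale=0.1, size=(HID_DIM, VIEW_DIM))
W_rec = rng.normal(scale=0.1, size=(HID_DIM, HID_DIM))
W_pol = rng.normal(scale=0.1, size=(N_MOTIONS, HID_DIM))
W_cls = rng.normal(scale=0.1, size=(N_CLASSES, HID_DIM))
# Look-ahead head: forecasts the *next* aggregate state from (state, motion).
W_la  = rng.normal(scale=0.1, size=(HID_DIM, HID_DIM + N_MOTIONS))

def aggregate(h, view):
    """Fold a newly acquired view into the recurrent aggregate state."""
    return np.tanh(W_in @ view + W_rec @ h)

def episode(get_view, T=3):
    """One active-recognition episode of T glimpses (forward pass only)."""
    h = np.zeros(HID_DIM)
    lookahead_err = 0.0
    for _ in range(T):
        # Policy: choose the next motion from the current aggregate state.
        m = int(np.argmax(W_pol @ h))
        motion_onehot = np.eye(N_MOTIONS)[m]
        # Look-ahead: forecast the post-motion state *before* moving.
        h_pred = np.tanh(W_la @ np.concatenate([h, motion_onehot]))
        # The environment returns the view that motion m actually yields.
        h = aggregate(h, get_view(m))
        # Auxiliary objective: penalise forecast error (would be backpropped).
        lookahead_err += float(np.mean((h_pred - h) ** 2))
    label = int(np.argmax(W_cls @ h))
    return label, lookahead_err

# Toy environment: random features stand in for real camera views.
env = lambda m: rng.normal(size=VIEW_DIM)
label, err = episode(env)
```

In training, `lookahead_err` would be added to the recognition loss, which is how (as I read it) the look-ahead task forces the state representation to encode how motions change the agent’s view of the world.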