Today I read a paper titled “Generating Natural Questions About an Image”.
The abstract is:
There has been an explosion of work in the vision & language community during the past few years, from image captioning to video transcription to answering questions about images.
These tasks have focused on literal descriptions of the image.
To move beyond the literal, we choose to explore how questions about an image are often directed at commonsense inference and the abstract events evoked by objects in the image.
In this paper, we introduce the novel task of Visual Question Generation (VQG), where the system is tasked with asking a natural and engaging question when shown an image.
We provide three datasets, covering a variety of images from object-centric to event-centric, with considerably more abstract training data than has been provided to state-of-the-art captioning systems thus far.
We train and test several generative and retrieval models to tackle the task of VQG.
Evaluation results show that while such models ask reasonable questions for a variety of images, there is still a wide gap with human performance, which motivates further work on connecting images with commonsense knowledge and pragmatics.
Our proposed task offers a new challenge to the community, which we hope will further interest in exploring deeper connections between vision & language.
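
To make the retrieval side of the abstract concrete for myself, here is a minimal sketch (my own, not the paper's implementation) of what a nearest-neighbour retrieval baseline for VQG could look like: embed the test image, find the most similar training image, and reuse the human-written question attached to it. The feature extractor (e.g., a CNN) is assumed and mocked here with random vectors; all names are hypothetical.

```python
# Hypothetical retrieval-style VQG baseline: return the question written for
# the training image whose features are closest to the query image's features.
import numpy as np

def retrieve_question(query_feat, train_feats, train_questions):
    """Return the question of the training image nearest to the query.

    query_feat:      (d,) feature vector of the test image
    train_feats:     (n, d) matrix of training-image features
    train_questions: list of n human-written questions, one per training image
    """
    # Cosine similarity between the query and every training image
    sims = train_feats @ query_feat / (
        np.linalg.norm(train_feats, axis=1) * np.linalg.norm(query_feat) + 1e-8
    )
    return train_questions[int(np.argmax(sims))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_feats = rng.normal(size=(3, 4))   # stand-in for CNN image features
    train_questions = [
        "What caused the fire?",
        "How old is the dog?",
        "Where was this picture taken?",
    ]
    query = train_feats[1] + 0.01 * rng.normal(size=4)  # close to image 1
    print(retrieve_question(query, train_feats, train_questions))
```

A generative model would instead condition a decoder on the image features and produce the question word by word; the abstract does not spell out the architectures, so I have not tried to sketch that here.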