Proceedings Paper

An Empirical Study on the Generalization Power of Neural Representations Learned via Visual Guessing Games

Guessing games serve as a prototypical example of the learning by interacting paradigm, and this research investigates how artificial agents can benefit from playing such games in the context of NLP tasks. The study proposes two ways to exploit guessing games, supervised learning and self-play via SPIEL, and evaluates how well each generalises to downstream NLP tasks. The results show increased accuracy in both in-domain and transfer evaluations, with SPIEL yielding more fine-grained object representations that improve VQA performance.

Guessing games are a prototypical instance of the learning by interacting paradigm. This work investigates how well an artificial agent can benefit from playing guessing games when later asked to perform on novel NLP downstream tasks such as Visual Question Answering (VQA). We propose two ways to exploit playing guessing games: 1) a supervised learning scenario in which the agent learns to mimic successful guessing games and 2) a novel way for an agent to play by itself, called Self-play via Iterated Experience Learning (SPIEL). We evaluate the ability of both procedures to generalise: an in-domain evaluation shows an increased accuracy (+7.79) compared with competitors on the evaluation suite CompGuessWhat?!; a transfer evaluation shows improved performance for VQA on the TDIUC dataset in terms of harmonic average accuracy (+5.31) thanks to more fine-grained object representations learned via SPIEL.
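
To make the SPIEL idea more concrete, below is a minimal sketch of an iterated self-play experience-learning loop, assuming hypothetical questioner and oracle agents with ask/answer/guess/fine_tune methods. The function names, interfaces, and loop structure are illustrative assumptions only and do not reproduce the paper's actual implementation or hyperparameters.

```python
# Minimal sketch of an iterated self-play experience-learning loop in the
# spirit of SPIEL as described above. All agent interfaces (ask, answer,
# guess, fine_tune) are hypothetical placeholders, not the paper's API.
import random

def play_guessing_game(questioner, oracle, scene, max_turns=5):
    """Roll out one GuessWhat?!-style game and report whether it succeeded."""
    target = random.choice(scene["objects"])
    dialogue = []
    for _ in range(max_turns):
        question = questioner.ask(scene, dialogue)
        answer = oracle.answer(question, target)
        dialogue.append((question, answer))
    guess = questioner.guess(scene, dialogue)
    return dialogue, guess == target

def spiel(questioner, oracle, scenes, iterations=3, games_per_iteration=1000):
    """Alternate between self-play data collection and supervised fine-tuning
    on the agent's own successful games (iterated experience learning)."""
    for _ in range(iterations):
        experience = []
        for _ in range(games_per_iteration):
            scene = random.choice(scenes)
            dialogue, success = play_guessing_game(questioner, oracle, scene)
            if success:                          # keep only successful games
                experience.append((scene, dialogue))
        questioner.fine_tune(experience)         # learn from own successes
    return questioner
```

The property this sketch is meant to illustrate is that, in self-play, the supervised signal comes from the agent's own successful interactions rather than from a fixed corpus of human-played games.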

Reviews

Primary Rating: 3.8 (not enough ratings)

Secondary Ratings

Novelty: -
Significance: -
Scientific rigor: -

Recommended: No Data Available