Article

Language-Guided Navigation via Cross-Modal Grounding and Alternate Adversarial Learning

Journal

IEEE Transactions on Circuits and Systems for Video Technology

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/TCSVT.2020.3039522

Keywords

Navigation; Training; Trajectory; Visualization; Task analysis; Grounding; Generators; Vision-and-language; embodied navigation; attention mechanism; adversarial learning

Funding

  1. National Key Research and Development Program of China [2016YFB1001003]
  2. NSFC [61901262, U19B2035, 61527804, 61906119]
  3. Science and Technology Commission of Shanghai Municipality (STCSM) [18DZ1112300]
  4. Shanghai Pujiang Program

Abstract

The emerging vision-and-language navigation (VLN) problem aims to navigate an agent to a target location in unseen photo-realistic environments according to a given language instruction. The main challenges of VLN arise from two aspects: first, the agent needs to attend to the parts of the language instruction that correspond to the dynamically varying visual environment; second, during training, the agent usually imitates expert demonstrations, i.e., the shortest paths to the target locations specified by the associated language instructions. Owing to the discrepancy in action selection between training and inference, an agent trained solely by imitation learning does not perform well. Existing VLN approaches address this issue by sampling the next action from the predicted probability distribution during training. This allows the agent to explore diverse routes in the environments, yielding higher success rates. Nevertheless, without being presented with the ground-truth shortest navigation paths during training, the agent may arrive at the target location via an unexpectedly long route. To overcome these challenges, we design a cross-modal grounding module, composed of two complementary attention mechanisms, to equip the agent with a better ability to track the correspondence between the textual and visual modalities. We then propose to recursively alternate the learning schemes of imitation and exploration to narrow the discrepancy between training and inference. We further exploit the advantages of the two learning schemes via adversarial learning. Extensive experimental results on the Room-to-Room (R2R) benchmark dataset demonstrate that the proposed learning scheme generalizes well and is complementary to prior arts. Our method performs favorably against state-of-the-art approaches in terms of both effectiveness and efficiency.
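
The abstract describes two concrete mechanisms: a cross-modal grounding module built from two complementary attention steps, and a training regime that alternates imitation of expert (shortest-path) demonstrations with sampling-based exploration. Below is a minimal PyTorch sketch of these two ideas; it is not the authors' implementation, and the class/function names, feature dimensions, and dot-product attention form are illustrative assumptions.

```python
# Illustrative sketch only, not the paper's code. Shapes, projections,
# and the dot-product attention form are assumptions for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalGrounding(nn.Module):
    """Two complementary attention steps: first ground the agent state in
    the instruction, then ground the attended text in the current views."""

    def __init__(self, text_dim=512, vis_dim=2048, hidden_dim=512):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.vis_proj = nn.Linear(vis_dim, hidden_dim)
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, h_t, text_feats, vis_feats):
        # h_t: (B, H) agent state; text_feats: (B, L, Dt) word features;
        # vis_feats: (B, V, Dv) features of the V candidate views.
        txt = self.text_proj(text_feats)                             # (B, L, H)
        vis = self.vis_proj(vis_feats)                               # (B, V, H)
        # Textual attention: which instruction words matter for this state.
        a_txt = F.softmax(torch.bmm(txt, h_t.unsqueeze(2)).squeeze(2), dim=1)
        c_txt = torch.bmm(a_txt.unsqueeze(1), txt).squeeze(1)        # (B, H)
        # Visual attention: which views match the attended instruction part.
        a_vis = F.softmax(torch.bmm(vis, c_txt.unsqueeze(2)).squeeze(2), dim=1)
        c_vis = torch.bmm(a_vis.unsqueeze(1), vis).squeeze(1)        # (B, H)
        return torch.tanh(self.fuse(torch.cat([c_txt, c_vis], dim=1)))


def select_action(logits, expert_action, imitate):
    """Alternate learning schemes: teacher-forced imitation along the
    shortest path on some iterations, sampling-based exploration on
    others. The alternation schedule itself is a design choice."""
    if imitate:
        return expert_action
    return torch.distributions.Categorical(logits=logits).sample()
```

In a training loop, the `imitate` flag could follow a simple fixed schedule (e.g., switching every few iterations) to realize the recursive alternation described above; the adversarial component that further balances the two schemes is omitted here for brevity.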
