Article

Dual Encoding for Video Retrieval by Text

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2021.3059295

Keywords

Encoding; Visualization; Feature extraction; Computational modeling; Linguistics; Electronic mail; Recurrent neural networks; Video retrieval; cross-modal representation learning; dual encoding; hybrid space learning

Funding

  1. NSFC [61902347, 61672523]
  2. BJNSF [4202033]
  3. ZJNSF [LQ19F020002]
  4. Public Welfare Technology Research Project of Zhejiang Province [LGF21F020010]
  5. Fundamental Research Funds for the Central Universities
  6. Research Funds of Renmin University of China [18XNLG19]
  7. Research Program of Zhejiang Lab [2019KD0AC02]
  8. Public Computing Cloud of Renmin University of China
  9. Alibaba-ZJU Joint Research Institute of Frontier Technologies

Abstract

This paper introduces a novel method for retrieving videos by text, using a dual deep encoding network to encode both videos and queries into dense vector representations. The approach combines multi-level encoding with hybrid space learning to achieve effective cross-modal matching between videos and queries. Extensive experiments on four challenging video datasets demonstrate the feasibility and effectiveness of the proposed method.
This paper attacks the challenging problem of video retrieval by text. In such a retrieval paradigm, an end user searches for unlabeled videos by ad-hoc queries described exclusively in the form of a natural-language sentence, with no visual example provided. Given videos as sequences of frames and queries as sequences of words, effective sequence-to-sequence cross-modal matching is crucial. To that end, the two modalities first need to be encoded into real-valued vectors and then projected into a common space. In this paper, we achieve this by proposing a dual deep encoding network that encodes videos and queries into powerful dense representations of their own. Our novelty is twofold. First, unlike prior art that resorts to a specific single-level encoder, the proposed network performs multi-level encoding that represents the rich content of both modalities in a coarse-to-fine fashion. Second, unlike a conventional common space learning algorithm that is either concept-based or latent-space-based, we introduce hybrid space learning, which combines the high performance of the latent space with the good interpretability of the concept space. Dual encoding is conceptually simple, practically effective, and end-to-end trained with hybrid space learning. Extensive experiments on four challenging video datasets show the viability of the new method. Code and data are available at https://github.com/danieljf24/hybrid_space.
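
To give a concrete picture of the architecture the abstract describes, below is a minimal PyTorch sketch of its two ideas: a multi-level (coarse-to-fine) encoder applied to both the video branch and the query branch, and a hybrid space that pairs a latent matching space with an interpretable concept space. All module names, pooling choices, and dimensions here are illustrative assumptions, not the authors' exact design; the official implementation is in the GitHub repository linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiLevelEncoder(nn.Module):
    """Coarse-to-fine encoding of a feature sequence (video frames or
    query words): level 1 is global mean pooling, level 2 a biGRU with
    temporal pooling, level 3 a 1-D CNN over the biGRU outputs. The
    three levels are concatenated into one dense vector."""

    def __init__(self, feat_dim, hidden_dim=512):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden_dim,
                          batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hidden_dim, hidden_dim,
                              kernel_size=3, padding=1)

    def forward(self, x):                     # x: (batch, seq_len, feat_dim)
        level1 = x.mean(dim=1)                # global (coarse) summary
        gru_out, _ = self.gru(x)              # (batch, seq_len, 2*hidden_dim)
        level2 = gru_out.mean(dim=1)          # temporal-order-aware summary
        conv_out = F.relu(self.conv(gru_out.transpose(1, 2)))
        level3 = conv_out.max(dim=2).values   # local (fine) patterns
        return torch.cat([level1, level2, level3], dim=1)


class HybridSpace(nn.Module):
    """Projects an encoding into two spaces: a latent space used for
    high-performance matching, and a sigmoid concept space whose units
    can be read as interpretable concept scores."""

    def __init__(self, in_dim, latent_dim=1536, num_concepts=512):
        super().__init__()
        self.to_latent = nn.Linear(in_dim, latent_dim)
        self.to_concept = nn.Linear(in_dim, num_concepts)

    def forward(self, enc):
        latent = F.normalize(self.to_latent(enc), dim=-1)
        concept = torch.sigmoid(self.to_concept(enc))
        return latent, concept


# Toy usage: 4 videos of 30 frames (2048-D CNN features) and 4 queries of
# 12 words (300-D word embeddings); each modality has its own projection.
videos, queries = torch.randn(4, 30, 2048), torch.randn(4, 12, 300)
v_enc, q_enc = MultiLevelEncoder(2048), MultiLevelEncoder(300)
v_space = HybridSpace(in_dim=2048 + 1024 + 512)  # concat size of 3 levels
q_space = HybridSpace(in_dim=300 + 1024 + 512)
v_lat, v_con = v_space(v_enc(videos))
q_lat, q_con = q_space(q_enc(queries))
# Query-video similarity: cosine in the latent space (vectors are already
# L2-normalized) plus a cosine term in the concept space.
sim = q_lat @ v_lat.t() + F.cosine_similarity(
    q_con.unsqueeze(1), v_con.unsqueeze(0), dim=-1)  # (4, 4) score matrix
```

In the paper, the two branches are trained end to end with hybrid space learning; the cross-modal training objective that would sit on top of this similarity matrix is omitted here for brevity.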
