Article

Human-Centric Spatio-Temporal Video Grounding With Visual Transformers

Journal

IEEE Transactions on Circuits and Systems for Video Technology

Publisher

IEEE - Institute of Electrical and Electronics Engineers, Inc.
DOI: 10.1109/TCSVT.2021.3085907

Keywords

Task analysis; Grounding; Visualization; Electron tubes; Location awareness; Annotations; Proposals; Spatio-temporal grounding; transformer; dataset

Funding

  1. National Key Research and Development Project of China [2018AAA0101900]
  2. National Natural Science Foundation of China [61876177]
  3. Beijing Natural Science Foundation [4202034]
  4. Guangdong Basic and Applied Basic Research Foundation [2020B1515020048]
  5. Fundamental Research Funds for the Central Universities, Zhejiang Lab [2019KD0AB04]

Abstract

In this work, we introduce a novel task: Human-centric Spatio-Temporal Video Grounding (HC-STVG). Unlike existing referring-expression tasks in images or videos, HC-STVG focuses on humans: it aims to localize a spatio-temporal tube of the target person in an untrimmed video based on a given textual description. This task is especially useful for healthcare and security applications, where surveillance videos can be extremely long but only a specific person during a specific period is of interest. HC-STVG is a video grounding task that requires both spatial (where) and temporal (when) localization, and existing grounding methods cannot handle it well. We tackle this task by proposing an effective baseline method named Spatio-Temporal Grounding with Visual Transformers (STGVT), which utilizes visual transformers to extract cross-modal representations for video-sentence matching and temporal localization. To facilitate this task, we also contribute an HC-STVG dataset (available at https://github.com/tzhhhh123/HC-STVG) consisting of 5,660 video-sentence pairs on complex multi-person scenes. Each video lasts 20 seconds and is paired with a natural-language query sentence averaging 17.25 words. Extensive experiments on this dataset demonstrate that the proposed method outperforms existing baseline methods.
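The matching component described in the abstract, a transformer that consumes video and sentence tokens jointly and scores how well a candidate person tube matches the query, can be sketched briefly. The code below is a minimal illustration only, not the authors' STGVT implementation: the module name CrossModalMatcher, all dimensions, and the single scoring head are assumptions, and positional encodings as well as the temporal localization head are omitted for brevity.

# Minimal PyTorch sketch of transformer-based tube-sentence matching,
# in the spirit of the STGVT baseline described above. All names,
# dimensions, and design choices are illustrative assumptions, not the
# authors' implementation; positional encodings are omitted for brevity.
import torch
import torch.nn as nn

class CrossModalMatcher(nn.Module):  # hypothetical module name
    def __init__(self, vis_dim=2048, txt_dim=768, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, d_model)  # per-frame tube features
        self.txt_proj = nn.Linear(txt_dim, d_model)  # per-token sentence features
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))  # [CLS]-style token
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.match_head = nn.Linear(d_model, 1)  # tube-sentence matching score

    def forward(self, tube_feats, sent_feats):
        # tube_feats: (B, T, vis_dim) features of a candidate person tube
        # sent_feats: (B, L, txt_dim) token embeddings of the query sentence
        v = self.vis_proj(tube_feats)
        t = self.txt_proj(sent_feats)
        cls = self.cls.expand(v.size(0), -1, -1)
        x = torch.cat([cls, v, t], dim=1)  # joint cross-modal sequence
        x = self.encoder(x)
        return self.match_head(x[:, 0]).squeeze(-1)  # score read from the [CLS] slot

# Usage: score each candidate tube against the sentence; the highest-scoring
# tube would be taken as the grounding result.
model = CrossModalMatcher()
scores = model(torch.randn(4, 20, 2048), torch.randn(4, 17, 768))
print(scores.shape)  # torch.Size([4])

In a full system, a temporal localization head would additionally predict the start and end times of the described moment along the selected tube.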
