Article

Egocentric video co-summarization using transfer learning and refined random walk on a constrained graph

Journal

PATTERN RECOGNITION
Volume 134

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2022.109128

Keywords

Egocentric video; Transfer learning; Constrained graph; Random walks; Label refinement

This paper addresses the problem of egocentric video co-summarization. It proposes a method based on a random walk on a constrained graph in a transfer-learned feature space to obtain accurate shot-level summaries. The method shows advantages over state-of-the-art methods on both short and long duration videos.
In this paper, we address the problem of egocentric video co-summarization. We show how an accurate shot-level summary can be obtained in a time-efficient manner using a random walk on a constrained graph in a transfer-learned feature space with label refinement. While applying transfer learning, we propose a new loss function that captures egocentric characteristics while fine-tuning a pre-trained ResNet on a set of auxiliary egocentric videos. Transfer learning is used to generate i) an improved feature space and ii) a set of labels to be used as seeds for the test egocentric video. A complete weighted graph is created for a test video in the new transfer-learned feature space, with shots as the vertices. We derive two types of cluster label constraints, in the form of Must-Link (ML) and Cannot-Link (CL), based on the similarity of the shots. ML constraints are used to prune the complete graph, which is shown to result in a substantial computational advantage, especially for long-duration videos. We derive expressions for the number of vertices and edges of the ML-constrained graph and show that this graph remains connected. A random walk is applied to obtain labels of the unmarked shots in this new graph. CL constraints are applied to refine the cluster labels. Finally, the shots closest to the individual cluster centres are used to build the summary. Experiments on short-duration videos from the CoSum and TVSum datasets and long-duration videos from the ADL and EPIC-Kitchens datasets clearly demonstrate the advantage of our solution over several state-of-the-art methods. © 2022 Elsevier Ltd. All rights reserved.
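
The pipeline described in the abstract (ML-based graph pruning, random-walk label propagation from seeds, CL-based refinement, and cluster-centre summary selection) can be sketched in code. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes shot-level features have already been extracted with the transfer-learned ResNet, and the similarity thresholds, restart parameter, and all function names (cosine_similarity_matrix, random_walk_labels, summarize) are hypothetical choices made for illustration.

import numpy as np

def cosine_similarity_matrix(features):
    """Pairwise cosine similarity between shot feature vectors."""
    norm = features / np.linalg.norm(features, axis=1, keepdims=True)
    return norm @ norm.T

def random_walk_labels(W, seed_labels, n_classes, alpha=0.85, n_iter=50):
    """Propagate seed labels over affinity matrix W via a random walk with restart.

    W           : (n, n) symmetric non-negative affinity matrix
    seed_labels : length-n array, class id for seed shots, -1 for unlabeled shots
    Returns soft label scores of shape (n, n_classes).
    """
    n = W.shape[0]
    P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # row-normalized transitions
    Y = np.zeros((n, n_classes))
    for i, c in enumerate(seed_labels):
        if c >= 0:
            Y[i, c] = 1.0
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (P @ F) + (1 - alpha) * Y  # walk step plus restart to the seeds
    return F

def summarize(features, seed_labels, n_classes, ml_threshold=0.95, cl_threshold=0.10):
    """Toy pipeline: ML pruning -> random walk -> CL refinement -> summary (n_classes >= 2)."""
    seed_labels = np.asarray(seed_labels)
    S = cosine_similarity_matrix(features)
    n = len(features)

    # Must-Link step: merge near-duplicate shots into super-nodes, pruning the complete graph.
    group = np.arange(n)
    for i in range(n):
        for j in range(i + 1, n):
            if S[i, j] >= ml_threshold:
                group[group == group[j]] = group[i]
    reps = np.unique(group)                      # one representative shot per super-node
    W = S[np.ix_(reps, reps)]
    np.fill_diagonal(W, 0.0)

    F = random_walk_labels(W, seed_labels[reps], n_classes)
    labels_rep = F.argmax(axis=1)

    # Cannot-Link refinement: if two very dissimilar super-nodes share a label,
    # move the less confident one to its second-best class.
    for a in range(len(reps)):
        for b in range(a + 1, len(reps)):
            if W[a, b] <= cl_threshold and labels_rep[a] == labels_rep[b]:
                weaker = a if F[a].max() < F[b].max() else b
                labels_rep[weaker] = np.argsort(F[weaker])[-2]

    # Broadcast refined labels back to every shot through its ML representative.
    labels = np.empty(n, dtype=int)
    for r, lab in zip(reps, labels_rep):
        labels[group == r] = lab

    # Summary: the shot closest to each cluster centre.
    summary = []
    for c in range(n_classes):
        idx = np.where(labels == c)[0]
        if idx.size:
            centre = features[idx].mean(axis=0)
            summary.append(int(idx[np.argmin(np.linalg.norm(features[idx] - centre, axis=1))]))
    return sorted(summary)

# Toy usage: 20 shots with 64-dimensional features, 3 clusters, a few labeled seed shots.
rng = np.random.default_rng(0)
feats = rng.normal(size=(20, 64))
seeds = np.full(20, -1)
seeds[[0, 7, 15]] = [0, 1, 2]
print(summarize(feats, seeds, n_classes=3))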
