Journal
NEUROCOMPUTING
Volume 461, Issue -, Pages 162-170
Publisher
ELSEVIER
DOI: 10.1016/j.neucom.2021.07.004
Keywords
Disassembling; Object representation; Unsupervised
Funding
- National Key R&D Program of China [2018AAA0101503]
- Science and technology project of SGCC (State Grid Corporation of China)
Abstract
In this paper, we study a new representation-learning task, which we term disassembling object representations. Given an image featuring multiple objects, the goal of disassembling is to acquire a latent representation in which each part corresponds to one category of objects. Disassembling thus finds application in a wide range of domains, such as image editing and few- or zero-shot learning, as it enables category-specific modularity in the learned representations. To this end, we propose an unsupervised approach to disassembling, named Unsupervised Disassembling Object Representation (UDOR). UDOR follows a double auto-encoder architecture, on which a fuzzy classification and an object-removing operation are imposed. The fuzzy classification constrains each part of the latent representation to encode features of at most one object category, while the object-removing operation, combined with a generative adversarial network, enforces the modularity of the representations and the integrity of the reconstructed image. Furthermore, we devise two metrics to measure, respectively, the modularity of the disassembled representations and the visual integrity of the reconstructed images. Experimental results demonstrate that the proposed UDOR, despite being unsupervised, achieves truly encouraging results on par with those of supervised methods. (c) 2021 Elsevier B.V. All rights reserved.
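The core idea of the object-removing operation described above can be illustrated with a minimal sketch: the latent code is partitioned into category-specific parts, and removing an object category amounts to zeroing its part before decoding. The function name, the equal-size partitioning, and the plain-list representation below are illustrative assumptions for exposition, not the authors' actual implementation.

```python
def remove_object_part(z, part_idx, num_parts):
    """Zero out one category-specific part of a partitioned latent code.

    Sketch of the "object-removing" idea: the latent vector z is split
    into num_parts equal parts, each intended to encode at most one
    object category; zeroing part `part_idx` before decoding should
    remove that category from the reconstruction.
    (Equal-size parts are an assumption for this illustration.)
    """
    part_len = len(z) // num_parts
    start = part_idx * part_len
    out = list(z)  # copy, so the original code is untouched
    out[start:start + part_len] = [0.0] * part_len
    return out

# Toy latent code with 3 parts of length 2 each; remove category 1
z = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
z_removed = remove_object_part(z, part_idx=1, num_parts=3)
print(z_removed)  # -> [1.0, 2.0, 0.0, 0.0, 5.0, 6.0]
```

In the paper's pipeline, a decoder applied to such a masked code would be expected to reconstruct the scene without the removed category, with a GAN enforcing the visual integrity of the result.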