Article

Structured (De)composable Representations Trained with Neural Networks

Journal

COMPUTERS
Volume 9, Issue 4

Publisher

MDPI
DOI: 10.3390/computers9040079

Keywords

structured representations; composition; deep learning; multimodal

Funding

  1. FWO
  2. SNSF [G078618N, 176004]
  3. European Research Council (ERC) [788506]

Abstract

This paper proposes a novel technique for representing templates and instances of concept classes. A template representation is a generic representation that captures the characteristics of an entire class. The proposed technique uses end-to-end deep learning to learn structured, composable representations from input images and discrete labels. The obtained representations are based on distance estimates between the distribution given by the class label and the distributions given by contextual information, which are modeled as environments. We prove that the representations have a clear structure that allows them to be decomposed into factors representing classes and environments. We evaluate the technique on classification and retrieval tasks involving different modalities (visual and language data). In various experiments, we show how the representations can be compressed and how different hyperparameters impact performance.
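The abstract describes representations whose entries are distance estimates between a class distribution and a set of environment distributions. As a purely illustrative toy sketch (not the paper's method, which learns these estimates end-to-end with neural networks), one can picture a representation whose i-th entry is a distance between the class distribution and the i-th environment distribution; here each distribution is summarized by its sample mean, and the distance estimate is a plain Euclidean distance. All names and values below are hypothetical:

```python
import numpy as np

def template_representation(class_samples, environments):
    """Toy stand-in for a distance-based template representation:
    represent a class by its distance estimates to a set of
    contextual 'environment' distributions. Each distribution is
    summarized by its sample mean; the paper instead learns the
    distance estimates end-to-end, so this is illustrative only."""
    class_mean = class_samples.mean(axis=0)
    return np.array(
        [np.linalg.norm(class_mean - env.mean(axis=0)) for env in environments]
    )

# Two hypothetical environments in a 2-D feature space.
env_a = np.array([[0.0, 0.0], [0.0, 2.0]])  # mean (0, 1)
env_b = np.array([[4.0, 1.0], [4.0, 1.0]])  # mean (4, 1)

# Samples for one concept class; its mean is (1, 1).
cls = np.array([[1.0, 0.0], [1.0, 2.0]])

rep = template_representation(cls, [env_a, env_b])
print(rep)  # distances from class mean (1, 1) to (0, 1) and (4, 1): [1. 3.]
```

Under this reading, the representation decomposes naturally: the class factor is the class distribution and the environment factors are the reference distributions, so swapping environments recomposes the representation without retraining the class summary.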
