Article

Learning to Synthesize Compatible Fashion Items Using Semantic Alignment and Collocation Classification: An Outfit Generation Framework

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TNNLS.2022.3202842

Keywords

Feature extraction; Generative adversarial networks; Task analysis; Biological system modeling; Semantics; Footwear; Training; Fashion compatibility learning; fashion synthesis; generative adversarial network (GAN); image-to-image translation; outfit generation

Funding

  1. National Natural Science Foundation of China [61972112, 61832004]
  2. Guangdong Basic and Applied Basic Research Foundation [2021B1515020088]
  3. Shenzhen Science and Technology Program [JCYJ20210324131203009]
  4. HITSZ-J&A Joint Laboratory of Digital Design and Intelligent Fabrication [HITSZ-JA-2021A01]

Abstract

The article introduces OutfitGAN, a novel outfit generation framework that synthesizes a set of complementary items to compose an entire outfit. In extensive experiments on a large-scale dataset, OutfitGAN shows superior performance, synthesizing photo-realistic outfits with improved compatibility.
The field of fashion compatibility learning has attracted great attention from both the academic and industrial communities in recent years. Many studies have addressed fashion compatibility prediction, collocated outfit recommendation, artificial intelligence (AI)-enabled compatible fashion design, and related topics. In particular, AI-enabled compatible fashion design can be used to synthesize compatible fashion items or outfits, improving the design experience for designers and the efficacy of recommendations for customers. However, previous generative models for collocated fashion synthesis have generally focused on image-to-image translation between upper- and lower-clothing items. In this article, we propose a novel outfit generation framework, OutfitGAN, which synthesizes a set of complementary items to compose an entire outfit, given one extant fashion item and reference masks of the target synthesized items. OutfitGAN includes a semantic alignment module (SAM), which characterizes the mapping correspondence between the existing fashion item and the synthesized ones to improve the quality of the synthesized images, and a collocation classification module (CCM), which improves the compatibility of a synthesized outfit. To evaluate the performance of our proposed models, we built a large-scale dataset of 20,000 fashion outfits. Extensive experimental results on this dataset show that OutfitGAN can synthesize photo-realistic outfits and outperforms state-of-the-art methods in terms of similarity, authenticity, and compatibility measurements.
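To make the architecture described in the abstract concrete, the sketch below shows a minimal, hypothetical PyTorch-style forward pass: a generator conditioned on the given item and a reference mask, with illustrative stand-ins for the SAM (a cross-attention alignment block) and the CCM (a compatibility scorer over item embeddings). All module layouts, layer sizes, and names here are assumptions for illustration, not the authors' implementation; the full framework would additionally train these adversarially with a discriminator.

```python
import torch
import torch.nn as nn

class SemanticAlignmentModule(nn.Module):
    """Illustrative stand-in for the paper's SAM: cross-attention from the
    reference-mask features (queries) to the given item's features
    (keys/values), so synthesis is guided by the existing item."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, 1)
        self.key = nn.Conv2d(channels, channels, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, item_feat, mask_feat):
        b, c, h, w = item_feat.shape
        q = self.query(mask_feat).flatten(2)   # (B, C, HW)
        k = self.key(item_feat).flatten(2)     # (B, C, HW)
        v = self.value(item_feat).flatten(2)   # (B, C, HW)
        attn = torch.softmax(q.transpose(1, 2) @ k / c ** 0.5, dim=-1)
        aligned = (v @ attn.transpose(1, 2)).reshape(b, c, h, w)
        return aligned + mask_feat             # residual fusion

class OutfitItemGenerator(nn.Module):
    """Synthesizes one target item image from the given item and a
    reference mask of the item to be generated (hypothetical layout)."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.enc_item = nn.Sequential(nn.Conv2d(3, ch, 4, 2, 1), nn.ReLU())
        self.enc_mask = nn.Sequential(nn.Conv2d(1, ch, 4, 2, 1), nn.ReLU())
        self.sam = SemanticAlignmentModule(ch)
        self.dec = nn.Sequential(nn.ConvTranspose2d(ch, 3, 4, 2, 1), nn.Tanh())

    def forward(self, item_img, ref_mask):
        return self.dec(self.sam(self.enc_item(item_img),
                                 self.enc_mask(ref_mask)))

class CollocationClassifier(nn.Module):
    """Illustrative stand-in for the CCM: embeds each item image and scores
    how compatible the whole set is as an outfit (score in (0, 1))."""
    def __init__(self, ch: int = 64, n_items: int = 2):
        super().__init__()
        self.embed = nn.Sequential(nn.Conv2d(3, ch, 4, 2, 1), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.score = nn.Linear(ch * n_items, 1)

    def forward(self, items):
        feats = torch.cat([self.embed(x) for x in items], dim=1)
        return torch.sigmoid(self.score(feats))

# Toy usage: synthesize one complementary item and score the pair.
gen, ccm = OutfitItemGenerator(), CollocationClassifier(n_items=2)
given = torch.randn(1, 3, 128, 128)   # the extant fashion item
mask = torch.randn(1, 1, 128, 128)    # reference mask of the target item
fake = gen(given, mask)               # synthesized complementary item
compat = ccm([given, fake])           # compatibility score
```

At training time, the CCM's score on synthesized outfits would supply a compatibility loss alongside the adversarial and reconstruction objectives; this sketch only traces the inference path for a single target item.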
