Journal
EXPERT SYSTEMS WITH APPLICATIONS
Volume 203
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.eswa.2022.117367
Keywords
Deep learning; Metric learning; Mathematical optimization; Computer vision
Funding
- Korea Research Institute of Chemical Technology (KRICT) [SI2151-10]
This paper proposes a new approach to improving the optimization performance of gradient-based optimizers for training neural networks in metric learning. In metric learning, the neural network uses a linear activation in its output layer to generate real-valued data embeddings. Hence, the optimal model parameters of the output layer can be computed deterministically via eigenvalue decomposition. The optimal parameters obtained in this way are referred to as the eigen-guidance. The training performance of a gradient-based optimizer is boosted by replacing the current model parameters with the eigen-guidance after each pre-defined number of epochs. In addition, the optimal number of embedding dimensions is determined directly while computing the eigen-guidance, without additional complexity. In our experiments, gradient-based optimizers combined with the eigen-guidance converged significantly faster on metric learning tasks than the same optimizers alone.
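The abstract does not give the exact decomposition, but the idea of a closed-form solution for a linear output layer can be sketched as follows. This is an assumption-laden illustration: it treats the eigen-guidance as the leading eigenvectors of the penultimate-feature covariance (a PCA-style solution), and picks the embedding dimension from the same eigenvalue spectrum, as the abstract suggests. The function name `eigen_guidance` and the variance threshold are hypothetical, not from the paper.

```python
import numpy as np

def eigen_guidance(features, var_threshold=0.99):
    """Hypothetical sketch: closed-form weights for a linear output layer.

    features: (n_samples, n_features) penultimate-layer activations.
    Returns (W, dim): leading eigenvectors of the feature covariance and
    the embedding dimension chosen from the eigenvalue spectrum.
    """
    # Covariance of the penultimate features (assumed proxy for the
    # deterministic quantity the paper decomposes).
    cov = np.cov(features, rowvar=False)
    # eigh returns eigenvalues in ascending order; sort descending.
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # The abstract states the embedding dimension falls out of the same
    # computation; here we keep enough components to reach a variance
    # threshold (an assumed criterion).
    ratio = np.cumsum(eigvals) / np.sum(eigvals)
    dim = int(np.searchsorted(ratio, var_threshold) + 1)
    return eigvecs[:, :dim], dim

# Usage (per the abstract): every pre-defined number of epochs, overwrite
# the linear output layer's weights with the eigen-guidance instead of the
# values the gradient-based optimizer has reached.
```

Because the output layer is linear, this replacement is cheap relative to training, and the chosen `dim` doubles as the embedding size without a separate search.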
Authors