Journal
APPLIED INTELLIGENCE
Volume 53, Issue 5, Pages 5808-5822
Publisher
SPRINGER
DOI: 10.1007/s10489-022-03695-x
Keywords
Adaptive multitask learning; Neural networks; Machine learning; Default prediction; Mortality prediction
Multitask learning can enhance the performance of a task by sharing representations with related auxiliary tasks. However, static loss weights often lead to poor results. This paper introduces an intelligent weighting algorithm called HydaLearn that addresses the shortcomings of static loss weights by connecting the main-task gain to individual task gradients, allowing for dynamic loss weighting at the minibatch level. Experiments show significant performance improvements on synthetic and real-world datasets.
Multitask learning (MTL) can improve performance on one task by sharing representations with one or more related auxiliary tasks. Usually, MTL networks are trained on a composite loss function formed by a fixed weighted combination of separate task losses. In practice, however, static loss weights lead to poor results for two reasons. First, the relevance of the auxiliary tasks gradually drifts throughout the learning process. Second, for minibatch-based optimization, the optimal task weights vary significantly from one update to the next depending on the minibatch sample composition. Here, we introduce HydaLearn, an intelligent weighting algorithm that connects the main-task gain to the individual task gradients, to inform dynamic loss weighting at the minibatch level, addressing the two above shortcomings. We demonstrate significant performance increases on synthetic data and two real-world data sets.
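The abstract describes training on a composite loss formed from weighted task losses, with the auxiliary weight re-set at every minibatch based on its usefulness to the main task. The toy sketch below illustrates that general idea only; it is not the HydaLearn algorithm itself. The loss functions, the learning rate, and the cosine-similarity weighting rule (a simple proxy for "main-task gain") are all illustrative assumptions.

```python
import numpy as np

# Toy illustration (assumed setup, not the paper's algorithm): a shared
# parameter vector theta is trained on w_main*L_main + w_aux*L_aux, where
# the auxiliary weight is recomputed per minibatch from how well the
# auxiliary gradient currently aligns with the main-task gradient.

rng = np.random.default_rng(0)
theta = np.zeros(2)
lr = 0.05

def main_loss_grad(theta, batch):
    # main task: pull theta toward target [1, 1]; the batch term makes the
    # gradient depend on minibatch composition, as discussed in the abstract
    return 2 * (theta - np.array([1.0, 1.0])) + 0.1 * batch.mean()

def aux_loss_grad(theta, batch):
    # auxiliary task: pull toward [1, 0] -- only partially related to the main task
    return 2 * (theta - np.array([1.0, 0.0])) + 0.1 * batch.mean()

for step in range(200):
    batch = rng.normal(size=8)          # minibatch sample varies per update
    g_main = main_loss_grad(theta, batch)
    g_aux = aux_loss_grad(theta, batch)
    # dynamic weight: cosine similarity of the two gradients, clipped at 0,
    # so the auxiliary task contributes only while it helps the main task
    cos = g_main @ g_aux / (np.linalg.norm(g_main) * np.linalg.norm(g_aux) + 1e-12)
    w_aux = max(cos, 0.0)
    theta -= lr * (g_main + w_aux * g_aux)

print(np.round(theta, 2))
```

In this sketch the first coordinate, where both tasks agree, converges to the shared target, while the second settles between the conflicting main and auxiliary targets because the auxiliary weight shrinks as its gradient stops aligning with the main task's.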