Journal
MM'15: PROCEEDINGS OF THE 2015 ACM MULTIMEDIA CONFERENCE
Volume -, Issue -, Pages 685-688
Publisher
ASSOC COMPUTING MACHINERY
DOI: 10.1145/2733373.2807410
Keywords
Deep learning; Distributed training
Deep learning has shown outstanding performance in various machine learning tasks. However, deep, complex model structures and massive training data make these models expensive to train. In this paper, we present a distributed deep learning system, called SINGA, for training big models over large datasets. It provides an intuitive programming model based on a layer abstraction, which supports a variety of popular deep learning models. The SINGA architecture supports both synchronous and asynchronous training frameworks, and hybrid training frameworks can also be customized to achieve good scalability. SINGA provides different neural-net partitioning schemes for training large models. SINGA is an Apache Incubator project released under Apache License 2.0.
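The layer abstraction mentioned in the abstract can be illustrated with a minimal sketch. This is not SINGA's actual API; the `Layer`, `Dense`, `ReLU`, and `Net` classes below are hypothetical NumPy stand-ins showing the general idea: each layer implements a forward and a backward pass, and a network composes as an ordered stack of such layers.

```python
import numpy as np

class Layer:
    """Hypothetical layer abstraction: every layer exposes forward/backward."""
    def forward(self, x):
        raise NotImplementedError
    def backward(self, grad):
        raise NotImplementedError

class Dense(Layer):
    """Fully connected layer: y = x W + b."""
    def __init__(self, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (in_dim, out_dim))
        self.b = np.zeros(out_dim)
    def forward(self, x):
        self.x = x                      # cache input for the backward pass
        return x @ self.W + self.b
    def backward(self, grad):
        self.dW = self.x.T @ grad       # gradients w.r.t. parameters
        self.db = grad.sum(axis=0)
        return grad @ self.W.T          # gradient w.r.t. the layer input

class ReLU(Layer):
    def forward(self, x):
        self.mask = x > 0
        return x * self.mask
    def backward(self, grad):
        return grad * self.mask

class Net:
    """A model is just an ordered list of layers."""
    def __init__(self, layers):
        self.layers = layers
    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x
    def backward(self, grad):
        for layer in reversed(self.layers):
            grad = layer.backward(grad)
        return grad

net = Net([Dense(4, 8), ReLU(), Dense(8, 2)])
out = net.forward(np.ones((3, 4)))
print(out.shape)  # (3, 2)
```

Under such an abstraction, distributed training schemes (synchronous, asynchronous, or hybrid) and neural-net partitioning can be layered on top: the framework decides how the cached parameter gradients (`dW`, `db` here) are aggregated across workers, without changing the per-layer code.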