3.8 Review

A survey of methods for distributed machine learning

Journal

PROGRESS IN ARTIFICIAL INTELLIGENCE
Volume 2, Issue 1, Pages 1-11

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s13748-012-0035-5

Keywords

Machine learning; Large-scale learning; Data fragmentation; Distributed learning; Scalability

Funding

  1. Secretaria de Estado de Investigacion of the Spanish Government [TIN2009-10748]
  2. Xunta de Galicia [CN2011/007]
  3. European Union through FEDER funds
  4. Xunta de Galicia under Plan I2C Grant Program

Abstract

Traditionally, the limited amount of available data was a bottleneck preventing the development of more intelligent systems. Nowadays, the total amount of information is almost incalculable, and automatic data analyzers are more needed than ever. The limiting factor, however, is the inability of learning algorithms to use all the data within a reasonable time. To handle this problem, a new field in machine learning has emerged: large-scale learning. In this context, distributed learning seems a promising line of research, since allocating the learning process among several workstations is a natural way of scaling up learning algorithms. Moreover, it makes it possible to deal with data sets that are naturally distributed, a frequent situation in many real applications. This study provides some background on the advantages of distributed environments, as well as an overview of distributed learning for dealing with very large data sets.
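The idea the abstract describes, fragmenting a data set across workers and combining their locally learned models, can be sketched minimally as follows. This is an illustrative example, not a method from the survey: the workers are simulated in one process, and parameter averaging is used as one simple combination strategy; all names are hypothetical.

```python
# Illustrative sketch (not from the survey): data-parallel learning by
# partitioning a data set across simulated "workers", fitting a local
# linear model on each shard, then averaging the parameters.
# A real distributed system would run workers on separate machines
# and exchange models over a network.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = X @ w_true + noise
n, d = 1200, 3
w_true = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(n, d))
y = X @ w_true + 0.01 * rng.normal(size=n)

def local_fit(X_shard, y_shard, lr=0.1, epochs=200):
    """Plain gradient descent for least squares on one worker's shard."""
    w = np.zeros(X_shard.shape[1])
    for _ in range(epochs):
        grad = X_shard.T @ (X_shard @ w - y_shard) / len(y_shard)
        w -= lr * grad
    return w

# Fragment the data among 4 workers and train each one independently.
shards = np.array_split(np.arange(n), 4)
local_models = [local_fit(X[idx], y[idx]) for idx in shards]

# Combine the local models by simple parameter averaging.
w_avg = np.mean(local_models, axis=0)
print(np.round(w_avg, 2))  # close to w_true
```

Each worker touches only its own fragment, so the scheme also applies when the data are naturally distributed and cannot be centralized; more elaborate combination strategies (voting, ensembles, iterative averaging) are among the topics such surveys cover.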

Authors


Reviews

Primary Rating

3.8
Not enough ratings

