Article

Evaluating parameters for ligand-based modeling with random forest on sparse data sets

Journal

JOURNAL OF CHEMINFORMATICS
Volume 10

Publisher

BMC
DOI: 10.1186/s13321-018-0304-9

Keywords

Random forest; Support vector machines; Sparse representation; Fingerprint; Machine learning

Funding

  1. Knut and Alice Wallenberg Foundation
  2. Swedish Research Council FORMAS
  3. SNIC through Uppsala Multidisciplinary Center for Advanced Computational Science (UPPMAX) [SNIC 2017/7-241]

Abstract

Ligand-based predictive modeling is widely used to build models that aid decision making in, for example, drug discovery projects. With growing data sets and demands for short modeling times comes the need to analyze data sets efficiently in order to support rapid and robust modeling. In this study we analyzed four data sets and examined the efficiency of machine learning methods on sparse data structures, using Morgan fingerprints of different radii and hash sizes, and compared them with the molecular signature descriptor of different heights. We specifically evaluated the effect of these parameters on modeling time, predictive performance, and memory requirements using two implementations of random forest: Scikit-learn and FEST. We also compared with a support vector machine implementation. Our results showed that unhashed fingerprints yield significantly better accuracy than hashed fingerprints (p <= 0.05), with no pronounced deterioration in modeling time or memory usage. Furthermore, the fast execution and low memory usage of the FEST algorithm suggest that it is a good alternative for large, high-dimensional sparse data. Support vector machines and random forest performed comparably well, but the results indicate that the support vector machine was better at exploiting the extra information available at larger values of the Morgan fingerprint radius.
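The setup described in the abstract lends itself to a short illustration. The following is a minimal sketch, not the authors' actual pipeline, of how hashed (folded) and unhashed Morgan fingerprints can be generated with RDKit and passed as sparse matrices to a Scikit-learn random forest; the example molecules, activity labels, radius, and hash size are illustrative assumptions only, and the FEST and support vector machine runs compared in the study are not shown.

```python
# Minimal sketch, assuming RDKit, SciPy and scikit-learn are installed.
# Molecules, labels, radius and hash size are illustrative, not the study's settings.
from rdkit import Chem
from rdkit.Chem import AllChem
from scipy.sparse import csr_matrix
from sklearn.ensemble import RandomForestClassifier

smiles = ["CCO", "c1ccccc1O", "CC(=O)Nc1ccc(O)cc1"]  # hypothetical ligands
labels = [0, 1, 1]                                    # hypothetical activity labels
mols = [Chem.MolFromSmiles(s) for s in smiles]

# Hashed Morgan fingerprints: folded to a fixed length (nBits = hash size).
hashed = [AllChem.GetMorganFingerprintAsBitVect(m, radius=2, nBits=1024) for m in mols]
X_hashed = csr_matrix([list(fp) for fp in hashed])

# Unhashed Morgan fingerprints: map the raw feature identifiers to sparse columns.
raw = [AllChem.GetMorganFingerprint(m, 2).GetNonzeroElements() for m in mols]
feature_ids = sorted({f for counts in raw for f in counts})
col = {f: i for i, f in enumerate(feature_ids)}
rows, cols, vals = [], [], []
for i, counts in enumerate(raw):
    for f, c in counts.items():
        rows.append(i)
        cols.append(col[f])
        vals.append(c)
X_unhashed = csr_matrix((vals, (rows, cols)), shape=(len(mols), len(feature_ids)))

# Scikit-learn's random forest accepts SciPy sparse input directly.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_unhashed, labels)
print(clf.predict(X_unhashed))
```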
