Journal
STATISTICAL ANALYSIS AND DATA MINING
Volume 15, Issue 3, Pages 303-313
Publisher
WILEY
DOI: 10.1002/sam.11561
Keywords
kernels; missing data; penalized estimation
Funding
- National Institutes of Health [R01EB026936, R01GM135928]
- National Science Foundation [DMS-1752692]
Abstract
Many machine learning algorithms depend on weights that quantify row and column similarities of a data matrix. The choice of weights can dramatically impact the effectiveness of the algorithm. Nonetheless, the problem of choosing weights has arguably not received enough study. When a data matrix is completely observed, Gaussian kernel affinities can be used to quantify the local similarity between pairs of rows and pairs of columns. Computing weights in the presence of missing data, however, becomes challenging. In this paper, we propose a new method to construct row and column affinities even when data are missing, by building on a co-clustering technique. The method solves the co-clustering optimization problem for multiple pairs of cost parameters, filling in the missing values with increasingly smooth estimates, and thereby exploits the coupled similarity structure among both the rows and columns of a data matrix. We show these affinities can be used to perform tasks such as data imputation, clustering, and matrix completion on graphs.
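To make the starting point concrete, the sketch below computes Gaussian kernel row affinities W[i, j] = exp(-||x_i - x_j||^2 / (2σ²)) for a fully observed matrix, plus a naive pairwise-complete fallback for matrices with NaNs. This is only an illustrative baseline, not the paper's co-clustering-based construction; the function names and the rescaling-by-observed-coordinates heuristic are assumptions for illustration. Column affinities are obtained by applying the same functions to the transpose.

```python
import numpy as np

def gaussian_affinities(X, bandwidth=1.0):
    """Gaussian kernel affinities between rows of a fully observed matrix:
    W[i, j] = exp(-||x_i - x_j||^2 / (2 * bandwidth**2))."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # squared distances
    d2 = np.maximum(d2, 0.0)  # guard against tiny negative round-off
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def gaussian_affinities_missing(X, bandwidth=1.0):
    """Naive missing-data baseline (NaN entries): compute squared distances
    over pairwise-complete coordinates and rescale by p / #shared.
    NOT the paper's method -- just a common point of comparison."""
    n, p = X.shape
    observed = ~np.isnan(X)
    W = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            shared = observed[i] & observed[j]
            k = shared.sum()
            if k == 0:
                continue  # no shared coordinates: leave affinity at 0
            d2 = np.sum((X[i, shared] - X[j, shared]) ** 2) * (p / k)
            W[i, j] = W[j, i] = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return W
```

For column affinities, call `gaussian_affinities_missing(X.T)`. The pairwise-complete heuristic degrades quickly as missingness grows, which is the motivation for constructing affinities jointly across rows and columns as the paper proposes.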