Article

Demixed Sparse Principal Component Analysis Through Hybrid Structural Regularizers

Journal

IEEE ACCESS
Volume 9, Pages 103075-103090

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/ACCESS.2021.3098614

Keywords

Task analysis; Principal component analysis; Optimization; Neurons; Neural activity; Decoding; Matrix decomposition; Dimensionality reduction; sparse principal component analysis; demixed principal component analysis; parallel proximal algorithms

Funding

  1. National Key Research and Development Program of China [2018AAA0100500]
  2. Fundamental Research Funds for the Central Universities
  3. Jiangsu Provincial Key Laboratory of Network and Information Security [BM2003201]
  4. Key Laboratory of Computer Network and Information Integration of Ministry of Education of China [93K-9]


The demixed sparse principal component analysis (dSPCA) approach proposed in this work improves the interpretability of multivariate data and addresses shortcomings of traditional SPCA. By optimizing a loss function over marginalized data to demix dependencies between task parameters and feature responses, and by accelerating the optimization with a parallel proximal algorithm, the method separates neural activity into distinct task parameters and visualizes the demixed components.
Recently, the sparse representation of multivariate data has gained great popularity in real-world applications like neural activity analysis. Many previous analyses of these data utilize sparse principal component analysis (SPCA) to obtain a sparse representation. However, ℓ0-norm based SPCA suffers from non-differentiability and local-optimum problems due to its non-convex regularization. Additionally, extracting dependencies between task parameters and feature responses is essential for further analysis, while SPCA usually generates components without demixing these dependencies. To address these problems, we propose a novel approach, demixed sparse principal component analysis (dSPCA), that relaxes the non-convex constraints into convex regularizers, e.g., the ℓ1-norm and nuclear norm, and demixes the dependencies of feature responses on various task parameters by optimizing the loss function with marginalized data. The sparse and demixed components greatly improve the interpretability of the multivariate data. We also develop a parallel proximal algorithm to accelerate the optimization for the hybrid regularizers in our method. We provide theoretical analyses of the error bound and convergence. We apply our method to simulated datasets to evaluate its time cost, its ability to explain the demixed information, and its ability to recover sparsity in the reconstructed data. Finally, we successfully separate neural activity into different task parameters, such as stimulus or decision, and visualize the demixed components on a real-world dataset.
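The two convex regularizers the abstract names, the ℓ1-norm and the nuclear norm, are typically handled in proximal algorithms through their closed-form proximal operators: entrywise soft-thresholding and singular-value thresholding, respectively. The sketch below is illustrative only and is not the authors' dSPCA implementation; function names and parameter values are assumptions for the example.

```python
import numpy as np

# Illustrative sketch (not the paper's code): proximal operators for the
# two convex regularizers named in the abstract.

def prox_l1(X, lam):
    """Soft-thresholding: prox of lam * ||X||_1, drives small entries to zero."""
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def prox_nuclear(X, lam):
    """Singular-value thresholding: prox of lam * ||X||_*, shrinks rank."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

# Example usage on a random matrix (hypothetical thresholds).
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))
X_sparse = prox_l1(X, 0.5)        # entries with |x| <= 0.5 become exactly zero
X_lowrank = prox_nuclear(X, 1.0)  # singular values <= 1.0 become exactly zero
```

A parallel proximal scheme of the kind the paper describes would evaluate such operators for each regularizer on separate copies of the variable and then average, but the exact splitting used in dSPCA is given in the paper itself.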
