Journal
RISK ANALYSIS
Volume 37, Issue 8, Pages 1532-1549
Publisher
WILEY
DOI: 10.1111/risa.12801
Keywords
constant-rebalancing portfolio; constrained ℓ1 minimization; continuous-time mean-variance portfolio; high-dimensional portfolio selection; machine learning; sparse portfolio
Funding
- Research Grant Council of Hong Kong via ECS project [809913]
- Research Grant Council of Hong Kong via GRF project [18200114, 14303915]
Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach.
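To make the constrained ℓ1-minimization idea concrete: a common formulation of this type (a sketch of the general technique, not the paper's exact LPO estimator) seeks the weight vector of smallest ℓ1 norm whose sample moments approximately satisfy the mean-variance first-order condition Σw = μ, which casts portfolio selection as a linear program. The snippet below is a minimal illustration using `scipy.optimize.linprog`; the simulated return data and the relaxation level `lam` are illustrative assumptions, not values from the article.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Simulated i.i.d. daily returns: n observations of p assets (illustrative data).
n, p = 250, 50
returns = rng.normal(0.001, 0.02, size=(n, p))

mu = returns.mean(axis=0)               # sample mean vector
sigma = np.cov(returns, rowvar=False)   # sample covariance matrix (p x p)
lam = 0.002                             # relaxation level (hypothetical tuning parameter)

# Solve  min ||w||_1  s.t.  ||sigma @ w - mu||_inf <= lam
# as an LP via the standard split w = u - v with u, v >= 0.
c = np.ones(2 * p)
A_ub = np.vstack([
    np.hstack([sigma, -sigma]),   #  sigma(u - v) - mu <= lam
    np.hstack([-sigma, sigma]),   # -sigma(u - v) + mu <= lam
])
b_ub = np.concatenate([lam + mu, lam - mu])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
w = res.x[:p] - res.x[p:]             # recover signed portfolio weights

print("solver success:", res.success)
print("nonzero weights:", int(np.sum(np.abs(w) > 1e-8)), "of", p)
```

Because the ℓ1 objective favors vertices of the feasible polytope, many coordinates of `w` come out exactly zero, which is the data-driven stock-selection effect the abstract describes; tightening `lam` trades sparsity against fidelity to the estimated optimal control.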