Article

Least squares solution with the minimum-norm to general matrix equations via iteration

Journal

APPLIED MATHEMATICS AND COMPUTATION
Volume 215, Issue 10, Pages 3547-3562

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.amc.2009.10.052

Keywords

Iterative algorithm; Gradient; General linear matrix equations; Minimal norm least squares; Optimal step size; Convergence rate

Funding

  1. National Natural Science Foundation of China [60904007, 60710002, 10771044]
  2. Development Program for Outstanding Young Teachers at Harbin Institute of Technology [HITQNJS.2009.054]
  3. Natural Science Foundation of Heilongjiang Province [200605]

Abstract

Two iterative algorithms are presented in this paper for computing the minimum-norm least squares solution to general linear matrix equations, which include the well-known Sylvester and Lyapunov matrix equations as special cases. The first algorithm is based on the gradient-based search principle, and the second can be viewed as its dual. Necessary and sufficient conditions on the step sizes of the two algorithms are established to guarantee convergence for arbitrary initial conditions, and an easily computable sufficient condition is also given. Moreover, two methods are proposed for choosing the optimal step sizes so that the convergence speed of each algorithm is maximized. The first method minimizes the spectral radius of the iteration matrix, and an explicit expression for the optimal step size is obtained. The second method minimizes the sum of the squared F-norms of the error matrices produced by the algorithm; it is shown that this optimal step size exists, is unique, and lies in a known interval. Several numerical examples illustrate the efficiency of the proposed approach. (C) 2009 Elsevier Inc. All rights reserved.
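The gradient-based iteration described in the abstract can be sketched for the prototype equation AXB = C. This is a generic illustration, not the paper's exact algorithm: the function name and the particular step size mu = 1/(sigma_max(A)^2 * sigma_max(B)^2) are assumptions; any 0 < mu < 2/(sigma_max(A)^2 * sigma_max(B)^2) yields a convergent iteration, and starting from X = 0 keeps every iterate in the appropriate row/column spaces so the limit is the minimum-norm least squares solution.

```python
import numpy as np

def gradient_lsq(A, B, C, mu=None, iters=20000):
    """Gradient iteration for min_X ||A X B - C||_F, started at X = 0.

    Sketch only (the step-size choice below is an assumption, not the
    paper's optimal one).  The update is a steepest-descent step on the
    squared F-norm residual, whose gradient is 2 A^T (A X B - C) B^T.
    """
    if mu is None:
        # Convergent default: 1 / (sigma_max(A)^2 * sigma_max(B)^2).
        mu = 1.0 / (np.linalg.norm(A, 2) ** 2 * np.linalg.norm(B, 2) ** 2)
    X = np.zeros((A.shape[1], B.shape[0]))
    for _ in range(iters):
        X = X + mu * A.T @ (C - A @ X @ B) @ B.T
    return X

# Usage: an underdetermined case, compared against the minimum-norm
# solution obtained from the vectorized system (B^T kron A) vec(X) = vec(C).
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 2))
X = gradient_lsq(A, B, C)
x_mn = np.linalg.pinv(np.kron(B.T, A)) @ C.flatten("F")
X_mn = x_mn.reshape(3, 3, order="F")[:, :3][:3, :]  # reshape to (3, 3)? no: see below
```

Note that vec(AXB) = (B^T kron A) vec(X) with column-stacking vec, so the minimum-norm reference solution is `x_mn.reshape(A.shape[1], B.shape[0], order="F")`.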
