Article

Newton's Method in Mixed Precision

Journal

SIAM REVIEW
Volume 64, Issue 1, pp. 191-211

Publisher

SIAM PUBLICATIONS
DOI: 10.1137/20M1342902

Keywords

Newton's method; mixed precision arithmetic; backward error; probabilistic rounding analysis

Funding

  1. Army Research Office [W911NF-16-1-0504]
  2. Department of Energy [DE-NA003967]
  3. National Science Foundation [OAC-1740309, DMS-1745654, DMS-1906446]

Abstract
We investigate the use of reduced precision arithmetic to solve the linear equation for the Newton step. If one neglects the backward error in the linear solve, then well-known convergence theory implies that using single precision in the linear solve has very little negative effect on the nonlinear convergence rate. However, if one considers the effects of backward error, then the usual textbook estimates are very pessimistic and even the state-of-the-art estimates using probabilistic rounding analysis do not fully conform to experiments. We report on experiments with a specific example. We store and factor Jacobians in double, single, and half precision. In the single precision case we observe that the convergence rates for the nonlinear iteration do not degrade as the dimension increases and that the nonlinear iteration statistics are essentially identical to the double precision computation. In half precision we see that the nonlinear convergence rates, while poor, do not degrade as the dimension increases.

Audience. This paper is intended for students who have completed or are taking an entry-level graduate course in numerical analysis and for faculty who teach numerical analysis. The important ideas in the paper are O notation, floating point precision, backward error in linear solvers, and Newton's method.
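The strategy the abstract describes, storing and factoring the Jacobian in reduced precision while keeping the nonlinear iteration and residual evaluation in double, can be sketched as follows. This is an illustrative sketch, not the authors' code: `newton_mixed` and the test problem are hypothetical, and half precision is omitted because standard LAPACK-backed solvers such as `numpy.linalg.solve` do not operate on `float16`.

```python
import numpy as np

def newton_mixed(f, jac, x0, solve_dtype=np.float32, tol=1e-10, maxit=20):
    """Newton's method with the linear solve done in reduced precision.

    The iterate, residual f(x), and update stay in double precision;
    only the Jacobian storage/factorization and the solve for the
    Newton step use `solve_dtype` (single precision by default).
    """
    x = np.asarray(x0, dtype=np.float64)
    for _ in range(maxit):
        r = f(x)
        if np.linalg.norm(r) < tol:
            break
        # Demote the Jacobian and residual to reduced precision,
        # solve for the step there, then promote the step back to
        # double for the update.  The solve's backward error is at
        # the level of the reduced precision, making this an inexact
        # Newton iteration.
        J = jac(x).astype(solve_dtype)
        s = np.linalg.solve(J, r.astype(solve_dtype))
        x = x - s.astype(np.float64)
    return x

# Hypothetical test problem: a small diagonal nonlinear system.
root = newton_mixed(
    lambda x: np.array([x[0]**2 - 2.0, x[1]**2 - 3.0]),
    lambda x: np.diag(2.0 * x),
    [1.5, 2.0],
)
```

Because each step carries only a single-precision-sized relative error, the iteration behaves like inexact Newton with a small forcing term, which is why the convergence rates in the single precision experiments match the double precision runs so closely.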
