Article

Numerical computation of rare events via large deviation theory

Journal

CHAOS
Volume 29, Issue 6, Pages -

Publisher

American Institute of Physics
DOI: 10.1063/1.5084025

Keywords

-

Funding

  1. Materials Research Science and Engineering Center (MRSEC) program of the National Science Foundation (NSF) [DMR-1420073]
  2. NSF [DMS-1522767]


An overview of rare event algorithms based on large deviation theory (LDT) is presented. It covers a range of numerical schemes to compute the large deviation minimizer in various setups and discusses best practices, common pitfalls, and implementation tradeoffs. Generalizations, extensions, and improvements of the minimum action methods are proposed. These algorithms are tested on example problems that illustrate several common difficulties arising, e.g., when the forcing is degenerate or multiplicative or the systems are infinite-dimensional. Generalizations to processes driven by non-Gaussian noises or random initial data and parameters are also discussed, along with the connection between the LDT-based approach reviewed here and other methods, such as stochastic field theory and optimal control. Finally, the integration of this approach in importance sampling methods using, e.g., genealogical algorithms, is explored. Published under license by AIP Publishing.
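To make the notion of a "large deviation minimizer" concrete, the sketch below (not taken from the paper) shows the simplest form of a minimum action method for a one-dimensional diffusion dX = b(X) dt + sqrt(eps) dW with additive noise: the Freidlin-Wentzell action 1/2 * int |phi' - b(phi)|^2 dt is discretized along a path with pinned endpoints and handed to a generic optimizer. The double-well drift b(x) = x - x^3, the time horizon, and the grid size are illustrative assumptions, not choices from the article.

    import numpy as np
    from scipy.optimize import minimize

    # Illustrative double-well drift; the transition from x = -1 to x = +1 is the rare event.
    def b(x):
        return x - x**3

    T, N = 10.0, 200              # time horizon and number of time steps (illustrative)
    dt = T / N
    x_a, x_b = -1.0, 1.0          # pinned endpoints: the two stable states

    def action(interior):
        # Reassemble the full path with the fixed endpoints attached.
        phi = np.concatenate(([x_a], interior, [x_b]))
        dphi = np.diff(phi) / dt                     # finite-difference velocity
        drift = b(0.5 * (phi[:-1] + phi[1:]))        # drift evaluated at interval midpoints
        return 0.5 * dt * np.sum((dphi - drift)**2)  # discretized Freidlin-Wentzell action

    # Straight line between the endpoints as the initial guess.
    phi0 = np.linspace(x_a, x_b, N + 1)[1:-1]
    res = minimize(action, phi0, method="L-BFGS-B")

    print("approximate minimal action:", res.fun)    # close to 2*DeltaV = 0.5 for this potential

For this gradient system the minimal action should come out near twice the potential barrier, 2*DeltaV = 0.5. The article's algorithms target the harder settings named in the abstract, such as degenerate or multiplicative forcing and infinite-dimensional systems, where this direct discretize-and-minimize approach needs refinement.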

Authors

Tobias Grafke, Eric Vanden-Eijnden
