4.6 Article

Large deviation principle for a stochastic process with random reinforced relocations

Publisher

IOP Publishing Ltd
DOI: 10.1088/1742-5468/aceb50

Keywords

random walk with random relocations; reinforcement; memory; large deviations; quenched

Abstract

Stochastic processes with random reinforced relocations have been introduced in a series of papers by Boyer and co-authors (Boyer and Solis Salas 2014, Boyer and Pineda 2016, Boyer, Evans and Majumdar 2017) to model animal foraging behaviour. Such a process evolves as a Markov process, except at random relocation times, when it jumps to its value at a time chosen at random from its whole past according to a 'memory kernel'. We prove a quenched large deviation principle for the value of the process at large times. The difficulty in proving this result comes from the fact that the relocations make the process non-Markovian, and that the random inter-relocation times act as a random environment.
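To make the relocation mechanism described in the abstract concrete, here is a minimal simulation sketch in Python of a discrete-time variant with a uniform memory kernel and nearest-neighbour steps, in the spirit of the Boyer-Solis Salas model. The relocation probability q, the function name simulate_relocating_walk, and the choice of base walk are illustrative assumptions; the paper's setting (random inter-relocation times forming a random environment, a general memory kernel) may differ.

```python
import random

def simulate_relocating_walk(n_steps, q=0.1, seed=0):
    """Simulate a 1D random walk with reinforced relocations (illustrative sketch).

    At each step, with probability q the walker jumps back to its position
    at a uniformly chosen past time (uniform memory kernel); otherwise it
    takes an ordinary +/-1 random-walk step.
    """
    rng = random.Random(seed)
    path = [0]  # full history is kept, since relocations look into the whole past
    for _ in range(n_steps):
        if rng.random() < q:
            # relocation: pick a time uniformly at random from the whole past
            past_time = rng.randrange(len(path))
            path.append(path[past_time])
        else:
            # ordinary Markovian step
            path.append(path[-1] + rng.choice((-1, 1)))
    return path

if __name__ == "__main__":
    walk = simulate_relocating_walk(10_000, q=0.05)
    print("final position:", walk[-1])
```

The more often the walker relocates (larger q), the more strongly it is pulled back towards previously visited positions, which is the reinforcement effect the abstract refers to.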
