Journal
FRONTIERS IN ARTIFICIAL INTELLIGENCE
Volume 2, Issue -, Pages -
Publisher
FRONTIERS MEDIA SA
DOI: 10.3389/frai.2019.00018
Keywords
bayesian inference; delusions; consciousness; generative adversarial networks; perception
Funding
- Center for Brains, Minds & Machines (CBMM) - NSF STC award [CCF-1231216]
- Alfred P. Sloan Foundation
Abstract
The idea that the brain learns generative models of the world has been widely promulgated. Most approaches have assumed that the brain learns an explicit density model that assigns a probability to each possible state of the world. However, explicit density models are difficult to learn, requiring approximate inference techniques that may find poor solutions. An alternative approach is to learn an implicit density model that can sample from the generative model without evaluating the probabilities of those samples. The implicit model can be trained to fool a discriminator into believing that the samples are real. This is the idea behind generative adversarial algorithms, which have proven adept at learning realistic generative models. This paper develops an adversarial framework for probabilistic computation in the brain. It first considers how generative adversarial algorithms overcome some of the problems that vex prior theories based on explicit density models. It then discusses the psychological and neural evidence for this framework, as well as how the breakdown of the generator and discriminator could lead to delusions observed in some mental disorders.
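The adversarial training idea the abstract describes can be illustrated with a toy sketch: a generator defines an implicit density (it can only produce samples, never evaluate probabilities), and it is trained purely through the feedback of a discriminator that tries to tell its samples apart from real data. The code below is a minimal, hand-rolled 1D illustration of this scheme, not the paper's model; all parameter names, hyperparameters, and the choice of a Gaussian toy dataset are assumptions made for clarity.

```python
import numpy as np

# Real data: samples from N(3, 1). The implicit generator maps noise
# z ~ N(0, 1) through x = a*z + b, so matching the data only requires
# b to drift toward 3; no density is ever evaluated.
rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

a, b = 1.0, 0.0      # generator parameters (illustrative starting values)
w, c = 0.1, 0.0      # discriminator parameters (logistic regression)
lr = 0.05
batch = 64

for step in range(2000):
    real = rng.normal(3.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)),
    # i.e. learn to call real samples "real" and generated ones "fake".
    p_real = sigmoid(w * real + c)
    p_fake = sigmoid(w * fake + c)
    w += lr * np.mean((1 - p_real) * real - p_fake * fake)
    c += lr * np.mean((1 - p_real) - p_fake)

    # Generator: gradient ascent on the non-saturating objective
    # log D(fake) -- it improves only via the discriminator's verdict.
    p_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - p_fake) * w * z)
    b += lr * np.mean((1 - p_fake) * w)

# After training, the generator's mean b should have drifted from 0
# toward the data mean of 3.
print(b)
```

The design choice worth noting is that the generator's update uses only the discriminator's gradient signal, never the data likelihood; this is what makes the density model "implicit" in the sense contrasted with explicit density models in the abstract.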