Article

Reward is enough

Journal

ARTIFICIAL INTELLIGENCE
Volume 299

Publisher

ELSEVIER
DOI: 10.1016/j.artint.2021.103535

Keywords

Artificial intelligence; Artificial general intelligence; Reinforcement learning; Reward

Abstract

In this article we hypothesise that intelligence, and its associated abilities, can be understood as subserving the maximisation of reward. Accordingly, reward is enough to drive behaviour that exhibits abilities studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language, generalisation and imitation. This is in contrast to the view that specialised problem formulations are needed for each ability, based on other signals or objectives. Furthermore, we suggest that agents that learn through trial and error experience to maximise reward could learn behaviour that exhibits most if not all of these abilities, and therefore that powerful reinforcement learning agents could constitute a solution to artificial general intelligence. © 2021 The Authors. Published by Elsevier B.V.
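The "trial and error experience to maximise reward" the abstract refers to is the standard reinforcement learning setting. A minimal sketch of that idea, using tabular Q-learning on an illustrative chain world (the environment, names, and parameters here are our own illustrative choices, not from the paper):

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9,
               epsilon=0.1, max_steps=200, seed=0):
    """Tabular Q-learning on a toy chain world: states 0..n_states-1,
    actions 0 (left) and 1 (right); only reaching the rightmost state
    pays reward 1. Purely illustrative of reward maximisation."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]

    def greedy(s):
        # break ties randomly so early episodes still explore the chain
        if q[s][0] == q[s][1]:
            return rng.randrange(2)
        return 0 if q[s][0] > q[s][1] else 1

    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):
            # epsilon-greedy: occasional random trial, otherwise exploit
            a = rng.randrange(2) if rng.random() < epsilon else greedy(s)
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # nudge the value estimate toward the reward-maximising return
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if s == n_states - 1:
                break
    return q

q = q_learning()
```

After training, the greedy policy derived from `q` heads right, toward the reward, from every non-terminal state; nothing task-specific was programmed beyond the reward signal, which is the spirit of the hypothesis.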
