Confidence resets reveal hierarchical adaptive learning in humans
Micha Heilbron and
Florent Meyniel
PLOS Computational Biology, 2019, vol. 15, issue 4, 1-24
Abstract:
Hierarchical processing is pervasive in the brain, but its computational significance for learning under uncertainty is disputed. On the one hand, hierarchical models provide an optimal framework and are becoming increasingly popular for studying cognition. On the other hand, non-hierarchical (flat) models remain influential and can learn efficiently, even in uncertain and changing environments. Here, we show that previously proposed hallmarks of hierarchical learning, which relied on reports of learned quantities or choices in simple experiments, are insufficient to categorically distinguish hierarchical from flat models. Instead, we present a novel test which leverages a more complex task, whose hierarchical structure allows generalization between different statistics tracked in parallel. We use reports of confidence to quantitatively and qualitatively arbitrate between the two accounts of learning. Our results support the hierarchical learning framework, and demonstrate how confidence can be a useful metric in learning theory.
Author summary:
Learning and predicting in everyday life are made difficult by the fact that our world is both uncertain (e.g. will it rain tonight?) and changing (e.g. climate change shakes up the weather). When a change occurs, what has been learned must be revised: learning should therefore be flexible. One possibility that ensures flexibility is to constantly forget the remote past and rely on recent observations. This solution is computationally cheap but effective, and is at the core of many popular learning algorithms. Another possibility is to monitor the occurrence of changes themselves, and revise what has been learned accordingly. This solution requires a hierarchical representation, in which some factors, such as changes, modify other aspects of learning. It is computationally more complicated, but it allows more sophisticated inferences.
Here, we provide a direct way to test experimentally whether or not learners use a hierarchical learning strategy. Our results show that humans revise their beliefs and the confidence they hold in their beliefs in a way that is only compatible with hierarchical inference. Our results contribute to the characterization of the putative algorithms our brain may use to learn, and the neural network models that may implement these algorithms.
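The two strategies contrasted in the summary can be sketched as toy estimators of a Bernoulli probability (an illustrative simplification, not the actual models compared in the paper): a flat delta-rule learner whose fixed learning rate implements constant forgetting, and a count-based learner with an explicit change monitor that resets its beliefs when recent data contradict them. The `window` and `threshold` parameters of the second learner are assumptions of this sketch.

```python
def flat_estimate(observations, learning_rate=0.1):
    """Delta-rule estimate of a Bernoulli probability.

    A fixed learning rate implements forgetting: the estimate is an
    exponentially weighted average, so recent observations dominate and
    the learner adapts to changes without ever representing them.
    """
    p = 0.5  # uninformative starting estimate
    for x in observations:
        p += learning_rate * (x - p)  # move a fixed fraction toward each datum
    return p


def hierarchical_estimate(observations, window=10, threshold=0.5):
    """Count-based estimate with an explicit change monitor.

    Beta pseudo-counts (a, b) summarise all evidence since the last
    suspected change; when a recent window of data disagrees strongly with
    the running estimate, the counts are reset and relearned from the
    window alone -- a crude stand-in for the confidence resets of full
    hierarchical inference.
    """
    a = b = 1.0   # Beta(1, 1) prior pseudo-counts
    recent = []   # sliding window used by the change monitor
    for x in observations:
        recent.append(x)
        if len(recent) > window:
            recent.pop(0)
        p = a / (a + b)
        if len(recent) == window and abs(sum(recent) / window - p) > threshold:
            # Suspected change: discard old beliefs, relearn from the window
            # (the current observation is already included in `recent`).
            a = b = 1.0
            for r in recent:
                a += r
                b += 1 - r
        else:
            a += x       # ordinary Bayesian count update
            b += 1 - x
    return a / (a + b)


# A stable run of 1s, then an unsignalled switch to 0s:
print(flat_estimate([1] * 50))                     # close to 1
print(flat_estimate([1] * 50 + [0] * 10))          # forgetting pulls it down
print(hierarchical_estimate([1] * 50 + [0] * 10))  # reset, relearned from window
```

Both learners end up adapting, but for different reasons: the flat learner adapts because it never stopped forgetting, whereas the hierarchical learner detects the change and deliberately discards its old beliefs.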
Date: 2019
Downloads: (external link)
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006972 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 06972&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1006972
DOI: 10.1371/journal.pcbi.1006972
More articles in PLOS Computational Biology from Public Library of Science