Deep Reinforcement Learning in a Monetary Model
Mingli Chen, Andreas Joseph, Michael Kumhof, Xinlei Pan and Xuan Zhou
Papers from arXiv.org
Abstract:
We propose using deep reinforcement learning to solve dynamic stochastic general equilibrium models. Agents are represented by deep artificial neural networks and learn to solve their dynamic optimisation problem by interacting with the model environment, of which they have no a priori knowledge. Deep reinforcement learning offers a flexible yet principled way to model bounded rationality within this general class of models. We apply our proposed approach to a classical model from the adaptive learning literature in macroeconomics that studies the interaction of monetary and fiscal policy. We find that, contrary to adaptive learning, the artificially intelligent household can solve the model in all policy regimes.
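To make the approach concrete, the following is a minimal sketch of a deep reinforcement learning household, not the paper's implementation: a small neural network maps the household's state to a distribution over actions and is trained with REINFORCE policy gradients purely from interaction, with no prior knowledge of the transition law. The one-asset consumption-savings environment, log utility, the Beta consumption-share policy, and all hyper-parameters below are assumptions introduced for this sketch; the paper's actual environment is a full monetary DSGE model.

```python
# Illustrative sketch only: a household learns a consumption policy by
# trial and error (REINFORCE). Environment and hyper-parameters are
# assumptions of this sketch, not taken from the paper.
import math
import random

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Beta

BETA_DISC, R, T = 0.97, 1.02, 60   # discount factor, gross return, horizon


class Household(nn.Module):
    """Policy network: maps wealth to a Beta distribution over the
    share of wealth consumed this period."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))

    def forward(self, wealth):
        params = F.softplus(self.net(wealth)) + 1e-3   # positive concentrations
        return Beta(params[0], params[1])              # support is (0, 1)


def run_episode(policy):
    """Simulate one lifetime; the agent never sees the transition law."""
    wealth, logps, rewards = 10.0, [], []
    for _ in range(T):
        dist = policy(torch.tensor([wealth / 10.0]))   # crude input scaling
        share = dist.sample()                          # consumption share
        logps.append(dist.log_prob(share))
        consumption = float(share) * wealth + 1e-8
        rewards.append(math.log(consumption))          # per-period log utility
        income = 1.0 + 0.1 * random.gauss(0.0, 1.0)    # stochastic endowment
        wealth = R * (wealth - float(share) * wealth) + income
    return logps, rewards


policy = Household()
opt = torch.optim.Adam(policy.parameters(), lr=3e-3)
for step in range(2000):
    logps, rewards = run_episode(policy)
    returns, g = [], 0.0
    for r in reversed(rewards):                        # discounted returns-to-go
        g = r + BETA_DISC * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # baseline
    loss = -(torch.stack(logps) * returns).sum()       # REINFORCE objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A faithful replication would replace the toy savings environment with the paper's monetary model, including its monetary and fiscal policy rules, and train the household under each policy regime.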
Date: 2021-04, Revised 2023-01
New Economics Papers: this item is included in nep-big, nep-cba, nep-cmp, nep-dge and nep-mon
Citations: 7 (in EconPapers)
Downloads: http://arxiv.org/pdf/2104.09368 (latest version, PDF)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2104.09368