My article “An Agentic Perspective in Experimental Economics” has been published in MDPI Games. The article outlines my position on behavioural economics, suggesting that critiques of neoclassical approaches regarding nonrationality require specifying both the behavioural deviation and the mechanism by which it scales into aggregate outcomes.

What distinguishes Economics from the rest of social science is its commitment to “reductionism” (known in Economics as “methodological individualism”). Here I simply describe the straightforward workflow from the identification of behavioural anomalies to economic relevance: see Gabaix (2020) for a “high-level” application, and read the article for more “micro” cases.

This paper is mostly a literature review, while the perspective is based on my EA Forum posts (this and this) on artificial intelligence, and it aligns with the literature on “taming” the curse of dimensionality. See Jesús Fernández-Villaverde here and here for a macroeconomics viewpoint; in my view the problem is even more acute in Game Theory. Compared with the half-baked pre-print version, this is a far better-informed piece (thanks to the referees!), so if you liked the pre-print, please read this version. The conclusions:

Mainstream experimental economics is characterized by its focus on theory testing and “treatment effects” on aggregate outcomes. The “agentic” alternative is concerned with the econometric specification of individual behavior. In this study, first, a literature review of agentic experimental economics was provided. Furthermore, a stylized workflow was proposed to produce and validate an econometric estimation of individual behavior based on experimental data, detailed as follows:

(i) create a baseline (“optimal”) behavioral benchmark (via analytical means or reinforcement learning) for the considered multi-agent game,
(ii) conduct experiments with human subjects,
(iii) use the experimental results to characterize the (heterogeneous) deviations from baseline behavior, and
(iv) re-run the experiment with artificial agents calibrated in the previous step and compare the outcomes of the artificial and the human experiment.

When the outcomes of the human experiment closely match those of the experiment conducted with calibrated artificial agents, we consider the “human version” of the multi-agent game solved.
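The four-step workflow can be sketched on a toy example. The sketch below uses a p-beauty-contest game (guess 2/3 of the average, with Nash benchmark 0); the game, the “human” data, and the one-parameter deviation model are all hypothetical illustrations chosen for brevity, not the article’s actual estimation:

```python
import random
random.seed(0)

# Toy p-beauty-contest game: the target is p times the average guess.
# All numbers below (including the "human" guesses) are hypothetical.

def play_round(guesses, p=2/3):
    """Aggregate outcome of one round: the target value."""
    return p * sum(guesses) / len(guesses)

# (i) Baseline benchmark: fully rational agents play the Nash equilibrium.
def baseline_agent():
    return 0.0

# (ii) Stand-in for human experimental data (hypothetical level-k-style guesses).
human_guesses = [33.0, 22.0, 50.0, 14.0, 25.0]

# (iii) Characterize the deviation from baseline: here, a single
# "mean deviation from Nash" parameter estimated from the human data.
mean_deviation = sum(human_guesses) / len(human_guesses)

def calibrated_agent():
    # Artificial agent reproducing the estimated deviation, with noise.
    return max(0.0, random.gauss(mean_deviation, 5.0))

# (iv) Re-run the game with calibrated artificial agents and compare outcomes.
artificial_guesses = [calibrated_agent() for _ in range(len(human_guesses))]
human_target = play_round(human_guesses)
artificial_target = play_round(artificial_guesses)
print(round(human_target, 1), round(artificial_target, 1))
```

If the two aggregate outcomes are close (as they are here by construction, since the artificial agents were calibrated on the same data), step (iv) counts as a successful validation in the sense of the workflow.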

To date, the most successful econometric specifications for individual behavior have been based on individual evolutionary learning (IEL), where, first, a finite number of simple heuristics are identified and then used in a discrete choice model, where the probability of switching to a heuristic in the next period is proportional to its historical performance in terms of utility. This success could be related to cognitive limitations leading to a strategy of heuristic simplification by participants in most experimental settings.
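The discrete-choice step can be illustrated with a logit (softmax) rule over historical heuristic payoffs, a common way to operationalize “probability proportional to performance.” The heuristic names, payoffs, and sensitivity parameter below are hypothetical placeholders, not estimates from the literature:

```python
import math

# Hypothetical historical average utilities for three simple heuristics.
avg_payoff = {"imitate_best": 1.2, "best_reply": 1.5, "stay": 0.4}

def choice_probabilities(payoffs, beta=2.0):
    """Logit choice rule: higher historical payoff -> higher choice probability.

    beta is the (hypothetical) sensitivity parameter; beta -> 0 gives uniform
    random choice, large beta approaches always picking the best heuristic.
    """
    weights = {h: math.exp(beta * u) for h, u in payoffs.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

probs = choice_probabilities(avg_payoff)
chosen = max(probs, key=probs.get)
print(chosen, round(probs[chosen], 3))
```

Estimating such a specification from experimental data amounts to fitting `beta` (and the heuristic set) so that the simulated choice frequencies match the observed ones.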

Deep reinforcement learning tames the curse of dimensionality, enabling more advanced modeling in economics and game theory. Additionally, the availability of superhuman artificial agents creates benchmarks for human performance, revealing the “human gap” in experimental contexts. Finally, experimental methods can play a prominent role in integrating human and machine intelligence.
