This paper considers learning rules for environments in which little prior and feedback information is available to the decision maker. Two properties of such learning rules are studied: absolute expediency and monotonicity. Both require that some aspect of the decision maker's performance improve from the current period to the next. The paper provides some necessary and some sufficient conditions for these properties. It turns out that there is a large variety of learning rules that have these properties. However, all such learning rules are related to the replicator dynamics of evolutionary game theory. For the case in which there are only two actions, it is shown that one of the absolutely expedient learning rules dominates all others.
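The link to the replicator dynamics can be illustrated with a Cross-type reinforcement rule, a standard example in this literature whose expected one-step motion takes the replicator form. The sketch below is not the paper's formal model: the three-action setup and the payoff values are hypothetical, and payoffs are assumed to lie in [0, 1] so that updated probabilities stay valid.

```python
import random

def cross_update(probs, chosen, payoff):
    """Cross-type reinforcement update (payoffs assumed to lie in [0, 1]).
    The chosen action's probability moves toward 1 by a fraction equal to
    the realized payoff; all other probabilities shrink proportionally."""
    return [
        p + payoff * (1 - p) if i == chosen else (1 - payoff) * p
        for i, p in enumerate(probs)
    ]

def expected_motion(probs, expected_payoffs):
    """Expected one-step change of each probability under the rule above.
    It reduces to the discrete replicator form: p_i * (u_i - average payoff)."""
    avg = sum(p * u for p, u in zip(probs, expected_payoffs))
    return [p * (u - avg) for p, u in zip(probs, expected_payoffs)]

if __name__ == "__main__":
    probs = [0.5, 0.3, 0.2]     # current mixed strategy over three actions (hypothetical)
    payoffs = [0.8, 0.5, 0.2]   # hypothetical expected payoffs in [0, 1]

    # One realized update: sample an action, observe its payoff.
    chosen = random.choices(range(3), weights=probs)[0]
    print("after one draw:", cross_update(probs, chosen, payoffs[chosen]))

    # Averaging the update over actions yields the replicator direction.
    print("expected motion:", expected_motion(probs, payoffs))
```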
MLA
Börgers, Tilman, et al. “Expedient and Monotone Learning Rules.” Econometrica, vol. 72, no. 2, Econometric Society, 2004, pp. 383-405, https://doi.org/10.1111/j.1468-0262.2004.00495.x.
Chicago
Börgers, Tilman, Antonio J. Morales, and Rajiv Sarin. “Expedient and Monotone Learning Rules.” Econometrica 72, no. 2 (2004): 383-405. https://doi.org/10.1111/j.1468-0262.2004.00495.x.
APA
Börgers, T., Morales, A. J., & Sarin, R. (2004). Expedient and Monotone Learning Rules. Econometrica, 72(2), 383-405. https://doi.org/10.1111/j.1468-0262.2004.00495.x
By clicking the "Accept" button or continuing to browse our site, you agree to first-party and session-only cookies being stored on your device. Cookies are used to optimize your experience and anonymously analyze website performance and traffic.