Learning with Limited Memory: Bayesianism vs Heuristics

Wednesday 3 February 2021, 1.00pm to 2.00pm

Speaker(s): Tai-Wei Hu (Bristol)

Host: Zaifu Yang

Abstract: We study the classical sequential hypothesis testing problem (Wald, 1947), but add a memory constraint modelled by finite automata. Generically, the Bayesian optimum is impossible to implement with any finite-state automaton. We introduce stochastic finite-state automata and prove structural theorems about the constrained optimal rules. In a model of breakthroughs, where one especially informative signal fully reveals the state of nature while the other signals provide only imperfect information, we show that randomization is strictly optimal whenever the memory constraint binds and the optimum requires some learning. For information structures without fully revealing signals, we give conditions under which the optimal finite automaton uses simple approximation rules to update beliefs with qualitative probabilities.
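
To fix ideas, the sketch below illustrates the general setting of a stochastic finite-state automaton performing binary sequential hypothesis testing. It is not the construction from the talk: the state count, signal distributions, and transition probabilities (e.g. P_MOVE, SIGNAL_PROB_HIGH) are hypothetical choices made only for illustration.

```python
import random

# Illustrative sketch only: a stochastic finite-state automaton for binary
# sequential hypothesis testing. All parameters below are hypothetical.

H0, H1 = "H0", "H1"

# Under each hypothesis, a "high" signal occurs with a different probability
# (imperfectly informative signals).
SIGNAL_PROB_HIGH = {H0: 0.3, H1: 0.7}

# Memory states 0..4; state 0 decides H0, state 4 decides H1 (both absorbing).
N_STATES = 5
DECIDE_H0, DECIDE_H1 = 0, N_STATES - 1

# Stochastic transition rule: move one step toward the decision suggested by
# the signal with probability P_MOVE, otherwise stay put. The randomization is
# what distinguishes this from a deterministic automaton with the same memory.
P_MOVE = 0.8


def step(state: int, signal_high: bool, rng: random.Random) -> int:
    """One transition of the (hypothetical) stochastic automaton."""
    if state in (DECIDE_H0, DECIDE_H1):
        return state  # decision states are absorbing
    if rng.random() < P_MOVE:
        return state + 1 if signal_high else state - 1
    return state


def run(true_hypothesis: str, start: int = N_STATES // 2, seed: int = 0) -> str:
    """Feed i.i.d. signals drawn under the true hypothesis until a decision."""
    rng = random.Random(seed)
    state = start
    while state not in (DECIDE_H0, DECIDE_H1):
        signal_high = rng.random() < SIGNAL_PROB_HIGH[true_hypothesis]
        state = step(state, signal_high, rng)
    return H0 if state == DECIDE_H0 else H1


if __name__ == "__main__":
    decisions = [run(H1, seed=s) for s in range(1000)]
    print("Fraction of correct decisions under H1:", decisions.count(H1) / len(decisions))
```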


Location: Zoom (details to follow)

Admission: All welcome