We present new algorithms for solving Partially Observable Markov Decision Processes (POMDPs). The general idea is to use information from recent past events to build an extended state space, thus defining a new Markovian process that can be solved more easily. We formalize these notions by introducing the concept of an "exhaustive observable" and give two theorems underlying the algorithms. Experiments are then conducted with each algorithm to demonstrate its validity.
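The general idea of augmenting observations with recent history can be illustrated with a minimal sketch. This is not the paper's algorithm (which selects relevant past events rather than a fixed window); the `HistoryStateWrapper` class, its `k`-sized window, and all method names below are illustrative assumptions.

```python
from collections import deque


class HistoryStateWrapper:
    """Illustrative sketch: extend a partially observable process with a
    sliding window of the k most recent observations, so that the extended
    state carries enough past information to behave (more) Markovian.
    The paper's algorithms instead select relevant past events; a fixed
    window is only the simplest instance of the idea.
    """

    def __init__(self, k):
        self.k = k
        self.history = deque(maxlen=k)  # keeps only the k latest observations

    def reset(self, first_obs):
        """Start a new episode from an initial observation."""
        self.history.clear()
        self.history.append(first_obs)
        return self.state()

    def observe(self, obs):
        """Record a new observation and return the extended state."""
        self.history.append(obs)
        return self.state()

    def state(self):
        """Extended state: a fixed-length tuple, left-padded with None
        while fewer than k observations have been seen."""
        padded = [None] * (self.k - len(self.history)) + list(self.history)
        return tuple(padded)


# Example: with k=3, the extended state is the last three observations.
w = HistoryStateWrapper(3)
print(w.reset("a"))      # (None, None, 'a')
print(w.observe("b"))    # (None, 'a', 'b')
print(w.observe("c"))    # ('a', 'b', 'c')
print(w.observe("d"))    # ('b', 'c', 'd')
```

A standard MDP solver can then be applied over these extended states, at the cost of a state space that grows with the window size.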
Keywords: Reinforcement Learning, Planning
Citation: Alain Dutech: Solving POMDPs Using Selected Past Events. In W. Horn (ed.): ECAI 2000, Proceedings of the 14th European Conference on Artificial Intelligence, IOS Press, Amsterdam, 2000, pp. 281-285.