Stefan Arnborg, Gunnar Sjödin
Bayesianism makes a strong normative claim for uncertainty management, but it is not undisputed. Most foundational justifications involve some regularity assumption (continuity of an auxiliary function) that has recently been criticized, particularly for finite models. We show how such assumptions can be replaced by precise and weaker assumptions for finite but extensible domains. That these assumptions are weaker than those used in alternative justifications is shown by their inadequacy for infinite domains; they are also more compelling. We propose the following common-sense assumptions and prove them sufficient:

Refinability: If we have already split a particular statement into sub-cases, by adding new statements implying it, it should always be possible to refine another statement in the same way, and with the same plausibilities in the new refinement. This modification should not lead to inconsistency.

Information Independence: If a statement is refined by several new symbols, it should be possible to declare them information independent, so that knowledge of one does not affect the plausibility of another.

Any real-valued, strictly monotone, and consistent plausibility measure on a finite model satisfying these assumptions is equivalent to probability. For any non-equivalent measure, there is a finite extension that is inconsistent. We conjecture that a similar analysis applies to Lindley's betting-paradigm justification of probability as the canonical uncertainty measure.
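To make the two assumptions concrete, here is a minimal sketch (not the paper's formalism; the numeric plausibilities and symbol names are invented for illustration) showing that probability itself satisfies them: a statement can be refined into sub-cases whose plausibilities sum to its own, and a refinement by two information-independent symbols yields a consistent joint in which conditioning on one symbol leaves the other's plausibility unchanged.

```python
from itertools import product

# A statement A with plausibility (here: probability) 0.6 -- illustrative value.
p_A = 0.6

# Refinability: split A into sub-cases A1, A2 with chosen plausibilities
# that sum to that of A; the refined model stays consistent.
p_A1, p_A2 = 0.4, 0.2
assert abs((p_A1 + p_A2) - p_A) < 1e-12

# Information independence: refine A by two new symbols X and Y, declared
# independent given A, so their joint conditional plausibility factorizes.
p_X_given_A = {"x1": 0.7, "x2": 0.3}
p_Y_given_A = {"y1": 0.5, "y2": 0.5}
joint = {(x, y): px * py
         for (x, px), (y, py) in product(p_X_given_A.items(),
                                         p_Y_given_A.items())}

# Consistency: the sub-case plausibilities sum to 1 given A, and
# conditioning on Y = y1 leaves the plausibility of X = x1 unchanged.
print(round(sum(joint.values()), 10))                      # 1.0
print(round(joint[("x1", "y1")] / p_Y_given_A["y1"], 10))  # 0.7
```

Under a non-probabilistic (non-equivalent) measure, the paper's result says some finite refinement of this kind must eventually produce an inconsistency, which is what singles out probability.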
Keywords: Uncertainty in AI, Philosophical Foundations, Bayesian Learning
Citation: Stefan Arnborg, Gunnar Sjödin: Bayes Rules in Finite Models. In W. Horn (ed.): ECAI 2000, Proceedings of the 14th European Conference on Artificial Intelligence, IOS Press, Amsterdam, 2000, pp. 571-575.