On Tuesday, January 24, 2012, from 4 p.m. to 6 p.m., the new monthly LogiCIC seminar series (organized within the ERC project on “The Logical Structure of Correlated Information Change”) will start: Kevin T. Kelly and Hanti Lin from Carnegie Mellon University will present their work (program below).
The seminar will take place in Science Park 904, room A1.10.
16:00-16:50 Kevin T. Kelly (joint with Hanti Lin), “Propositional Reasoning that Tracks Probabilistic Reasoning”
16:50-17:10 Coffee Break
17:10-18:00 Hanti Lin (joint with Kevin T. Kelly), “Uncertain Acceptance and Contextual Dependence on Questions”
Kevin T. Kelly: Propositional Reasoning that Tracks Probabilistic Reasoning
This paper concerns the extent to which propositional reasoning can track probabilistic reasoning, which addresses kinematic problems that extend the familiar Lottery paradox. An acceptance rule (Leitgeb 2010) assigns to each Bayesian credal state p a propositional belief revision method B_p, which specifies an initial belief state B_p(\top) that is revised into the new propositional belief state B_p(E) upon receipt of information E. The acceptance rule *tracks* Bayesian conditioning when B_p(E) = B_{p|E}(\top) for every E such that p(E) > 0; namely, when acceptance followed by propositional belief revision equals Bayesian conditioning followed by acceptance. Standard proposals for acceptance and belief revision do not track Bayesian conditioning. The “Lockean” rule that accepts propositions above a probability threshold is subject to the familiar lottery paradox (Kyburg 1961), and we show that it is also subject to new and more stubborn paradoxes when the tracking property is taken into account. Moreover, we show that the familiar AGM approach to belief revision (Harper 1975 and Alchourrón, Gärdenfors, and Makinson 1985) cannot be realized in a sensible way by an acceptance rule that tracks Bayesian conditioning. Finally, we present a plausible, alternative approach that tracks Bayesian conditioning and avoids all of the paradoxes. It combines an odds-based acceptance rule proposed originally by Levi (1996) with a non-AGM belief revision method proposed originally by Shoham (1987). As an application, the Lottery paradox turns out to receive a new solution motivated by dynamic concerns.
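The lottery paradox mentioned above is easy to reproduce concretely. The following sketch (my own illustration, not code from the talk; the ticket count and threshold are hypothetical) shows the Lockean rule accepting "ticket i loses" for every ticket while rejecting their conjunction:

```python
# Illustrative sketch of the lottery paradox under the Lockean rule.
# N and t are made-up values for the example.

from fractions import Fraction

N = 100               # number of lottery tickets (hypothetical)
t = Fraction(9, 10)   # Lockean acceptance threshold (hypothetical)

# Worlds: world i means "ticket i wins"; uniform credence over worlds.
p = {i: Fraction(1, N) for i in range(N)}

def prob(event):
    """Probability of an event, modeled as a set of worlds."""
    return sum(p[w] for w in event)

def lockean_accepts(event):
    """Lockean rule: accept iff probability strictly exceeds t."""
    return prob(event) > t

# "Ticket i loses" is the set of all worlds except i.
loses = {i: set(range(N)) - {i} for i in range(N)}

# Each individual "ticket i loses" has probability 99/100 > 9/10: accepted.
assert all(lockean_accepts(loses[i]) for i in range(N))

# But their conjunction ("no ticket wins") is the empty event: probability 0.
conjunction = set.intersection(*loses.values())
assert prob(conjunction) == 0
assert not lockean_accepts(conjunction)
```

So the accepted propositions are not closed under conjunction: the rule accepts each member of an inconsistent family.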
Hanti Lin: Uncertain Acceptance and Contextual Dependence on Questions
The preface paradox goes like this: an author may argue for a thesis in each chapter of her book, but in the preface she does not want to be committed to the conjunction of all theses, allowing for the possibility of error. The paradox illustrates a problem about acceptance of uncertain propositions across questions: for each chapter, there is the binary question whether its conclusion is correct; the preface asks a more complex question, namely, which theses are correct. The paradox is that asking for more can yield less. This paper addresses the extent to which acceptance of uncertain propositions depends on the question in context, by providing two impossibility results, formulated as follows. Let uncertainty be modeled by subjective probability. Understand a *question* as having potential, complete answers that are mutually exclusive and jointly exhaustive; understand *answers* as disjunctions of complete answers. Assume that accepted answers within each question are closed under entailment. Assume, further, that acceptance is *sensible* in the sense that a contradiction is never accepted, that answers held with certainty are always accepted, and that every answer can be accepted without certainty. Then, as our first result, it is impossible that acceptance is *independent of questions*, namely, that if a proposition is accepted as an answer to a question, then it is accepted in every question to which it is an answer.
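The "asking for more can yield less" phenomenon can be made numerically concrete. The following is my own toy illustration (the chapter count, credences, and threshold are invented, and independence of the theses is assumed), using a simple Lockean rule applied within each question:

```python
# Hypothetical illustration of question-dependence in the preface paradox.
# Numbers are made up; theses are assumed probabilistically independent.

n = 10    # chapters/theses in the book
q = 0.9   # credence in each individual thesis
t = 0.8   # Lockean acceptance threshold, applied within each question

# Coarse binary question for chapter i: "is thesis i correct?"
# P(thesis i) = 0.9 > 0.8, so each thesis is accepted on its own.
assert q > t

# Fine question asked by the preface: "which theses are correct?"
# Its complete answers are the 2**n truth-value assignments; the answer
# "all theses are correct" is the conjunction, with probability q**n.
p_conjunction = q ** n
assert p_conjunction < t   # 0.9**10 is about 0.349, well below 0.8
```

Each thesis is accepted as an answer to its own binary question, yet the conjunction is not accepted as an answer to the preface's finer question: asking the more refined question yields a weaker accepted answer.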
In light of the preceding result, one might settle on a weaker sense of question-independence. Say that a question is *refined* by another question if and only if each answer to the former question continues to be an answer to the latter question. As a weakening of question-independence, *refinement-monotonicity* requires that when an answer is accepted in a question, that answer is also accepted in every question that refines it. But refinement-monotonicity is too strong to be plausible, because, by our second result, it is inconsistent with two intuitive principles for reasoning within each individual question. These two principles are: *cautious monotonicity* (i.e., do not retract accepted propositions when you learn what you already accept), and *case reasoning* (i.e., accept a proposition if it would be accepted no matter whether information E or its negation is learned), where information learning is assumed to follow the Bayesian ideal of conditioning.
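Both within-question principles can be stated operationally and spot-checked. The sketch below is my own reconstruction (not code from the talk) of an odds-based acceptance rule in the spirit of Levi (1996): within a question, keep a complete answer iff its probability is at least t times that of the most probable answer, and accept every proposition entailed by the kept set. Random credal states are then sampled to test cautious monotonicity and case reasoning under Bayesian conditioning; the threshold value is a hypothetical choice.

```python
# Property check (illustrative reconstruction, not the talk's exact rule):
# an odds-based acceptance rule and the two within-question principles.

import random

random.seed(0)
t = 0.5   # odds threshold (hypothetical value, 0 < t <= 1)

def kept(p, E):
    """Kept complete answers after conditioning on nonempty event E.

    Conditioning preserves odds ratios among the answers in E, so the
    rule keeps i in E iff p[i] is at least t times the max over E.
    """
    m = max(p[i] for i in E)
    return {i for i in E if p[i] >= t * m}

for _ in range(1000):
    n = random.randint(2, 6)
    w = [random.random() for _ in range(n)]
    s = sum(w)
    p = [x / s for x in w]          # random credal state over n answers
    top = set(range(n))
    E = {i for i in top if random.random() < 0.5}
    if not E or E == top:
        continue
    notE = top - E
    # Cautious monotonicity: if E is already accepted (kept set entails E),
    # then conditioning on E retracts nothing: kept(E) is within kept(top).
    if kept(p, top) <= E:
        assert kept(p, E) <= kept(p, top)
    # Case reasoning: whatever is accepted both given E and given not-E was
    # already accepted: kept(top) lies within kept(E) union kept(not-E).
    assert kept(p, top) <= kept(p, E) | kept(p, notE)
```

The check passing is of course no proof, but it shows concretely how the two principles constrain a rule once learning is modeled as conditioning within a fixed question.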