News

LogiCIC mini-workshop on Formal Epistemology

***Please note that the mini-workshop will take place after the questions and answers session of the tutorial by Branden Fitelson on Coherence***

On Tuesday, June 10, we will have a joint LogiCIC/LIRa mini-workshop on Formal Epistemology.

First of all, we will have a special “Questions and answers” session as part of Branden Fitelson’s tutorial on Coherence, from 13:00 to 14:30.

Then we will have three talks of half an hour each (with 10-minute breaks in between) by Jason Konek, Ben Levinstein, and Krzysztof Mierzewski.

See below for the schedule of the afternoon, titles and abstracts.

Everyone is cordially invited!

—————————————————————-
Date and Time: Tuesday, June 10, 2014, 13:00-17:30
Venue: Science Park 107, Room F1.15
—————————————————————-

Time: 13:00-14:30
Speaker: Branden Fitelson (Rutgers University)
Questions and answers session of the Coherence Tutorial

Time: 15:00-15:40
Speaker: Jason Konek (University of Bristol)
Title: Non-additive scoring rules for comparative belief
Abstract:
According to Bayesian orthodoxy, an agent’s comparative beliefs must be representable by a probability function p, in the sense that she thinks that X is no more likely than Y (or X is strictly less likely than Y) only if p(X) is less than or equal to p(Y) (or p(X) is strictly less than p(Y)). Inspired by the accuracy-dominance arguments of Joyce (1998, 2009), Predd et al. (2009) and Schervish et al. (2009), Fitelson and McCarthy (2014) ground coherence requirements for comparative belief by showing that violating these requirements amounts to squandering accuracy. But their requirements are weaker than the Bayesian’s. In this talk, we propose amending one of Fitelson and McCarthy’s core assumptions, in hopes of providing an accuracy-dominance argument for full-throated Bayesian orthodoxy about comparative belief. In particular, Fitelson and McCarthy restrict their attention to “additive” measures of inaccuracy, which first (i) measure the inaccuracy of individual judgments between pairs of propositions (X ≺ Y, X ≈ Y or X ≻ Y) and then (ii) weigh up these individual scores in some way, to provide a “summary statistic” which captures the agent’s comparative belief ordering’s overall accuracy. We will motivate and explore inaccuracy measures for comparative belief that do not take this additive form.
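For readers who want the condition spelt out, the Bayesian representability requirement described above can be stated as follows. This is a minimal formalisation: the symbol ⪯ for “no more likely than” is our shorthand and does not appear in the abstract itself.

```latex
% Probabilistic representability of a comparative belief ordering.
% X \preceq Y reads "X is no more likely than Y"; X \prec Y reads
% "X is strictly less likely than Y"; p is a probability function.
\[
  X \preceq Y \;\Rightarrow\; p(X) \le p(Y),
  \qquad
  X \prec Y \;\Rightarrow\; p(X) < p(Y).
\]
```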

Time: 15:50-16:30
Speakers: Jason Konek and Ben Levinstein (University of Bristol)
Title: The Foundations of Epistemic Decision Theory
Abstract:
According to accuracy-first epistemology, accuracy is the fundamental epistemic good. Epistemic norms — Probabilism, Conditionalization, the Principal Principle, etc. — have their binding force in virtue of helping to secure this good. To make this idea precise, accuracy-firsters invoke Epistemic Decision Theory (EpDT) to determine which epistemic policies are the best means toward the end of accuracy. Hilary Greaves and others have recently challenged the tenability of this programme. Their arguments purport to show that EpDT encourages obviously epistemically irrational behavior. We develop firmer conceptual foundations for EpDT. First, we detail a theory of praxic and epistemic good. Then we show that, in light of their very different good-making features, EpDT will evaluate epistemic states and epistemic acts according to different criteria. So, in general, rational preference over states and acts won’t agree. Finally, we argue that based on direction-of-fit considerations, it’s preferences over the former that matter for normative epistemology, and that EpDT, properly spelt out, arrives at the correct verdicts in a range of putative problem cases.
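The abstract does not commit to any particular measure of accuracy; as a rough illustration of the kind of quantity EpDT works with, here is the Brier score, a standard inaccuracy measure in the accuracy-first literature, together with its expectation. The notation below is ours, not the speakers’.

```latex
% Brier inaccuracy of a credence function c at a world w, over a finite
% set of propositions F; w(X) = 1 if X is true at w and 0 otherwise.
\[
  \mathcal{I}_B(c, w) \;=\; \sum_{X \in \mathcal{F}} \bigl(c(X) - w(X)\bigr)^2
\]
% Expected inaccuracy of c relative to a probability function P; this is
% the sort of quantity an epistemic decision theory asks an agent to minimise.
\[
  \mathrm{E}_P\bigl[\mathcal{I}_B(c, \cdot)\bigr] \;=\; \sum_{w} P(w)\, \mathcal{I}_B(c, w)
\]
```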

Time: 16:40-17:20
Speaker: Krzysztof Mierzewski (University of Amsterdam)
Title: Bridging Bayesian Probability and AGM Revision via Stability Principles
(joint work with Alexandru Baltag)
Abstract:
This talk concerns the relationship between probabilistic (Bayesian) and qualitative (AGM-based) models of belief dynamics. I address the question of how AGM belief revision operators can be related to Bayesian conditioning, in order to flesh out some (in)compatibilities between the Bayesian and AGM-based formalisms.
This is done by analysing the behaviour of acceptance rules, which map probabilistic credal states to qualitative representations of belief. Given an acceptance rule, the ideal of compatibility between Bayesian conditioning and qualitative revision is embodied by the tracking property, which imposes a commutativity requirement to ensure that conditioning and revision agree modulo the acceptance map.
I focus on an acceptance rule based on the notion of stably high probability, due to Leitgeb. As a consequence of a ‘No-Go’ theorem by Lin & Kelly, Leitgeb’s rule does not allow AGM revision to track conditioning. Nonetheless, given this rule’s inherent attractiveness as an acceptance principle and its close connection to AGM revision, I consider some ways in which one may circumvent the No-Go Theorem and use the rule so as to approximate agreement between AGM revision and Bayesian conditioning.
One rather natural such method – threshold-raising – fails, which poses some difficulties for the ‘peace project’ between Bayesian and AGM-compliant operators. However, another interesting connection exists: I show that there is a sense in which AGM revision derives from (1) Leitgeb’s rule, (2) Bayesian conditioning, and (3) a version of the maximum entropy principle. This suggests that one could study qualitative revision operators as special cases of Bayesian reasoning which naturally arise in situations of information loss or incomplete probabilistic specification of the agent’s credal state.
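For reference, here is one standard way to write down the two notions this abstract turns on: the tracking property (the commutativity requirement mentioned above) and Leitgeb’s stably-high-probability rule. The acceptance map Acc and the threshold r are our notation, and the formulations below are a sketch rather than the speaker’s own definitions.

```latex
% Tracking: for an acceptance map Acc from credal states to belief sets
% and an AGM revision operator *, conditioning and revision commute.
\[
  \mathrm{Acc}\bigl(P(\cdot \mid E)\bigr) \;=\; \mathrm{Acc}(P) * E
  \quad \text{for all } E \text{ with } P(E) > 0.
\]
% Leitgeb's rule: accept H iff H has stably high probability, i.e. H keeps
% probability above a threshold r (typically r = 1/2) when conditioning on
% any evidence consistent with H.
\[
  P(H \mid E) > r
  \quad \text{for every } E \text{ with } E \cap H \neq \emptyset
  \text{ and } P(E) > 0.
\]
```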