On Thursday, January 12, we will have a special session on Logic and Learning Theory with the following program:
10:00 – 10:15  Coffee

10:15 – 11:00  Johan van Benthem (Amsterdam & Stanford)  Learning Theory Meets Temporal Logics of Agency.
The semantic arena of learning theory is shared with many
systems of epistemic and doxastic temporal logic. I will look
at some logical aspects of this encounter, making reference
to work by Degremont, Gierasimczuk, Hendricks, and Kelly.

11:00 – 11:45  Jan van Eijck (Amsterdam)  Probabilistic Epistemic Logic and Concept Learning
(joint work with Shalom Lappin) 
11:45 – 13:15  Lunch  
13:15 – 14:15  Kevin Kelly (Pittsburgh)  A Learning-Theoretic Derivation of Ockham's Razor.
Formal learning theory studies methods that converge to the truth and, more importantly, the complexity-theoretic necessary and sufficient conditions under which convergence to the truth is possible. One standard objection to convergence to the truth as a foundation for the philosophy of science is that it imposes no constraints whatever on what a scientist should say in the short run. For consider any method that converges to the truth. Modify it in any arbitrary way up to stage one billion, and the resulting method also converges to the truth. Therefore, it is now thought that stronger assumptions, such as Bayesian updating, are required to explain scientific method. One crucial feature of scientific method is a systematic bias toward simpler theories. Bayesians account for that bias in terms of low-information prior probabilities. But that explanation is subtly circular: it does not assume that simpler theories are more probable, but it does essentially involve the assumption that simpler possibilities are more probable. Any satisfactory explanation of Ockham's razor must avoid circular appeal to prior simplicity biases. I will argue that formal learning theory can provide such an explanation if convergent inductive methods are required, in addition, to be as deductive as any convergent solution to a given theory choice problem can possibly be. Optimal deductiveness is explicated in terms of two familiar features of deductive inference: monotonicity (refusal to retract earlier conclusions) and patience (refusal to decide matters that future information is guaranteed to decide). The proposed explanation requires some careful attention to the topological structure of empirical simplicity.
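To make the simplicity bias concrete, here is a minimal sketch (not Kelly's formal construction) of the standard "counting" illustration: theories say how many distinct empirical effects will ever appear, and the Ockham method conjectures the simplest theory consistent with the data, namely the number of distinct effects seen so far. The effect names and the stream below are invented for illustration.

```python
# Illustrative sketch of an Ockham method in the "counting" problem:
# theories are natural numbers ("exactly k effects will ever appear"),
# and the method conjectures the simplest theory consistent with the
# data -- the number of distinct effects observed so far. A method that
# leaps ahead of the data can be forced into extra retractions, which is
# the sense in which the Ockham method is optimally deductive.

def ockham_conjecture(observations):
    """Simplest theory consistent with the data: count of distinct effects."""
    return len(set(observations))

stream = ["e1", "e1", "e2", "e2", "e3"]  # hypothetical effect stream
conjectures = [ockham_conjecture(stream[:i + 1]) for i in range(len(stream))]
print(conjectures)  # [1, 1, 2, 2, 3]
```

Note that the conjecture only ever rises, and rises only when the data force it: the method never retracts to a smaller count, mirroring the monotonicity and patience properties discussed above.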
14:15 – 14:50  Nina Gierasimczuk (Amsterdam)  A Learning-Theoretic Analysis of Iterated Belief Revision.
In this talk we propose a way to use the framework of learning theory to evaluate belief-revision policies. On the inductive inference side, we are interested in the paradigm of language learning. As the possible concepts to be inferred we take sets of atomic propositions; receiving new data therefore corresponds to learning facts. On the side of belief revision we follow the lines of dynamic epistemic logic (see van Benthem, 2007). Hence, we interpret the agent's current beliefs (the hypothesis) as the content of those possible worlds that the agent considers most plausible. A revision not only results in a change of the current hypothesis, but can also induce a modification of the agent's plausibility order. We are mainly concerned with identifiability in the limit (Gold, 1967).
The results obtained in this approach mostly concern the conditions for universality of a belief-revision policy (i.e., for a belief-revision method to be as powerful as full identification in the limit). This leads to identifying factors that influence the (non-)universality of a belief-revision policy: the prior conditions for belief revision (e.g., standard belief-revision models); the type of incoming information (e.g., entirely truthful as opposed to partially erroneous); and properties of belief-revision-based learning functions (e.g., conservatism). Overall, our results can be interpreted as showing that applying certain types of belief-revision rules in certain contexts can be analyzed in terms of whether they can be relied upon in the 'quest for the truth' (the analysis of inductive inference in terms of reliability was first provided by Kelly, 1996). In our framework we can naturally treat the procedural aspect of iterated belief revision, address intermediate stages of such iterations, and relate them to the ultimate success of a belief-revision policy.
The results presented in this talk come from joint work with Alexandru Baltag and Sonja Smets (Baltag et al., 2011).
References:
Baltag, A., Gierasimczuk, N., Smets, S. (2011). Belief Revision as a Truth-Tracking Process. In: Krzysztof R. Apt (Ed.), Proceedings of the 13th Conference on Theoretical Aspects of Rationality and Knowledge (TARK 2011), ACM.
van Benthem, J. (2007). Dynamic logic for belief revision. Journal of Applied Non-Classical Logics, 17(2):129–155.
Gold, E. M. (1967). Language identification in the limit. Information and Control, 10:447–474.
Kelly, K. (1996). The Logic of Reliable Inquiry. Oxford University Press, Oxford.
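Since the abstract takes concepts to be sets of atomic propositions and aims at identification in the limit (Gold, 1967), here is a minimal sketch of that paradigm for finite concepts. The propositions and the data stream are invented for illustration; the learner conjectures exactly the facts seen so far, and for a finite concept whose facts all appear in the stream, this conjecture stabilizes on the true concept after finitely many steps.

```python
# Minimal sketch of identification in the limit (Gold, 1967) for finite
# concepts given as sets of atomic propositions. The (conservative)
# learner conjectures the set of facts observed so far; on a stream in
# which every true fact eventually appears, the conjecture converges.

def learner(observed):
    """Conservative learner: conjecture exactly the facts seen so far."""
    return frozenset(observed)

def run_learner(stream):
    """Feed the stream to the learner and return its conjecture history."""
    observed, history = [], []
    for fact in stream:
        observed.append(fact)
        history.append(learner(observed))
    return history

concept = frozenset({"p", "q", "r"})           # hypothetical target concept
stream = ["p", "q", "p", "r", "q", "p"]        # every fact appears eventually
history = run_learner(stream)
print(history[-1] == concept)  # True: the conjecture has converged
```

The point of the limit criterion is that convergence is guaranteed only eventually: the learner may be wrong at any finite stage before all facts have appeared, but after they have, it never changes its mind again.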

14:50 – 15:25  Sonja Smets (Amsterdam)  The Landscape of Epistemic Topology: shapes and types of “knowing”, “learning” and “answering” a question. 
15:25 – 16:00  Alexandru Baltag (Amsterdam)  The Truth-Tracking Power and the Limits of (Bayesian and Qualitative) Conditioning.
16:00 – 16:15  Coffee Break  
16:15 – 17:00  Peter Grunwald (Amsterdam & Leiden)  Generalization to Unseen Cases: Good-Turing provides a free lunch, if you're lucky.
We analyze the generalization error of learning algorithms on unseen cases, i.e., data that are strictly different from those in the training set. This off-training-set error may differ significantly from the standard generalization error. We derive a data-dependent bound on the difference between the two notions. Our result is based on extending the missing-mass estimator developed by Jack Good and Alan Turing during World War II. In light of these results, we argue that certain claims made in the No Free Lunch literature ('no learning algorithm is inherently superior to any other') are overly pessimistic. Based on joint work with T. Roos, P. Myllymaki and H. Tirri.
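For readers unfamiliar with the Good-Turing missing-mass estimator mentioned in the abstract, here is a minimal sketch. The classical estimate of the total probability mass of items never seen in a sample of size n is N1/n, where N1 is the number of items observed exactly once; the sample below is invented for illustration.

```python
from collections import Counter

# Good-Turing missing-mass estimate: the total probability of all items
# *not* observed in a sample of size n is estimated by N1/n, where N1 is
# the number of distinct items observed exactly once (the singletons).

def missing_mass_estimate(sample):
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)  # number of singletons
    return n1 / len(sample)

sample = ["a", "b", "a", "c", "d", "a", "b", "e"]  # hypothetical sample
# singletons: c, d, e  ->  N1 = 3, n = 8
print(missing_mass_estimate(sample))  # 0.375
```

The intuition is that items seen exactly once are a proxy for how often genuinely new items keep turning up; a sample full of singletons suggests much of the distribution remains unseen.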
17:00 – 18:00  Discussion 
The session will take place in room F1.02 at Science Park 904.