News

LIRa session: Caleb Schultz Kisby

Speaker: Caleb Schultz Kisby (Indiana University, Bloomington)

Date and Time: Thursday, September 5th 2024, 16:30-18:00

Venue: online

Title: The Modeling Power of Neural Networks

Abstract. Neural networks are very good at learning without human guidance, yet they’re also known for making blunders that seem silly from the point of view of logic. (And this situation hasn’t changed, despite modern neural network systems like GPT-4.) This is a long-standing problem in artificial intelligence: how can we better understand and control neural networks using logic? In response, there have been countless proposals for “neuro-symbolic” systems that incorporate logic into neural networks, or vice versa.

In this talk I will present one such proposal that is close to the hearts of modal and epistemic logicians: Treat (binary) neural networks as a class of models in modal logic by (1) adding a valuation of propositions (as sets of neurons), and (2) interpreting ◇φ as the forward propagation (or diffusion) of input φ through the net. We can then do “business as usual,” using neural networks as our models. To cement this idea, I will compare the modeling power of neural networks with other classes of models, in particular: relational, plausibility, neighborhood, and social network models. If time permits, I will mention recent work in which we “dynamify” this logic, in the spirit of modeling neural network update and learning.
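To make the semantics sketched above concrete, here is a minimal Python illustration (my own sketch, not code from the talk): a binary threshold network, a valuation assigning propositions to sets of neurons, and ◇φ interpreted as the forward propagation of the input set ⟦φ⟧ through the net to a fixpoint. The network structure, the threshold rule, and all names here are assumptions for illustration only.

```python
def propagate(edges, threshold, active):
    """Forward-propagate a set of active neurons to a fixpoint.

    edges:     dict neuron -> list of (predecessor, weight)
    threshold: dict neuron -> activation threshold
    active:    initial set of active neurons (the input signal)
    """
    active = set(active)
    changed = True
    while changed:
        changed = False
        for n, preds in edges.items():
            if n not in active:
                incoming = sum(w for (m, w) in preds if m in active)
                if incoming >= threshold[n]:
                    active.add(n)
                    changed = True
    return active

# A tiny binary net: neurons a and b jointly feed c; c feeds d.
edges = {"a": [], "b": [], "c": [("a", 1.0), ("b", 1.0)], "d": [("c", 1.0)]}
threshold = {"a": 1.0, "b": 1.0, "c": 2.0, "d": 1.0}

# Valuation: propositions interpreted as sets of neurons.
V = {"p": {"a", "b"}}

# Diffusion modality: [[<>p]] is the propagation of [[p]] through the net.
diamond_p = propagate(edges, threshold, V["p"])
print(sorted(diamond_p))  # ['a', 'b', 'c', 'd']
```

With this reading, a neuron satisfies ◇p exactly when it lies in the diffusion of the set of neurons where p holds, so the usual modal machinery applies directly to the network.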

This talk is based on joint work (in progress) with Saúl Blanco and Larry Moss. Our work on the dynamics of neural network update appears in AAAI 2024 and FLAIRS 2022.