The Logic of Conceivability


Understanding conceivability


1. What conceivability is, why it matters


“Conceiving” and “imagining” refer here to a range of intentional phenomena. Intentionality is the feature of those mental states that are directed at – and involve the representation of – configurations of objects, situations, or circumstances. One key feature of the human mind is its ability to conceive or imagine rich and detailed alternatives to actuality in order to gain information through them. We cannot experience in advance which scenarios we are or will be facing in reality. So we explore them by modelling them in our mind, leaving our perceptions “offline”: How will the financial markets react if Greece defaults? What would you do if you failed your logic class? Would Mr. Jones show the symptoms he shows, had he taken arsenic? A vast literature on counterfactual imagination in cognitive science (Kahneman et al. [1982], Byrne [2005]) shows how such an activity expands our cognitive skills and practical performances.

But what is its logic? Research on intentionality in logic – the theory of valid inference – flourished via modal logic’s possible worlds semantics. After Hintikka [1962], the analysis of intentional-representational mental states (such as knowledge, belief, information) via modal logic was taken up by philosophy, linguistics, and Artificial Intelligence (Meyer & van der Hoek [1995]). The key insight was: cognitive agent x ®’s that P, with ® the relevant mental state (knows, believes, is informed), when P holds throughout a set of possible worlds (scenarios, circumstances): those compatible with x’s evidence, beliefs, etc. Accessibility relations single out such worlds: the accessible worlds are the scenarios x entertains. Let R be one such accessibility: “wRw1” means “World w1 is an epistemic/doxastic possibility from the viewpoint of world w”. Let “®P” be “It is represented [believed, known] that P”. The (non-agent-indexed) truth conditions for ® are (“iff” being short for “if and only if”):

  • ‘®P’ is true at w iff P is true at all w1, such that wRw1.
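This clause can be sketched in a few lines of Python. The model below is a toy illustration, not any existing system: the worlds, the relation R, and the valuation are all made-up names.

```python
# Toy Kripke-style model of the clause above. All names (worlds, R,
# valuation) are illustrative, not part of any existing system.

R = {("w", "w1"), ("w", "w2")}           # wRw1 and wRw2
valuation = {                            # which atomic formulae hold where
    "w": set(),
    "w1": {"P"},
    "w2": {"P", "Q"},
}

def accessible(w):
    """The scenarios the agent entertains from w."""
    return {v for (u, v) in R if u == w}

def represents(P, w):
    """'®P' is true at w iff P is true at all w1 such that wRw1."""
    return all(P in valuation[v] for v in accessible(w))

print(represents("P", "w"))   # True: P holds at both accessible worlds
print(represents("Q", "w"))   # False: Q fails at w1
```

The agent's state is thus fixed entirely by which worlds are accessible: whatever holds throughout them counts as represented.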

This approach, however, faces problems that have prompted an enormous amount of research over the last forty years. The problems stem from interpreting representational mental states via quantification over possible worlds (scenarios, circumstances) that are maximally consistent and logically closed.


2. Open problems


2.1. Logical omniscience

In the standard framework agents represent (know, believe, are informed of) all the logical consequences of what they represent (Closure under consequence: If ®P, and P entails Q, then ®Q). All logically valid formulae are represented (Validity: If P is valid, then ®P). And contradictory mental states are banned (Consistency: ~(®P & ®~P)). Such principles deliver idealized models, having little to do with human intelligence (Fagin et al. [1995]). We experience having inconsistent beliefs. Excluded Middle, P v ~P, is (let us suppose) valid, but intuitionist logicians do not believe it. We know basic arithmetic truths like Peano’s postulates; and these entail (suppose) Goldbach’s Conjecture; but we don’t know whether Goldbach’s Conjecture is true.
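The structural source of Closure can be made vivid with a toy model in which propositions are sets of worlds, as is standard in possible worlds semantics. All names and world-sets below are illustrative.

```python
# Why Closure is built in: propositions are sets of worlds, and the
# representation operator just checks set inclusion. If P entails Q
# (every P-world is a Q-world), then any set of entertained worlds that
# settles P also settles Q -- the agent performs no reasoning at all.
# Illustrative toy model only.

entertained = {"w1", "w2"}               # worlds accessible to the agent

P_worlds = {"w1", "w2"}                  # proposition P as a set of worlds
Q_worlds = {"w1", "w2", "w3"}            # P entails Q: P_worlds <= Q_worlds

def represents(prop):
    return entertained <= prop           # prop holds at every accessible world

assert P_worlds <= Q_worlds              # P entails Q
assert represents(P_worlds)              # it is represented that P
assert represents(Q_worlds)              # hence Q is represented, for free
```

The final assertion can never fail once the first two hold: that is logical omniscience as a theorem of the semantics, not a psychological hypothesis.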


2.2. Information overload

The information overload issue (Jago [2014]), a version of the classic Bar-Hillel-Carnap paradox (Floridi [2004]), is a key problem of semantic information theory. Classical theories like Carnap and Bar-Hillel’s take the informative content of P to consist in how it partitions the totality of possibilities: that agent x is informed that P means that x rules out the non-P worlds. One who is informed that it snows rules out the scenarios where it does not snow.

But then unrestrictedly necessary propositions, like logical and mathematical truths, are all identified as the total set of worlds. So they are uninformative: they rule out no possibility. This is highly implausible: mathematical proofs can obviously be informative. “x^n + y^n = z^n has no solutions in positive integers for n larger than two”: it took a 130-page proof to establish such a clearly informative result.


2.3. The Dilemma

Intentional mental states draw distinctions between intensionally (necessarily) equivalent contents: ®P may differ from ®Q even if P and Q are necessarily equivalent. The possible worlds apparatus cannot draw such hyperintensional distinctions. Some approaches to hyperintensionality, then, give up worlds semantics altogether (Jago [2014] includes a critical survey). One alternative strategy (Rantala [1982], Priest [2005]) expands the world machinery by adding non-normal or impossible worlds. What are these? If possible worlds are ways things could be, then impossible worlds are ways things could not be: they represent absolute (logical, mathematical) impossibilities as obtaining. Introduced by Kripke [1965], such worlds are taken in epistemic logic as alternatives for imperfect agents. By accessing them, one refutes Validity, Closure, Consistency (e.g. for Closure: take a w where P holds, but P v Q fails: if w is accessible, we have ®P without ®(P v Q), although P classically entails P v Q).

However, if arbitrary worlds are accessible, the approach is idle. Arbitrary worlds correspond to arbitrary sets of formulae. Given world w, let S = {w1 | wRw1} be the set of worlds accessible from w, and let C = {P | P is true at w1 for all w1 ∈ S}, the set of formulae true at all of them. The agent’s state is then a merely syntactic structure: ®P (it is represented that P) iff P ∈ C, and C is just an arbitrary set of formulae. Levesque [1984] therefore selects non-normal worlds closed under a weaker-than-classical consequence relation, as in paraconsistent logics (Berto et al. [2012]), where contradictions do not have arbitrary consequences: ®(P & ~P) does not entail ®Q for arbitrary Q. Thus, paraconsistency has been claimed to model occasionally inconsistent agents.
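Both the naive impossible-worlds move and its cost can be illustrated with a toy sketch in which a world is nothing but a set of formula-strings. The names and formulas below are placeholders, not any author's actual system.

```python
# Naive impossible-worlds semantics, sketched: a world is just an arbitrary
# set of formula-strings. This refutes Closure -- but the agent's state C
# is then a bare syntactic set, which is the "idleness" worry in the text.
# Toy names and formulas throughout.

truths = {
    "w1": {"P"},                 # impossible world: P holds, "P v Q" fails
    "w2": {"P", "R"},
}
S = {"w1", "w2"}                 # worlds accessible from some world w

# C: the formulas true at every accessible world
C = set.intersection(*(truths[v] for v in S))

assert "P" in C                  # it is represented that P ...
assert "P v Q" not in C          # ... without P v Q: Closure fails
```

The counterexample to Closure comes for free; but nothing in the model constrains what goes into the sets, which is exactly why the agent's state here is semantically unanalysed.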

But logical omniscience strikes back (Fagin et al. [1995]): representation is now closed under paraconsistent consequence; ®P, and Q’s being a paraconsistent consequence of P, give ®Q. Here is the Dilemma: either mental representations are closed under some logical consequence, or not. If they are, omniscience backfires. If not, any world may be accessible: then mental states are semantically unanalysed. Some AI researchers have conjectured that there is no solution to the Dilemma (Meyer and van der Hoek [1995]: 88-9). But the LoC aims at solving it.


2.4. Conceivability and possibility

One further issue about conceivability concerns the entailment from it to so-called metaphysical or absolute possibility, at the core of “thought experiments” in theoretical philosophy. Modal rationalists (Chalmers [1996]), for instance, move from the conceivability of a functional duplicate of a human devoid of consciousness to its possibility, and from this (via the necessity of identity and difference) to the actual distinction of consciousness from brain faculties. Such thought experiments are often met with scepticism, as wild speculation (Dennett [2005]). The debate lacks a precise formal approach to the phenomenon at issue: human imagination and its logic. The LoC will provide a logical framework within which the connection between imagination and knowledge of absolute possibility can be assessed.

Based on a single, simple insight, now to be introduced, the LoC Project is structured into four sub-projects. These will be described below and will all be led by the PI Franz Berto, together with a team of researchers with complementary scientific expertise: 1) Foundations; 2) Core Theory and applications; 3) Conceivability and possibility; 4) The LoC Book. The project has the fourfold Philosophical objective of solving Problems 1-4 listed above. It also has two Research Dissemination objectives outside philosophy: firstly, to provide computationally tractable models of conceivability for Artificial Intelligence; secondly, to introduce non-academics to its topic.



3. Conceiving, Fast and Slow


The section title plays on Nobel Prize winner Daniel Kahneman’s [2011] book Thinking, Fast and Slow. In spite of its theoretical character, the research will assume as a working hypothesis a distinction from cognitive science, whereby our mind implements two reasoning systems (Stanovich & West [2000]). Our Fast System does not apply formal rules: it is relatively undemanding, context-sensitive, and it integrates what we conceive with background beliefs. Our Slow System is rule-based, analytic, and can be formally trained. A twofold connection emerges between this distinction and conceivability for resource-bound agents:

  1. We conceive of things not logically entailed by what is explicitly included in the act of imagining a scenario. Agent x reads a Conan Doyle novel portraying Sherlock Holmes as a man who is active in London. Given this input, x forms a mental representation of the situation. In reality, London is in Europe and normally endowed men have lungs; but Doyle’s stories (let us assume) do not claim this explicitly. Now x takes such information as holding in the represented situation: x does imagine Holmes as a normally endowed man with lungs, living in Europe. Such integration, typical of the Fast Way, is close to a procedure described in the mental models approach proposed by cognitive scientists (Johnson-Laird & Byrne [1993]): we do not conceive such additional details by inferring them logically from the explicitly given data, but by importing background beliefs and information, which we retain in the non-actual scenario we build a mental representation of.
  2. We do not derive all the logical consequences of our explicit mental representations. Some inconsistencies, for instance, may be too complex to detect. But a rational conceiving agent should draw at least some logical consequences in its imagination: the cognitively reachable ones. This is typical of the Slow Way. David Lewis put the point thus, addressing paraconsistent modelling of inconsistent conceivers:

“Paraconsistent logic [...] allows reasoning about blatantly impossible situations. Whereas what I find myself doing is reasoning about subtly impossible situations, and rejecting suppositions that lead to blatant impossibilities.” (Lewis [2004]: 163)

What counts as “blatant” is context- and agent-dependent: capturing this logically is difficult. Our Fast and Slow conceiving states will be represented as sets of possible and non-normal worlds, logically structured by two kinds of accessibility so far unexplored in epistemic logic research:

  • In the Fast Way, accessibility is shaped by world similarity or closeness: it is Fast-conceived that P when P holds in scenarios where the explicit content holds and information from actuality is preserved in certain ways.
  •  In the Slow Way, accessibility is shaped by logical rules: it is Slow-conceived that P when P holds in scenarios accessed via elementary logical steps starting from the explicitly represented content.
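The Fast Way can be sketched with a toy closeness ranking. The numeric ranks, worlds, and formulas below are placeholders for notions the project will make precise; this is an illustration, not the LoC formalism.

```python
# Fast conceiving, sketched: w F-accesses w1 iff w1 is among the worlds
# closest to how things are represented at w; "Fast-conceived P" holds at w
# iff P holds at every F-accessible world. Closeness is a toy numeric rank.

truths = {
    "w1": {"wins_lottery", "has_lungs"},   # close: background facts preserved
    "w2": {"wins_lottery", "has_lungs"},
    "w3": {"wins_lottery"},                # remote: drops background facts
}
rank = {"w1": 1, "w2": 1, "w3": 7}         # lower = closer to actuality

def f_accessible(worlds=("w1", "w2", "w3")):
    """The closest worlds among those where the explicit content holds."""
    best = min(rank[v] for v in worlds)
    return {v for v in worlds if rank[v] == best}

def fast_conceived(P):
    return all(P in truths[v] for v in f_accessible())

assert fast_conceived("wins_lottery")      # the explicit content
assert fast_conceived("has_lungs")         # imported background information
```

The second assertion is the distinctive Fast feature: background information not entailed by the explicit content nonetheless holds throughout the accessed worlds, because the remote world that drops it is filtered out by closeness.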

Once formally developed, this will solve Problems 1 to 3 from Section 2, and will pave the way to dealing with Problem 4: (a) in both Ways, the conceiving states are not reducible to arbitrary sets of formulas. Unlike in the naïve impossible worlds approach, it is not the case that arbitrary worlds are accessible: blatantly impossible scenarios, and scenarios too unlike actuality, are ruled out. But (b) in neither Way do we have logical omniscience and overload. This insight will be systematically developed in the LoC’s four sub-projects, which we now outline.



4. Sub-Projects


1. Foundations (post-doc 1)

The LoC research faces foundational issues. Addressing them is the goal of Sub-Project 1. This will not directly deal with the Four Problems, but will lay the philosophical ground for the other sub-projects, which will solve them. It will be carried out by the PI Franz Berto and by one post-doctoral fellow with expertise in logic, ontology, and foundational philosophy. It will investigate two main issues:

  1. The metaphysics of non-normal worlds. The formal framework of the LoC will use non-normal worlds. But what are these? Some logicians have a deflationary attitude on worlds in general, taking them as points in frames where formulae are evaluated. For applications, e.g., in Artificial Intelligence, this may be enough. But not in philosophy. The first task for Sub-Project 1 is to account for the ontological status of worlds, normal or not.
  2. World similarity and counterfactual imagination. Fast conceivability relies on world closeness or similarity (subjectively understood as plausibility): in our Fast conceiving, we rule out worlds not similar enough to how we take the actual world to be. Stalnaker as well as Lewis [1973] used world similarity to provide semantics for ceteris paribus conditionals: “If it were the case that P, then it would be the case that Q” is true at world w iff the world(s) most similar to w where P holds also make Q true. Fast conceivability works accordingly: we conceive a situation making the antecedent P true, and develop the conception in our imagination, to check whether Q obtains. However, world similarity has often been criticized as a desperately vague and problematic notion. The second task of Sub-Project 1 is to provide an innovative treatment of similarity for non-normal worlds, developing an account first sketched in Berto [2014].
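The Lewis-Stalnaker clause that item 2 appeals to can be sketched with a toy similarity ordering. The numeric ranks stand in for the very notion of closeness Sub-Project 1 aims to clarify; all names are illustrative.

```python
# "If it were the case that P, it would be the case that Q" is true at w
# iff the closest P-world(s) to w make Q true. Similarity is a toy numeric
# rank here -- a placeholder for the notion under investigation.

truths = {
    "w1": {"P", "Q"},
    "w2": {"P"},
    "w3": {"Q"},
}
rank = {"w1": 1, "w2": 5, "w3": 2}     # lower = more similar to w

def counterfactual(P, Q):
    p_worlds = [v for v in truths if P in truths[v]]
    if not p_worlds:
        return True                    # vacuously true: no P-worlds at all
    closest = min(p_worlds, key=rank.get)
    return Q in truths[closest]

assert counterfactual("P", "Q")        # closest P-world, w1, makes Q true
assert not counterfactual("P", "R")    # w1 does not make R true
```

Fast conceivability works on the same pattern: the explicitly conceived content plays the role of the antecedent P, and what is Fast-conceived is what holds at the closest worlds verifying it.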


2. Core Theory and applications (post-docs 2 and 3)

This will develop the Core Theory of Slow and Fast conceiving and solve Problems 1-3 above: Logical Omniscience, Information Overload, and the Dilemma. It will also provide formal LoC-models suitable for AI research. It will therefore be carried out by the PI and by two post-doctoral fellows: one, with expertise in cognitive science, will be needed to support the Core Theory in the light of cognitive science results; the other, with expertise in mathematical logic and Artificial Intelligence, will be needed to develop the mathematical applications of the LoC models in computer science.

  1. Core Theory 1: Slow Conceiving. The most innovative results here are due to Jago [2014]. Sub-Project 2 will develop them. Jago uses non-normal worlds not closed under any non-trivial logical consequence. However, his world-spaces discriminate obvious from subtle impossibilities. A limited but rational conceiving agent will rule out blatantly impossible scenarios (worlds where 1 + 1 = 3, or where something is not self-identical), although it may not rule out subtle impossibilities (e.g. Fermat's Last Theorem’s being false). Jago constructs epistemic accessibilities structured by rules of deduction. Let “S” (for Slow) be one such relation. In the simplest version, world w S-accesses world w1 (“wSw1”) iff all the formulas holding at w also hold at w1, plus one formula deduced from formulas holding at w via some elementary inference rule. If |w| = {P1, ..., Pn} is the set of formulas holding at w, wSw1 iff |w1| = |w| U {Q}, where Q follows from a subset of |w| via some rule, say, modus ponens or Conjunction Introduction. Jago’s approach may be developed in the LoC by factoring in substructural logical rules concerning mental premise recombination (substructural logics are rightly labelled as “resource-conscious” logics: see Restall [2000]).
  2. Core Theory 2: Fast Conceiving. In the Fast Way, worlds are ruled out via similarity or closeness. The approach is outlined in Berto [2014]: we integrate non-inferentially the represented content with background information. This brings Fast conceiving close to ceteris paribus conditionals, where the explicit content works like a conditional antecedent. Research on counterfactual imagination in the mental models tradition (Byrne [2005]) explains how this works psychologically: one imagines the antecedent and develops the supposition by importing background information. Let “F” (for Fast) be the relevant accessibility. In the simplest version, world w F-accesses world w1 (“wFw1”) iff w1 is one of the worlds closest to how things are represented at w. “It is Fast-Conceived that P” holds at w iff P holds at all the F-accessible worlds. Agent x imagines winning the lottery; in an accessible world, what x imagines obtains, and things are ceteris paribus as x takes them to be in reality. In spite of its logical character, the Core Theory is based on an insight taken from cognitive science. Therefore, this part of the research will be developed with the help of a cognitive scientist (post-doc 2), who will ensure the consistency of the developed LoC logical theory with its cognitive basis.
  3. Applications: post-doc 3 will instead work with the PI on the formal modelling of the insights in (1) and (2). Artificial Intelligence researchers rightly aim at computationally tractable models, but the computational complexity of Fast and Slow conceivability models remains entirely to be explored. For instance, the system proposed in Berto [2014] is a weak relevant logic, which is decidable. Overall, both positive (tractability) and negative (undecidability, incompleteness, intractability) results are expected.
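The S-relation of Core Theory 1 can be sketched computationally. The toy below implements only one rule, Conjunction Introduction, over formulas-as-strings: it is an illustration of the one-step idea, not Jago's own system.

```python
# Jago-style Slow accessibility, sketched: wSw1 iff |w1| = |w| U {Q}, where
# Q follows from |w| by one elementary rule. The only rule here is
# Conjunction Introduction; formulas are plain strings. Toy model only.

from itertools import permutations

def s_accessible(w):
    """All worlds |w| U {Q}, with Q obtained by one Conjunction Introduction."""
    extensions = []
    for a, b in permutations(sorted(w), 2):
        q = f"({a} & {b})"
        if q not in w:
            extensions.append(w | {q})
    return extensions

w = {"P", "Q"}
for w1 in s_accessible(w):
    print(sorted(w1))
# Each S-accessible world records exactly one cognitively reachable
# deductive step; the space is not closed under full logical consequence.
```

Iterating `s_accessible` yields longer and longer chains of elementary steps, which is how bounded Slow conceiving approximates, without ever collapsing into, closure under consequence.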


3. Conceivability and possibility (PhD candidate)

This will address our final Problem 4 from Section 2. If our imagination is not closed under logical consequence, then what we imagine can be logically impossible: agent x imagines that P, P has Q as a (remote) logical consequence, but x does not imagine that Q. Then the scenario x conceives is not a logical possibility overall. It is an assumption of the LoC that human agents do represent to themselves the impossible. “Impossible” means here: what obtains at no possible world.

But modal rationalists (e.g. Chalmers [1996]) claim that there exist conceivability-possibility links such that, when we conceive (in a certain way) that such-and-such is the case, we establish its (absolute) possibility. This methodology is at the core of thought experiments in armchair philosophy, whereby we supposedly know what is absolutely or metaphysically possible via exercises of imagination. The LoC Project needs to address the issue: conceivability may provide prima facie heuristic evidence of possibility, but we are not prevented from representing absolute impossibilities. Arguing for this requires a survey of contemporary views on the links between conceivability and possibility. As Sub-Project 3 is within the PI’s expertise, a promising PhD student will be hired to write his/her dissertation on this topic under the PI’s supervision.

In particular, the LoC will investigate an alternative approach to the epistemology of metaphysical modality: the counterfactual approach due to Williamson [2007]. Williamson takes our knowledge of metaphysical-absolute modalities to be just a special case of our everyday engagement in counterfactual imagination. On the methodology he outlines, to establish that P is metaphysically necessary we begin by conceiving that ~P in a counterfactual scenario, and develop this in our imagination until we reach an inconsistent conclusion. If we do, the relevant ~P turns out to be a conceived absolute impossibility. Both Fast and Slow conceiving are at work here: we develop our supposition partly via logical inferences, but also via fast similarity judgments.


4. The LoC book

The conclusive part of the LoC research will consist in the PI’s spending the last two years of the project writing a book, which will gather the Project’s ground-breaking results into a general, paradigm-shifting theory. The structure of the book will mirror the structures and sub-structures of Sub-Projects 1, 2, and 3. The book will be submitted for peer-review to a top academic publisher by the end of the research. 





  • Berto F. [2014], “On Conceiving the Inconsistent”, Proceedings of the Aristotelian Society, 114: 101-19.
  • Berto F., Tanaka K., Mares E., Paoli F. (eds.) [2012], Paraconsistency: Logic and Applications, Dordrecht: Springer.
  • Byrne R. [2005], The Rational Imagination, Cambridge, Mass: MIT Press.
  • Chalmers D. [1996], The Conscious Mind, Oxford: Oxford UP.
  • Dennett D. [2005], Sweet Dreams: Philosophical Obstacles to a Science of Consciousness, Cambridge, MA: MIT Press.
  • Fagin R., Halpern J.Y., Moses Y., Vardi M. [1995], Reasoning About Knowledge, Cambridge, Mass: MIT Press.
  • Floridi L. [2004], “Open Problems in the Philosophy of Information”, Metaphilosophy, 35: 554-82.
  • Hintikka J. [1962], Knowledge and Belief: an Introduction to the Logic of the two Notions, Ithaca, N.Y.: Cornell UP.
  • Jago M. [2014], The Impossible: An Essay on Hyperintensionality, Oxford: Oxford UP.
  • Johnson-Laird P., Byrne R. [1993], “Models and Deductive Rationality”, in Rationality, London: Routledge.
  • Kahneman D. [2011], Thinking, Fast and Slow, Macmillan.
  • Kahneman D., Slovic P., Tversky A. [1982], Judgment Under Uncertainty, Cambridge: Cambridge UP.
  • Kripke S. [1965], “Semantical Analysis of Modal Logic II”, in The Theory of Models, Amsterdam: North-Holland: 206-20.
  • Levesque H. [1984], “A Logic of Implicit and Explicit Belief”, National Conference on AI (AAAI-84): 198-202.
  • Lewis D. [1973], Counterfactuals, Oxford: Blackwell.
  • Lewis D. [2004], “Letters”, in Armour-Garb et al. (eds.) [2004], The Law of Non-Contradiction, Oxford: Oxford UP: 176-7.
  • Meyer J.-J. Ch., van der Hoek W. [1995], Epistemic Logic for AI and Computer Science, Cambridge: Cambridge UP.
  • Priest G. [2005], Towards Non-Being: the Logic and Metaphysics of Intentionality, Oxford: Oxford UP.
  • Rantala V. [1982], “Impossible Worlds Semantics and Logical Omniscience”, Acta Philosophica Fennica, 35: 106-15.
  • Restall G. [2000], An Introduction to Substructural Logics, London-New York: Routledge.
  • Stanovich K., West R. [2000], “Advancing the Rationality Debate”, Behavioral and Brain Sciences, 23: 701-17.
  • Williamson T. [2007], The Philosophy of Philosophy, Oxford: Wiley-Blackwell.