A while back, I wrote an essay here called “The Liar Paradox and the Philosophy of Logic.” It focused on the problem that sentences like (1) seem to pose for classical logic (the kind of logic you learn in any standard-issue introductory college class on symbolic logic).

(1) The sentence labeled (1) in the essay “My Preferred Solution to the Liar Paradox” is not true.

One of the basic principles of classical logic is the Law of the Excluded Middle (LEM), which says that every proposition is either true or untrue. Another of those basic principles is the Law of Non-Contradiction (LNC), which says that no proposition is both true and untrue. In classical logic, all inferences from contradictory premises are taken as valid (meaning that they can never take us from true premises to false conclusions), since the underlying assumption is that all contradictions are definitionally false. Conversely, every instance of the LEM is treated as definitionally true, which means it can be validly inferred from any premises whatsoever (or no premises at all).
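For readers who like to see things mechanically, here's a minimal sketch in Python (my own illustration, not anything from the essay's formal apparatus) that checks both laws over the two classical truth values, modeling a proposition's truth value as a Boolean:

```python
# In classical two-valued logic, the LEM and the LNC hold for every
# assignment of a truth value to a proposition p.

def excluded_middle(p: bool) -> bool:
    """LEM: p or not-p."""
    return p or not p

def non_contradiction(p: bool) -> bool:
    """LNC: not (p and not-p)."""
    return not (p and not p)

for p in (True, False):
    assert excluded_middle(p)
    assert non_contradiction(p)

print("LEM and LNC hold for both classical truth values")
```

The Liar's trouble, of course, is precisely that it resists being assigned either value in this tidy way.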

But (1) seems to blow all that up. It looks like, by the LEM, (1) must be either true or untrue. Either way, though, it seems like it would have to be both!

Some high-profile analytic philosophers, like Graham Priest and Hartry Field, solve this problem with extreme forms of logical surgery. Priest thinks sentences like (1) are genuine counter-examples to the LNC, and thus rejects classical logic in favor of an inconsistency-tolerant (or “paraconsistent”) logic in which some contradictions can be true without entailing everything. Field rejects certain instances of the LEM, and thus rejects classical logic in favor of a “paracomplete” alternative.

In the earlier essay, I argued that (a) these solutions are extremely implausible and (b) they don’t actually help with the paradox, since we can construct (1)-like sentences that create problems even on these views. If I’m right about all that (and of course you might disagree!), then where should we go looking for a solution to the paradox?

First, an analogy might be helpful. In an essay here, a couple years ago, I wrote about Russell’s Paradox. To set that one up, I gave a little background on set theory:

Popular-level explanations of set theory almost always start by explaining it wrong. Authors easing readers into the idea of a “set” on p. 1 will say that herds of cattle or piles of stones or packs of wild dogs or clusters of galaxies are examples of “sets” of rocks, cattle, wild dogs, or galaxies. But, at the admitted risk of sounding like an annoying pedant, that’s just wrong. A set is a particular kind of mathematical object—like a number or a point. It’s an abstract collection. The herds and piles and packs and clusters are all concrete collections. To see the difference, start with a particular pack of wild dogs. There are four members of the pack—they’re wild dogs, no one’s given them names, so let’s call them Dog A, Dog B, Dog C, and Dog D. But one day Dog B gets separated from the pack and he doesn’t find his way back. If you’re someone who hates stories where bad things happen to dogs, don’t worry—we’ll give Dog B a happy ending. He’s adopted by a nice family who name him Friedrich and buy him lots of squeaky toys and walk him every day. The same thing happens to Dogs A, C, and D, who are adopted by geographically dispersed human families.

At the beginning of the story, we had a pack of wild dogs. We also had an abstract set whose members were the four members of the pack. At the end of the story, the pack no longer exists, but the set constituted by Dogs A-D (who are now, let’s say, known as Karl, Friedrich, Vladimir, and Leon) still exists because sets are defined by their members and all of the members still exist.

The way the story of set theory is usually told, back in the 19th century there was something called “naive set theory.” (Obviously, they didn’t call it that at the time.) Naive set theorists accepted an axiom that says that for every property, there’s a set of all and only the objects that have that property (or maybe for every description, there’s a set of all and only the objects it describes). So for the property “being a dog,” there’s a Set of All Dogs. For the property of being a set, there’s the Set of All Sets. For the property of being one of the original four members of Led Zeppelin, there’s a set whose only members are Jimmy Page, John Paul Jones, Robert Plant, and John Bonham. You get the idea.

As I acknowledged in that essay, this is a bit of an over-simplification of the actual history of set theory. Some earlier set theorists had more complicated views that don’t quite fit the pat story about “naive set theory.” But that’s good enough for our purposes today. The important point is that, whatever anyone thought before this next part of the story, after it no one accepted the axiom that for every property/description there’s a set of all and only the objects that have that property/match that description.

The key intervention here comes from Bertrand Russell, and to see his point, let’s first consider a few sets that exist if this axiom holds. For the property of being a coffee cup, there’s the set of all coffee cups. For the property of being a set, there’s the set of all sets. For the property of being an iPhone, there’s the set of all iPhones. And for the property of being a set that’s been used as an example in an explanation of Russell’s Paradox, there’s the set of all sets that have been used as examples in explanations of Russell’s Paradox.

For each of these, we can ask, is it a member of itself?

The set of all coffee cups isn’t a coffee cup. (It’s a set—an abstract mathematical object!) So, it’s not a member of itself.

The set of all sets, on the other hand, is a set, so it must be a member of itself. The set of all iPhones is a set rather than an iPhone, so it doesn’t contain itself as a member. But the set of all sets that have been used as examples in explanations of Russell’s Paradox is a set that’s been used as an example in an explanation of Russell’s Paradox. (I just used it as one!) So, it does contain itself as a member.

Now, we can get to Russell’s question: What about the set of all sets that don’t contain themselves as members?

If there is such a set, then by the LEM, it must either contain itself as a member or not. But if it does, it doesn’t, and if it doesn’t, it does. (If you don’t see this, stop reading and take a few minutes to work it through!) Either way, the LNC has been violated.
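If it helps to see the flip-flop as a computation, here's one way to model it in Python. The modeling choice here (treating a "set" as a membership predicate, i.e. a function) is my own illustrative assumption, not part of the set-theoretic story:

```python
import sys
sys.setrecursionlimit(100)  # keep the inevitable loop short

def russell(x):
    """Membership predicate for the Russell 'set':
    x is a member iff x is not a member of itself."""
    return not x(x)

# An ordinary predicate-set settles its membership questions fine:
coffee_cups = lambda x: False   # no set is a coffee cup
print(russell(coffee_cups))     # True: it's not a member of itself

# But asking whether the Russell set contains itself never settles;
# each tentative answer flips to the other.
try:
    russell(russell)
except RecursionError:
    print("no stable answer: russell(russell) loops forever")
```

Each recursive call is one round of "if it does, it doesn't; if it doesn't, it does," so the question never bottoms out.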

One interesting thing to notice about the Liar Paradox and Russell’s Paradox (which are sometimes clumped together as “paradoxes of self-reference”) is that they have a common structure. In both cases, the paradox arises out of two premises:

Something (a set or a proposition) exists which

Has a property (being true or being a member of itself) if and only if it doesn’t have that property.

In the case of Russell’s Paradox, nearly everyone but Graham Priest (who applies his “paraconsistent” solution here too) agrees that the right answer is to reject the existence premise. The set of all sets which do not contain themselves as members (“the Russell Set”) would be a logically impossible object, so it doesn’t exist. That’s why practically all set theorists for the last century and change have rejected the axiom that for every property, there’s a set of all and only the objects that have that property. Instead, they stipulate various more complicated axioms about which sets there are.

If the parallel move to denying the existence of the Russell Set would be denying the existence of Liar Sentences like (1), that seems absurd. They clearly exist. Scroll up and you’ll see one!

Arguably, though, “truth” and “untruth” don’t apply to sentences exactly but to the propositions expressed by those sentences. So, for example, the English sentence “Snow is white” and the German sentence “Schnee ist weiß” are two different sentences, but they express the same proposition. That’s what’s true or false.

Or, take sentence (2), spoken or written by some unknown person in unknown circumstances.

(2) I just got there.

Is (2) true? Who knows? It asserts of whoever spoke it that he or she is in some location at some time, but we don’t know either who said or wrote it or what location they were claiming to be in or when they were claiming to have gotten there.

So, the equivalent of the standard solution to Russell’s Paradox would be to deny that the Liar Proposition exists. A proposition that would be true if and only if it were untrue would be a logically impossible object, like the Russell Set or a round square. So, it doesn’t exist.

This is exactly the solution I argued for in my Ph.D. dissertation. You can find that dissertation online if you really want to, although I vastly prefer the version of the argument I presented in this book for Springer. (The core points are all the same, but I wrote it from scratch several years after I finished grad school, and the several extra years of thinking about it really helped.) The book (Logic Without Gaps or Gluts: How to Solve the Paradoxes Without Sacrificing Classical Logic) was originally written in 2018, although after many delays in revision, it wasn’t published until 2022—by hilarious coincidence, the same week I went on Rogan, which meant that the three hours of my life the most people have ever watched happened the same week as the publication of the book of mine the fewest people are ever going to read, since it’s not about politics or socialism but a fairly obscure academic debate.

A lot of people who are interested in that debate reject the idea of a non-existence solution to the Liar Paradox as an obvious absurdity. After all, they say, (1) is not, like, say:

(3) Blork glork deblork.

Or even, say:

(4) Colorless green ideas sleep furiously.

In his book Syntactic Structures, Noam Chomsky mentions (4) as a sentence that might plausibly be regarded as meaningless despite being composed of real English words arranged according to the rules of correct syntax. Whether it really is meaningless is neither here nor there for my purposes. The point is just that (3) seems obviously meaningless, and you might look at (4) and not feel certain whether you understand it (or even whether there’s anything there to understand), but when you look at (1), you don’t have that experience. You feel certain that you understand what it means.

And, in a sense, you do. The relevant distinction here is one I’m pulling from David Kaplan. The sense of “meaning” in which you know what (1) means is precisely the sense in which you know what (2) means. You grasp its linguistic character. But that’s a different thing from its propositional content—i.e., from what proposition it expresses.

Many people would push back here too and say, of course they understand what proposition it expresses. They know what object the sentence is about (itself) and they know what property is being attributed to that object (untruth).


I disagree because I don’t think truth is a property. I subscribe to a view called “disquotationalism,” according to which the effect of saying, e.g. that the sentence “Snow is white” is true is simply to assert that snow is white. In other words, the effect of adding “is true” to a quoted sentence is just to remove the quotation marks.

There are a few different ways of cashing out the disquotation metaphor. Not all of them would get me where I want to go here. The particular version of disquotationalism I argue for in Logic Without Gaps or Gluts is Content Inheritance Disquotationalism (CID). On CID, “‘Snow is white’ is true” doesn’t attribute some property called “truth” to the quoted sentence. It attributes whiteness to actual snow out there in the external world.1 In other words, sentences of the form “‘P’ is true” inherit 100% of their propositional content from whatever proposition is expressed by P. Sentences of the form “‘P’ is not true” inherit their propositional content from the negation of P.

So, for example, in this series:

(5) Snow is white.

(6) (5) is not true.

(7) (6) is true.

…the propositional content of (6) is “It’s not the case that snow is white” and, since (7) inherits whatever proposition (6) expresses, the propositional content of (7) is also “It’s not the case that snow is white.”

Since (1) is semantically orphaned (there’s no sentence that doesn’t use the truth predicate from which it can inherit its propositional content), I hold that it doesn’t express a proposition at all. It’s as if (2) were formed by accidentally pushing around refrigerator magnets, or by my cat walking over my keyboard, or by, say, me making it up as an example without actually having any person or location at all in mind.

One way of nudging your intuitions in the direction of thinking (1) might not express a proposition is to consider the following pair:

(8) (9) is true.

(9) (8) is true.

Are (8) and (9) true, or untrue? This question seems unanswerable. Nor is this a matter of ignorance. We know everything there is to know about these sentences.2 My view is that there’s nothing to know. It’s not like reading “I am there now” as a text message without knowing who sent it or what they were talking about. There simply never was a proposition expressed by either (8) or (9).
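The "nothing to know" diagnosis can be made concrete with a toy resolver. In the sketch below, the dictionary and function names are my own inventions for illustration (not machinery from the book): each non-ground sentence points at the sentence it talks about, content is inherited down the chain, and "is not true" flips a negation:

```python
# A toy model of content inheritance. Each non-ground sentence
# points at the sentence it talks about; "not-true-of" means the
# content inherited is the negation of that sentence's content.

sentences = {
    5: ("ground", "snow is white"),   # (5) Snow is white.
    6: ("not-true-of", 5),            # (6) (5) is not true.
    7: ("true-of", 6),                # (7) (6) is true.
    8: ("true-of", 9),                # (8) (9) is true.
    9: ("true-of", 8),                # (9) (8) is true.
}

def content(n, negate=False, seen=None):
    """Chase inheritance links down to a ground sentence.

    Returns the inherited content, or None when the chain loops,
    i.e. when the sentence is semantically orphaned."""
    if seen is None:
        seen = set()
    if n in seen:          # an (8)/(9)-style loop: nothing to inherit
        return None
    seen.add(n)
    kind, value = sentences[n]
    if kind == "ground":
        return f"it is not the case that {value}" if negate else value
    return content(value, negate != (kind == "not-true-of"), seen)

print(content(6))  # it is not the case that snow is white
print(content(8))  # None: the (8)/(9) loop never reaches ground content
```

The resolver makes the contrast vivid: (6) eventually bottoms out in a sentence about snow, while (8) and (9) just chase each other, so there is no proposition for either of them to express.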

Hartry Field argues against non-existence solutions with what’s probably my all-time favorite version of the Liar Paradox:

(10) What the least intelligent person in the room right now is thinking is untrue.

…where, of course, (10) is being arrogantly thought by someone who has no idea he’s the least intelligent person in the room right now. Isn’t it a bit absurd to say that, because of this fact that the person thinking it had no way of knowing, (10) is meaningless (even in the sense of lacking propositional content)? If someone even dumber than the person thinking (10) had slipped into the back of the room a few seconds earlier (unbeknownst to the person thinking (10)), and was at that moment thinking some drivel about chemtrails, (10) would be true! Surely, this is ridiculous. It’s one thing to say that facts of which I’m unaware can make a sentence I say or think false, but it seems very odd to say that facts of which I’m unaware can make it meaningless.

I see the force of Field’s intuition. I’m ultimately unmoved, though, because it seems to me that part of the core function of the truth predicate, part of why it’s such a handy thing for a language to include, is that it gives us the power of blind assertion. In other words, it allows us to assert (or deny) things we wouldn’t be able to otherwise, like a particularly devoted Catholic saying “all the stuff the Pope says is true” without knowing what in particular the Pope said. What blind assertion means is that you don’t know which proposition you’re asserting. I don’t see why it couldn’t also mean that you could sometimes not know whether you’re asserting anything at all.

So, in this case, the person thinking what we can call the Arrogant Thought doesn’t know whether he’s asserting the negation of some insane thought about chemtrails, or the negation of “2 + 2 = 4,” or whether there’s no proposition for him to negate at all because his sentence turns out to be semantically orphaned. As Barney Stinson would say, it’s one of the many perils of the blind approach.3

Thanks for reading Philosophy for the People w/Ben Burgis! This post is public so feel free to share it.


1

This means, by the way, that I don’t strictly believe in a correspondence theory of truth, in the sense of believing that truth is the property of corresponding to reality. But CID is a different way of cashing out what I’d think of as the core correspondence claim, which is that propositions are true if and only if reality is the way the propositions say it is.

2

We could deny the LEM, of course, but then you reap all the implausibilities I talked about in the previous essay.

3

Seemed appropriate somehow to close out with a reference that was culturally current way back when I was first developing these arguments in my dissertation!

In any case, that’s enough for today. I’ll be back next week, and back on much more typical ground for this Substack, with a response to some of the claims in this article in Jacobin.

