Conditionals and Implication

The elementary theory of mathematical structures distinguishes between relations, such as equality and order, which may be true or false, and operations, such as addition and subtraction, which yield other objects of the same type. In classical two-valued logic, since the objects of the type are themselves truth values, this distinction between relations and operations becomes invisible, and the two can be treated the same way. Thus the relation of implication, “if P then Q”, can be expressed and even defined as (P -> Q) = (~P v Q). This is known as the material conditional, and it has been much debated and challenged. It continues to be used in classical two-valued logic, however, because it works.

It works because it expresses an ordering relationship between the truth values of P and Q: in mathematical terms, the truth value of P is less than or equal to that of Q. This is more easily understood in reverse: the truth value of Q is greater than or equal to that of P; Q is no less true than P, or Q is at least as true as P. This is a fundamental criterion of valid reasoning. We want a guarantee that if we start with true premises (summarized as P) and use valid reasoning (P -> Q), we can safely draw the conclusion Q without introducing error. This gives a succinct understanding of two facts that often baffle elementary logic students. If a statement P is false, the conditional P -> Q is true, because any other statement at all is at least as true as a known falsehood. If Q is true, the conditional P -> Q is true, because a known truth is at least as true as any other statement. This applies strictly to the truth values and holds regardless of the content of P or Q. In practice, however, conditionals derived this way are useless. If P is known to be false, P -> Q may be true, but it tells us nothing about Q, which may be true or false. If Q is true, P -> Q is true, but we have already reached our conclusion, and P could equally well be true or false.
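The ordering reading can be checked directly in code. A minimal sketch in Python (the function name `implies` and the numeric encoding are mine):

```python
# Two-valued material conditional: P -> Q defined as (~P v Q).
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# Encoding False as 0 and True as 1, the conditional is true exactly
# when the truth value of P is less than or equal to that of Q.
for p in (True, False):
    for q in (True, False):
        assert implies(p, q) == (int(p) <= int(q))
```

The two baffling cases fall out of the ordering: `implies(False, q)` and `implies(p, True)` are true for every choice of the other argument.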

But this only works in the constrained world of two-valued logic. Intuition and experience suggest that these principles are not necessarily true in the wider world of reasoning, which includes uncertainty. For nearly a century, the development of logic beyond the classical has been limited by the lack of a conditional of equal power. This is no longer the case. I consider four cases.

In two-valued logic, the laws of logic are expressed in the form of tautologies, statements that are true regardless of the values of the variables. The law of the excluded middle (P v ~P) is always true, whether P is true or false. The law of non-contradiction ~(P & ~P) is always true. More usefully and importantly, statements such as (P v Q) <=> (Q v P) are always true.
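Such laws can be verified by brute force over all assignments. A sketch in Python (the helper name `is_tautology` and the encoding of the laws as lambdas are mine):

```python
from itertools import product

# A formula is a tautology if it is true under every assignment
# of truth values to its variables.
def is_tautology(formula, nvars):
    return all(formula(*vals) for vals in product((True, False), repeat=nvars))

assert is_tautology(lambda p: p or not p, 1)               # P v ~P
assert is_tautology(lambda p: not (p and not p), 1)        # ~(P & ~P)
assert is_tautology(lambda p, q: (p or q) == (q or p), 2)  # (P v Q) <=> (Q v P)
```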

The first is the straightforward extension of the definition of the material conditional, ~P v Q. This doesn’t work in three values, because even the simplest implication, P -> P, fails: it has the value U whenever P does. Using this definition, one cannot even establish the basic principle “If P, then P” (for instance, “If Harvey is a giant white rabbit, then Harvey is a giant white rabbit”) as a general rule. This is very bad.
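The failure can be seen with a common numeric encoding of the three values, F = 0, U = 1/2, T = 1; the encoding and the function names below are mine:

```python
F, U, T = 0.0, 0.5, 1.0

def neg(p): return 1 - p          # ~P
def disj(p, q): return max(p, q)  # P v Q

# The direct three-valued extension of the material definition: ~P v Q.
def material(p, q): return disj(neg(p), q)

# P -> P is not a tautology: it takes the value U when P does.
assert material(T, T) == T and material(F, F) == T
assert material(U, U) == U
```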

C.I. Lewis, who was much concerned with the deficiencies of the material conditional when applied outside the limited context of classical two-valued logic, proposed a strict conditional, which he defined as “It is not possible for P to be true and Q to be false”, and used it as the basis for modal logic. This also fails in a three-valued system. Lewis assumed it anyway, and this assumption breaks his systems. One cannot use truth-table methods in his systems to establish whether a given formula is valid; whether a formula is a logical law has to be derived by deductive inference from the axioms. Although he was concerned with what he called the paradoxes of the material conditional, thought it unreasonable that P -> Q should follow from the mere falsity of P, and proposed the strict conditional as an alternative, it is possible to form similar paradoxes of the strict conditional.

Lukasiewicz took a different approach. He defined the conditional with a truth table:

P -> Q | Q = T | Q = U | Q = F
P = T  |   T   |   U   |   F
P = U  |   T   |   T   |   U
P = F  |   T   |   T   |   T
This is the same as the table for the material conditional, except that the central entry, corresponding to U -> U, is T rather than U. There is no obvious reason why this should be so, except that it seems to work. With this conditional it is possible to formulate basic laws such as P -> P as tautologies. The problem is that it fails as an implication. One of the basic rules of logical inference, modus ponens, which can be expressed as (P & (P -> Q)) -> Q, is almost a tautology, but fails in one case: it has the value U when P has the value U and Q the value F, although it is T in every other case. When Lewis and Langford discussed Lukasiewicz’s three-valued logic, they observed other failures of this type and speculated on laws that would work, but ultimately dismissed the logic as practically unworkable. They missed the truth by a whisker. Modus ponens fails because it should fail, and it should fail because Lukasiewicz allows doubtful conditionals. If P is doubtful, and P -> Q is also doubtful, unrestricted modus ponens could lead to false conclusions from a combination of doubtful premises and doubtful conditionals. This is not, and should not be, valid logic.
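Both points, that P -> P becomes a tautology and that modus ponens fails at exactly one entry, can be checked by enumeration. A sketch under the same numeric encoding (F = 0, U = 1/2, T = 1), using the standard Lukasiewicz formula min(1, 1 - P + Q) for the conditional:

```python
F, U, T = 0.0, 0.5, 1.0
VALUES = (F, U, T)

def conj(p, q): return min(p, q)           # P & Q
def luk(p, q): return min(1.0, 1 - p + q)  # Lukasiewicz P -> Q

# P -> P is now a tautology.
assert all(luk(p, p) == T for p in VALUES)

# Modus ponens, (P & (P -> Q)) -> Q, has value U at P = U, Q = F
# and value T everywhere else.
def mp(p, q): return luk(conj(p, luk(p, q)), q)
assert mp(U, F) == U
assert all(mp(p, q) == T for p in VALUES for q in VALUES if (p, q) != (U, F))
```

The one failing entry is exactly the case of a doubtful premise combined with a doubtful conditional and a false conclusion.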

What went unobserved is that this deficiency can be repaired. What is truly remarkable is that the repair has gone unnoticed, unpublished, and unpublicized for nearly a century. It is simply to apply the “definite” modal operator to his conditional and define a strict Lukasiewicz conditional as [](P -> Q). This is where, to borrow and adapt Mark Twain’s phrase, “the difference between the right conditional and the almost right conditional is like the difference between lightning and a lightning bug”. Shazam!
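The repair can likewise be checked by enumeration. A sketch in the same numeric encoding, reading the “definite” operator [] as T only when its argument is fully T (an assumption on my part, but the usual reading):

```python
F, U, T = 0.0, 0.5, 1.0
VALUES = (F, U, T)

def conj(p, q): return min(p, q)           # P & Q
def luk(p, q): return min(1.0, 1 - p + q)  # Lukasiewicz P -> Q
def box(p): return T if p == T else F      # []P: definitely P

# Strict Lukasiewicz conditional: [](P -> Q).
def strict(p, q): return box(luk(p, q))

# Modus ponens with the strict conditional, (P & [](P -> Q)) -> Q,
# holds in all nine cases, including P = U, Q = F.
assert all(luk(conj(p, strict(p, q)), q) == T for p in VALUES for q in VALUES)
```

A doubtful conditional now has [](P -> Q) = F, so the premise of modus ponens cannot be assembled from doubtful parts, and the rule becomes a tautology again.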
