I have two questions about logic that have vexed me for a long time.

Smith has written two great books of philosophy. Now he has come out with a third book. Therefore, that book will probably be good too. Smith has flipped a coin twice, and both times it has come up tails. Now Smith will flip the coin a third time. Therefore, that flip will probably end up 'tails' too. The logical form of inductive arguments seems to contribute nothing; the premises seem to do no logical work supporting the conclusion - is that right?

Smith has written two great books of philosophy. Now he has written a third. Any author who has written two great books of philosophy, and then writes a third, has probably written a third great book. Therefore, Smith has probably written a third great book. That seems a deductive argument, because the general premise was added. And if true, the premises do seem to support the conclusion with necessity, even though the conclusion is probable; it is the knowledge of the world and not...

I think both arguments can be analyzed as inductive arguments and still distinguished in terms of their quality. The book argument is a stronger inductive argument than the coin-toss argument for a simple reason: the probability that Smith's book C is great isn't independent of whether Smith's books A and B are great.

That is, Smith's having written great books A and B makes the probability that Smith's book C is great higher than it would be had Smith not already written two great books. Important: higher than it would be otherwise, which needn't mean higher than one-half. Even though Smith's track-record raises the probability that book C is great, the track-record needn't make it more probable than not that book C is great.

By contrast, the probability of tails on any given toss of a fair coin is independent of whether the coin came up tails twice already: that history of tosses neither increases nor decreases the probability of tails on a third toss.
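The independence point can be illustrated with a quick simulation (a minimal sketch in Python; the seed and trial count are arbitrary choices of mine): among simulated runs of a fair coin that begin with two tails, the third toss still comes up tails about half the time.

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility

# Among three-toss runs of a fair coin that begin with two tails,
# estimate the probability that the third toss is also tails.
trials = 100_000
two_tails_runs = 0
third_tails = 0
for _ in range(trials):
    tosses = [random.random() < 0.5 for _ in range(3)]  # True = tails
    if tosses[0] and tosses[1]:
        two_tails_runs += 1
        if tosses[2]:
            third_tails += 1

estimate = third_tails / two_tails_runs
print(round(estimate, 2))  # close to 0.5: the history carries no information
```

Nothing in the simulation "remembers" the first two tosses, which is exactly what independence amounts to; by contrast, if greatness of book C were simulated as depending on the track record, the conditional frequency would shift.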

Are the laws of logic invented or are they independent of human reason? If they are independent, how can they exist immaterially? What does it mean for such laws to exist in a nonphysical way?

Good question, and as fundamental a question as anyone could ask. I think that the laws of logic must be not only independent of human minds but independent of any minds, including God's mind if such exists. At any rate, I don't think anyone can see how it could be otherwise.

To say that the laws of logic depend on human or divine minds is to imply that the following conditional statement is nontrivially true:

If (1) human or divine minds had been different enough, then (2) all of the laws of logic would be different from what they are.

(By "nontrivially true," I mean that the statement is true not merely on the ground that (1), its antecedent, is logically impossible. If (1) is logically impossible, then the conditional statement is trivially true, even if (2), its consequent, is also logically impossible.)

We can't make sense of this conditional statement without presupposing that (2) is false. If the conditional means anything, then it doesn't mean this: If (1) human or divine minds had been different enough, then (~ 2) not all of the laws of logic would be different from what they are. But, of course, my assertion just now about the conditional's meaning itself depends on holding fixed at least some of the laws of logic, i.e., it depends on presupposing (~ 2) even on the assumption that (1) is true. Therefore, we understand the conditional statement only if we presuppose that it can't be nontrivially true.

As for the nonphysical existence of the laws of logic, you might look at what I wrote in reply to Question 24874.

Is it possible to translate a syllogism into propositional logic? This is the example: All doctors went to medical school. Hanna is a doctor. Hanna went to medical school. Thanks a lot, Sebastiano

For any syllogism containing quantifiers such as "all," "some," and "no"/"none," you'll need predicate logic for the translation. Propositional logic alone won't suffice. But you could use propositional logic to translate a non-quantified argument that's at least similar to the syllogism: "If Hanna is a doctor, then she went to medical school. Hanna is a doctor. Therefore, Hanna went to medical school."
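The propositional translation can be checked mechanically by truth table. Here is a minimal sketch in Python; the sentence letters D and M are my own labels for "Hanna is a doctor" and "Hanna went to medical school".

```python
from itertools import product

# Premises: D -> M, D.  Conclusion: M (modus ponens).
# The argument is valid iff no row makes both premises true
# and the conclusion false.
def implies(p, q):
    return (not p) or q

counterexamples = [
    (d, m)
    for d, m in product([True, False], repeat=2)
    if implies(d, m) and d and not m  # premises true, conclusion false
]
print(counterexamples)  # []: no counterexample row, so the argument is valid
```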

P1. If today is February 29th, then it is a leap year.
P2. Today is not February 29th.
C. It is not a leap year.

Is this argument sound or unsound? From what I can tell it is invalid, because it is possible for it to be a leap year and today not to be February 29th. If it’s invalid then it should be unsound. However, neither of the premises is false, so it can’t be unsound? Even if it were sound, wouldn’t it technically become unsound if it happened to be February 29th in real life?

The argument is unsound because, as you say, it's invalid. It commits the well-known fallacy of denying the antecedent.

Validity is necessary (but not sufficient) for soundness. So the argument is unsound regardless of the truth or falsity of its premises.
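The invalidity can be made concrete with a truth-table search, which turns up exactly the counterexample described in the question (a sketch in Python; F and L are my labels for "today is February 29th" and "it is a leap year"):

```python
from itertools import product

# Premises: F -> L, not F.  Conclusion: not L (denying the antecedent).
def implies(p, q):
    return (not p) or q

counterexamples = [
    (f, l)
    for f, l in product([True, False], repeat=2)
    if implies(f, l) and not f and l  # premises true, conclusion "not L" false
]
print(counterexamples)  # [(False, True)]: a leap-year day that isn't Feb 29
```

The single counterexample row is the situation the questioner imagines: it is a leap year, but today is some other date.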

Is there any single genuinely correct logic or so-called all-purpose logic? If not, why should we try to find one?

I presume that you would dismiss out of hand the following answer to your first question: "Yes, there is a single genuinely correct, all-purpose logic, and there is no such logic, and there is more than one such logic." So I take it that your question presupposes that no correct logic could allow that answer to be true.

If you're asking whether there's any good reason to abandon the standard, two-valued, "classical" logic routinely taught to university students in favor of some non-classical logic, then I'd answer no. Some philosophers say that we ought to adopt a non-classical logic in response to such things as the Liar paradox or the Sorites paradox, but their arguments for that conclusion have never struck me as persuasive. I think that the Liar and the Sorites can be solved using only classical logic (and bivalent semantics), or at least it's too early to conclude that they can't be.

For a much more detailed answer, you might consult Susan Haack's book Deviant Logic, Fuzzy Logic: Beyond the Formalism (University of Chicago Press, 1996).

Although I am aware of the distinction between deduction and induction in logic, which relies on the strength of the link between premises and conclusion, with deduction a matter of necessity and induction a matter of probability, I find the distinction problematic. For instance, the argument "All men are mortal. Socrates is a man. So, Socrates is mortal" is a classic example of a deductive argument. But the first premise is based on particular cases, so it cannot be universally guaranteed that it will always be true. But the fact that it may not always be true makes it a matter of probability and not necessity. Would this consideration make a difference as to whether the argument is deductive or inductive?

Whether an argument is deductive or inductive depends on the nature of the link between its premises and its conclusion. As you say, a deductive argument is one in which the premises entail the conclusion as a matter of necessity, i.e., its conclusion must be true if its premises are. In contrast, an inductive argument is one whose premises putatively support, but do not entail, its conclusion. As you say, the premises are supposed to make the conclusion more probable, but the conclusion could still be false despite the premises being true.

Deductive and inductive are therefore properties of arguments, not properties of their premises. What your example, the classic Socrates syllogism, highlights is that the premises of an argument can be justified in different ways. Certainly

All men are mortal

is a premise we would justify inductively: We observe that every man [sic] who's ever lived dies eventually, and so on the basis of inductive reasoning (person 1 died, person 2 died, ... person N died) conclude that 'All men are mortal.' Such a conclusion is, in light of our evidence, highly probable. The other premise

Socrates is a man

likely is not justified by induction but rather by straightforward perceptual observation. It is also possible for a premise to be justified deductively. We could do so with 'Socrates is a man': Socrates is a featherless biped. All featherless bipeds are men [sic]. Therefore, Socrates is a man.

The important point is that deduction and induction categorize arguments in terms of their modes of reasoning. The classic Socrates syllogism you cite is therefore deductive, regardless of how its premises might be justified (inductively, deductively, or otherwise).

I am reading "The Philosopher's Toolkit" by Baggini and Fosl, and in section 1.12 is the following: "As it turns out, all valid arguments can be restated as tautologies - that is, hypothetical statements in which the antecedent is the conjunction of the premises and the conclusion." My understanding is the truth table for a tautology must yield a value of true for ALL combinations of true and false of its variables. I don't understand how all valid arguments can be stated as a tautology. The requirement for validity is the conclusion MUST be true when all the premises are true. I must be missing something. Thanx - Charlie

I don't have Baggini and Fosl's book handy, but if your quote is accurate, there's clearly a mistake—almost certainly a typo or proof-reading error. The tautology that goes with a valid argument is the hypothetical whose antecedent is the conjunction of the premises and whose consequent is the conclusion. Thus, if

P, Q therefore R

is valid, then

(P & Q) → R

is a tautology, or better, a truth of logic. So if the text reads as you say, good catch! You found an error.

However, your question suggests that you're puzzled about how a valid argument could be stated as a tautology at all. So think about our example. Since we've assumed that the argument is valid, we've assumed that there's no row where the premises 'P' and 'Q' are true and the conclusion 'R' false. That means: in every row, either 'P & Q' is false or 'R' is true. (We've ruled out rows where 'P & Q' is true and 'R' is false.) So the conditional '(P & Q) → R' is true in every row, and hence is a truth of logic.
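Both directions of this correspondence can be checked by brute-force truth tables. A minimal sketch in Python, using one valid and one invalid argument over sentence letters P and Q (my own toy examples, not from the book):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

rows = list(product([True, False], repeat=2))  # truth values for P, Q

# A valid argument: P, Q therefore P.
valid = all(not (p and q and not p) for p, q in rows)
tautology = all(implies(p and q, p) for p, q in rows)
print(valid, tautology)  # True True: validity and tautology go together

# An invalid argument: P therefore Q.
valid2 = all(not (p and not q) for p, q in rows)
tautology2 = all(implies(p, q) for p, q in rows)
print(valid2, tautology2)  # False False: neither valid nor a tautology
```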

Recently I asked a question about logic, and the answer directed me to an SEP entry, which then took me to two other SEP entries, on Russell's paradox and on the Liar's paradox. Frankly, after having read through those explanations, there was a glaring omission from every cited philosopher, and I wondered if everyone was overcomplicating things: I don't see how there is any "paradox" at all. Consider the concept of a "round square" or a "six-sided pentagon." Those are nonsensical terms, because of the structural nature of the underlying grammar. They are neither logical nor illogical, they are merely grammatically inconsistent at the fundamental level of linguistic definition. The so-called "paradox" of Russell and the Liar seem to me to be exactly the same kind of nonsensical formulations: the so-called "paradox" is merely a feature of the language, these concepts also are grammatically inconsistent at the fundamental level of linguistic definition. Russell's "paradox" is just as "paradoxical" as...

If I may, I think you're being a bit too dismissive of Russell's paradox.

We start with the observation that some sets aren't members of themselves: the set of stars in the Milky Way galaxy isn't itself a star in the Milky Way galaxy; the set of regular polyhedra isn't itself a regular polyhedron; and so on. It seems that we've easily found two items that answer to the well-defined predicate

S: is a set that isn't a member of itself.

Naively, we might assume that a set exists for every well-defined predicate. (For some of those predicates, it will be the empty set.) But what about the set corresponding to the predicate S? This question doesn't seem, on the face of things, to be nonsensical or ungrammatical. But the question shows that our naive assumption implies a contradiction, and therefore our naive assumption can't possibly be true.
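To see the contradiction mechanically: the set corresponding to S would have to satisfy "R is a member of R if and only if R is not a member of R," and no assignment of truth values satisfies that biconditional. A tiny Python check:

```python
# Russell's set R would have to make "R ∈ R" equivalent to "R ∉ R".
# Try both possible truth values for "R ∈ R":
satisfiable = any(r_in_r == (not r_in_r) for r_in_r in (True, False))
print(satisfiable)  # False: no assignment works, so no such set can exist
```

This is why the paradox isn't mere grammatical nonsense: the question "is R a member of itself?" is perfectly well formed, and it is the naive set-existence assumption, not the grammar, that has to give way.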

What is the difference between "either A is true or A is false" and "either A is true or ~A is true?" I have an intuitive sense that they are two very different statements but I am having a hard time putting why they are different into words. Thank you.

Perhaps I could add something here too—and perhaps it will be useful: You are right that there is a difference between the two statements that you offer, and the difference has become more significant with the rise of many-valued logics in the 20th and 21st centuries.

If one says, “A is either true or false,” then there are only two possible values that A can have—true or false. But if one says, “either A or not-A is true,” then there might be all sorts of values that A could have: true, false, indeterminate, probably true, slightly true, kind of true, true in Euclidean but not Riemannian geometry, and so on. The first formulation allows only one alternative to “true” (namely, “false”), but the second formulation allows many alternatives. The second formulation does indeed require that at least A or not-A be true, but it puts no further restrictions on what other values might substitute for “true.” (For example, perhaps A is true, and yet not-A is merely indeterminate.)
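For illustration, here is a sketch of the "strong Kleene" connectives, one standard three-valued system, encoding true as 1, false as 0, and indeterminate as 0.5 (the encoding is my own choice for the sketch):

```python
# Strong Kleene three-valued logic: negation flips the value,
# disjunction takes the maximum.
def k_not(a):
    return 1 - a

def k_or(a, b):
    return max(a, b)

for a in (1, 0.5, 0):
    print(a, k_or(a, k_not(a)))
# 'A or not-A' comes out 1 (true) when A is true or false,
# but only 0.5 (indeterminate) when A is indeterminate:
# excluded middle is no longer guaranteed once bivalence is dropped.
```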

The advantage of sticking to the first formulation (often called the principle of bivalence) is that it forces us to reason from propositions that describe what is definitely so or not so, and as a result, we can actually prove things. (After all, if we were to give reasons that were neither true nor false, then our reasons would seem to end up proving nothing. Imagine, for example, someone saying, “I believe this conclusion for a good reason, but my reason is neither true nor false.” Moreover, if the conclusions we wanted to prove were also to turn out to be neither true nor false, then they would remain unprovable; what would it mean, one might ask, to “prove” the untrue?) Considerations of this sort led Aristotle to believe that scientific knowledge always depended crucially on propositions of argument that had to be true or false.

On the other hand, there are many situations in life where our ideas are so vague and indefinite that the best we can say is that a particular proposition seems somewhat true, or true to a certain degree, or true for the most part. (For example, Aristotle held that propositions of ethics were sometimes only “true for the most part.” In the Middle Ages, a number of logicians wanted to use “indeterminate” as a truth value, in addition to true and false, and in the 20th and 21st centuries, logicians have experimented increasingly with the idea that there could be many truth values, in addition to true and false. As a result, there are now various systems of many-valued logic, including so-called fuzzy logic, which assigns numerical degrees of truth to different propositions.)
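As a small illustration of degrees of truth, here is a sketch of the standard Zadeh-style fuzzy connectives (negation as one minus the degree, conjunction as the minimum); the degree 0.7 is just an arbitrary example value:

```python
# Zadeh-style fuzzy logic: truth degrees anywhere in [0, 1].
def f_not(a):
    return 1.0 - a

def f_and(a, b):
    return min(a, b)

# "The proposition is somewhat true," rendered as degree 0.7:
a = 0.7
print(round(f_and(a, f_not(a)), 2))  # 0.3: 'A and not-A' isn't fully false
```

Note that the system itself still speaks definitely: it is definitely true, not merely somewhat true, that the degree of 'A and not-A' here is 0.3, which is the "meta-level" point made below.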

All the same, the principle of bivalence still plays a fundamental role even in systems of many-valued logic, albeit at a higher level. (The second formulation that you have cited is now termed the law of excluded middle, though before the development of many-valued logics, the two formulations would have amounted to the same thing.)

Specifically, many-valued logics assign different values to various propositions and then draw conclusions from the assignments. (For example, if A is “somewhat true,” then one can conclude that A is not “entirely false.”) Nevertheless, such systems always rely on at least two crucial assumptions: (1) the propositions in question, such as A, do indeed have the assigned values or they do not, and (2) these propositions cannot both have the assigned values and not have them. The first assumption is the principle of bivalence all over again, though at the “meta” level (meaning that it applies, not to A, but to statements about A, that is, to the statements of A’s truth value). And the second assumption is the traditional law of contradiction. (For more on the law of contradiction, you might see Questions 5536 and 5777.)

In other words, the propositions treated by a system of many-valued logic are typically imprecise and indefinite, and what a many-valued logic then does is allow us to talk in a precise and definite way about the imprecise and indefinite. To achieve this result, however, the system’s own statements must be definite, and to achieve coherence, the system’s own statements must also be noncontradictory. By contrast, if one were to relax these restrictions on the system, then all one would get would be an indefinite discussion of the indefinite, or an incoherent discussion. And if this last result were all that one hoped to achieve, then there would be no need to build the system in the first place. Instead, just leap from bed in the morning, and without drinking any tea or coffee, start talking. If you are like me, you will then arrive almost instantly at the appropriate level of grogginess.

For the philosophically unsophisticated, why is it significant that logic cannot be reduced to mathematics? What difference would it have made if that project had succeeded, and what is the import of its failure?

Your ability to balance your checkbook, or to draw logical inferences in everyday life, won’t be affected in the least by difficulties in figuring out just how logic and higher mathematics are connected. Nevertheless, the relationship between logic and mathematics has been an intriguing conundrum for the better part of two centuries.

There have been many attempts to understand various aspects of logic mathematically, and perhaps the most famous is George Boole’s Mathematical Analysis of Logic (1847), which laid the foundation for Boolean algebra. Far from being a failure, Boole’s effort seems to have been a smashing success, especially when we consider the extent to which Boolean algebra underlies modern digital computing.

Nevertheless, the relationship between logic and mathematics can go in two directions, not just one, and so, just as one might try to understand various parts of logic mathematically, one can also try to understand various parts of mathematics logically. It is this further possibility, I suspect, that has prompted your question about a “failure.”

Late in the nineteenth century, the German logician Gottlob Frege sought to understand part of mathematics in terms of logic. Frege wanted to reduce arithmetic to logic, and later writers tried to reduce other parts of mathematics to logic too. Today, this approach is usually called “logicism,” and the primary motivation behind it is to discover exactly what kinds of entities mathematical objects are.

When we do arithmetic, for example, we add numbers, but what exactly is a number? Is it a physical object? Is it just an idea in our heads? Is it a mere symbol? Is it a timeless, placeless eternal entity that exists even if no one thinks about it? These questions about numbers are as old as Plato (maybe older), and they generally fall under the heading of ontology—which asks what kinds of objects exist. Logicism is an attempt to answer these ontological questions, and this is why it seeks to “reduce” mathematics to logic.

In 1931, Kurt Gödel demonstrated that no logical system rich enough to include arithmetic as a consequence could be shown within that system to be both consistent and complete. Either some statements of the system would have to remain unprovable, or if provable, the system would be inconsistent. Many have argued that Gödel’s result showed that logicism must fail, or at least that some versions of it must fail, but it is important to add that the exact impact of his result on logicism is a complicated question, and subject to different interpretations. However this may be, all these discussions concern logic and mathematics as expressed through formalized symbolic systems, which were developed in the late nineteenth century, and in the twentieth century, and these discussions have had, in fact, no real effect on our ordinary reasoning in daily life, or on our everyday ability to add and subtract correctly.

Lest these last remarks seem philosophically controversial, let me say a bit more to explain them.

In Isaac Newton’s day, none of these formalized systems—systems of mathematical logic or of fully symbolic logic—existed, and yet hardly anyone would say, I think, that without these systems Newton was unable to add simple sums correctly or to draw logical inferences correctly. It follows that his ability to do these things was quite independent of such systems. More broadly, formal systems of logic and mathematics can certainly improve and refine our logical and mathematical abilities, but the abilities are already partially present in us without the systems, and it is precisely because we have some of these abilities antecedently that the formal systems can be constructed at all.

These same logical and mathematical abilities are also antecedent to our musings about ontology, or to our disagreements about ontology. Historically, there have been many different theories of what numbers are, just as there have been many different theories of what kinds of entities the propositions of logical argumentation are. Nevertheless, two and three have always made five, and the Barbara syllogism, at least in ordinary cases, has always been valid. Ontology is still a fascinating subject, to be sure, but its practical effects are often quite limited. Ontology certainly does affect higher mathematics, but there are large stretches of ordinary reasoning and arithmetical reckoning that are essentially immune to it.

So whatever results might be derived from various oddities in the further reaches of logic and mathematics, you can still be tolerably sure that, if all cats are cool, and Felix is a cat, then Felix is cool. And you can be equally sure that, if you have five rabbits, and you then add seven rabbits, you will have twelve rabbits, at least for a while.