I am reading "The Philosopher's Toolkit" by Baggini and Fosl, and in section 1.12 is the following: "As it turns out, all valid arguments can be restated as tautologies - that is, hypothetical statements in which the antecedent is the conjunction of the premises and the conclusion." My understanding is the truth table for a tautology must yield a value of true for ALL combinations of true and false of its variables. I don't understand how all valid arguments can be stated as a tautology. The requirement for validity is the conclusion MUST be true when all the premises are true. I must be missing something. Thanx - Charlie

I don't have Baggini and Fosl's book handy but if your quote is accurate, there's clearly a mistake—almost certainly a typo or proof-reading error. The tautology that goes with a valid argument is the hypothetical whose antecedent is the conjunction of the premises and whose consequent is the conclusion. Thus, if

P, Q therefore R

is valid, then

(P & Q) → R

is a tautology, or better, a truth of logic. So if the text reads as you say, good catch! You found an error.

However, your question suggests that you're puzzled about how a valid argument could be stated as a tautology at all. So think about our example. Since we've assumed that the argument is valid, we've assumed that there's no row where the premises 'P' and 'Q' are true and the conclusion 'R' false. That means: in every row, either 'P & Q' is false or 'R' is true. (We've ruled out rows where 'P & Q' is true and 'R' is false.) So the conditional '(P & Q) → R' is true in every row, and hence is a truth of logic.
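The truth-table reasoning above can be sketched in code. Here is a minimal Python check, with function names and example formulas of my own devising, that an argument is valid exactly when the corresponding conditional is a tautology:

```python
# Minimal sketch (names and examples are illustrative, not from the book):
# an argument is valid iff (conjunction of premises) -> conclusion is a
# tautology, i.e., true in every row of the truth table.

from itertools import product

def valid(premises, conclusion, n_atoms):
    """No row makes every premise true and the conclusion false."""
    return all(
        not (all(p(*row) for p in premises) and not conclusion(*row))
        for row in product([True, False], repeat=n_atoms)
    )

def tautology(formula, n_atoms):
    """True in every row of the truth table."""
    return all(formula(*row) for row in product([True, False], repeat=n_atoms))

# "P, Q therefore R" is invalid (take P = Q = True, R = False), and
# correspondingly (P & Q) -> R is not a tautology:
assert not valid([lambda p, q, r: p, lambda p, q, r: q], lambda p, q, r: r, 3)
assert not tautology(lambda p, q, r: (not (p and q)) or r, 3)

# Modus ponens "P, P -> R therefore R" is valid, and the matching
# conditional (P & (P -> R)) -> R is a tautology:
assert valid([lambda p, r: p, lambda p, r: (not p) or r], lambda p, r: r, 2)
assert tautology(lambda p, r: (not (p and ((not p) or r))) or r, 2)
```

Enumerating rows with `itertools.product` is exactly the truth-table method, so the two checks agree by construction.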

What is the difference between "either A is true or A is false" and "either A is true or ~A is true?" I have an intuitive sense that they are two very different statements but I am having a hard time putting why they are different into words. Thank you.

Perhaps I could add something here too—and perhaps it will be useful: You are right that there is a difference between the two statements that you offer, and the difference has become more significant with the rise of many-valued logics in the 20th and 21st centuries.

If one says, “A is either true or false,” then there are only two possible values that A can have—true or false. But if one says, “either A or not-A is true,” then there might be all sorts of values that A could have: true, false, indeterminate, probably true, slightly true, kind of true, true in Euclidean but not Riemannian geometry, and so on. The first formulation allows only one alternative to “true” (namely, “false”), but the second formulation allows many alternatives. The second formulation does indeed require that at least A or not-A be true, but it puts no further restrictions on what other values might substitute for “true.” (For example, perhaps A is true, and yet not-A is merely indeterminate.)

The advantage of sticking to the first formulation (often called the principle of bivalence) is that it forces us to reason from propositions that describe what is definitely so or not so, and as a result, we can actually prove things. (After all, if we were to give reasons that were neither true nor false, then our reasons would seem to end up proving nothing. Imagine, for example, someone saying, “I believe this conclusion for a good reason, but my reason is neither true nor false.” Moreover, if the conclusions we wanted to prove were also to turn out to be neither true nor false, then they would remain unprovable; what would it mean, one might ask, to “prove” the untrue?) Considerations of this sort led Aristotle to believe that scientific knowledge always depended crucially on propositions of argument that had to be true or false.

On the other hand, there are many situations in life where our ideas are so vague and indefinite that the best we can say is that a particular proposition seems somewhat true, or true to a certain degree, or true for the most part. (For example, Aristotle held that propositions of ethics were sometimes only “true for the most part.” In the Middle Ages, a number of logicians wanted to use “indeterminate” as a truth value, in addition to true and false, and in the 20th and 21st centuries, logicians have experimented increasingly with the idea that there could be many truth values, in addition to true and false. As a result, there are now various systems of many-valued logic, including so-called fuzzy logic, which assigns numerical degrees of truth to different propositions.)

All the same, the principle of bivalence still plays a fundamental role even in systems of many-valued logic, albeit at a higher level. (The second formulation that you have cited is now termed the law of excluded middle, though before the development of many-valued logics, the two formulations would have amounted to the same thing.)

Specifically, many-valued logics assign different values to various propositions and then draw conclusions from the assignments. (For example, if A is “somewhat true,” then one can conclude that A is not “entirely false.”) Nevertheless, such systems always rely on at least two crucial assumptions: (1) the propositions in question, such as A, do indeed have the assigned values or they do not, and (2) these propositions cannot both have the assigned values and not have them. The first assumption is the principle of bivalence all over again, though at the “meta” level (meaning that it applies, not to A, but to statements about A, that is, to the statements of A’s truth value). And the second assumption is the traditional law of contradiction. (For more on the law of contradiction, you might see Questions 5536 and 5777.)
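The meta-level point can be made concrete with a toy many-valued system. The following Python sketch uses the common min/max connectives of fuzzy logic; the proposition and its degree of truth are invented for illustration:

```python
# Toy fuzzy-logic connectives (a common convention: degrees in [0, 1],
# negation as 1 - a, conjunction as min, disjunction as max).

def f_not(a):
    return 1.0 - a

def f_and(a, b):
    return min(a, b)

def f_or(a, b):
    return max(a, b)

# Suppose "Smith is bald" is only somewhat true (an invented degree):
smith_bald = 0.4

# Inside the system, bivalence fails: the value is neither 0 nor 1.
assert smith_bald not in (0.0, 1.0)

# Excluded middle, read inside the system, is itself only 0.6 true:
assert abs(f_or(smith_bald, f_not(smith_bald)) - 0.6) < 1e-9

# But at the "meta" level we reason bivalently: the claim that smith_bald
# has the value 0.4 is itself simply true or false.
assert (smith_bald == 0.4) is True
```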

In other words, the propositions treated by a system of many-valued logic are typically imprecise and indefinite, and what a many-valued logic then does is allow us to talk in a precise and definite way about the imprecise and indefinite. To achieve this result, however, the system’s own statements must be definite, and to achieve coherence, the system’s own statements must also be noncontradictory. By contrast, if one were to relax these restrictions on the system, then all one would get would be an indefinite discussion of the indefinite, or an incoherent discussion. And if this last result were all that one hoped to achieve, then there would be no need to build the system in the first place. Instead, just leap from bed in the morning, and without drinking any tea or coffee, start talking. If you are like me, you will then arrive almost instantly at the appropriate level of grogginess.

I think you're getting at the difference between the principle of bivalence (there are only two truth values, true and false) and the law of excluded middle: 'P or not-P' is always true. Suppose there are some sentences that are neither true nor false. That might be because they are vague, for example. It might not be true to say that Smith is bald, but it might not be false either; it might be indeterminate. So if S stands for "Smith is bald," then "Either S is true or S is false" would not be correct. Our assumption is that S isn't true, but also isn't false. However, if by "not-S" we mean "S isn't true," then "S or not-S" is true. That is, bivalence would fail, but excluded middle wouldn't. But as you might imagine, there's a good deal of argument about the right thing to say here.
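One way to see the contrast is a small three-valued sketch in Python, using None for "indeterminate" (my own convention, chosen for illustration):

```python
# S stands for "Smith is bald"; None marks the indeterminate value.
S = None

# Bivalence ("S is true or S is false") fails for the indeterminate case:
bivalence = (S is True) or (S is False)
assert bivalence is False

# But if "not-S" is read as "S isn't true," excluded middle survives:
excluded_middle = (S is True) or (S is not True)
assert excluded_middle is True
```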

An elementary precept of logic says that where there are two propositions, P and Q, there are four possible "truth values," P~Q, Q~P, P&Q, ~P~Q, where ~ means "not." Do people ever apply this to pairs of philosophy propositions? For example, has anyone applied it to positive and negative liberty, or to equality of opportunity and equality of condition, or to just process and just outcome? On these topics I can find treatments of the first two truth values but none of the second two. If this precept of logic is not applied, has anyone set out the reasons?

I'm not entirely sure I follow, but perhaps this will be of some use.

Whether two propositions really have four possible combinations of truth values depends on the propositions. Non-philosophical examples make the point easier to follow.

Suppose P is "Paula is Canadian" and Q is "Quincy is Australian." In this case, the two propositions are logically independent, and all four combinations P&Q, P&~Q, ~P&Q and ~P&~Q represent genuine possibilities. But not all propositions are independent in this way; it depends on their content.

P and Q might be contradictories, that is, one might be the denial of the other. (If P means that Paula is Canadian and Q means that she is not Canadian, then we have this situation.) In that case, the only two possibilities are P&~Q and ~P&Q.

Or P and Q might be contraries, meaning that they can't both be true though they could both be false. For example: if P is "Paula is over 6 feet tall" and Q is "Paula is under 5 feet tall," then we only have three possibilities: P&~Q, ~P&Q, and ~P&~Q. The fourth case, P&Q, isn't possible.

Or P and Q might be subcontraries, meaning that they can both be true, but can't both be false. For example: if P is "Paula is under 6 feet tall" and Q is "Paula is over 5 feet tall," then the only possibilities are P&Q, P&~Q and ~P&Q. ~P&~Q isn't possible.

Or P might imply Q. If P is "Paula is over 6 feet tall" and Q is "Paula is over 5 feet tall," then the possibilities are P&Q, ~P&Q, and ~P&~Q. Here, P&~Q isn't possible.

Finally, P and Q might be equivalent. Suppose P is "The temperature is 32 degrees Fahrenheit" and Q is "The temperature is 0 degrees Celsius." In that case, P and Q are in effect the same proposition, expressed by different sentences. They are either both true or both false, leaving P&Q and ~P&~Q as the only possibilities.

All of this applies across the board, and in particular it applies in philosophy. Not all philosophical claims are independent, and so for some philosophical propositions, one or more of the four combinations won't represent possibilities. But at least some philosophical disputes are over the very question of what the logical relationship between two claims actually is. For example: consider "Paula's behavior is determined" and "Paula is responsible for her behavior." One important view is that these are contraries; they can't both be true. Other philosophers deny this, claiming, for example, that responsibility entails determinism, in which case "Paula is responsible, and her behavior is not determined" doesn't represent a genuine possibility. Other philosophers would claim that the two are independent, and so all four combinations represent genuine possibilities.

This kind of disagreement about the logical relations among philosophical claims is common in philosophy. But the larger point is that we can't simply assume in all cases that all four combinations represent genuine possibilities.
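The taxonomy above can be made mechanical. Here is a Python sketch that classifies the relation between two propositions by checking which of the four combinations actually occur across a set of cases; the "worlds" (Paula's possible heights) are invented for illustration:

```python
def relation(worlds, p, q):
    """Classify two propositions by which joint truth values can occur."""
    both    = any(p(w) and q(w) for w in worlds)
    p_not_q = any(p(w) and not q(w) for w in worlds)
    q_not_p = any(q(w) and not p(w) for w in worlds)
    neither = any(not p(w) and not q(w) for w in worlds)
    return {
        (True,  True,  True,  True ): "independent",
        (False, True,  True,  False): "contradictories",
        (False, True,  True,  True ): "contraries",
        (True,  True,  True,  False): "subcontraries",
        (True,  False, True,  True ): "P implies Q",
        (True,  True,  False, True ): "Q implies P",
        (True,  False, False, True ): "equivalent",
    }.get((both, p_not_q, q_not_p, neither), "other")

# Worlds: Paula's possible heights in inches (a toy domain).
heights = range(48, 85)
over6  = lambda h: h > 72   # "Paula is over 6 feet tall"
under5 = lambda h: h < 60   # "Paula is under 5 feet tall"
over5  = lambda h: h > 60   # "Paula is over 5 feet tall"

assert relation(heights, over6, under5) == "contraries"
assert relation(heights, over6, over5) == "P implies Q"
```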

I'm still puzzled by the answers to question 5792, on whether it is true that Mary won all the games of chess she played, when Mary never played any game of chess. Both respondents said that it is true. But is it meaningful to say "I won all the games I played, and I never played any game."? It seems to me that someone saying this would be contradicting himself.

I think you're right to at least this extent. If I say to someone "I won all the games of chess I played," the normal rules of conversation (in particular, the "pragmatics" of speech) make it reasonable for the other person to infer that I have actually played at least one game. Whether my statement literally implies this, however, is trickier.

Think about statements of the form "All P are Q." Although it may take a bit of reflection to see it, this seems to be equivalent to saying that nothing is simultaneously a P and a non-Q. We can labor the point a bit further by turning to something closer to the lingo of logic: there does not exist an x such that x is a P and also a non-Q. For example: all dogs are mammals. That is, there does not exist a dog that is a non-mammal.

Now go back to the games. If Mary says "All games I played are games I won," then by the little exercise we just went through, this becomes "There does not exist a game that I played and lost." But if Mary played no games at all, then that's true. No game is a game she played and lost because no game is a game she played.
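This is exactly how universally quantified claims behave in programming languages too. In Python, for instance, all() over an empty collection is true (the game records below are invented):

```python
games_played = []   # Mary never played a game

# "All games I played are games I won" -- vacuously true:
assert all(g["won"] for g in games_played) is True

# "There exists a game I played and lost" -- false, since there are no games:
assert any(not g["won"] for g in games_played) is False
```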

It turns out that avoiding this conclusion isn't as easy as it might seem. We usually agree that "No X are Y" and "No Y are X" amount to the same thing. We can also agree that no animals are unicorns, because there aren't any unicorns at all. But if no animals are unicorns, then the principle we just noted entails that no unicorns are animals, which is already starting to sound awkward. Worse, we also usually agree that "No X are Y" amounts to "All X are non-Y," and so we get "All unicorns are non-animals."

There are approaches to logic that find ways around this sort of thing. But the carpet will have to bulge somewhere. Either the rules of inference will be a bit more complicated or we'll have to give up principles that seem appealing or we'll end up with some cases of "correct" inferences that seem peculiar. Different people will see the costs and benefits differently. My own view, which would not win me friends in certain circles, is that there's nothing deeply deep here. But not everyone agrees.

Are there any books or videos or blogs or anything easily accessible that provide actual English translations of symbolic logic? If I could just read some straight-up translations it would be far easier for me to learn symbolic logic. I have some textbooks, but that's not what I'm looking for: I just want translations of sentences. (This was inspired by a reading of Alexander Pruss's "Incompatiblism Proved" of which I tried to paste an example sentence but was unable to do so).

More or less every textbook I can think of has many, many translations of symbolic sentences into English. Many, though by no means all, of the translations are in the exercises, and often you need to work from answer to question, but any good text will include lots and lots of examples.

What I mean by "work from answer to question", by the way, is this: the more common kind of symbolization problem goes from English into symbols. The question will give you the English sentence, and the answer—often at the end of the chapter—will give the symbolic version. But if you look at the answer and trace it back to the question, you have just what you want. The question might ask you to put "No man is his own brother" into symbols. The answer might look like this:

~∃x(Mx ∧ Bxx)

But if you are given the answer and you know what question it answered, then you have your translation. Bear in mind that for this to work, you have to know what the letters stand for; that's often given in the question. There are many English sentences that have the same logical form, and therefore look similar or the same when translated into symbols. Notice that our symbolic sentence above could equally well be a way to say "No moose is bigger than itself."
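If it helps to see the symbols in action, here is a sketch that evaluates ~∃x(Mx ∧ Bxx) over a tiny invented domain (the names and facts are made up for illustration):

```python
# Domain and interpretation: Mx = "x is a man", Bxy = "x is y's brother".
domain = ["alice", "bob", "carol"]
is_man = {"alice": False, "bob": True, "carol": False}
brothers = {("bob", "alice")}   # bob is alice's brother

def B(x, y):
    return (x, y) in brothers

# ~∃x(Mx ∧ Bxx): there is no man who is his own brother.
no_self_brother = not any(is_man[x] and B(x, x) for x in domain)
assert no_self_brother is True
```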

That said, two further comments. The first is that you will get much better at reading the symbols if you spend a lot of time working in the usual order: going from English into symbols. Second, philosophy went through a patch where it was way too quick to use symbols, often without actually making things any clearer.

Is there a name for a logical fallacy where person A criticizes X, and person B fallaciously assumes that because A criticizes X he must therefore subscribe to position Y, the presumed opposite of X, although A does not, in fact, take that position? For example, if A criticizes a Republican policy then B assumes that A must be a Democrat and staunch Obama-supporter, even though A is in fact a Republican himself, or else an Undeclared who regularly criticizes Obama as well.

It seems to be a special case of a fallacy with many names: 'false dichotomy,' 'false dilemma,' 'black-and-white thinking' and 'either/or fallacy' are among the more common. When someone commits the fallacy of the false dichotomy, they overlook alternatives. Schematically, they assume that either X or Y must be true, and therefore that if X is false, Y must be true. The fallacy is in failing to notice that X and Y aren't the only alternatives. Your example makes the point. You've imagined someone assuming that either I accept a particular Republican policy X or I am a Democrat, when -- as you point out -- there are other possibilities.

The situation you describe is a little more specific: the fallacious reasoner is making an inference from what someone is prepared to criticize. As far as I know, there's no special name for this special case, but the mistake is the same: overlooking relevant alternatives.

If the sentence "q because p" is true, must the sentence "If p then q" also be true? For example, "the streets are wet because it is raining," and the sentence "if it is raining, then the streets are wet." Are there any counter-examples where "q because p" could be true while "If p then q" could be false?

I agree with my co-panelist: "q because p" implies that "q" and "p" are both true. And on more than one reading of "if...then" sentences, it will follow that "if p then q" as well as "if q then p" are true. It may be worth noting, though: not everyone agrees that when "p" and "q" are both true, so are "if p then q" and "if q then p." There's a different sort of point that may be relevant to your worry. Suppose Peter's smoking caused his emphysema. We can't conclude that if Petra smokes, she'll develop emphysema. Causes needn't be fail-proof. A bit more formally:

Qa because Pa

(which says, more or less that a has property Q because a has property P) doesn't allow us to conclude

∀x(If Px then Qx)

(that is, for every thing x, if x has property P then x has property Q.) The truth of a "because" statement doesn't require the truth of a generalized "if...then" statement.
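The point can be checked over a toy domain. In this Python sketch the people and facts are invented; the singular facts hold while the generalization fails:

```python
# Invented facts: Peter's smoking caused his emphysema; Petra smokes
# but has no emphysema, so causes needn't be fail-proof.
smokes    = {"peter": True, "petra": True}
emphysema = {"peter": True, "petra": False}

# The singular facts about Peter (Pa and Qa) both hold:
assert smokes["peter"] and emphysema["peter"]

# But the generalization ∀x(Px -> Qx) is false: Petra is a counterexample.
generalization = all((not smokes[x]) or emphysema[x] for x in smokes)
assert generalization is False
```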

Is there a logical explanation for why one ought to be altruistic? Someone tried to logically prove to me why one ought to be altruistic. I found a list of logical fallacies here http://en.wikipedia.org/wiki/List_of_fallacies and I'd like to know which one's apply to what he wrote. This is what he wrote... "You should be altruistic because in the long run it will be beneficial not only to society, but also to yourself. Being altruistic fosters and encourages a society in which people help those in need of help, which ultimately means you will be helped when you need it. Conversely, altruism also encourages a society where negative acts against others are discouraged, meaning for yourself that you are less likely to be attacked, stolen from, killed, raped, etc. On the evolutionary level it means that a society that protects and helps each other, and does not ransack his fellow man whenever he deems it beneficial to himself in the short run, has a greater chance of survival, both for the group as a...

There are lots of questions we can ask about this argument, but I'd suggest that trying to shoehorn the issues into specific named fallacies isn't as helpful as just looking for places where the argument raises questions. (It's interesting that in my experience, at least, philosophers invoke the names of fallacies only slightly more often than the average educated person does.) That said, here are a couple of quick thoughts.

The first sentence offers two broad reasons for being altruistic: because in the long run it benefits both society and yourself. Take the first bit: if someone didn't already think they should be altruistic, how persuasive would they find being told "You should be altruistic because it benefits society"? If you want to turn to fallacy lists, is this a case of begging the question? (Don't be too quick just to answer yes. Think about the ways in which wanting to benefit society and acting altruistically might differ.) Turning to the next reason, is it incoherent to think someone might decide to be altruistic in order to benefit themselves? (Once again, don't answer too quickly. There are some subtleties here.) And finally, think about the last sentence. It tells us that an increased probability of reproduction is the "ultimate evolutionary goal of any individual being." Ask yourself: is there a clear or simple connection between evolutionary "goals," whatever exactly those may be, and an individual's own goals? Could a reasonable person have goals that differed from supposed evolutionary goals?

The paragraph you quoted sounds like the kind of thing a philosophy teacher might set for her students as an analytic exercise. For that reason, I've treated your question in the way I'd treat it if I had set the exercise and if you were my student: I haven't told you how to answer; I've suggested what you might find it useful to think about. Whether you're a student or not, I hope that actually is useful.

We use logic to structure the system of mathematics. Lord Russell was described as bewildered upon learning that original premises must be accepted on some human's "say so". Since human knowledge is so fragile (it cannot have all conclusions backed up by premises), is the final justification "It works, based on axioms accepted on faith"? In short, where do you recommend that "evidence for evidence" might be found, if such exists in the anterior phases of syllogistic construction. Somewhere I have read (if I can rely upon what little recall I still have) Lord Russell, even to the end, did not desire to rely on inductive reasoning to advance knowledge, preferring to rely on deductive reasoning. Thanks. Your individual and panel contributions make our world better.

I was intrigued that you take human knowledge to be very fragile. The reason you gave was that there's no way for all conclusions to be backed by premises, which I take to be a way of saying that not all of the things we take ourselves to know can be based on reasoning from other things we take ourselves to know, at least not if we rule out infinite regresses and circles. But why should that fact of logic (for that's what it seems to be) amount to a reason to think that knowledge is fragile?

Most of us - including most philosophers and even most epistemologists - take it for granted that we know a great deal. I know that I just ate lunch; you know that there are people who write answers to questions on askphilosophers.org. More or less all of us know that there are trees and rocks and that 1 + 1 = 2 and that cheap wine can give you a headache. Some of the things we know call for complicated justifications; others don't call for anything other than what we see when we open our eyes or (as in the case of things like 1 = 1) understanding what we've been told.

This sort of reply is likely to prompt someone to ask "But how do you know that you know all those things?" That question will make some people fret, but here's a perfectly good answer: I don't know how I know all those things. Coming up with a good theory of knowledge is hard work and tends to produce controversial answers. But knowing things doesn't call for a theory of how we know things. People knew things for centuries before anyone got around to asking what exactly knowledge is and how it works.

A few things do seem clear, however. One is that not everything we know comes from syllogistic or any other sort of reasoning. Another is that we can use parts of what we know to evaluate the usefulness of other possible ways of knowing things. For example: by careful investigation, we've learned a lot about the unreliability of eyewitness testimony and memory (though we haven't learned that they're never reliable).

But the most important thing is that there's no good reason to follow Descartes in thinking that knowledge must be based on foundations that are beyond all possible doubt. That's a premise eminently worthy of doubting, not least because it does such a lousy job of accounting for something that seems much less open to doubt: that we really do know a great deal about a great many things.

It seems that certain ethical theories are often criticized for contradicting ordinary ethical thinking, or common moral intuitions. Why should this matter, though? Is there a good reason to believe that ordinary common moral intuitions are infallible, and that more refined ethical systems ought not contradict such intuitions?

You're quite right: ordinary moral intuitions aren't infallible. However, the sort of criticism you have in mind doesn't really suppose that they are.

Start with an extreme case. Suppose someone came up with a moral theory with the consequence that most of our common moral beliefs were wrong. Now ask yourself: what sort of reason could we have to believe this moral theory? The point isn't that there's no possible way of making sense of this; perhaps there is. But if I'm told that my ordinary moral judgments are massively wrong, there would be a real problem about what sort of reason we could have to accept the very unintuitive theory from which that consequence flowed.

Or take a more concrete example. Suppose some moral theory had the consequence that wanton cruelty toward innocent people was a good thing. I don't know about you, but I find it hard to imagine what could possibly make this moral theory more plausible than my ordinary moral belief that wanton cruelty is very wrong indeed.

Here's another way of getting at the point: if I don't give any weight to my ordinary moral judgments, then it's not clear what basis I could have for giving weight to a theory whose output was supposed to replace those judgments. If I could be so massively wrong about ordinary moral matters, what hope would I have for picking the correct Big Picture of morality?

Admittedly, what's been offered so far is essentially a string of rhetorical questions. But the point of the questions is to make vivid that there is a close connection between our judgments about ordinary moral questions and larger theoretical questions about morality.

One way this is sometimes put is by saying that ordinary moral judgments have an evidential role to play in evaluating moral theories: the ability of a moral theory to make broad sense of our considered moral judgments is a point in its favor; the failure of a theory to do that job is a serious strike against it. This doesn't mean that ordinary judgments get the final say; sometimes we give up our intuitions in the face of compelling general arguments or principles. But an "ethical theory" that gave no weight to first-order moral judgments would have a hard time making the case that we should accept its deliverances.