

It has been said that if there is human freedom, then we are responsible for our actions. Given this, it seems natural to suppose that "if there is no human freedom (let's just suppose so for the sake of argument), then it would follow that we are not responsible for our actions." But this seems to be an instance of what is called the "fallacy of denying the antecedent." Is this really an instance of the fallacy, or is it an exception? Personally, I don't see any error in the form of the argument.

In the form you've presented the claims, there would be a fallacy of denying the antecedent. If free, then responsible. Not free. So, not responsible.
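If it helps to see the form checked mechanically, here is a minimal sketch in Python (the variable names and the little implies helper are mine, purely for illustration) that searches for a counterexample to the form "If free, then responsible; not free; so, not responsible":

```python
from itertools import product

# Denying the antecedent: "If F then R; not F; therefore not R."
# The form is invalid if some assignment of truth values makes both
# premises true while the conclusion is false.
def implies(p, q):
    return (not p) or q

for free, responsible in product([True, False], repeat=2):
    premise1 = implies(free, responsible)   # If free, then responsible
    premise2 = not free                     # Not free
    conclusion = not responsible            # So, not responsible
    if premise1 and premise2 and not conclusion:
        print(f"Counterexample: free={free}, responsible={responsible}")
```

The loop prints the row free=False, responsible=True: the premises can both be true while the conclusion is false, which is what makes the form invalid.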

But I don't think philosophers typically agree with the conditional claim, which says that having free will (or doing A freely) is sufficient for moral responsibility (or being responsible for A). And we should not agree with it. After all, I might freely decide to back my car out of the driveway and in doing so run over the sleeping cat I could not be expected to have seen. If so, I do not seem to be responsible (blameworthy) for killing the cat. There might be ways to fix up the terms, but there is likely an epistemic condition (a justified belief requirement) for responsibility that goes beyond the free will (or control) condition.

However, it is more plausible to say that moral responsibility (being responsible for A) requires free will (that one did A freely, or did some earlier action freely that one should have known would lead to A). So, if I am responsible for killing the cat, I must have free will and must have exercised it in such a way that led to my cat killing.

Suppose we accept: If one is responsible, then one has free will.

Then, by modus tollens (or denying the consequent--i.e., saying we lack free will), we validly conclude that one is not responsible.
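By contrast, running the same kind of exhaustive check on the modus tollens form turns up no counterexample. Again, this is just an illustrative sketch, with names and a helper function of my own choosing:

```python
from itertools import product

# Modus tollens: "If R then F; not F; therefore not R."
# Valid if no assignment makes both premises true and the conclusion false.
def implies(p, q):
    return (not p) or q

counterexamples = [
    (responsible, free)
    for responsible, free in product([True, False], repeat=2)
    if implies(responsible, free) and (not free) and responsible
]
print(counterexamples)  # [] -- no counterexample, so the form is valid
```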

Some people suggest that it is so implausible (and/or costly) to assert that humans are never ever responsible for anything at all (e.g., that no one deserves blame for anything) that we have good reason to question any argument (or premises) that concludes that no one has free will.


I often hear arguments that go like this, but I don't know how to respond to them: "Not everyone who is bullied develops behavioral problems," or "Not everyone who is sexually abused develops a severe mental illness such as schizophrenia." These kinds of arguments often imply that because something doesn't happen to everyone, there is no causal relationship between a stimulus and an event. But that seems wrongheaded: though not every ball falls into a pocket after a break in a game of pool, that doesn't mean the cue ball and stick, along with the person making the break, didn't cause those balls to fall into the pockets. You could also use the same form of argument to argue against more widely accepted beliefs, such as the idea that soldiers experience PTSD because of their wartime experiences. So what exactly is wrong with those kinds of arguments, assuming I am right in believing they are wrongheaded?

I notice that your question hasn't been answered yet, but it has been waiting for one for a while. I feel a little out of my depth here, but just to provoke some further response(s), I'll take a shot.

Your question is really about what it means to say that something causes something else. If we think of causation as "deterministic," then a given cause will always produce a given effect, and the appearance of this effect can be wholly explained in terms of that cause. Some scientific fields seem to work this way--classical mechanics, for example. But even within physics, this (deterministic) conception of causation seems not to apply at some levels--for example, within quantum mechanics. Those who address what is called the "indeterminacy" at this level might explain it in different ways, but at least in terms of explaining and predicting, the simplicity of a deterministic model of causation is problematized here.

But if you look at most sciences outside of physics, the very notion that causal explanations are deterministic does not apply. Consider a widely accepted and thoroughly scientifically examined example: smoking causes cancer. Does this causal claim imply that either absolutely everyone who ever smokes gets cancer, or else the claim is false? Of course it doesn't! The same lack of complete determinism may be observed in most of medicine--but that does not prove that "medical science" is somehow a misnomer.

But now extend that to the cases you are asking about. The prevalence of PTSD among soldiers and former soldiers is very obviously explained by their exposure to the horrible things they experienced in war zones. Does this causal explanation require that everyone who is in a war zone will get PTSD? Of course not--no more than the idea that every smoker will get cancer (or that everyone who falls out of a third-story window will die, or that everyone who drives when drunk will get in an accident, or ... you get the point). Those who seek to undermine our concerns about bullying or sexual abuse with "arguments" like the ones you have mentioned simply do not understand the nature of causal explanation in the biological or social sciences--and probably don't understand causal explanation, period (since, as I say, even in some areas of physics deterministic assumptions seem not to apply).
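To put the probabilistic point in concrete terms, here is a tiny sketch with entirely made-up numbers (nothing below comes from real epidemiological data): a risk factor can sharply raise the rate of an outcome without producing it in everyone exposed.

```python
# Hypothetical counts, invented purely for illustration.
exposed_total, exposed_affected = 1000, 300      # exposed to the stressor
unexposed_total, unexposed_affected = 1000, 50   # not exposed

risk_exposed = exposed_affected / exposed_total        # 0.30
risk_unexposed = unexposed_affected / unexposed_total  # 0.05

print(f"Risk if exposed:   {risk_exposed:.0%}")
print(f"Risk if unexposed: {risk_unexposed:.0%}")
print(f"Relative risk:     {risk_exposed / risk_unexposed:.1f}x")
# A sixfold increase in risk is evidence of a causal connection even
# though 70% of the exposed group is unaffected.
```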

I hope this helps! (Others...?)


Would someone please clarify the importance of the distinction between (a) "either A is true, or not-A is true" and (b) "either A is true, or A is not true"? I've seen answers on this site in which the difference between those two formulations is very important, but I'm not quite sure why. Thank you.

It would be helpful to know which answers on the site you're referring to, but I'll take a stab at your question anyway. The only difference between your formulations (a) and (b) is the second disjunct in each, so I'll focus on that.

I presume A is some statement. What's the difference between "Not A is true" and "A is not true"? I'm not sure there's always a difference. Depending on the system of logic or semantics, "Not A is true" can mean merely (i) "Whatever truth-value (if any) A has, it's not the value true." Or it can mean (ii) "A is false" in systems in which every statement is true or false. The difference between (i) and (ii) is sometimes important, such as when we're dealing with the classic Liar sentence (L) "This sentence is false." One might say that L is not true and yet not false either: one might say that L is neither true nor false.
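If it helps, here is a small sketch of how the two readings can come apart once a sentence is allowed to be neither true nor false. The three-valued scheme below is a made-up toy, not any particular system from the literature:

```python
# Three truth statuses: 'T' (true), 'F' (false), 'N' (neither).
# "A is not true":  A's value is something other than true.
# "Not A is true":  the negation of A has the value true, which in
#                   this toy scheme happens only when A is false.
def negation(value):
    return {'T': 'F', 'F': 'T', 'N': 'N'}[value]

for a in ['T', 'F', 'N']:
    a_is_not_true = a != 'T'
    not_a_is_true = negation(a) == 'T'
    print(f"A={a}: 'A is not true' -> {a_is_not_true}, "
          f"'Not A is true' -> {not_a_is_true}")
```

The two readings agree when A is simply true or simply false, but they diverge when A has the status 'N', which is just what one might want to say about the Liar sentence.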


How do (many) philosophers respond to the logical fallacy "the enemy of my enemy must be my friend"? It is not uncommon, when I am having a conversation with someone about a public policy proposal and I criticize an idea advanced by one political party, for the other person to respond, "How can you possibly favor the idea advanced by the OTHER political party?" when in fact I favor NEITHER party's idea. I'm actually a bit surprised at how widespread this kind of fallacious thinking is. Many times neither "side" of a public policy debate has useful ideas (in my opinion), and I would very much prefer a third alternative over either "side's" position. Any suggestions about how to escape this enforced box would be greatly appreciated. Thank you. PS: Does this fallacy have a formal name? If it does, then at least in on-line debates I can merely link to the Wikipedia article about the fallacy. Thanks again.

If two political parties A and B have competing public policy proposals, and you criticize party A's proposal, it is typically a fallacy for your listener to assume that you therefore favor B's proposal. (The only exceptions are those simplistic cases in which Party B is simply opposing A's proposal, as when one party on the town council proposes to put a traffic light on Main Street and another party opposes doing so.) The fallacy lies in the fact that there are typically possible policies different from both A's and B's. (Even in the case of the traffic light dispute, a third party might propose installing a stop sign.) Complicated and interesting public policy disputes might give rise to dozens of viable, competing proposals.

This fallacy could be labeled 'false dichotomy' or 'false dilemma'. In the classic instances of this fallacy an arguer presents an either-or choice, and then concludes that one ought to believe or perform one of the disjuncts, when in fact there are more than the two offered choices. People who respond to criticisms of the United States by saying, "America: Love it or leave it!", commit this fallacy.
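To make the diagnosis a little more explicit: the schematic form "either X or Y; not X; therefore Y" is itself valid; what goes wrong in a false dilemma is the either-or premise. Here is a minimal sketch, with 'C' standing in for a third proposal like the stop sign mentioned above:

```python
# Suppose exactly one of three policy alternatives is the one a person
# actually favors; 'C' is the overlooked third option.
alternatives = ['A', 'B', 'C']

for favored in alternatives:
    a, b = (favored == 'A'), (favored == 'B')
    either_a_or_b = a or b
    print(f"Favors {favored}: premise 'either A or B' is {either_a_or_b}")
# When the person favors C, the either-or premise is false, so concluding
# "B" from "not A" goes wrong even though the inference form is valid.
```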

While you are right to be annoyed by people who commit this fallacy in interpreting your stance on policy issues, I would suggest that you be thoughtful about remarks like 'neither side has useful ideas'. You may not like any party's proposal in its entirety, but it is worth understanding what ideas ground the proposals. It is likely that at least one side has worthwhile ideas, even if you think they have not been worked out in the best possible way. When looking at current political disputes, it is easy to say 'a pox on both their houses'; it is much harder to come up with a viable alternative policy that respects whatever is valuable in each party's proposal.


Are there any philosophers who deny that the principle of explosion is a valid principle while at the same time rejecting paraconsistent logic and accepting the Law of Non-Contradiction?


I am reading Nietzsche's "Human, All Too Human". In one of his aphorisms he states that logic is optimistic. Does he mean that it would be foolishly optimistic to trust logic or to trust in its truth? Or does he mean something else that I just can't seem to understand?

Thank you for your question. I'm guessing you are referring to aphorism 6 in the first volume. You are certainly right to pull Nietzsche up here -- the reference to the concept of optimism is not at all clear. In fact, it goes back to an earlier book of Nietzsche's, The Birth of Tragedy. (If you want to look, the clearest -- which isn't saying much in this case -- treatment of this idea is found in chapter 18.) There Nietzsche argues that an important change took place around the time of Socrates, when what we now think of as science, broadly speaking, was 'invented'. What characterises this Socratic science? Well, logic, first of all, broadly understood in its Greek sense as a rational enquiry into the nature of things. But also, Nietzsche says, a certain optimism. Science only makes sense if the world CAN be understood and if, once it is understood, it can be CHANGED for the better. Science, he says, is intrinsically optimistic about its own utility. Now, here in Human, All Too Human the tune has changed a little (this is even clearer in aphorism 7). Nietzsche now argues that the particular sciences get along very well without a sense of utility, a sense of what they are for. It is only philosophy, as the widest science, or as the 'whole' of science, that is still optimistic in that Socratic sense.


Classical logic says that from a contradiction you can derive anything. I think that depends on how you define a contradiction. If you have two opposing truth values with respect to A--A is true and A is false--what can we infer about the truth status of A? Well, on one way of looking at it, you could say that to assert a contradiction means we hold that both statements about A are true regardless of whether they contradict each other. A is true regardless of the contrary position that A is false. Likewise, A is false regardless of the contrary position that A is true. If we define a contradiction in this manner, then we can separately infer both truth values of A: given A is true and A is false, we can conclude A is true, and given A is true and A is false, we can conclude that A is false. If you infer A is true from the contradiction, then A or B is true. If A or B is true, then if A is false, B is true. But A is true regardless of whether A is false, therefore we cannot conclude that an explosion occurs. It seems that...

You wrote: (i) "It seems that for classical logic to make sense of a contradiction in such a way that it leads to explosion...it must define what it means to hold a contradiction in a particular way" and (ii) "[W]ouldn't it be defined in some arbitrary way that forces us into the 'explosion' scenario?"

Regarding (i): If the assertion "A is true and A is false" means anything, then surely it implies that A is true and implies that A is false. I can't think of another way to construe the assertion. Are you suggesting that a conjunction doesn't imply each of its conjuncts?

Regarding (ii): How is it arbitrary to infer the truth of A and the falsity of A from the assertion "A is true and A is false"? Again, I can't think of another way to understand the assertion.

As far as I know, paraconsistent logicians tend to object to inferring B from (A or B) and not-A: they point out that the inference relies on the implicit assumption that not-A rules out A, an assumption they reject.
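For readers who want to see the classical derivation under discussion laid out, here is a sketch (the sentence letters are arbitrary placeholders):

```python
from itertools import product

# The classical route to explosion, starting from "A and not-A":
#   1. A          (first conjunct of the contradiction)
#   2. A or B     (disjunction introduction from 1)
#   3. not-A      (second conjunct of the contradiction)
#   4. B          (disjunctive syllogism from 2 and 3)
# A truth-table check confirms classical validity: no valuation makes
# the premise true and the conclusion false -- trivially so, since
# "A and not-A" is never true in classical logic.
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if (a and not a) and not b
]
print(counterexamples)  # [] -- "A and not-A, therefore B" is classically valid
```

As noted above, paraconsistent logics block step 4: if not-A does not rule out A, then "A or B" together with "not-A" no longer licenses B.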


For the given premises P and Not P, is P a valid derivation? Shouldn't the derivation be true for all the premises for it to be valid, or is it not sound and yet valid? But aren't we determining its unsoundness by virtue of something other than the content of those premises?

Given premises P and not P, it is indeed valid to derive P. I don't know of any logical systems, including non-classical systems, that would deny the validity of that derivation. (A valid derivation needn't use all of its premises: "P; Q; therefore, P" is valid.)

The derivation you gave isn't sound, however, because not all of the premises are true: it's guaranteed that one of the premises is false (even if we don't know which one). Yes, we're ascertaining the unsoundness of the derivation without knowing the content of its premises, but that's perfectly fine: If you know that the form of the derivation guarantees that it has a false premise, you don't need to know anything more in order to know that the derivation is unsound.
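The same two points can be run mechanically. This is only a sketch; P here is a placeholder for any statement whatever:

```python
# Derivation under discussion: premises P and not-P; conclusion P.

# Validity: no truth-value for P makes both premises true while the
# conclusion is false.
counterexamples = [p for p in [True, False] if p and (not p) and (not p)]
print(counterexamples)  # [] -- the derivation is valid

# Unsoundness, known from form alone: on every assignment at least one
# premise is false, whatever statement P may be.
print(all(not (p and not p) for p in [True, False]))  # True
```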


Is there a name for a logical fallacy where person A criticizes X, and person B fallaciously assumes that because A criticizes X he must therefore subscribe to position Y, the presumed opposite of X, although A does not, in fact, take that position? For example, if A criticizes a Republican policy, then B assumes that A must be a Democrat and staunch Obama supporter, even though A is in fact a Republican himself, or else an Undeclared who regularly criticizes Obama as well.

It seems to be a special case of a fallacy with many names: 'false dichotomy,' 'false dilemma,' 'black-and-white thinking' and 'either/or fallacy' are among the more common. When someone commits the fallacy of the false dichotomy, they overlook alternatives. Schematically, they assume that either X or Y must be true, and therefore that if X is false, Y must be true. The fallacy is in failing to notice that X and Y aren't the only alternatives. Your example makes the point. You've imagined someone assuming that either I accept a particular Republican policy X or I am a Democrat, when -- as you point out -- there are other possibilities.

The situation you describe is a little more specific: the fallacious reasoner is making an inference from what someone is prepared to criticize. As far as I know, there's no special name for this special case, but the mistake is the same: overlooking relevant alternatives.


Can paradoxes actually happen?

Yes! But bear in mind that a paradox is an apparent contradiction, an apparent inconsistency, that we're tasked with trying to resolve in a consistent way. For example, a particular argument implies that the Liar sentence ("This sentence is false") is both true and false, and a similar argument implies that the Strengthened Liar sentence ("This sentence is not true") is both true and not true. Usually it's our conviction that those arguments can't be sound that impels us to seek out the flaw in each argument. So too for other famous paradoxes, such as the Paradox of the Heap. Paradoxes abound! But that doesn't mean that contradictory situations do.
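To see why the Liar resists a classical assignment, here is a tiny sketch. It treats the Liar sentence as demanding a truth-value that equals the value of its own negation, which is one standard way of setting up the puzzle:

```python
# The Liar sentence L says of itself that it is false, so a classical
# truth-value for L would have to satisfy: L is true iff L is false.
satisfying_values = [v for v in [True, False] if v == (not v)]
print(satisfying_values)  # [] -- no classical truth-value works
# That apparent impossibility is what the paradox asks us to resolve.
```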

Now, some philosophers, such as Graham Priest, say it's a mistake to demand a consistent solution to every paradox. Priest says that the Liar Paradox has an inconsistent solution, i.e., the Liar sentence is both true and false: it's both true and a contradiction. So Priest would say that not only do paradoxes actually occur but inconsistent situations do too. Standard logic utterly rejects that idea.
