Even if the conditional isn't material, it's clear that this kind of inference has to fail. Suppose my roof leaks whenever it rains. Then it seems true to say: If my roof is leaking, then the streets are wet. But the streets aren't wet because my roof is leaking. Rather, there is a third cause of both these events. Even if there has to be a "link" between them for the conditional to be true, then, the link needn't be directly causal.
Too often people offer sympathy in ways that make themselves feel better at the expense of the target of their ostensible kindness. To tell someone who is suffering "That's not so bad -- I've had it much worse" reassures the speaker about his or her own fortitude in the face of misfortune, but it displays gross insensitivity to the plight of the person suffering in the here and now (even if the speaker is right about their respective degrees of suffering). In saying that the victim's suffering is less than the victim thinks it is, the ostensible sympathizer withholds a full acknowledgement of that suffering. (Contrast that insensitive remark with something like: "I can imagine the pain you are in. I remember how much it hurt when I broke my leg.")
In some situations, however, a remark like "Other people have it much worse than you" might be appropriate. If your friend is complaining excessively about not receiving a promotion that she wanted, it might be appropriate to remind her that, well, she does still have a job, that other people are less fortunate than she is, etc. It might be hard to convey this message with tact and in such a way that your friend will receive it in the right spirit, but the point of such a message would be to guide your friend to a proper assessment of the harm she has suffered. And such an assessment does require ranking the harm in the grand scheme of things. The result should be that your friend becomes less upset over her missed opportunity.
There is obviously something wrong with the argument, "You are suffering to degree S1. Other people are suffering to a degree S2 that is greater than S1. Therefore, you are suffering (or you should be suffering) less than S1." I don't know of a specific name for this fallacy, but we might lump it under the heading of ignoratio elenchi, or missing the point. But there is a related argument that I find OK: "You judge that a loss is of degree L1. But when you take a more global view of your life and the place of your loss in the wider world, you can see that the loss is really of degree L2, which is less than L1." I don't think that thinking of a loss in a wider context can or should make it go away. (It would be wrong to try to make your friend feel that her missed promotion was no loss at all.) But such reflection can take the sting away from a personal loss when the person in question is grieving too much.
As logicians and contemporary philosophers use the word 'valid', a valid argument (or piece of reasoning) is one such that it is impossible for the premises to be true and the conclusion false. The primary task of symbolic logic is to determine which arguments are valid; logicians pursue this goal by providing rules and techniques for evaluating the validity of argument forms.
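For propositional argument forms, that definition can even be checked mechanically. Here is a minimal Python sketch (the helper `is_valid` and the example forms are my own, purely for illustration): a form is valid just in case no assignment of truth values makes every premise true and the conclusion false.

```python
from itertools import product

def is_valid(premises, conclusion, n_vars):
    # Valid: there is no valuation under which all premises are
    # true while the conclusion is false.
    for vals in product([True, False], repeat=n_vars):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False
    return True

# Modus ponens: P, P -> Q, therefore Q  (valid)
mp = is_valid([lambda p, q: p,
               lambda p, q: (not p) or q],
              lambda p, q: q, 2)

# Affirming the consequent: Q, P -> Q, therefore P  (invalid)
ac = is_valid([lambda p, q: q,
               lambda p, q: (not p) or q],
              lambda p, q: p, 2)
```

Run this way, modus ponens survives the search for counterexamples, while affirming the consequent fails at the valuation where Q is true and P is false.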
Any instance of a valid argument form -- that is, any particular piece of reasoning that has that form -- is valid. A valid syllogism -- or more generally any valid argument -- can exhibit no fallacy. (We may provisionally define 'fallacy' as a defect in an argument other than false premises.)
In the examples you are worried about, we have arguments that appear to possess a valid form, but really do not. Here is a valid form:
Thing A has property P.
Thing B is identical to Thing A.
Therefore, Thing B has property P.
Your first two examples seem to have this form, and it seems that their premises could be true while their conclusions are false, so they seem to show that the form is not valid after all. But when we reflect further on the second premise of each example, we can notice that they do not line up with the second premise of the above form. David's bones are part of David, but they are not identical to him. So your David example has the form:
Thing A has property P.
Thing B is part of Thing A.
Therefore, Thing B has property P.
This argument form is not valid, as your example itself shows.
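For readers who like to see this made fully precise: in a proof assistant such as Lean, the identity form is just substitution of equals for equals and is provable outright, whereas nothing licenses the analogous step for mere parthood. This is only an illustrative sketch; the theorem name is mine.

```lean
-- The valid form: if b is identical to a and a has property P,
-- then b has property P (substitution of equals for equals).
theorem identity_form {α : Type} (P : α → Prop) (a b : α)
    (h1 : P a) (h2 : b = a) : P b := by
  rw [h2]; exact h1

-- No analogous theorem is provable if we replace `b = a` with an
-- arbitrary parthood relation: knowing that b is merely part of a
-- gives us no license to transfer P from a to b.
```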
Formal symbolic logic has given us algorithms for determining when an argument form is valid. But it still takes care and skill to figure out the logical form of a given piece of ordinary-language reasoning. An argument can masquerade as an instance of a valid form when really it has a quite different form. Such arguments are fallacious, but the fallacy lies in whatever misleads us into ascribing the wrong form to the argument. These fallacies do not impugn the validity of the good arguments they resemble.
Here's one more example to think about:
This chess piece is a king.
A king is, by definition, his nation's head of state.
Therefore, this chess piece is his nation's head of state.
What is the valid form this argument seems to have? Why does it not in fact have that form? That is, what fallacy does it commit?
I presume this phrase refers to the "The one thing I know is that I know nothing" remark attributed to Socrates? Well, one form of paradox occurs when you are simultaneously motivated to endorse a contradiction -- i.e. both accept and reject a given proposition, or assign the truth values of both true and false to it. And that seems applicable in this case. On the one hand what Socrates is asserting is that he knows nothing (after all, if he KNOWS that he knows nothing, then since knowledge usually implies truth, it follows that he knows nothing). But then again on the other hand the very assertion seems to disprove it, since he KNOWS it, and therefore knows not nothing, but something. So he simultaneously seems to be asserting that he knows something and that he does not know something. Now you may not find this particularly paradoxical -- you might be tempted to resolve it directly (by rejecting one of the two propositions). But I suppose it's called a paradox because reasonably good cases can be made for both sides of it (even if some individual believes it can be resolved).
hope that helps--
Aristotle gives a nice account of why we must have something "definite in our thinking" and not contradictions in Metaphysics IV. In order to say of something that it is or can be both F and not-F, he writes, we must have successfully identified that thing as the thing that is or can be both F and not-F. But we are in no position to do that if the something both is and is not the something we are talking about, or trying to talk about! So we do not have to abandon the piece of logic, the principle of non-contradiction, in one form, at least, which states that opposite things cannot significantly be said of the same thing. Here, at least, it seems that logic does not break down on the basis of the interesting argument that you gave.
There are lots of questions we can ask about this argument, but I'd suggest that trying to shoehorn the issues into specific named fallacies isn't as helpful as just looking for places where the argument raises questions. (It's interesting that in my experience, at least, philosophers invoke the names of fallacies only slightly more often than the average educated person does.) That said, here are a couple of quick thoughts.
The first sentence offers two broad reasons for being altruistic: because in the long run it benefits both society and yourself. Take the first bit: if someone didn't already think they should be altruistic, how persuasive would they find being told "You should be altruistic because it benefits society"? If you want to turn to fallacy lists, is this a case of begging the question? (Don't be too quick just to answer yes. Think about the ways in which wanting to benefit society and acting altruistically might differ.) Turning to the next reason, is it incoherent to think someone might decide to be altruistic in order to benefit themselves? (Once again, don't answer too quickly. There are some subtleties here.) And finally, think about the last sentence. It tells us that an increased probability of reproduction is the "ultimate evolutionary goal of any individual being." Ask yourself: is there a clear or simple connection between evolutionary "goals," whatever exactly those may be, and an individual's own goals? Could a reasonable person have goals that differed from supposed evolutionary goals?
The paragraph you quoted sounds like the kind of thing a philosophy teacher might set for her students as an analytic exercise. For that reason, I've treated your question in the way I'd treat it if I had set the exercise and if you were my student: I haven't told you how to answer; I've suggested what you might find it useful to think about. Whether you're a student or not, I hope that actually is useful.
I am inclined to agree with you that arguments and evidence need to be evaluated on their own terms and not dismissed out of hand on the grounds that the "expert" is affiliated with an institution whose worldview is thought to be biased or somehow discredited. So a biologist working in a conservative Christian institute who has generated a case for intelligent design needs to have her or his work taken seriously by journals or peer groups and given a fair evaluation, even if the majority of practicing biologists reject intelligent design. Still, there are boundaries that most disciplines have over what can count as sound arguments and evidence. Presumably a Christian biologist would not gain in credibility if she appealed to Biblical revelation as part of her evidence base for the journal Nature (though she might have credibility if she was writing for fellow Christian biologists or for a debate in philosophical theology that sought to balance revelation and scientific claims), any more than if Darwin added to his Origin of Species an appendix in which he reported that his account of evolution was endorsed enthusiastically by a series of para-psychical phenomena.
Stepping back from current science, it seems that we have in fact come to reject whole fields and methods of inquiry in the past, and would be very inclined not to take seriously individual contributions in the way of claimed evidence and support from such fields. I suggest this is true of theories about how to identify witches (in the so-called witch craze, there were a variety of methods employed to determine whether someone was a witch, including witch poking, which involved finding a dull spot in one's skin where a demon may have entered, and the tear test, which involved reading an account of Christ's crucifixion: if the subject did not shed tears, this was evidence she was a witch). A more recent case that we often forget is phrenology, the "science" of investigating a person's character by studying the shape of the skull. This was once a highly respected field with lots of experts, but it came to be so discredited that I doubt any recalcitrant practitioner of phrenology would have the ghost of a chance of getting a serious hearing. Would this be a case of an ad hominem? I think it would be better described in terms of the field of science progressing to the point where contemporary scientists have confidence that certain modes of inquiry and projects are themselves unreliable or demonstrably false (or, if you will, subject to a bias against current science).
Still, in an ideal world of limitless time and resources, I think we should be at least open in principle to someone claiming to have solid evidence that Hogwarts is a real place for training actual witches and wizards, open to someone who claims to have demonstrated the connection between the shape of the skull and character, and even open to the Society for Para-psychical Research if it claimed to have definitive, irrefutable evidence of post-mortem contact with Darwin. Ideally, I think we should sift through the arguments and purported evidence, though for practical purposes we should spend less time with, say, economic theories based on the practice of voodoo ("Voodoo Economics") than with an economic theory based on the empirical study of market behavior.
Perhaps this: true by definition, versus true by means of some correspondence between their meanings and the world. "Bachelors are unmarried" is logically true, i.e. true by meaning, because that is how we define the words involved; it's a matter of convention and meaning that that sentence is true, and thus one doesn't need to go investigate the world to see whether it's true -- indeed it's not making a claim primarily about the world at all, if its truth is a function of definition. Contrast this with "Bachelors live longer on average than the average man." This is NOT merely logically true, true by definition -- we must go do a study to find out if it's true, and thus to learn something substantive, some fact, about the world. Logical truths are trivial because we learn from them no new facts about the world, beyond the meanings of the words involved.
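The contrast can be made vivid with a small sketch. Treating sentences propositionally, a logical truth is one that comes out true under every assignment to its atomic parts; the helper `true_by_meaning` below is my own illustration, not a standard tool.

```python
from itertools import product

def true_by_meaning(formula, n_atoms):
    # Logically true: true under every assignment to its atoms,
    # so no investigation of the world is needed.
    return all(formula(*vals)
               for vals in product([True, False], repeat=n_atoms))

# "Bachelors are unmarried": unpack 'bachelor' as 'unmarried man';
# the claim becomes (unmarried and man) -> unmarried, a tautology.
definitional = true_by_meaning(
    lambda unmarried, man: not (unmarried and man) or unmarried, 2)

# "Bachelors live longer on average than the average man" does not
# unpack into a tautology; modeled as an independent atom, its
# truth depends on how the world is.
empirical = true_by_meaning(lambda lives_longer: lives_longer, 1)
```

`definitional` comes out true under every assignment, while `empirical` does not: to settle the second sentence you must go look at the world, which is just what the assignments stand in for.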
hope that helps--
Yes. Completely. The tricky question is why. It's tempting to answer that necessarily everything is bound by the laws of logic because the alternative -- the claim that something isn't bound by the laws of logic -- is necessarily false. But, as I suggested in my reply to Question 4837, no sense can be attached to the claim that something isn't bound by the laws of logic. So the claim can't be false, strictly speaking. Perhaps all we can assert is a wide-scope negation: it's not the case that something isn't bound by the laws of logic, just as it's not the case that @#$%^&*. Necessarily everything is bound by the laws of logic because the alternative is literally nonsense? I wish I had a better explanation!
To test a rule of inference, you can try to find counterexamples to it, cases in which the rule lets you derive a falsehood from true premises. Professor Vann McGee offered a well-known (and controversial) such attempt in this article.
But there's no getting around rules of inference entirely. Even as you test one rule of inference you unavoidably rely on others. Because any attempt to answer the question "Why should we trust rules of inference at all?" will rely on reasoning, it will trust some rules of inference, whether or not those rules are made explicit in the reasoning. There's no way to get "outside" all rules of inference and see how they measure up against something more trustworthy than they are.