As a believer, I think that theism is more reasonable than atheism, although I also think that atheists can have good reasons to believe that their worldview is true. Is this position rational? Put another way, is it possible for me to claim that my worldview is the correct one while granting that the opposite worldview can be as reasonable as the one I hold to be true?

I hope you are right, for while I am a Christian philosopher (or a philosopher who is a Christian) I believe that many of my friends and colleagues who are atheists or agnostics or who accept Islam or a non-theistic view of God (as does my Hindu philosopher colleague and friend) are just as reasonable as I am, in the sense that each of them has intellectual integrity and has spent at least as much time intelligently reflecting on their convictions, earnestly seeking the truth in such matters. Still, I think each of us needs to hold that the reasons that justify our different beliefs are not defeated (undermined) by the reasons for incompatible beliefs. An atheist might be able to acknowledge that I am just as reasonable as she is, but she cannot (in my view) think that her reasoning is undermined by the evidence or reasoning that I undertake. Alternatively, consider a Christian-Muslim exchange (something I am deeply committed to). I accept a traditional Christian understanding of God incarnate on the basis of an historical argument and an argument from religious experience (roughly following the reasoning of the Oxford-based philosopher Richard Swinburne in his book on the incarnation). In doing so, I believe that I am committed to thinking that no one has decisive, irrefutable evidence against the incarnation that any reasonable person would or should accept. I can certainly recognize that my Muslim philosopher friend Mohammad is reasonable in recognizing Jesus only as a holy prophet (peace be upon him), but there is a limit here: I cannot accept that he knows (with certainty, based on irrefutable evidence) that Jesus was only a prophet.

Three other points are worth noting.

First, I believe that the above matter is not special to philosophy of religion; it runs throughout metaphysics, ethics, epistemology, philosophy of law, philosophy of language, philosophy of science, and so on. I have two colleagues who are Kantians and one who is a Humean. The Kantians and the Humean cannot both be right about the nature and normative status of ethical obligations, but as far as I can tell they are intellectual peers and regard each other as equally reasonable.

Second, it is partly because we do (in the practice of philosophy) believe that colleagues who disagree with us are equally reasonable that we are motivated to engage each other in debate and sustained argument. Without that assumption, the very landscape of philosophy would look more hostile (in my view) than it currently does.

Third, as a general point, I happen to think that the reasons why philosophers adopt the positions they do are highly complex and historically conditioned. My hypothesis is that philosophers form their views on different matters based on clusters of arguments, their view of certain concrete cases which they interpret differently in light of alternative theoretical commitments, the success or failure of thought experiments, their particular exposure to positions during their graduate education, and perhaps even psychological and sociological reasons. For example, one person might naturally rebel against the perceived status quo, which is why he or she adopts a form of phenomenology in a department which is structuralist, whereas another person is an anti-realist about freedom in a philosophical libertarian culture. So, in offering this third suggestion, I suggest that we rarely have a case in which two philosophers disagree about X because they disagree about the evidential force of a single, separate line of reasoning. To give a concrete case, I think Philip Kitcher is just as reasonable as I am, or probably more reasonable, philosophically (he is older and has been practicing philosophy longer at an elite university, while I am a mere college professor). I accept a cosmological argument for theism (you can find a good version in the Stanford Encyclopedia of Philosophy), whereas he does not. Imagine we read the same article. The reasons for our diverging in our assessments probably lie outside of the line of reasoning in the contribution. Kitcher, for example, adopts a form of pragmatism when it comes to ostensibly necessary truths that I think is mistaken. For us to debate the cosmological argument, we would probably need to debate the adequacy of his pragmatism, and then probably move on to ever greater areas of epistemology and metaphysics.
Overall, then, I suggest (going back to my original response) that two philosophers may be equal in intellectual integrity, equal in focused, intelligent reasoning, equally earnest in identifying the truth or most reasonable position(s), and yet reach divergent views, partly due to the highly complex, interwoven nature of philosophy.

When faced with a lack of any conclusive argument one way or the other, how does one avoid total scepticism?

By 'conclusive' argument, I assume you mean some argument that proves, or guarantees, its conclusion. By 'total skepticism' I assume you mean having no opinion one way or the other at all, or completely lacking any confidence in both the conclusion and its negation. Now, if I understood that right, I think the answer to your question is: by considering arguments that are not conclusive, or don't absolutely guarantee their conclusions. Often, we have arguments that, while not proving or guaranteeing their conclusions, nonetheless provide some good reason to think that the conclusion is true. For example: My dog almost never barks unless there's someone coming; my dog is now barking; therefore, there is someone coming. This is not conclusive, in the sense that it leaves open some possibility that someone is not coming. But it seems unreasonable for me to be totally skeptical, or to have no opinion, on whether someone is coming. Rather, I should be somewhat confident, but not certain, that someone is coming. We have the capacity to believe things to different degrees, and sometimes we are less confident in what we believe than utter certainty. When an argument indicates, but doesn't guarantee, a conclusion, the rational response seems to be some confidence, but not certainty, in the conclusion. Total skepticism seems unreasonable in such cases, or at any rate, avoidable. Finally, notice that some arguments would guarantee their conclusions if their premises were true, but we are uncertain about whether the premises are true. For example: All dogs bark; Fido is a dog; therefore Fido barks. I'm not totally certain--and I don't possess any guarantee--that all dogs bark. So this argument isn't conclusive, and even once I understand it and believe its premises (though not with certainty), it seems I should be neither certain in its conclusion nor totally skeptical about it. The conclusion, I should think, is probably true (insofar as the premises are probably true).

The age of consent seems to arise because people under a certain age threshold are not capable of making informed, prudent decisions, whether because of neurology or wisdom or otherwise. However, what is to say that they are capable once past this age threshold? Consider an alien species similar to us, except that they live to the age of 1000 rather than 80-100 as humans do. This alien species might set the age of consent at 200 years, because anyone younger is deemed not to have enough knowledge to make an informed decision. When this alien species looks at us, they will probably pity us for making choices below the age of 200, because we have neither the wisdom nor the neurology required to make an informed decision. Is the age at which one is deemed able to make an informed decision thus meaningless?

My suspicion is that you may be holding "informed decision" to an unfairly high standard. Granted, we often do not make "informed, prudent decisions." We human beings are certainly not omniscient, and we sometimes reason badly. But plenty of decisions we make can be made rationally without being omniscient: I don't have to know all that much to (say) choose between a salad and an omelette for lunch, or to decide whether to have a summer or a winter vacation this year. Of course, some decisions are more complicated. But I'm happy to say that we have enough knowledge in many cases to make decisions that meet minimal standards of rationality.

You may also be making the controversial assumption that the only factor relevant to "age of consent" is rationality. Ethically speaking, certain decisions matter a lot to us because they impinge on fundamental concerns. Whom to marry, for example, is not a decision that others should make for us. For one, even if I'm not perfectly rational about that decision, I'm still likely to be better situated than anyone else to make it. Moreover, I might care that the decision be left up to me, even as I recognize I may make a bad decision. In other words, I want the right to be wrong in certain decisions I make. That idea has been a prominent theme in the work of anti-paternalist philosophers (e.g., J.S. Mill).

So no, I don't think age of consent is "meaningless" even if we acknowledge that human beings often don't make the most informed or prudent decisions on their own behalf.

Is it an ad-hominem when I get called "a pessimist who won't be happy with positive changes in situation X, so further debate is pointless", even though I've presented my arguments for why I'm skeptical of any positive changes in situation X? I feel like it's a dismissive tactic, but would like some clarification.

This is a good, and difficult, question. There's no doubt that, in some cases, this sort of objection really is an unfair, unhelpful dismissal. Calling you names ("silly pessimist!") could just be a way to fail to engage with what you're saying. Though that may be what's going on in your case, sometimes such objections actually do have a point. Let's set aside the "ad-hominem" fallacy, whether this is an instance of it, and what that implies, and just consider whether one might have a good point when one objects with something like "you just think that because you're a pessimist." It is useful to compare this with a more straightforward case first. Suppose you think that your daughter is the best singer in the choir, and someone says to you "you just think that because she's your daughter!" You might reply, "but I hear her singing, and I hear the others, and I genuinely think she's got the best voice." "Yeah," the objector replies, "of course you think that, whatever, have fun thinking she's the best!" What's going on in that situation? The objector could be trying to point out that, because you love your daughter, and want her to be the best, you are biased in your assessment of her skills. This bias, of course, might not be intentional on your part, and it may still seem to you like you have good reasons (you do hear her voice, after all). What the bias is doing, according to this imagined objector, is tainting your evaluation of the evidence. You'll be quicker to accept evidence that she's the best, and more critical of evidence to the contrary. So, here we have a case in which the objector may have a point, if the objector is resisting your conclusion by citing your bias due to your desire that your daughter be the best. Of course, this is just *some* reason for suspicion. Your daughter may turn out to be best, and it may well be that, in fact, you were fair and unbiased in your evaluation.
All I'm saying here is that the objection at least makes sense: there's some reason to think you are untrustworthy on this question, because there's some reason to think you're biased. Now return to your friend, who accused you of being a pessimist. One way to understand the condition of pessimism is that you eagerly accept, and amplify, arguments or evidence for negative conclusions, and you criticize, dismiss, or minimize (or ignore) arguments for positive conclusions. If so, then even if you're able to cite some argument for your negative conclusion, that's not a very good indication that there aren't good criticisms of that argument, or that there aren't even better arguments for a positive conclusion instead. In short, if you're a pessimist, there's reason to suspect that you're biased when it comes to negative conclusions. As in the previous case, of course your accuser might be wrong. Maybe you really did fairly evaluate all the arguments, and were not so biased in reaching your conclusion. But if you really do operate as a pessimist, then you should admit that's *some* reason to suspect your negative conclusions. All of this raises many good questions. Aren't we all biased, and therefore untrustworthy, in various ways? And if so, why do we ever bother giving arguments in the first place? And is it ever possible to overcome one's biases, at least the ones one knows about? I think there are probably reasonable answers to such questions, but I'm an optimistic philosopher, so maybe I'm just biased...

I came across a webpage which makes this claim: "Skeptics of homeopathy insist that homeopathic medicines do not work, but have difficulty explaining how so many people use and rely upon this system of medicine to treat themselves for so many acute and chronic diseases." Is there a name for the kind of fallacy this person is making, or a particular way to describe it? I feel that even if I couldn't explain why so many people "rely" on homeopathy, that wouldn't mean that it is a valid form of medicine.

You are probably thinking of the informal fallacy argumentum ad populum, or appeal to the masses, in which someone suggests a conclusion is true because many people believe it to be true: http://en.wikipedia.org/wiki/Argumentum_ad_populum

While large-scale belief might provide inductive evidence for some claims, we know the masses can be wrong about lots of things, especially in cases where the underlying explanations are complex, as in medicine. That most people believed the sun goes around the earth did not show that claim is true. That most people did not (do not?) believe tiny things (germs) cause disease does not show that belief is true. That many people think homeopathy works provides little or no support for that claim.

However, there's an interesting twist in this case: the placebo effect is remarkably powerful--if people believe some medical treatment works, that belief can have effects, especially when it comes to pain. So, many people believing homeopathic treatments work might have some causal effects on whether it works, at least for those people. But I am dubious that the placebo effect can cure cancer or kill viruses...

I was always wondering: is it possible to deliberately choose to be irrational?

Great question. There seem to be cases when a person might rationally choose to act irrationally--as occurs in Shakespeare's play Hamlet, when the main character, Hamlet, acts as though he is mad to confuse his uncle Claudius (who, it turns out, murdered Hamlet's father), buying Hamlet more time to contemplate how to avenge his father's death. I believe that U.S. President Nixon--and I am sure some other world leaders--sought to project to potential enemy states that he was capable of being irrational. There also seem to be cases in which persons deliberately put themselves into states of mind in which rationality goes out the window, as when a person deliberately becomes heavily intoxicated or allows their passions to run so completely unchecked that they are in an 'anything goes' mode.

Apart from such cases, there may be a different, more paradoxical case, and perhaps the following is what you have in mind. Imagine a person is deliberating about whether to choose A or B, and the person knows that choosing A would be highly irrational (e.g., she has very good reasons to be 99% sure that choosing A would be profoundly undesirable, and very good reasons to be 99% sure that choosing B would be desirable). Under those circumstances, is it possible for the person to choose A? I think it is, if we are using a somewhat narrow understanding of what is rational and irrational. Some philosophers have distinguished between rationality and feelings or intuition or faith. Pascal, for example, once claimed (in rough paraphrase) that the heart has reasons that the mind does not understand. Such a remark suggests that one might think of 'rationality' as a primarily intellectual judgment, rather than "following one's gut feeling" or something like that. Given such a demarcation between reason on the one hand and gut feelings on the other, I think we can imagine a person deliberately acting in ways that (from an intellectual point of view) are irrational because (from the supposed "gut level") they have other grounds for their decision.

This juxtaposition between reason and gut feelings actually came out in the 1964 U.S. presidential election. The Goldwater campaign had a slogan: "In your heart, you know he is right." Goldwater's opponents came up with this reply: "But in your guts, you know he's nuts"!

Do philosophers always embrace rationalism? Why not irrationalism? What's so bad about my voting for a political candidate based on how attractive his wife is?

Philosophers are, generally speaking, fans of reason and rationality, but there are exceptions, e.g. David Hume, who wrote that reason "is, and ought only to be, the slave of the passions." There is also, often, unclarity about what the reasonable or rational thing to do is.

In response to your final question: voting for a political candidate based on the attractiveness of his wife is inadvisable. If human flourishing in the future depends on who is elected, then voting in this way undermines your own interests, not to mention the interests of others.

How can you be confident that you're an open-minded or free thinker? Doesn't it seem likely that even the most prejudiced, dogmatic individuals view themselves as free thinkers (or, at any rate, appropriately responsive to evidence) with respect to their own views?

Good question. It could use some precision in the terms, though: what exactly counts as being "open-minded" or "free thinking"? Some of these terms might have very specific meanings in certain contexts, but it's not clear what meaning you're assigning to them here. Also, the heart of your second sentence/question is really empirical -- we'd have to do a carefully devised survey to find out how people generally self-conceive. One of the really deep philosophical questions you have a finger on here might be this: is it possible to reconcile "open-mindedness" with "having reached a firm rational conclusion" -- since the degree to which you are convinced (rationally) by P is the degree to which you are no longer "open to" not-P. So even if you are initially "open" to all sorts of arguments/evidence, once you've made up your mind you are now "closing yourself" with respect to counter-arguments/evidence. Of course, one might hold that an "open-minded" thinker is one who continuously revisits the issue, revisits the arguments/evidence, and continues to look for new counter-arguments/evidence -- but I'm not sure doing that is always really a virtue. (It seems pretty inefficient overall -- once you've investigated something you would never be finished, which would stop you from moving on to other things.) So there are a lot of good things to think about here!

best,

ap

Would you consider a 16 year old an adult, i.e. a rational agent who is capable of making decisions on their own? To what extent can you hold a 16 year old, or similarly aged person, accountable for their actions?

There are plenty of rational and responsible young people and just as many adults who clearly are not capable of making their own decisions. States have to posit an age when certain activities can be legally carried out because they use generalizations about how most people are at those ages, but these are just generalizations. Whatever the law says, we should judge individuals case by case where rational action is at issue.

Is there any way to prove that you are telling the truth when it seems false to others?

My answer is bound to disappoint, but here goes anyway.

The obvious options for proving that I'm telling the truth are 1) to give reasons for thinking what I say is actually true, 2) to give reasons for thinking that I'm honest and 3) to give people a basis for doubting their own reasons for doubting me.

1) The best way to prove that you're telling the truth is to give people good reasons to believe that what you're saying is actually true. Unfortunately, in some cases this is really hard. Suppose I really did hear John tell Mary that he planned to break into Sam's computer. That might really have happened, and I might have heard it. But I might not have any independent way of showing that John and Mary really had this conversation, and if it's my word against theirs, there's not a lot that I can do.

2) I might be able to provide evidence that I'm generally honest, and that I don't have any special motive for lying about John. That would help my case indirectly. It would tend to show that I'm not deliberately lying. But even if I convince people that I'm honest and that I'm trying to report things as they happened, there would still be room for doubt. Maybe I misheard what John said. Or, if the issue is whether John was planning to do something illegal, maybe I missed some important bit of context that would give John's words a different meaning.

3) I might be able to give reasons for doubting John's honesty. But while that clears some of the obstacles to believing me, it's not the same as showing that what I've said is actually true.

So there are various things you could do that might help you make your case, but they'll depend on the circumstances. There's no one way that fits all cases, and there's very often no foolproof way in any case. If there were, legal trials would be a lot easier.
