Is there any way to prove that you are telling the truth when it seems false to others?

My answer is bound to disappoint, but here goes anyway.

The obvious options for proving that I'm telling the truth are 1) to give reasons for thinking what I say is actually true, 2) to give reasons for thinking that I'm honest and 3) to give people a basis for doubting their own reasons for doubting me.

1) The best way to prove that you're telling the truth is to give people good reasons to believe that what you're saying is actually true. Unfortunately, in some cases this is really hard. Suppose I really did hear John tell Mary that he planned to break into Sam's computer. That might really have happened, and I might have heard it. But I might not have any independent way of showing that John and Mary really had this conversation, and if it's my word against theirs, there's not a lot that I can do.

2) I might be able to provide evidence that I'm generally honest, and that I don't have any special motive for lying about John. That would help my case indirectly. It would tend to show that I'm not deliberately lying. But even if I convince people that I'm honest and that I'm trying to report things as they happened, there would still be room for doubt. Maybe I misheard what John said. Or, if the issue is whether John was planning to do something illegal, maybe I missed some important bit of context that would give John's words a different meaning.

3) I might be able to give reasons for doubting John's honesty. But while that clears some of the obstacles to believing me, it's not the same as showing that what I've said is actually true.

So there are various things you could do that might help you make your case, but they'll depend on the circumstances. There's no one way that fits all cases, and there's very often no foolproof way in any case. If there were, legal trials would be a lot easier.

Debating with a theologian over the validity of the biblical condemnation of homosexuality, I've been offered a sequence of arguments that seem to me circular.

First argument (divine directives): 1. God has given the directive to establish heterosexual marriage. 2. Homosexual acts are condemned in the Bible. 3. Homosexuals break the divine directive.

Second argument (perverse heart): 1. To break a divine law willingly is perversion. 2. Homosexual acts are condemned in the Bible. 3. Homosexuals are perverse.

Third argument (social deviance): 1. To diffuse behaviours that are condemned in the Bible is a form of social deviance. 2. Homosexual acts are condemned in the Bible. 3. Homosexuals are social deviants.

To me it is obvious that all these arguments include, as a second premise, the condemnation whose validity is in question. When I have made this observation I have been offered a curious answer: everyone has a worldview that starts from certain unquestionable premises, which are in themselves circular but not invalid....

Interesting.

It's true that we do sometimes rely on assumptions, premises or whatnot that we simply take for granted. In fact, it's hard to see how we could avoid doing that; otherwise we'd end up in an endless regress of justifications. We could use the term "worldview" for broad premises that we use this way, but I'm not sure the term adds much so I'll leave it aside.

But there's an ambiguity in what you're describing. Is the theologian offering arguments that s/he thinks should persuade a non-believer? Or is he offering arguments that a believer might accept whether or not anyone else does?

If you're asking this person "Why do you believe that homosexuality is wrong?" then pointing out that it's a consequence of other assumptions that the person accepts and sees as more basic is fine. In that case, he's simply setting forth the internal logic of his view. Whether or not you accept the first two beliefs, there's no circularity in saying "The Bible represents God's directives, and we should obey God's directives. The Bible tells us what God's directives are, and it directs us not to perform homosexual acts. Therefore we shouldn't." There's also no logical jump. If you accept the premises, it's reasonable to draw the conclusion.

On the other hand, if what you're asking the theologian is "Why should I, who don't share your religion, think that homosexuality is wrong?" then the arguments are plainly not good enough. They rest on premises that you simply don't accept, and you've been given no reason to believe them. Compare: suppose someone said to this theologian "Utilitarianism is the right view of morality. [That is, roughly: what's right is what produces the most happiness and the least unhappiness.] There are no good utilitarian arguments against homosexuality. Therefore it's not wrong." In the circumstances, that would be no better an argument. The theologian plainly doesn't accept the first premise, and he hasn't been given a reason to.

Now the theologian might say that any reasonable person should see that his premises about God are true. But of course, that's not so. Many clearly reasonable people disagree -- just as many reasonable people don't think that utilitarianism is the correct story about right and wrong. There are interesting, serious reasons to doubt that there's a God. And even if there is a God, there are interesting, serious reasons to doubt that a literal reading of the Bible reveals his will. (Likewise, there are interesting, serious reasons to doubt that utilitarianism gets morality right.)

If your theologian turns to insult rather than argument when you ask for clarification, there may not be much point in pursuing the discussion. But it could be pursued. Chances are neither of you would end up convincing the other. But you each might gain more insight into why the other thinks as he does.

But this doesn't say anything about your last question: when is it reasonable to put a premise beyond question?

That's a hard question. For one thing, it's context-dependent. If people who share a broad point of view are arguing about details, it's usually reasonable not to call the shared presuppositions into question. But that's not the only context. For most of us, there are certain views that we're not likely to give up even though we know full well that others don't share them. (Basic ideological commitments are sometimes like this, for example.)

Still, I think we can say at least a little more. One point is a rule of thumb: if the person you're disagreeing with seems sane, thoughtful and well-informed, that's a reason to take seriously the possibility that they might be onto something. Another point is related: sometimes, even though we don't find our opponents' reasons compelling, we can see that they have some force; we can feel their tug. For example: I'm opposed to capital punishment. But I can see how a perfectly reasonable person might come to a different conclusion. That suggests that I shouldn't turn my opposition to capital punishment into an axiom. Similarly: I'm not a theist. But I know plenty of sane, reasonable theists, and I can understand the pull that theism has for them. Once again, that suggests I shouldn't take my non-theism as an unquestionable presupposition.

Beyond that, it's not easy to say much. It sounds to me as though your theologian is not giving enough credit to doubters. That said, there's probably some room for you to give him at least some credit and it might be interesting to see where that leads.

If I define philosopher as lover of wisdom, how can I be sure that it's a rational, critical and systematic investigation of the truths and principles of being, knowledge, or conduct (one of today's favoured definitions of philosophy, it seems to me) that brings wisdom? It seems quite a bit too dogmatic to me. It seems like these epithets imply that this is the only way through which one can gain wisdom, but what if there are other means to gain wisdom?

If word origins were a good guide to the nature of a profession, a secretary would be a keeper of secrets and a plumber would be someone who works in lead. That suggests we have some reason to be suspicious at the outset. Even if we grant that "philosopher" comes from the Greek for "lover of wisdom," that doesn't tell us much about what the discipline of philosophy actually is.

Let's take the philosophers who think of themselves as systematically, critically examining principles of being, knowledge and/or conduct. Do they see themselves as engaged in the pursuit of wisdom? Some might, but I'd guess most don't. They're trying to sort through interesting and abstract questions of a particular sort, but no wise person would think of abstract theoretical understanding as amounting to wisdom nor, I submit, would any wise person think that wisdom requires abstract, theoretical understanding.

I'd side with the wise here. Wisdom isn't easy to characterize in a sound bite, but I think of a wise person as someone who has deep practical insight into what matters for human life, and who is able to align the way s/he lives with that insight. Being good at philosophy is neither necessary nor sufficient for being wise in that way. Indeed, though philosophers are no less wise on average than other people, my experience is that on average they are no more wise either. Some of the least wise people I've known are skilled philosophers, and many of the wisest people I've known have no talent for or training in philosophy. [I'll add a parenthetical remark here, so long as you promise not to tell anyone: I'm not convinced that Socrates himself was especially wise, though he was undoubtedly clever.]

This doesn't mean that there's no connection of any sort between philosophy and wisdom. If wisdom has to do with what matters for human life, it has to do with matters of value on which philosophers sometimes reflect. More generally, the question of how best to analyze the notion of wisdom is a perfectly good philosophical question. But being wise isn't a matter of theoretical understanding, any more than being a good musician is a matter of knowing a lot of music theory. In fact, having theoretical insight into the concept of wisdom is no guarantee at all that one will be wise oneself.

Some people may see the disconnect between philosophy and wisdom as unfortunate; I think that's a mistake. What philosophers do has its own kind of interest and value. That etymology isn't a good guide to the relevant value is neither surprising nor a flaw in the enterprise.

Why do smart people disagree about fundamental questions about life?

How about because they're hard questions?

Okay, maybe that's a bit quick. But it's close. When a question doesn't have an obvious answer, it's no surprise that people disagree. And if there's no agreed-upon method for getting the answer, it's even less surprising. A lot of what most people would count as fundamental questions about life are like that. For that matter, so are a lot of questions that most people would have a hard time getting excited about. (A good chunk of what you'll find in academic journals deals with questions that hardly count as fundamental issues about life, but the answers aren't obvious and the methods for getting at answers aren't obvious either.)

For some such questions, there's another sort of reason: picking an answer depends on how we rank competing values. Many of the familiar differences between liberals and conservatives are of this sort, for example. And it's not just that questions of value can be hard or that there's not always a clear way to settle them. It may be that in some cases, there isn't a uniquely correct answer.

That might suggest that smart people would stop disagreeing about such things. After all, if you like vanilla and I like chocolate, we don't disagree. We just have different preferences. But to suggest that some value-questions don't have uniquely correct answers isn't to imply that none do. It also isn't to say that all answers are equally acceptable. And even if we agree that there's no real disagreement when you and I pick different acceptable responses, it may not be obvious that we're in that kind of case. In other words, even if there's in fact no one right answer, that fact may not be obvious and it won't stop us from caring deeply about our "disagreements," even if they aren't real disagreements.

So perhaps the original answer still stands with a qualification: it's not just that questions like this can be hard; sometimes they're meta-hard: hard to figure out if they really have answers to begin with.

Some people have argued that because people's choices are often influenced by factors that are not relevant to rational decision making, people do not have free will. For instance, people are much more willing to register as an organ donor on their driver's licenses if this is presented as the default option ("check this box to be an organ donor" vs "check this box to opt out of being an organ donor"). Does a person need to be rational in order to have free will?

I'd like to suggest that it's not an all-or-none affair, but yes: rationality is part of free will. One way to think about it is to ask what kind of "free will" would be worth caring about. A will that's not able to respond to reasons is one I wouldn't want to have, and any sense in which it would be "free" seems to me to be pretty Pickwickian.

This point doesn't settle the question of how free will and determinism are related. Robert Kane's version of libertarianism, for instance, doesn't call up any obvious conflict between free will and reason. That's partly because reason doesn't always dictate a single course of action. It would be reasonable of me to work on my administrative duties for the rest of the afternoon, and also reasonable to spend the time on research. But it wouldn't be reasonable to tear off my britches and run naked into the street, and I don't take the fact that this would be beyond me (absent a very good reason) to mean that I don't have free will.

So yes: little glitches in our reason do represent limitations on our "free will" (a phrase, by the way, that I think could use a holiday). But reason needn't be perfect for us to be reasonably free.

When two people disagree, is there always one right person and one wrong person?

No. Alice may think that Jones is a genius; Bob may think he's a fool. He might be neither.

Hume showed that belief in induction has no rational basis, yet everyone believes it and in fact one can't help believing it. How then can one criticize religious belief, or the person who says "I know my belief in God has no rational basis, but I believe it anyway"?

At least part of the answer to your question is hidden in the way you phrased it. Suppose that I'm wired so that there's really nothing I can do about the fact that I think inductively. As soon as I put my copy of Hume down, I revert straightaway and irresistibly to making inductive inferences. We usually think it doesn't make sense to criticize people for things they have no control over. If we can't help making inductions, then criticism is pointless.

But we don't think that all non-rational beliefs are like this. On at least some matters, we're capable of slowly, gradually changing the way we think until the grip of the irrational belief weakens to the point where we can resist it. For example: someone might realize that they're prejudiced against some group. They might come to see that this prejudice is simply irrational. That might lead them to think they should try to change the way they think and react, and they might well succeed. Or to take a different example, when cognitive-behavioral therapy is successful, it's mainly a matter of helping people learn not to think in certain irrational ways that they once were prone to.

So... If a belief is irrational, and if it's the kind of belief that we can unlearn, then it might well make sense to criticize someone for holding it. It might make sense in a proactive way, as a means of moving them to change, and it might also make sense in a backward-looking way if we think there's no excuse for their having left themselves in the grip of the belief for so long. But if it's the kind of belief that can't be unlearned, then criticizing someone for holding it is unreasonable.

Whether belief in God really is irrational is another matter. My answer would be that it isn't always. But we do know, at least, that it's a belief that some people have unlearned, and sometimes in part by way of thinking hard about it.

Suppose it's your birthday, and you get your Aunt (who has an infinite amount of money in the bank) to mail you a signed check with the dollar amount left blank. Your Aunt says you can cash the check for any amount you want, provided it is finite. Assume that the check will always go through, and that each extra dollar you request gives you at least some marginal utility. It seems in this case, every possible course of action is irrational. You could enter a million dollars in the dollar amount, but wouldn't it be better to request a billion dollars? For any amount you enter in the check, it would be irrational not to ask for more. But surely you should enter some amount onto the check, as even cashing a check for $1 is better than letting it sit on your dresser. But any amount you put onto the check would be irrational, so it seems that you have no rational options. Does this mean that the concept of "infinite value" is self-contradictory? If so we have a rebuttal to Pascal's Wager.

I hope that some of my co-panelists who think more about decision theory will chime in, but here are a few thoughts.

Cheap first try: it seems plausible that even if every additional dollar brings some marginal utility, by the time we reach, say, a trillion trillion dollars (a septillion dollars) the utility provided by the septillion-and-first dollar is so tiny that the utility cost of worrying about it exceeds the utility it could provide. Of course, that's not really an answer to your question. What you have in mind is a scenario on which it's not just that each additional dollar adds utility, but on which the total area under the utility curve goes to infinity. But it's worth noticing that these are separate ideas. Even if each additional dollar adds value, the infinite sum might still converge to a finite number.
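
To make the convergence point concrete, here is a minimal worked example; the particular utility schedule is mine, chosen purely for illustration. Suppose the $n$th dollar adds $u_n = 2^{-n}$ utiles. Then every dollar adds strictly positive utility, and yet

\[ \sum_{n=1}^{\infty} u_n = \sum_{n=1}^{\infty} 2^{-n} = 1, \]

so no check, however large, is worth more than 1 utile in total, and the gap between a big check and an even bigger one becomes negligible. The paradox needs something more like $u_n = 1/n$: each increment still shrinks, but $\sum_{n=1}^{\infty} 1/n$ diverges, so arbitrarily large payoffs remain genuinely available.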

So we can restate the problem this way: there's an infinite well of utility available, and you can choose to have any finite amount of it, but you have to specify the quantity of utiles (where each utile adds a constant amount of utility). In that case, it seems, no matter what amount you pick, it would have been rational to ask for more; it would always have been possible to increase your payoff by a non-trivial amount. It's a nice problem, but it's not quite clear what it shows. We can agree that no matter what amount you pick it would have been rational to ask for more. But the conclusion you've suggested is stronger: that it would be irrational not to have asked for more. That's not so obvious. Compare: in ordinary situations, it's not clear that people who "satisfice" -- decide to make do with an amount of expected utility that's less than the maximum they could have achieved -- are being irrational.

However, there are some delicate issues here, best left to those who know more than I. Suppose we grant for argument's sake that in the situation you've posed, every option is irrational. Two questions: first, does this show that the very notion of infinite value is incoherent? And second, if the answer is yes, does this show that Pascal's Wager is fatally flawed?

On the first question, the answer is not an obvious yes. After all, suppose you were given this choice: (a) pick nothing; (b) pick a finite amount of utility; (c) pick infinite utility. This decision problem seems to have a clear answer. Why not say that the problem isn't with the idea of infinite utility? The conclusion is simply that if there could be infinite utility, some decision problems would have no good answer, while others would. The ones that do are the ones whose constraints allow you to maximize.

As for Pascal, suppose that what I've just said is wrong, and that the idea of infinite utility really does make no sense. Then certain classic versions of Pascal's wager are incoherent (we'll leave aside what Pascal himself may have had in mind), but there are neighboring arguments that don't simply collapse. Suppose that you are extremely skeptical about God's existence, but allow that it has at least some positive probability ε, however small. Then if the rewards of belief, assuming God exists, are great enough, there's still an "ordinary" expected utility argument in the offing. It's easy to construct little 2x2 tables with appropriate numbers (exercise for the reader; one worked version follows), and we don't even need to assume that God would punish non-believers. All that Pascal-style arguments need assume is that what God would have on offer is wonderful enough to swamp other considerations, even given a low value for ε.
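
Here is one way to do the exercise, with illustrative finite numbers of my own choosing (nothing here is Pascal's). Take ε = 0.01, a reward R = 1,000,000 utiles for belief if God exists, a modest cost of 10 utiles for believing (time at mass, and so on), and a payoff of 0 in every other cell:

                   God exists (ε = 0.01)    No God (0.99)
    Believe        1,000,000                −10
    Don't believe  0                        0

\[ EV(\text{believe}) = 0.01 \times 1{,}000{,}000 + 0.99 \times (-10) = 9{,}990.1, \qquad EV(\text{don't}) = 0. \]

Belief maximizes expected utility even though every payoff is finite and non-believers are never punished; all the argument needs is a reward large relative to ε and the costs.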

Needless to say, there are plenty of other criticisms that Pascal's Wager has to deal with. All that's being suggested here is that the problem of establishing the coherence of infinite utilities need not be one of them.

What are some real-life examples of using reason (deductive or inductive) in a sound and valid manner and coming up with a false statement of reality? In other words, I'm trying to prove that reason is not always a reliable way of knowing.

It might help to start with some definitions. As philosophers and logicians use the term "valid," a piece of reasoning is valid, roughly, if it's impossible for the premises to be true unless the conclusion is also true. That means that any argument with true premises and a false conclusion is automatically invalid. And as philosophers and logicians use the word "sound," a sound piece of reasoning is valid and has true premises. That means that any sound argument automatically has a true conclusion.

Of course, valid arguments can lead us to bad conclusions. That happens when they start with false premises. The following argument is valid, but the conclusion is false:

Some whales are fish.
All fish have gills.
Therefore, some whales have gills.

The problem, of course, is the first premise. But the reasoning isn't at fault.
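
For readers who like symbols, here is the same argument in standard predicate-logic notation (the predicate letters W, F and G are mine):

\[ \exists x\,(Wx \land Fx), \quad \forall x\,(Fx \to Gx) \;\therefore\; \exists x\,(Wx \land Gx) \]

The form is valid: any individual witnessing the first premise is an F, and by the second premise every F is a G. It's the actual falsehood of "Some whales are fish" that lets the false conclusion through.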

So far, we've talked about deductive reasoning, and we can say that there are principles of deductive reasoning that are reliable in this sense: when applied to true premises, they produce true conclusions. But reasoning is often inductive, which means among other things that even if the premises are true, it isn't guaranteed that the conclusion will be true as well, no matter how meticulous the induction. Inductive reasoning aims to show that its conclusions are probable given the premises, but this means that a person could be an impeccable inductive reasoner, could start from true premises, and still end up believing false things. That's the risk of trying to extend what you know beyond whatever follows strictly from what you already believe.
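
To put an illustrative number on that risk (the figures are mine, purely for illustration): suppose an impeccable inductive reasoner only accepts conclusions whose probability, given true premises, is at least 0.99. Across 100 independent inferences of that quality, the probability that every conclusion is true is at most

\[ 0.99^{100} \approx 0.366, \]

so even this excellent reasoner should positively expect, on average, about one false belief per hundred such inferences (100 × 0.01 = 1).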

Then there's the question of where our premises come from in the first place, and one thing's for sure: they don't all come from reasoning. My belief that I had yogurt for breakfast is a matter of memory; my belief that I'm in the Philosophy Department lounge as I type is a matter of what I see. And on it goes. My memory can fail me; my perceptions can be illusory; my information-processing can be addled.

So there we are. If we reason from valid deductive principles, we'll end up with truths if we start with truths. That's a kind of reliability, but no guarantee of truth. If we're good inductive reasoners, and we start with true premises, then our conclusions are probably true, which is arguably a kind of reliability as well, but they're not guaranteed to be true. And there are lots of ways for the beliefs we reason from to be off the mark. In short, there are no guarantees, but this isn't really surprising. And it's perfectly compatible with our being overall reliable about a good many things, even if we can fully expect that we'll sometimes go astray.

As I see it, there is not a single person on the planet who can prove or disprove the existence of God. If there is no provable God and/or afterlife then there can be no better hope for anything beyond the grave than what religion espouses. If there is a God however, then the rewards for correct behavior are well defined. Why then would the rational man NOT believe in some sort of supreme divine being if there is no proof either way?

To ask a question our illustrious leader, Alexander George, has several times asked here: What's meant by "prove"? If what's meant is what's ordinarily meant by "prove", then it's not clear that a single person on this planet can prove human beings evolved from apes. Nor can anyone prove that the Loch Ness monster does not exist. But that simply doesn't mean that there can't be good reasons to believe that human beings evolved from apes or that the Loch Ness monster does not exist. There can be, and there are.

Now what exactly that has to do with the rest of the question is not yet clear. But have a look here http://plato.stanford.edu/entries/pascal-wager/ for some thoughts (not mine).

It sounds as though you're giving a version of Pascal's Wager. One version of that argument runs along the following lines (whether or not this is exactly what Pascal had in mind): If God exists and I believe, I'll get infinite bliss. If he exists and I don't believe, I'm damned. But if God doesn't exist and I believe, I lose little, if anything; and if he doesn't exist and I don't believe, I don't gain that much. Since belief potentially gains me much and loses me little, while disbelief potentially gains me little and loses me much, I should believe.

One problem, of course, is whether skeptical people can actually get themselves to believe. Pascal thought they could by going to mass, taking holy water and the like. Let's suppose he's right. What's the downside? One famous difficulty is the "many gods" objection. Which version of God do we believe in? What sorts of actions should we perform? Should we be Christians? What if there's a God who sees that as an unacceptable form of thinly...
