I just had a job interview today. As is often the case, I am now nervous as to whether or not I got the job. But in the process of being nervous, I got to (over)thinking about my own nervousness and potential disappointment if I don't get the job, and I've come to wonder something: would it be rational for me to be disappointed at not getting the job? I mean, I suppose if we were to endorse the logic that (a) something is important to me, (b) it is rational to be disappointed when important things fail or fall through, and (c) getting this job is important to me, then it seems logical to be disappointed. But why endorse this logic in the first place? Why not just apply, do your best and then, if it falls through, shrug and move on to other opportunities? Is it in any meaningful way rational to be disappointed, sad or frustrated when things don't go our way? It may be natural, and it may be human, but that doesn't mean it has to actually make sense.

Great question, and one with very deep historical roots. The ancient Stoics, for example, thought that remorse and regret were not compatible with being a true Sage, and I think the same arguments they give about these responses would also apply to disappointment or frustration when things don't go as you had hoped they would. But to extend this way of thinking even further, you might then go on to ask whether it is ever really rational to hope for something that is not under your own control. For the Stoics, the only thing that is under our control (or, at least, can and should be under our control) is how we react to things. As a result, such "bad" reactions as remorse, regret, disappointment, or frustration are not the right way to respond to things that happen in the world. The true Sage would understand how the world works so well that nothing he or she would ever do would give rise to remorse or regret. Similarly, the Sage would understand the world so well that nothing would frustrate or disappoint him or her, because the Sage would never be so irrational as to hope for something that was not the way things really are or will be. The Sage is one who simply wants the world to be the way it actually is, so that one's will is perfectly aligned with what actually has happened, does happen, and will happen.

Now, many people would question this Stoic view as being "morally challenged," at best. It sounds, for example, as if the Stoic Sage's response to a school bus full of innocent children hanging precariously over a cliff, slowly tipping towards the point where it will surely fall, would be something along these lines: "Hmmm...I see that the bus will go over the cliff and all those children will die on the rocks below. Well...that's fine with me!"

If this sounds like something has gone wrong in the Stoic view, then you can apply the same point to your own question. If the acquisition of something (such as a certain job) seems like it would bring genuine benefits, all things considered, then it strikes me as both natural and also reasonable to feel at least some disappointment or frustration if one does not obtain the valuable thing. The Stoics denied that things like jobs (or, for that matter, even loved ones) actually have any genuine value. Many people will find this element of the Stoic view implausible. If things do have value, then it is right for us to want them, and part of the logic of desire is to feel some kind of dissatisfaction or discomfort if our desires are not met.

So perhaps a more fruitful way to think about this question is not to frame it in terms of contraries (either we should feel frustration or not), but instead to think about what levels of frustration or disappointment are appropriate to the specific episode in which we do not get what we desire. Here, I think most philosophers will adopt a view that is not as extreme as the Stoic view, but which approximates that view more closely than the very exaggerated (and, I suspect, self-absorbed) way in which most people respond to the (often very petty) frustrations of their lives. We are encouraged to be "philosophical" about things, which means, I suppose, that we are supposed to evaluate the actual worth of things as accurately as we can, and also to remind ourselves of the generally very poor position we are in with respect to assessing the actual long-term value, all things considered, of what we find ourselves desiring.

A small anecdote might help here. I was very frustrated in my career for many years, and sometimes came close to being offered jobs that I know I would have accepted had they been offered at the time. I also now think, in retrospect, that accepting at least some of those jobs would have actually hindered my career even more than what I was already finding frustrating at that time, so...actually, it turns out to have been a good thing for me that I did not get what I wanted so badly at that time. Reminding oneself of such things can certainly help to allay some of the more negative aspects of the experience of frustration.

But if a desire really is for something good, and one does not manage to get the good, then it seems to me that at least some level of disappointment is quite reasonable, especially if that disappointment can help to motivate a continued effort to get some version of the good that was missed this time, or a better line of approach, so that one's next efforts might be more successful.

If there were a a good reason to believe that irrational thinking--or at least a certain train of irrational beliefs--leads to greater happiness and prosperity (and I think there is a bit of psych research that suggests this is true), could a rational person decide to think irrationally--to adopt irrational beliefs--and would that itself be a rational decision?

Before I try to give an answer to your question directly, I want to object to the claim that seems to be its basis. I do believe that recent psychological research about happiness supports at least some elements of what might be called "irrationalism." On the other hand, it seems to me that this same research always treats happiness as a purely subjective property, and I want to make clear that this subjectivist treatment of happiness is very much at odds with the objectivist presumption in most of the philosophical literature on happiness.

To quote myself (the easiest author for me to remember!), "Giddy morons may suppose they pursue their interest by doing what only makes them giddier and more foolish, but sensible evaluation will conclude that such lives are nothing to envy. The addict's high, even secured by a lifetime supply of intoxicants, is no model of surpassing success in the pursuit of self-interest" (T. C. Brickhouse and N. D. Smith, Socratic Moral Psychology, Cambridge: Cambridge University Press, 2010, p. 46). In other words, one who might be counted as "supremely happy" from a subjective point of view only could still count as a complete wretch to a sensible objective observer. In the philosophical tradition known as "eudaimonism" (from the Greek word eudaimonia, which is often translated as "happiness," but which is also reasonably well translated as "flourishing," "thriving," or "well-being"), happiness does have some important entailments with respect to subjectivity, but the achievement of actual happiness will not be exhausted by subjective considerations alone.

But if we take this objectivist stance, it starts to look like the hypothesis that forms the basis of your question may not be one to which we can really give our assent: one who thinks or acts irrationally is not one who seems to us to think or act in a way that is objectively choiceworthy. Maybe thinking or acting irrationally can provide subjective advantages (just think how happy I might be if I could convince myself that absolutely everybody loves and cares about me!), but if we (more sensibly, I contend) bring the objective point of view to bear on the question, I don't think we would ever suppose that irrationalism was preferable to rationalism.

Do you only do a good deed (or just about anything) because you're gaining something from it yourself? I have been discussing this with my friend, and she thinks people are naturally "good". I just think that, as we are animals, we are naturally finding ways to survive. Of course sometimes people make bad decisions, but they are still thinking that the choice is best for them. -Heikki

Let me recycle the line of response that I gave to a slightly different earlier question, with a few tweaks (and not disagreeing with my co-panelist, but with different emphases).

It is a truism that, when I fully act, it is as a result of my desires, my intentions, my goals. After all, if my arm moves independently of my desires, e.g. because you want it to move and so push it, or as an automatic reflex, then we'd hardly say that the movement was my action (it was something that happened to my body, perhaps despite my wishes).

But note that even if everything I genuinely do (as opposed to undergo) is as a result of my desires etc., it doesn't follow that everything I do has an egoistic motive in the sense of being motivated by the thought that what I do has a payoff for me or that "the choice is best for [me]". The fact that a desire is my desire doesn't entail that the desire is about me or is about some payoff for me, or something like that. And it is just false that all my desires are like that. I can want to bring about states of affairs in which I just don't feature at all, and want such states of affairs irrespective of any payoff for me.

For example: I can want my grandchildren to have a tolerable world long after I am gone, and I can want to do what I can about climate change for their sakes. That is, to repeat the truism, a desire of mine: but it isn't a desire for something for me (I won't be around long enough for things to get bad). It is a desire for something for them (and for their contemporaries too) that gets me to act. In no sense is that an egoistical desire that I get something. It doesn't have the right sort of content.

"Ah hah," says the cynic, "you don't get it, do you? When people think they are doing something for their grandchildren, that isn't really why they are doing it. They are actually doing it for some selfish reason -- they are doing it in order to feel good (or for some similar pay off in happiness for themselves)."

But there isn't the foggiest reason to suppose that that is true. Of course, since I want something badly for my grandchildren, I will be pleased with what tiny successes I might be involved in which might do something towards the fulfillment of my desires. And the occasional pleasurable feedback will no doubt help sustain my desire to fight the good fight. But what I actually want is the better world for my grandchildren, not the pleasurable feedback. If an angel were to offer me the choice, modest real successes that I never knew about [so no feedback] vs. no real successes but occasional pleasurable illusions of success -- with my choice to be followed by instantly forgetting the angel's bargain -- I'd of course still choose the first. For it is the successes that I care about.

Of course that leaves us with a puzzle, perhaps the puzzle that ultimately underlies your question. If we are, as you say, animals engineered by evolution, which blindly promotes organisms that tend to win out in the battle for survival, then how come I actually have desires (e.g. for my grandchildren's well-being) that seem to have nothing to do with my own survival?

But evolutionary biologists have stories to tell about how altruistic desires can indeed have evolved. Do note, however, that the fact that these other-directed desires have evolved as part of our animal nature does not imply that they aren't "really" altruistic: that doesn't follow at all. For to say that a desire is altruistic (in the everyday sense) is just to say something about the kind of content the desire has, what it is a desire for. We can have desires -- as with my desire that my grandchildren flourish (e.g. after my death, when I'm not around to be affected) -- that are not self-directed, and there are indeed good naturalistic stories on the market about why this should be so. These are explored, for example, in a nice philosophical encyclopedia article on altruism and evolution.

Looks to me as if you and your friend are having a debate in which the options on the table are not the only ones available for consideration. Part of what it means to be a human animal is to live with others. This means that, just at the level of fitness, we will do better if we have the resources (whether natural or socialized -- I suspect a good deal of both) to deal with others in positive ways. Precisely because there are many others around us who really matter to us, the distinction between "best for me" and "best for others" becomes both artificial and also distorting. What is "best for me" is often for me to sacrifice at least some degree of narrow self-interest in order to help others to flourish. This is the kind of thing that parents and friends do for each other all the time. But it is not limited simply to those close to us. Studies have shown that people who are given money and told to spend it on others report greater happiness than those who are given money and told to...

Is "you should..." synonymous with "it is rational for you to..."?

Is "you should..." synonymous with "it is rational for you to..."?

Some philosophers would derive the former from the latter--Kant, for example, is generally supposed to think that obligation derives directly from rationality. But I think it is going to depend upon what specific notions of responsibility ("should") and rationality are at work. I think a good way to see how a negative answer to your question might work is to ask a different version of your question: Is it self-contradictory to say that one shouldn't always be rational, or to say that one should (sometimes) be irrational?

For example, if one supposes that morality is wholly a social construct, without any basis in reality beyond social convention (I don't believe this, but some do), then it seems to me that one might recognize duties, imposed by whatever conception of morality was currently fashionable, that seemed (and indeed were) irrational. But that is only if one does not also think that the principles of rationality are social constructs. Usually, however, those who think that morality is a social construct also think that all values (including rationality) are social constructs. Or, if one takes a Romanticist view of rationality (regarding it as something like cold calculation), one might say you shouldn't always be rational. Famously, in the area of religious belief, Kierkegaard argued that the "knight of faith" was one who held religious beliefs in ways that were opposed to rationality.

In most ethical systems, however, I think that, even if obligation and rationality are not treated as the same thing, they are biconditionally related, which is to say that whenever you have one, you will also have the other.

Do you think it's possible, even theoretically, for there to exist a substantive belief (any kind, about anything) that is impervious to any argument, cannot be debunked, etc., and yet is false?

Yes, at least theoretically. An example of how this might be is given in the first of Descartes' Meditations on First Philosophy. Descartes asks us to consider a world that is governed by a kind of evil god who delights in nothing more than making us believe what is false. In such a world, we would be able to find no evidence at all to debunk the falsehoods to which the god inclined us. Descartes challenges us to see if we can be absolutely sure that we do not actually inhabit such a world!

Modern popular culture has taken up this scenario in various entertaining ways. I think it is fair to say that the worlds imagined in "Total Recall" and "The Matrix" are excellent examples of scenarios that raise the theoretical possibility of false belief that is (at least for those who don't escape the Matrix!) invulnerable to refutation.

There are many arguments for the existence of god (e.g., the ontological argument) which, though interesting, probably don't actually account for the religious belief of even their primary exponents. I suspect that a person may be aware of many reasons for belief in a proposition "P" but that only some of these are actually causally linked to his belief that "P"; others he may offer as a way of persuading non-believers, or convincing them of his reasonableness, but these don't actually explain his own conviction. How do we differentiate between arguments or evidence which create belief, and those which merely support it? Is there some link that we perceive between certain reasons and belief but not others?

It might help to notice that there are distinct senses to "reasons for believing that P." The first sense (usually called "propositional justification" by epistemologists) has to do with there being some fact of the matter that would make it reasonable for me--that would justify me--in believing that P, should I happen to be aware of that fact. Hence, to use an example that has been used by others, the fact that there is smoke billowing out of the house (whether or not anyone is aware of it) is a good reason to think the house is on fire. The other sense is called "doxastic justification" by epistemologists, and has to do with what a person actually has, among his (other) beliefs, as justification for that person's belief that P. So I would be doxastically justified in believing that the house is on fire if I was aware of the smoke billowing out, and was also aware of the connection between smoke and fire.

It is a point of contention among epistemologists precisely what role justification (reasons, arguments, evidence) must play in knowledge and/or reasonable belief. For some, the fact that there is the right sort of causal connection between the knower and the known (even if the knower has nothing we would regard as doxastic justification) is enough. For others, what matters is whether the belief was formed or sustained in ways that reliably produce true beliefs--again, with or without the addition of doxastic justification. Even if we suppose that justification is required, it may well be (and indeed seems almost certain) that what justifies us is distinct from whatever causes us to have that belief. Doxastic justification is generally understood as consisting in (other) beliefs one has, which provide evidentiary support to the belief they justify. But there are very good reasons to doubt that beliefs--whatever sort of entities they turn out to be, if they are entities at all--can themselves cause other beliefs. Presumably, the causal story will have to include lots of other entities and processes of which we are not (and perhaps cannot be) aware, in the way we can (at least in principle) be aware of the beliefs that provide doxastic justification for other beliefs we have.

Why can't I remove my emotions (such as falling in love) by rationality?

The relationship between reason and the emotions is one that has been wondered about for a very long time--going back to our most ancient literature, including the Old Testament and Homer's Iliad. I doubt that I will be able to resolve this one for you, but I do have a suggestion to make.

I'm not sure this is a philosophical question, but I also think that you (or most people) can do what you say you can't do. If you think that you are feeling a certain emotion that is not compatible with a rational assessment of things--for example, you feel as if you are falling in love with some movie star whom you will not likely ever meet--then there are various rational steps you can take to get rid of the emotion. Ever heard the one about taking a cold shower?

OK, maybe it is not as simple as that, but we certainly can look for things that will divert our attention from an emotion, or that will use the energies of the emotion in different ways (thus serving to deflect it, as part of a strategy for extirpating it altogether). Simply reminding ourselves of the irrationality of some feelings we may have will help us to get rid of them (or transform them into something else). There can also be rational strategies for getting help--if an emotion is especially troublesome, it is rational to seek assistance from professionals who can work with you on why you may be feeling some things that seem very irrational to you. Understanding the source of an emotion is also a potent tool for restoring us to a reasonable life.

What is a reason (to do or believe something)? Suppose that someone who kills another person should be punished and that Ann killed somebody. Are there two reasons or just one reason to punish Ann?

Seems like one reason to me. Reasons (and reasoning) can be complex, of course, but there would be no reason to punish Ann if she did not do a punishable act, and there would be no reason to punish her if acts such as the one she did were not punishable. So the way to count the (single) reason seems to be this: Ann committed a punishable act (namely murder--not all killing seems to me to be punishable, which is why I changed your wording).