I recently read that the majority of philosophers are moral realists. I either do not understand moral realism or, if I do understand it, I don't buy it. Below I describe how I view the ideas of 'right' and 'wrong.' Is my understanding incompatible with moral realism? And how would you critique my understanding? Also if you want to give a version of moral realism that is easy to understand that would be greatly appreciated. Let’s say that I find test taking difficult. I declare: test taking is difficult. This statement is relational in nature. I am saying that because of various elements of my personal makeup the action of taking a test is difficult for me. It would be incorrect of me to say that test taking was objectively difficult. Some, as a result of various differing elements of their personal makeup, may find test taking easy. It is hypothetically possible to enumerate all of the events in my life as a child and the specific neuroanatomical structures that cause test taking to be difficult for me...

Let's start with "test taking is difficult." There's a difference between "test taking is difficult for me" and "test taking is difficult." If what I mean is just that I personally find test taking difficult, then simply saying "test taking is difficult" is a recipe for being misunderstood.

Now of course if I say something is hard, I'm usually not saying it's hard for literally everyone. I mean, roughly, that it's hard for most people. That's fine, but then the statement can be true whether or not the task is hard for me personally.

This gives us our first point: if a statement doesn't seem just to be about the speaker, then don't read it that way unless there's a good reason to.

Now let's turn to moral statements. When people say that punching a child in the face is wrong, they don't just mean something about themselves. In fact, most people, including most philosophers, would reject that reading out of hand. If I say "it's wrong to punch a child in the face" and you say "what you really mean is that you have negative feelings about punching kids," I'd say no; that's not what I mean. I mean that it's wrong, and it would still be wrong even if I happened to enjoy it.

Now maybe there's no real difference between right and wrong. But at this stage, we're talking about what statements mean. Roughly, that's a matter of what the people who use those statements mean by them. And I think we're on safe ground in saying that when most people say something is wrong, they don't just mean that they don't like it.

You might say that there's nothing else meaningful that they could mean. But the argument you give doesn't show this. Your argument is that there's a causal story that accounts for my saying that it's wrong to punch a child—some story about my upbringing and my influences and my neurons or whatnot. The problem is that even granting this, it doesn't show what you need to show. Compare: I say that 537 times 24 is 12,888. There's certainly a story about my upbringing, education, and the workings of my brain that explains why I say this. But the statement that 537 times 24 equals 12,888 isn't a statement about me, and in fact it's a true statement. My upbringing and the workings of my brain made me into the kind of person who's a reasonably reliable source of information about basic arithmetic.

This brings us to our second point: just because there's a causal story explaining why someone says what they say, that doesn't make their statement subjective, and it doesn't give us any reason to think it's not true. On the contrary, the causal story might help us understand why the person is reliable about this sort of thing.

Finally, back to moral claims. When people say that punching children is wrong, they don't just mean that they have a certain feeling about punching kids. If Mary says punching kids is wrong, there's certainly some complicated story about Mary's upbringing, influences, and brain that explains why she says this. But for any statement there's always some such story, and that doesn't usually give us a reason to doubt that what the person says is correct. (It could, of course; maybe the person is on drugs. But the fact that some causal stories give us reasons to doubt what someone says doesn't mean that all of them do.) Mary might be (is, I'd say!) right about punching kids. And because of her empathy, sensitivity, intelligence and wide experience, Mary might even be an unusually reliable source of moral advice.

This isn't to say that there aren't any reasons to doubt moral realism. But those reasons will have to be quite different from the ones you've proposed.

Whenever ethics and aesthetics come into conflict, is it always aesthetics that must give way? What is so bad about killing ugly people to decrease the net ugliness in the world?

I have to wonder: are you trolling? If not, I'm not sure whether any possible reply is likely to satisfy you. That said, since it can be useful to try to articulate things we normally take for granted, a handful of comments.

If someone thinks that getting rid of ugly people trumps not killing people, there's an obvious challenge: perhaps you're beautiful now, or at least, perhaps you're not ugly. But that can change. It might change slowly through the depredations of aging, or it might change in an instant because of some horrific accident. If you think it would be okay to kill someone because they're ugly, you should agree that it would also be okay to kill you if you become ugly.

Now the reply might be: this amounts to begging the question; it implicitly puts ethics above aesthetics. The test I've offered is near kin to the Golden Rule, or at least the Silver Rule, or in any case Kant's Categorical Imperative. But that misses the point. If Jack thinks it would be okay to kill someone else just because...

A postscript: the larger question was whether ethics always trumps aesthetics. A closely-related question is whether a life that always puts moral considerations above all other considerations, no matter how apparently trivial the issue, is a good one. Susan Wolf had interesting things to say about this some years ago in her paper "Moral Saints" (Journal of Philosophy, August 1982). Here's a link to her essay: http://philosophyfaculty.ucsd.edu/faculty/rarneson/Courses/susanwolfessay1982.pdf

Is lying by omission really a form of lying?

Allen Stairs is right in suggesting that it is possible to lie by saying nothing, and perhaps it is worth adding that lies of this sort often form the basis of prosecutions for fraud.

Fraud is legally complicated, but the basic idea is that you can injure someone by causing that person to rely on your assurances, even when you know that your assurances are false. Many legal systems assume that such conduct should sometimes be punished, depending on the circumstances, but they also assume that false assurances (or false representations) can come from what you don't say as much as from what you do say.

For example, if I sell you a car without telling you that it has no brakes, and I omit this fact knowingly, my conduct might well be punishable as fraud. One can label such conduct with a variety of names, of course, but the basic idea is that there is something fundamentally wrong and dishonest about it, and many societies have sought to punish it.

A key element in these situations is trust. If I sell you the car, then you have a legitimate right to expect that you can trust me to disclose any gravely serious defect that I know about, like the car’s having no brakes. (Such expectations are sometimes “fiduciary,” from the Latin fiducia, for trust, and this is especially so when one gets advice from a lawyer or an agent, who is expected to disclose relevant information.) To betray this trust, by commission or omission, is often punishable. By contrast, enemy combatants are not normally expected to disclose their strategic plans to each other, and one reason is that neither side has any obvious right to trustworthy information from the other side. (Perhaps it is just this assumption that lies behind Sun Tzu’s famous aphorism in the Art of War, “All war is based on deception.”)

Similar distinctions often appear in sports or business. Sports teams have no obligation to tell their competitors that the competitors are preparing for the wrong play. But they do indeed have an obligation to disclose violations of the rules by their own players. (“Sure, he played with regulation footballs; we’re just not telling you that the balls were all intentionally deflated.”) Business competitors have no obligation to tell each other of their strategic plans, but they do indeed have a responsibility, both moral and legal, to disclose serious defects in a product to their own customers. In many of these cases, people have a right to trustworthy information, and this right can be violated by what we don’t say as much as by what we do say.

Even in war, an adversary who accepts terms of surrender has a right to expect that there are no undisclosed conditions that would make the surrender absurd or the equivalent of suicide. In Shakespeare’s Henry IV, Part 2, Prince John of Lancaster promises rebel leaders that their grievances will be redressed and urges them to surrender. After the rebels lay down their arms, Prince John orders their arrest and sends them off to execution. He explains that their grievances will indeed be redressed; he simply left out the part about their being arrested and executed.

I am tempted to leave it there, but trust requires that I disclose an additional fact—that I am not a lawyer, just a fellow who teaches philosophy. So if there is any possibility of fraud by omission, it is certainly prudent to consult an attorney.

If the question is about the word "lying," then there's probably no clear answer. But what word we use isn't the interesting question. Suppose X isn't true, but it's to my advantage that the person I'm talking to should think it's true. For example: maybe I'm talking to my boss, who would reasonably expect that I would have carried out a certain task. In fact, I didn't, and the result was not good. By answering questions artfully, I may be able to leave my boss with the impression that I actually carried out whatever the task was, without ever actually saying this. I've left a crucial detail out of what I said. Have I actually lied? Maybe not; depends on how you want to use the word. Have I deliberately tried to deceive my boss? The whole point of the story is that I did. Of course my boss might not just assume that I did my job properly, but I'm hoping that's what she thinks, and I'm trying to make that as likely as I can short of outright saying something false. Is this as bad as an outright...

I'm a lawyer. One of my previous clients asked me for specific legal advice that he later used to commit financial fraud. I strongly suspected at the time that he was going to use my advice for that very purpose but I told him anyway because I like him as a person and I also disagree with the law that prohibits the particular type of fraud that he committed. Have I acted immorally according to virtue ethics?

First, a thought about the question: you ask whether you've "acted immorally according to virtue ethics." You might be trying to understand what light virtue ethics in particular casts on a case like this, or you might be interested in whether what you did was wrong, period. In either case, I don't think we have enough information to say. But let's take the cases in turn.

Some views provide what's supposed to be a criterion that we might be able to use rather like an algorithm to figure out what's right or wrong. Utilitarianism would tell us to do a sort of cost/benefit analysis, toting up the goods and the harms and deciding whether one action is better than another by seeing how the arithmetic works out. Kantianism would direct us to apply the Categorical Imperative in one or another of its forms. (For example: we might ask whether what we're considering would call for treating someone merely as a means to an end.) Virtue ethics doesn't work that way. It's often understood as telling us to do what a virtuous person, aware of the situation and properly informed, would do. We can get a grip on that by asking what virtues are relevant—honesty, for instance, or fairness or courage or kindness. But the list of virtues is open-ended, and even once we've identified some relevant ones, there's no recipe for saying how they apply in a particular case. For example: a person with the virtue of honesty isn't necessarily someone who unfailingly tells the truth. Rather, a person with this virtue knows when truth-telling is the right thing, and acts accordingly. My own view is that this open-ended character is itself a virtue of virtue ethics, but not everyone feels that way. In any case, without knowing what's at stake and what the law actually forbids, it's hard to say how the virtuous person would act in your situation.

The fact that you ask your question in the first place suggests that, virtue ethics aside, you may not be sure you did the right thing. If you told your former client what the law actually calls for and he chose to break the law anyway, then it's not likely that you breached any duties of professional ethics. However, that doesn't necessarily answer the larger question; things can be in accord with professional codes of ethics and still be wrong. If you were implicitly encouraging him to commit some sort of fraud, that would be at least somewhat worrisome even if this particular sort of "fraud" is something most people wouldn't see as wrong. The worry is that we have at least some duty to obey the law even when the law is less than ideal. One reason: picking and choosing among laws might weaken your own overall respect for the law, and might likewise make other people respect the law less. Again, this isn't conclusive and a lot would depend on the details. But as a lawyer, you are arguably held to a higher standard than most people when it comes to matters of the law. It doesn't sound like you literally broke your professional oath, but we can still ask whether you behaved as we would ideally like lawyers to behave. Did you caution your former client? Did the way you answered his question suggest that you take seriously the general maxim that we should respect the law? Or was there a wink and a nod? How good are your reasons for thinking this kind of "fraud" isn't really wrong?

I don't know the answers to any of those questions. But if you're trying to decide whether you acted wrongly, they're the sorts of questions I'd say you should ask.

What happens, morally speaking, if I promise to do something that happens to be slightly immoral? Do I still have some kind of obligation to do it?

I think a lot in your question hinges on the word "slightly". Is there a moral obligation to keep a promise to do something that is "slightly immoral"? I think that the answer has to be "Yes", since the value of the duty to keep promises is not in question, and the act contemplated is only "slightly" immoral. OK, but how slightly? Would it help if you had written, "if I promise to do something that is utterly and completely immoral"? Or if you had written, "If I promise to do something that is only ever so slightly, just the teeniest barely discernible bit, immoral"? I think such gradations make a big difference, and it is not very clear how "slight" the immorality has to be before it ceases to conflict with the important general obligation to keep promises. Of course much also depends on to whom the promise was given, why, under what circumstances, and so on. These all need spelling out before we can address the question with any hope of answering it.

Nice question! On some views, there's no judging intrinsically whether doing what you promised is immoral, slightly or otherwise. If you're a consequentialist (someone who thinks consequences always decide what's right), the question is what, overall, produces the best consequences, and it might be that overall, it's better to do what you promised, even if it's something we'd normally expect you shouldn't do. Someone else could say that the case contains a moral dilemma by its very nature. On the one hand, someone might say, it's wrong to break a promise. On the other hand, we've assumed that what you've promised to do is also wrong. On that way of looking at it, we have a dilemma, and on one way of understanding dilemmas, you will do wrong no matter what. That said, you may still be obliged, all things considered, to do what you promised—or not to, depending on the case. We could add other theoretical possibilities here, but for anyone who faces a situation like this in real life, the answer is "It...

Hello. Listening to a radio programme about utilitarianism I was struck by the difficulty of making a universal framework fit in our relationship-driven world, and how a concept of egoistic or relative utilitarianism might do this. That is, we maximise utility not equally over everyone but across those with whom we feel a relationship, and to the extent that we do. So, where a utilitarian sacrifices his children to make a small dent in third world poverty but ignores his newly unemployed neighbour because she is not starving, an "R.utilitarian" buys his children the cheaper laptop, using the balance to contribute to the starving and to help his neighbour out with an interest-free loan while she gets back on her feet. I googled every combination of relationship/relative/egoistic and utilitarianism that I could find, and came up blank. Please can you tell me what this theory is called, and who came up with it 200 years before I did? If not, please don't steal it before I write it up ;-)

Interesting. Here's a possible way of thinking about it. Utilitarianism (Capital "U") as a philosophical view says that the right thing to do is what maximizes utility, where "utility" is characterized in a very particular way: roughly, the sum total of well-being among sentient creatures (or something like that.)

That may or may not be the right account of right and wrong, but most people probably don't have a view on that question one way or another. However, it's at least somewhat plausible that people are utilitarians (small "u") in a different sense: they try to maximize utility, understood as what they value. Whether it makes us good or bad, many of us actually do value the well-being of our children more than we value the well-being of strangers, and our actions reflect that. A small-"u" utilitarian, then, might well behave as what you call an R-utilitarian. That would be because the small-"u" utilitarian is maximizing over what s/he values.

In any case, there's been a fair bit written on the place of relationships in morality. Here's a link to the results of a Google search, with papers by philosophers and social scientists.

Would it be ethically sound to love a machine that is a perfect replica of a human? For example: if it were impossible for anyone to tell the difference, would it be wrong? If this robot were programmed to have human feelings and to think in a manner that is indistinguishable from a human, would it be moral to love them as though they were a human? (apologies if this is unclear, English is not my first language)

To get to the conclusion first, I think that the answer is yes, broadly speaking. But I'd like to add a few qualifications.

The first is that I'm not sure the root question is about whether it would be ethically right or wrong. It's more like: would it be some kind of confusion to love this sort of machine in this way? Suppose a child thinks that a fancy stuffed animal really has feelings and thoughts, but in fact that's not true at all. The toy seems superficially to have emotions and a mind, but it's really a matter of a few simple, preprogrammed responses of a highly mechanical kind. This might produce strong feelings in the child—feelings that seem like her love for her parents or her siblings or her friends. But (so we're imagining) the feelings are based on a mistake: the toy is just a toy.

On the other hand, if an artificial device (let's call it an android) actually has thoughts and feelings and is able to express them and to respond to what people like us feel or think, then it's hard to see why it would be a confusion to have feelings for the android like the feelings we have for ordinary people. After all, we're supposing that the android has real feelings, possibly including feelings for us.

To put it another way: what you have in mind is an artificial person. The android would be a person because it really has the kinds of psychological characteristics that persons have. It would be an artificial person because it was designed and built rather than born and grown. Whether we'll ever be able to build such things is hard to say. We'd have to understand more than we do now about how matter, organized in the right way, gives rise to minds. But however that works, there's no clear reason to think it couldn't be replicated artificially.

All this said, the relationship between humans, with our history of infancy and childhood and the looming prospect of old age and death, and artificial creations with very different origins and prospects wouldn't be psychologically simple. That might have all sorts of implications, moral and otherwise, for what went on between us and them. But the main point is that highly intelligent creatures with complex feelings would deserve our moral consideration even if they were made and not born. And they would also be fit objects for our feelings, quite possibly including feelings of love.

One final note: fiction often does at least as good a job of exploring the issues here as philosophy. And though it's not directly on point, the recent Spike Jonze movie Her raises some interesting questions that you might enjoy pondering.

The probability in my mind that I am correct in attributing extensive moral personhood to other humans is very high. With non-human vertebrates, I attribute slightly less extensive but still quite broad moral personhood, and in this too I am quite confident. But I accept that, given I am a fallible human being, I might be wrong and should give them either no moral personhood or moral personhood of the kind I ascribe to humans. Continuing in the same line, I ascribe almost no moral personhood to bacteria and viruses. But again, given I am fallible, mustn't I accept some non-zero probability that they deserve human-like personhood? If so, and I am a utilitarian, given the extremely large number of bacteria and viruses on the planet, it seems that even if I am very sure that bacteria are of only minimal moral importance, I still must make serious concessions to them, because it seems doubtful that my certainty is so high as to overcome the vast numbers of bacteria and viruses on this planet. (I am aware it is not entirely clear how...

It's a very interesting question. It's about what my colleague Dan Moller calls moral risk. And it's a problem not just for utilitarians. The general problem is this: I might have apparently good arguments for thinking it's okay to act in a certain way. But there may be arguments to the contrary—arguments that, if correct, show that I'd be doing something very wrong if I acted as my arguments suggest. Furthermore, it might be that the moral territory here is complex. Putting all that together, I have a reason to pause. If I simply follow my arguments, I'm taking a moral risk.

Now there may be costs of taking the risks seriously. The costs might be non-moral (say, monetary) or, depending on the case, there may be potential moral costs. There's no easy answer. Moller explores the issue at some length, using the case of abortion to focus the arguments; you might want to have a look at his paper.

A final note: when we get to bacteria, I think the moral risks are low enough to be discounted. I can't even imagine what it would mean for bacteria to have the moral status of people or even of earthworms.

Almost all ethical theories have some problem with them, whether it's an internal inconsistency, no answer for a certain scenario, or whatever. How can anyone accept an ethical theory that they know is flawed? Don't the flaws mean we need to keep looking and thinking?

There are two sorts of things that might be at issue here and they call for different answers.

If I want the best ethical theory we can come up with, and the available alternatives all seem flawed, then that's a reason to keep looking and thinking—especially if the goal is to get as close as possible to the (probably unattainable) ideal theory.

But if "accept an ethical theory" means something like "use it as the basis for making ethical judgments," then the issue changes. That's because it's debatable, to say the least, that the best way to make ethical judgments is to come up with an ethical theory and apply it.

What's the alternative?

Here's one. Assume that by and large, we're able to make reasonable ethical judgments. The job of an ethical theory on this view is to provide a coherent account of what makes those judgments right or wrong (or true or false, or whatever the appropriate contrast may be.) It could very well be that even though we have the capacity to make sound moral judgments, boiling the judgments down to a tidy theory is very difficult. If that's so, then we'd expect that our ethical theories would be inadequate in various ways. But that wouldn't give us a reason to become ethical skeptics. On this way of looking at things, ordinary ethical knowledge is a bit like practical knowledge or practical skills: we don't need to know the theory to get things right. Theoretical thinking might feed back into our practical skills and refine them, but it's not the place to start.

Is there any philosophical reason to be polite? A lot of being polite is just plain lying; why must the truth succumb to social conventions?

An interesting problem.

To begin, I'd put the question differently: is there any reason to be polite? Adding "philosophical" in front of "reason" doesn't really help. And of course, there are many reasons to be polite. It helps avoid needlessly hurting people's feelings; it helps keep disagreements from turning into shouting matches; it provides a set of conventions that help keep us from wasting time sorting out how certain sorts of social interactions should operate; it's a way of showing respect for other people; it helps keep other people from concluding that I'm a jerk. And so on.

All of these reasons are defeasible, as they say. They aren't ironclad, and there are situations that call for ignoring them. But there are also plenty of situations that don't call for ignoring them.

Your worry is about truth. You say "A lot of being polite is just plain lying." But of course, a lot of being polite is not "just plain lying." It's not polite to smack your lips at table with others. Whatever you think of that convention, it's clear that it doesn't have anything to do with telling the truth.

That said, there are, indeed, cases where politeness and the truth come into conflict. You call me on the phone and start the conversation by saying "Hi, Allen. How are you?" I answer "Fine, thanks," even though I've got a headache. Has the truth succumbed to social conventions?

I'd say that in this case, the answer is mostly no. The pattern of greeting I've just described is one we all recognize as a conventional way of starting a conversation. We both understand this, and neither of us thinks my answer is meant as a report on the state of my well-being. In other words, in a case like this, no one is misled. The exchange of greetings isn't an exchange of information.

Other cases are trickier of course. You have a new haircut. I don't think it suits you, but you say "How do you like my hair?" If I say "Nice," there's a fair case to be made that I'm misleading you. Have I sacrificed the truth to social convention?

Let's agree that I've sacrificed the truth. But I'd say it's not social convention I've sacrificed it to. It's the desire not to hurt your feelings. It's not just that we have a convention about not hurting people's feelings. It's that hurting people's feelings causes them distress. People don't like being distressed, and sometimes the distress isn't worth it.

Should I have told you what I really think about your hair? The answer may or may not be yes; it depends on a lot of things that would only become clear if the context were clearer. If you and I are only nodding acquaintances, I may well think it's not my place to tell you. I may think I don't know enough about your tastes to make the truth helpful, and I may realize that my own tastes in these matters don't count for much. And so on.

The trouble with The Truth is that it has a capital "T." Of course honesty is an important value. And of course it's generally useful to know what's actually so. But that doesn't add up to an argument that truth-telling is of transcendent importance. Honesty is one value among many.

There's a larger point about social conventions and my comment about leaving out the word "philosophical." If we look at our conventions from the outside, like anthropologists on Mars, they seem strange. But if we say a word to ourselves many times in a row ("rhubarb, rhubarb, rhubarb..."), the word will start to seem strange. Sometimes there's a point to the outside, dare I say alienated, point of view. But much of the time there isn't. The kind of sense that social life makes from the inside is a perfectly good kind of sense on its own.
