If I believe that an action, e.g. killing-someone-from-a-distance-for-personal-pleasure-in-the-act-of-killing, with no extenuating circumstances, is always wrong, must I also believe that not-having-that-action-done-to-me is my "right"? Or can "rights" only exist in the presence of an enforcing authority, while wrongs can exist with or without an authority? Under what circumstances could an act committed by a person be judged morally as a "bad" rather than a "wrong"? I apologise if this reads like an academic question, but it comes from a conversation I had tonight with my wife. Thank you.

Perhaps the easiest way to answer your question is to start from a slightly different place. We need to distinguish the idea of rights from the idea of what is morally right (and wrong). Once we’ve made that distinction, we can then look at the further distinction between what is morally wrong and what is morally bad.

The idea of rights extends widely. I have a right to go to the cinema, a right not to be killed, a right to be paid (given my contract with my employer), a right to have children, a right to the exclusive use of my house. Some rights are moral rights, some are legal, some are the results of contracts. In general, a right can be understood as an entitlement to perform, or refrain from, certain actions and/or an entitlement that other people perform, or refrain from, certain actions. Many rights involve a complex set of such entitlements.

The two central features of rights are:

Privilege/liberty: I have a privilege/liberty to do x if I have no duty not to do x. I have the right to go to the cinema because I have no duty not to go to the cinema. But I have no right to steal, because I have a duty not to steal.

Claim: I have a claim right that someone else does x in certain cases in which they have a duty to me to do x. Claim rights can be ‘negative’ – they require that other people don’t interfere with me (e.g. the right not to be killed); or ‘positive’ – they require that other people do specific actions (e.g. the right to be paid if I’m employed).

While every claim right entails a duty, not every duty entails a claim right. Some duties are based on rights, but some are not. I may have a duty to give to charity, but I have not violated anyone’s rights if I don’t. So talk of rights should not be confused with talk of what is morally right. And so I’m afraid we can’t simply say that because something is morally wrong, you have a (claim) right that it is not done to you. In the example you give, I would suggest that it is wrong to kill someone from a distance for pleasure because people have the right not to be treated this way. It is not that you have the right because it is wrong. The fact that it is always wrong (let’s suppose) doesn’t mean that it is based on a right, either. For instance, you have certain property rights, which make it wrong for people to take your property from you. But there are circumstances in which this is not true, e.g. forced purchase by the government, or commandeering your car in a police emergency. The system of property rights is not absolute, but it is a system of rights nevertheless.
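The privilege/claim structure above, and the asymmetry between duties and claim rights, can be caricatured in a few lines of code. This is only an illustrative toy, assuming a deliberately simplified representation (the agents, actions, and the `refrain:` naming convention are all invented here):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Duty:
    bearer: str             # who owes the duty
    action: str             # what the duty requires, e.g. "refrain:steal"
    owed_to: Optional[str]  # whom it is owed to; None for undirected duties

def has_privilege(duties: List[Duty], agent: str, action: str) -> bool:
    # Privilege/liberty: agent may do `action` iff they bear no duty to refrain from it.
    return not any(d.bearer == agent and d.action == f"refrain:{action}"
                   for d in duties)

def has_claim(duties: List[Duty], holder: str, against: str, action: str) -> bool:
    # Claim right: `holder` has a claim that `against` do `action` iff
    # `against` bears that duty owed specifically to `holder`.
    return any(d.bearer == against and d.action == action and d.owed_to == holder
               for d in duties)

duties = [
    Duty("me", "refrain:steal", owed_to="others"),   # a directed duty
    Duty("me", "give-to-charity", owed_to=None),     # an undirected duty
]

print(has_privilege(duties, "me", "go-to-cinema"))   # True: no duty not to go
print(has_privilege(duties, "me", "steal"))          # False: a duty to refrain
# Every claim right entails a duty, but not every duty entails a claim right:
# the charity duty has no particular right-holder, so no one holds a claim to it.
print(has_claim(duties, "anyone", "me", "give-to-charity"))  # False
```

The point of the asymmetry shows up in the last line: a duty without a specific right-holder grounds no claim right, exactly as in the charity example above.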

So to ‘wrongs’ and ‘bads’. We often associate rights and wrongs with duties, especially duties of justice - to do with the minimally required treatment of others. Actions which do not violate a duty can nevertheless be bad. Exactly which actions qualify depends on your theory of duties. But some suggestions might include squandering opportunities to do good, failing to show kindness in a personal relationship, or other matters related to personal virtue, moral development, and seeking the good of others.

There’s much more to say about all these matters, but I hope that helps!

We feel we choose our moral choices but when somebody feels shame do they choose to feel that shame even though that feeling seems inescapable?

Most philosophers, me included, would say that we do not choose to feel what we do. Ever since the ancient Greeks, emotions have been thought of as 'passions', because we are passive, not active, in experiencing emotions. We 'suffer' or 'undergo' them, rather than bring them about. It may be that we can make choices, e.g. about what kind of person to be, that will change our character and that will result in our having different emotions in the future. For example, we may choose to face our fears, to become more courageous, and then feel fear less often, or less intensely, in the future. But we cannot choose what to feel in the present. Or again, we may have some indirect control over what we feel, by focusing our attention on certain aspects of a situation rather than others. But we can't directly control, by choice, what we feel.

We do make moral choices as well. Given that we don't choose our emotions, it follows that when someone feels shame, this is not a moral choice they make. Instead, we might say that our moral choices apply to actions and perhaps to future character traits, like generosity or courage. Suppose, then, someone chooses to act in a way that then causes them to feel shame, e.g. perhaps they betray a secret they had promised to keep. They choose to betray the secret, but they didn't choose to feel shame. We can't choose what to feel ashamed of.

Perhaps this looks like a threat to moral autonomy. I don't think so. Perhaps the person thinks that it is not wrong to betray this secret (e.g. it could save someone's life). Then they feel shame, but they think that the shame is inappropriate - they don't think that they did anything wrong even though they feel shame. Our feelings and our moral judgments don't always line up.

This situation strikes me as quite normal when, in adulthood, we reject some of the moral rules of our childhood, e.g. someone who feels guilty about not going to church on a Sunday morning, even though they stopped believing in God years before.

It seems that we adopt a formal ethical theory based on our pre-theoretical ethical intuitions. Our pre-theoretical ethical intuitions seem to be the product of our upbringing, our education and the society we live in and not to be entirely consistent, since our upbringing and our education often inculcate conflicting values. So how do we decide which of our pre-theoretical ethical intuitions, if any, are right? It seems that we can only judge them in the light of other pre-theoretical ethical intuitions and how can we know that they are right? If we judge them against a formal ethical system, it seems that the only way we have to decide whether a formal ethical theory, say, consequentialism, is right is whether it is consistent with our pre-theoretical ethical intuitions, so we are going nowhere, it seems.

Perhaps I can play the devil's advocate and rebuild the case for thinking that systematic ethical theory gets us nowhere.

There are actually many different systematic theories--utilitarian, contractarian, deontological, etc.--but the trouble is they clash. The defenders of such theories often agree on particular moral judgments, but as to the abstract principles that define these systems, the experts disagree. In fact, it is precisely disagreement over the principles of these systems that animates much current academic debate in ethics. Yet if not even the experts can agree on which of their systematic principles are correct and which incorrect, why should anyone else rely on them? The theories in question are just as disputable as any real moral decision they could be invoked to justify.

Again, systematic ethical theories are often defended on the grounds that they are like systematic theories in empirical science. (Rawls, for example, makes this move.) Yet empirical theories in science are reliable only because they can be tested by physical experiment. When it comes to systematic ethical theories, by contrast, no one knows how to conduct a physical experiment to test the principle of utility, or Rawls's theory of the original position, or T.M. Scanlon's version of the social contract, or Derek Parfit's "triple theory" of what counts as a wrongful act. Philosophy, regrettably, is mostly just talk, and the only way to confirm or refute any of it is with more talk. If, in fact, none of these theories can be confirmed in the way that theories of science can be confirmed, why suppose that any of these systematic ethical theories are reliable in the first place?

Beyond these points, ordinary people, outside of philosophy, typically reason about right and wrong in a manner that places no reliance on such theories. Their arguments are usually particular to the case. For example, if I say that firing a pistol at my neighbor is wrong because it could hurt him, I have certainly given what counts under ordinary circumstances as a good reason. But my reasoning needn't invoke anything so controversial as the principle of utility, or the theories of Rawls, Scanlon, Parfit, etc. My reasoning relies on a specific consequence. Again, if I say that my shooting at my neighbor would be wrong because I already know that his shooting at me would be wrong, then I seem to argue by analogy. (A is like B, and B is clearly wrong; therefore, A is probably wrong too.) I need systematic ethical theory for none of this.
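The analogical pattern just sketched in the parenthesis (A is like B; B is clearly wrong; therefore A is probably wrong too) can be caricatured as a tiny program. Everything concrete below - the cases, the feature tags, and the use of a Jaccard similarity measure - is invented purely for illustration:

```python
# Settled cases: verdicts we are already confident about, each tagged with
# the morally salient features of the act.
settled_cases = {
    "neighbour shoots at me": {"features": {"lethal", "unprovoked"}, "wrong": True},
    "surgeon operates":       {"features": {"consented", "healing"}, "wrong": False},
}

def judge_by_analogy(new_features, settled):
    # Jaccard similarity: shared features over all features.
    def similarity(case):
        f = case["features"]
        return len(new_features & f) / len(new_features | f)
    closest = max(settled.values(), key=similarity)
    return closest["wrong"]  # borrow the verdict of the most similar settled case

# "I shoot at my neighbour" shares its salient features with the first
# settled case, so it inherits that case's verdict.
print(judge_by_analogy({"lethal", "unprovoked"}, settled_cases))  # True
```

Note that no general principle (no "necessary and sufficient conditions" for wrongness) appears anywhere in the reasoning - the verdict is carried case to case, which is exactly the Burkean point.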

Now if you have read this far and have an interest in the history of political philosophy, you will perhaps see that I am merely parroting an outlook that was expressed long ago by Edmund Burke in his Reflections on the Revolution in France. Burke's Reflections defends feudalism and chivalry, but antiquated politics aside, he also argues that real moral reasoning, if reliable, avoids sweeping generalizations about rightness, wrongness, political legitimacy, and so forth. Real moral reasoning is typically particular and analogical, and it is essentially inductive. It does not rely on deducing a conclusion from systematic principles that purport to state necessary and sufficient conditions for a moral concept. Burke defended this outlook during the later, conservative period of his life, but also during the earlier, liberal period.

(Notice that Burke isn't skeptical of all of our pre-theoretical ethical intuitions. He's just skeptical of sweeping ethical theories--ones that presume to lay out necessary and sufficient conditions for moral concepts. Of course, there are also many other sorts of systematic thought in ethics and philosophy--all quite innocuous--but it is the attempt to state necessary and sufficient conditions that is the bone of contention.)

In academic philosophy today, Burke's position is definitely a minority view. Yet it still seems to match how most people ordinarily reason, and so it is still worth giving careful thought to. My guess is that other contributors may wish to weigh in on this point, and to defend different conceptions. Nevertheless, the fundamental question Burke poses is this: Given the many theoretical objections to any of these systematic ethical theories, would it actually be reasonable to rely on one of them in making a real moral decision? Burke thought the answer was no.

This is a nice question. Essentially, I agree with your description of what we need to do, but not your conclusion that this gets us nowhere. The process that you describe is known as ‘reflective equilibrium’ (named and defended by John Rawls). In coming to discover what is morally right or good, we reflect on both our individual judgements based on pre-theoretical intuitions and on broader moral principles or theoretical arguments. As you point out, it is very unlikely that these are coherent to start with. So we go back and forth between the individual judgements and the principles adjusting each in the light of the other until we reach coherence or 'equilibrium'. If you think that what is morally right is completely independent of what we think, then you may be concerned that such coherence is no guide to the truth. Indeed, philosophers have objected that this method may just make someone's moral prejudices more systematic, leading them away from the truth. But for that reason, and because there is...
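The back-and-forth process of reflective equilibrium described above can be caricatured as a loop. To be clear, everything concrete below - the 'harm' scores, the confidence cutoff, and the adjustment rule - is invented purely to show the shape of the method, not anything Rawls himself proposes:

```python
# Intuitive judgments about particular cases, with how confident we are in
# each; the candidate principle says an act is wrong iff harm > threshold.
cases = {
    "white lie":      {"harm": 1, "wrong": False, "conf": 0.9},
    "broken promise": {"harm": 4, "wrong": True,  "conf": 0.6},
    "theft":          {"harm": 7, "wrong": True,  "conf": 0.95},
}

def reflective_equilibrium(cases, threshold):
    """Adjust principle and judgments against each other until they cohere."""
    for _ in range(100):                    # guard against endless oscillation
        conflicts = [(name, c) for name, c in cases.items()
                     if (c["harm"] > threshold) != c["wrong"]]
        if not conflicts:
            return threshold, cases         # equilibrium: everything coheres
        _, c = conflicts[0]
        if c["conf"] > 0.8:
            # A firm intuition wins: revise the principle to accommodate it.
            threshold = c["harm"] - 1 if c["wrong"] else c["harm"]
        else:
            # A weak intuition gives way: revise the judgment instead.
            c["wrong"] = not c["wrong"]
    raise RuntimeError("no equilibrium reached")

threshold, revised = reflective_equilibrium(cases, threshold=5)
print(threshold)                            # 5: the principle survived intact
print(revised["broken promise"]["wrong"])   # False: the weak intuition gave way
```

The interesting feature, mirroring the prose description, is that revision runs in both directions: sometimes the principle bends to a firm intuition, and sometimes a shaky intuition is abandoned to fit the principle.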

Should moral obligations be constructed to fit within the real world, or within a hypothetical utopia? For example, I recognize that utilitarianism is the system most likely to be enacted by a ruling majority, because it will favor that majority. Should my moral obligations reflect utilitarianism, even though I do not think it is the right system?

Morality must, I think, be something that can guide our choices and actions. And to do this, it must take account of what is realistic - morality needs to be morality for human beings, with the kind of psychology and concerns that we have. But what is 'realistic'? It's not the same as how we find many people behaving, but how it is possible for them to behave. What we can realistically hope for from people is less than utopian behaviour, but it is much more than a more pessimistic view of 'the real world'.

Your example about majority rule is a case in point. Democracy respects majority rule more than any other political system, and yet from its beginnings, at least in modern times, it has also incorporated restrictions on what the majority can do. And that is because we can not only hope, but expect, people to take account of the interests of those they disagree with (altruism is just as much part of human nature as selfishness - the trouble is usually with how the two balance out).

I think it is perfectly possible that our moral obligations don't reflect our favoured normative theory (someone is going to be wrong, given all the disagreements, as long as we reject subjectivism). So, if utilitarianism were the right system, then your moral obligations would reflect that, even if you disagree. But I agree with you that utilitarianism is the wrong moral system. One reason I think this is that I find it very unrealistic, psychologically (this has been discussed at length in the work of Bernard Williams, among others). Like Williams, I think that moral obligations have been constructed to fit the real world, but also - like him - I think that many of these reflect past prejudices and imbalances of power, and that we would do better to change these as we come to recognise their origins.

What is the difference between Emotivism and Quasi-realism? Wikipedia says that Emotivism is '... a meta-ethical view that claims that ethical sentences do not express propositions but emotional attitudes', and that Quasi-realism is '... the meta-ethical view which claims that: Ethical sentences do not express propositions.Instead, ethical sentences project emotional attitudes as though they were real properties.' It is said that these two theories stand in opposition to each other.

This is not an easy question to answer! Part of the difficulty is that quasi-realism is a very technical theory. So I can start by saying that Wikipedia is not quite right…

Quasi-realism can be understood as a descendant of emotivism, and both theories claim that ethical sentences express emotional attitudes. They agree that these attitudes are not representations of how the world is; they can’t be true or false. But the two theories disagree on further details about ethical language and how it functions. There is even disagreement within emotivism. Ayer’s emotivism takes the expression of emotion as central: in saying that an action is wrong, I’m not making any further factual claim about it, but expressing my moral disapproval, he says. Stevenson’s emotivism argued that the purpose of ethical language is not merely to express how we feel but to influence how we and others behave, to motivate us to act in certain ways and not others.

Blackburn’s quasi-realism argues that ethical language is rather more complex than either emotivist theory claims. First, ethical language does express propositions, such as ‘what she did was courageous’ or ‘his remark was unkind’ as well as ‘murder is wrong’. The predicates ‘was courageous’, ‘was unkind’, ‘is wrong’, attribute a property to something (what she did, his remark, murder). However, second, these predicates aren’t genuine descriptions of what she did, etc., but ‘projections’ of our evaluations. In using ethical language, we don’t speak of and think in terms of our personal evaluations, but in terms of the properties of things in the world. We treat our evaluative commitments (to courage, to kindness etc.) as though they were judgments about how the world is. This is enormously useful, because it is much easier to coordinate our attitudes with other people if we think in terms of an intersubjective world of moral properties. Third, this isn’t simply a mistake or illusion. Quasi-realism argues that we can meaningfully talk of moral judgments being true or false. These are all important differences from emotivism.

If moral judgments don’t ascribe genuine properties to actions, how can they be true or false? We shouldn’t think that our evaluations make something right or wrong – that’s a confusion. We can only talk about things being right or wrong from within some evaluative perspective; but having a certain perspective is not a reason for something being right or wrong. For instance, suppose I kick a dog, and you tell me off. The fact that you disapprove of my action doesn’t make it wrong, nor do you think this (I shall assume!). Rather, what makes my action wrong is the pain it causes the dog.

But don’t people simply have different evaluative perspectives? If so, how can we talk about truth in ethics? Because nobody’s perspective is perfect (and if someone thinks theirs is, that arrogance itself shows that their evaluations are not perfect!). ‘True’ ethical statements are those that form part of the ‘best’ set of attitudes we can have, and the best set is what results from improving our attitudes as much as possible. There are not many ‘liveable, unfragmented, developed, consistent, and coherent systems of attitude’, says Blackburn.

How does one know when it is acceptable to break a promise? Is there something special about a vow, or is it just a social construct? I can envision various scenarios involving onerous mortgages and starving children, and my conclusion seems to be: "Well, you'll just know it when you see it". But that seems to suggest it's just based on my present whim.

I think that you are right that there are no clear, definite rules about exactly when one may break a promise. But I don't think that this shows that whether or not it is acceptable is based on your whim.

Aristotle argued that there are only very rarely fixed rules in ethics, but there are nevertheless objective reasons for why (and when) actions are right and wrong. It just means that reasoning in ethics doesn't take the form of discovering rules. There aren't laws of ethics the way there are laws of nature. He argued that to know what is the right thing to do, at least in complex and unpredictable situations like this, you have to be good. So 'you'll just know it when you see it' is only true if you are a good person; if you aren't, you will probably think it is okay to break a promise when, really, it isn't (e.g. not being good, you might be swayed by selfishness to disregard the harm that breaking the promise would do to someone else).

Knowing when to break a promise is a matter of weighing up the reasons for and against breaking it that apply to the actual situation you are in (such as the importance of keeping your word v. the suffering of your starving children and the absence of any alternative). There aren't any algorithms for weighing up reasons, and in a different situation, there might be different reasons. So it is very difficult to base one situation on another. (We might at least think the fact that you'll break your promise is a reason not to do it, but what if you had made a promise to a morally bad person to do a morally bad thing? Then it seems it would be good to break your promise!)

The approach to ethics I've outlined here is known as 'particularism'. It rejects the idea that ethical reasoning is always about finding rules for behaviour. But we still have to reason in ethics - it isn't about our whims, and we can get the answers wrong.

A final point about your second sentence. Promises can be special and yet be a social construct (we don't have to choose). If there was no social agreement on promising, there could be no promises and nothing special about them. (If no one accepts your promise, you can't make it!) So promising is definitely a social construct in that sense. But being able to trust people to do what they say they will do is so important for us to live together that promising is special. It requires some very good reasons (again, not a mere whim) to justify breaking a promise.
