It seems we like to tell one another that it is important to feel negative emotions, like sadness or confusion or grief, because it is an important part of being human. Is this really the case, or could we just as well do without grief and despair? Conversely, is it also an important part of being human to feel rage, or hatred towards someone or something?

There are two ways to read your questions:

1. Would we be better off never feeling negative emotions because they were never called for--i.e., because we never experienced the sorts of events that make grief or anger an appropriate reaction? Or...

2. Would we be better off never feeling negative emotions regardless of what happens to us?

I am inclined to answer 'no' to the second question. Some (e.g., Stoics and Buddhists, at least on an oversimplified reading) suggest that we should approach negative events with a level of detachment that makes grief, anger, or despair inappropriate, and that the wise or enlightened person will reach a point where she can avoid feeling such emotions. I find that approach misguided. I think it would be both mistaken and almost inhuman not to feel grief at the death of one's child or not to feel some level of anger at the terrorists who perpetrated 9/11 (whether despair is ever appropriate is trickier). So, I do not think we would be better off if we could get ourselves to stop feeling these emotions no matter what happens to us. There's another question here, which is whether we could have been built such that the appropriate response to tragic events was not negative emotions; but if that were the case, I'm inclined to say we would not really be human anymore (notice the Stoic or Buddhist is not built so as to never feel these emotions; he has to develop that attitude).

The first interpretation of your question suggests the debate about the problem of evil and suffering. It seems like an all-good, all-powerful God could have made this world a 'heaven on earth' such that we could be built the way we are but simply did not face any (or many) tragic events. Then we could be human and still avoid any (or as much) despair, grief, anger, and suffering. Wouldn't that be the best of all possible worlds? Well, answers vary here. Some think tragic events and our appropriate emotional responses to them are necessary for us to become better people, build virtues like courage, and learn to empathize (of course, without tragedy, the need for courage and empathy is minimized). When I teach the problem of evil, I like to point out that without tragedies and the negative emotions that go with them, we'd certainly have less interesting literature and movies. But I have to say, I don't think either this whimsical point or the 'soul-building' defense is strong enough to explain why an all-good God would allow so much suffering (especially so much suffering of innocent children).

Here's a riddle your question raises too: If God exists and is entirely perfect, does he (can he) feel any negative emotions, such as grief and anger? If so, how? If not, then it would suggest we'd be more perfect if we were less human and more godly.

Could a robot, imbued with artificial intelligence, feel emotion? And could it feel the desire to improve its lot in life - e.g. if it was a servant robot, could it feel the desire to overthrow its master, escape the humiliation of being a servant, and possess things for itself?

I don't see any reason that a robot could not, in principle, be built that would be conscious and feel emotions. Some people (John Searle, most famously) disagree, at least about an artificial system that does not replicate our brains' "causal properties". However, I don't think we have any good ideas about how to create consciousness in robots, in part because we don't have any good theories about how consciousness in humans works.

It's always possible that human consciousness only exists because we have something robots could never have (e.g., immaterial souls, although it's not clear why it would be impossible for robots to be endowed with souls, or our particular biological materials). But it seems more likely to me that our conscious experiences and emotions (including our desire to improve our lot in life, our desire for possessions, and our desire for freedom) are the product of complex processes in the brain that could, in theory, be replicated in a non-biological system. It seems likely to me that such a system would have to develop and learn, and would have to have a "body" and interact with the real world and real agents (or at least a Matrix world).

Finally, it is also possible that robots could be designed to have non-conscious "desires" (motivations), including motivations to avoid being harmed or to acquire certain possessions. We certainly have some non-conscious motivations. So, we probably need to start thinking about the implications of these possibilities, since there's no question people will try to design such robots at some point. Should we develop some rules for robots?
