How can we be sure that dreaming is a real phenomenon? It seems like there is no scientifically objective way to know that a person is dreaming; the most we can do is ask them. We are relying on our own subjective experiences, which we cannot verify, and the words of others, which we cannot verify either. REM sleep is correlated with claims of dreaming, but mental activity isn't granular enough to figure out whether a person is in fact *experiencing* an absurd fantasy world rather than simple darkness. Is it possible to approach dreams and dreaming scientifically, if we have no way to examine or verify them or their existence in any way beyond subjective claims?

These are great questions. There have been philosophical arguments suggesting that it is impossible to know whether dreams occur while we sleep or are just confabulations we create as, or after, we awake (call this 'dream skepticism'). These arguments fail once we consider all the evidence and use abductive (best-explanation) reasoning. When we wake people during REM sleep, they are likely to report dreams; when we wake them during other phases of sleep, they are unlikely to report (or remember) dreams. When we record neural activity using EEG and now fMRI, we see activity that correlates both with the sorts of experiences the dreamer reports and with the activity that accompanies similar waking experiences (fMRI cannot capture all of what you call the "granularity" of experiences, but see the link below for an initial attempt). This body of data could be explained away by a dream skeptic, but that explanation would likely look ad hoc and fail to make predictions as good as the hypothesis that dream experiences are real, in the sense that they occur during sleep and have roughly the content people report. (First-person reports of even waking experience can be inaccurate in lots of ways, and since we cannot report the content of dreams while they occur, the inaccuracies of memory come into play--except that people can make reports of a sort, e.g., intentional eye movements, during lucid dreaming: see http://en.wikipedia.org/wiki/Lucid_dream.)

Dream skepticism is one version of the more general problem of other minds--how can we know what other people are experiencing from the first person point of view (or that they are experiencing anything). The sort of 'best explanation' approach described briefly above is, I think, the best way to approach the general problem too. We have lots of reason, including lots of evidence, to think that neural activity correlates with (and is, in some sense, the basis of) conscious mental activity. So, we have lots of reasons to think that creatures with neural activity similar to our own have conscious experiences similar to our own. And this view makes lots of accurate predictions and explains lots of other observations. Etc.

For brain scanning of dreams, see: http://www.wired.com/wiredscience/2013/04/dream-decoder/

For a nice philosophical discussion of dreaming, read Owen Flanagan's Dreaming Souls: http://www.amazon.com/Dreaming-Souls-Evolution-Conscious-Philosophy/dp/0195142357

Is it a common view among philosophers that human beings are simply biological computers? Doesn't this view reduce philosophy of mind to solely neuroscience?

It is a common view among philosophers that human beings are biological entities--that, in some sense, our minds (including our conscious mental processes) are our brains (are based on neural processes). There are few substance dualists (who think the mind is a non-physical entity). But in which sense the mind is the brain remains a topic of great controversy (some fancy terms for the relationship between the mental and physical include identity, supervenience, and functionalism).

It should not be controversial that information from neuroscience will inform debates in philosophy of mind. But it is unlikely that neuroscience alone will answer all questions about the nature of mind. Notice that just the way you phrased your question suggests complications. If we did think the brain were a biological computer (this view is one form of functionalism), then many of the details of neuroscience might turn out to be irrelevant. The interesting facts about computers are about their programs (software), but those programs do not depend on the details of the hardware on which they run. That is, computer science is not going to reduce to physics. So, if our minds are like computer programs, then not everything about the mind will be best explained by neuroscience (understood as the study of neural processes).

Conversely, our minds might not be best understood as computer programs that can 'run' on any old hardware. Particular facts about our brains may be crucial to the particular nature of our minds (e.g., consciousness). If so, neuroscience might be crucial for understanding those facts, and consideration of the mind as a computer would not be sufficient to understand the mind.

At a minimum, philosophy of mind will continue to play a crucial role in framing these questions and these theoretical possibilities, and typically philosophers of mind, informed by the relevant scientific information, can help develop new ways of doing the relevant science and can help integrate and interpret the relevant results.

Who are some modern philosophers that argue for either dualism or the idea that mind is a nonphysical substance?

Here's another contemporary philosopher you might want to look into: Galen Strawson--

"I take physicalism to be the view that every real, concrete phenomenon in the universe is physical. …[O]ne thing is absolutely clear. You're…not a real physicalist, if you deny the existence of the phenomenon whose existence is more certain than the existence of anything else: experience, 'consciousness', conscious experience, 'phenomenology', experiential 'what-it's-likeness', feeling, sensation, explicit conscious thought as we have it and know it at almost every waking moment. … [E]xperiential phenomena 'just are' physical, so that there is a lot more to neurons than physics and neurophysiology record…." (Strawson, Galen (2006), Realistic Monism, in A. Freeman (ed.), Consciousness and Its Place in Nature (Exeter: Imprint Academic))

By "modern philosophers" I am assuming you mean contemporary philosophers. (We philosophers use "modern philosophers" to refer primarily to European philosophers from roughly 1600-1900, and among that group there are a number of substance dualists, including Descartes, Malebranche, Leibniz, and arguably Kant). Among contemporary Western philosophers, there are not that many substance dualists, though the view has been making a bit of a comeback recently. Of note are E.J. Lowe, Richard Swinburne, and (I think) Alvin Plantinga. I am likely leaving out others. There is an even bigger resurgence of "property dualists", people who argue that the universe consists of just one kind of substance, but all (or some) of that substance has both physical properties and mental properties. David Chalmers played a big role in motivating this position. Recently, Susan Schneider (if I understand her correctly) has argued that you can't be a property dualist without accepting substance dualism. The dominant position in...

Why are people so skeptical about the notion that a sufficiently advanced computer program could replicate human intelligence (meaning free will insofar as humans have it; motivation and creativity; comparable problem-solving and communicative capacities; etc.)? If humans are intelligent in the way we are because of the way our brains are built, then a computer could be constructed that replicates the structure of our brains (incorporating fuzzy logic, neural networks, chemical analogs, etc). Worst comes to absolute worst, a sufficiently powerful molecular simulator could run a full simulation of a human brain or human body, down to each individual atom. So there doesn't seem to be anything inherent in the physicality of humans that makes it impossible to build machines with our intelligence, since we can replicate physical structures in machines easily enough. If, however, humans are intelligent for reasons that do not have anything to do with the physical structure of our brains or bodies - if there...

My colleague and I disagree somewhat here, though perhaps on everything essential to your question, we agree.

We all agree that in principle the right kind of "machine" could be every bit as conscious, free, etc. as you and I. And Prof. Nahmias may well be right when he says that if a robot of the C3PO sort acted enough like us, we'd have a very hard time not thinking of it as conscious. I even agree with my co-panelist that people's religious beliefs and the relatively crude character of our actual gadgets may be part of the reason why many people don't think a machine could be conscious.

So where's the residual disagreement? It's on a point that may not be essential, given the way you pose your question. Prof. Nahmias thinks that replicating the functional character of the mind would give us reason enough to think the resulting thing was conscious. I'm not inclined to agree. But that has nothing to do with belief in souls (I don't believe in them and don't even think I have any serious idea what they're supposed to be) nor with the fact that the computers we have are primitive compared to full-fledged people. Interestingly, Prof. Nahmias himself actually identifies -- and agrees with -- the sticking point for folks like me. As he puts it, "we have no theory to explain [how our brains could produce consciousness] and in part because we have no models for how mental properties can be composed of material properties."

Now I don't take this to show that matter appropriately arranged can't be conscious. In fact, I believe that we are just such matter. That is, I agree with folk who think that somehow, I know not how, the right physical goings on make for consciousness. But I don't think a purely functional story will do. And it's not just because I don't know how it would work, but because it seems clear to me that a functional story alone doesn't have the resources.

All this is to say that I take what's often called the "explanatory gap" very seriously. I stay in the materialist camp because there's enough we don't know about matter that I'm cheerfully willing to believe that if we knew more, we might have an explanation for consciousness. As a fall-back, I'm quite willing to go along with Colin McGinn's "Mysterianism": it's matter doing its thing that makes us conscious, but we aren't wired to understand how. But it seems clear to me that not only do we not understand how a purely functional story could fill the gap; we understand enough to know that it couldn't.

On this point I'm cheerfully willing to agree to disagree with Prof. Nahmias; I hope he's willing to do likewise. My point isn't to convince you that he's mistaken, but rather to note that for at least some claims about how matter and mind are related, there are reasons for doubt of a different sort than the ones Prof. Nahmias highlights, though reasons that his own further remarks point to.

You have some philosophy questions in here and some psychology questions. The philosophical questions are about (1) whether a machine could ever replicate all human behavior (i.e., pass a "complete Turing Test"), and (2) whether such complete replication of behavior would entail that the machine actually had the mental states that accompany such behavior in humans (i.e., whether a machine's (or an alien's!) passing such a complete Turing Test means that it is conscious, self-aware, intelligent, free, etc.). There's a ton to be said here, but my own view is that the answers you suggest are the right ones--namely, that there is no in-principle reason that a machine (such as an incredibly complex computer in an incredibly complex robot) could not replicate all human behavior, and that if it did, we would have just as good reason to believe that the machine had a mind (is conscious, intelligent, etc.) as we do to believe other humans have minds. I think there may be severe practical limitations to...

Are dreams experiences that occur during sleep? Or are they made-up memories that only occur upon waking? How could one tell either way?

Good question, one that has been debated by philosophers (perhaps even psychologists?), and one that is answered nicely in Owen Flanagan's Dreaming Souls. You can get a glimpse of the problem on p. 19 found here but he gives the full answer later in the book (e.g., pp. 174-5). Basically, this question offers a nice case where we have to go beyond the evidence offered by our first-person experiences. We can't be sure, upon waking up, whether we had a dream a while ago during sleep or whether our minds are very quickly making up false memories that we experience as dreams. (We also can't be sure from our experiences how long our dreams last--Kant and others have thought they occur 'in a flash'. And we can't be sure whether our reports of our dreams accurately convey what we actually dreamed, assuming the dream experiences occurred during sleep.)

If one assumes that our experiences are the only evidence relevant to answering such questions, then one may not be able to answer them. But here we should "go abductive." We should consider which is the best theory in terms of all the relevant evidence--the theory that is most consistent with all the evidence, explains more of it, predicts new discoveries, etc. Here are some reasons to think that the best theory in this case is the one that says we have dream experiences during sleep (T1) rather than constructing dreams upon awakening (T2):

  • There is a strong correlation between REM (rapid eye movement) during sleep and dream reports: when you wake someone up during REM they often report dreaming and when you wake them up when not in REM, they typically do not report dreaming. T1 predicts that there will be such differences, while T2 has a harder time explaining it (note: you can make T2 fit the evidence but it gets more ad hoc).
  • Length of time in REM sleep correlates roughly with reported experience of length of dream. Again, evidence for T1 over T2.
  • The brain changes in regular ways during times that correlate with both REM and reports of dreams upon being awoken. And some of these changes suggest experiences consistent with the contents of dreams (e.g., visual cortex is active and we report visual experiences in dreams, etc.).

And so on. I hope this helps. One lesson, I think, is that the best way to approach some philosophical questions, including (especially?) ones about our minds, is to gather evidence from every source we can and come up with the explanation that best fits that evidence. Of course, it's a philosophical question which evidence is relevant and what counts as best fit!
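The "go abductive" comparison above can be given a toy formal sketch. This is purely illustrative and goes beyond the original answer: the probabilities below are made-up assumptions standing in for "T1 predicts this evidence well; T2 fits it only awkwardly," and the `posterior` helper is just Bayes' rule for two rival theories.

```python
# Toy Bayesian sketch of abductive theory comparison.
# All probability values are illustrative assumptions, not empirical data.

def posterior(prior_t1, p_e_given_t1, p_e_given_t2):
    """Bayes' rule for two exhaustive rival theories T1 and T2:
    returns P(T1 | evidence) given a prior and the two likelihoods."""
    prior_t2 = 1.0 - prior_t1
    numerator = prior_t1 * p_e_given_t1
    return numerator / (numerator + prior_t2 * p_e_given_t2)

# Start agnostic between T1 (dream experiences occur during sleep)
# and T2 (dreams are constructed upon awakening).
p_t1 = 0.5

# Each tuple: (P(evidence | T1), P(evidence | T2)).
# T1 predicts the REM findings; T2 can only accommodate them ad hoc,
# so it gets lower (assumed) likelihoods.
evidence = [
    (0.9, 0.3),   # REM awakenings yield dream reports; non-REM ones don't
    (0.8, 0.4),   # time in REM roughly tracks reported dream length
    (0.8, 0.35),  # brain activity during REM matches reported dream content
]

# Update on each piece of evidence in turn.
for p_e_t1, p_e_t2 in evidence:
    p_t1 = posterior(p_t1, p_e_t1, p_e_t2)

print(f"P(T1 | all evidence) = {p_t1:.2f}")  # well above the 0.5 prior
```

The point of the sketch is only structural: each observation the skeptic must explain away shifts credence toward the theory that predicted it, which is what "inference to the best explanation" cashes out to here.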

Can dogs lie? Our dog will 'pretend' to bark at something outside the house when it is near time for her meal or she has not been for a walk. As she has other behaviours to get our attention--patting with her paw, staring mournfully, or standing over us on our lounge (she is a big dog)--it seems she 'chooses' to 'lie' at times to get our attention.

Good question, and I think it has a lot of philosophical import. Here's why. What we might call a "true lie" is one where the liar knows what she is doing. She knows that she needs to do or say something to alter what her target believes in order to get him to do something the liar wants. Contrast this with a "behavioristic lie," one that has the effect of getting the target to behave a certain way but without the "liar" knowing how she is doing it. Take the case of a 3-year-old girl who has learned that saying "I'm tired" often gets her out of doing something she doesn't want to do. One night her dad says "It's time to go to bed," so she repeats her standard ploy, "I'm tired." She does not seem to know how her lie works!

This difference between "true lying" and "behavioristic lying" seems to make a big difference. Behavioristic lying might not require any especially impressive cognitive abilities. Well, behavioristic learning itself is pretty impressive--and it allows more interesting and flexible forms of deception than, say, animal mimicry (the viceroy butterfly isn't doing anything cognitive in "pretending" to look like the poisonous monarch butterfly). But it's not as impressive as true lying. Your dog's behavior, if it is just behavioristic lying, does not seem to require understanding your mental states--your beliefs, desires, or intentions. Rather, your dog, like the 3-year-old girl, may have simply learned from past experience what works to get what she wants (e.g., to get fed or taken for a walk). Real lying, on the other hand, seems to require understanding that others perceive the world differently from you, they have different desires, beliefs, and intentions than your own. One cannot intentionally manipulate others' beliefs (i.e., truly lie) unless one understands that they have beliefs that can be manipulated (i.e., that can be false).

I happen to think the ability to "truly lie" may be unique to humans (though perhaps it shows up in some other higher primates or dolphins or perhaps even dogs, given their long co-evolution with humans). And I think it likely evolved because of our ancestors' complex social interactions (including reciprocal altruism) and in tandem with our remarkable ability to interpret, explain, and predict the behavior of others and ourselves in terms of beliefs, desires, intentions, etc. Once you've got that ability, you may be on your way to being able to think about alternative possibilities, choosing (freely) in light of such thinking about alternative future outcomes, thinking symbolically, doing philosophy, the whole shebang! Though I'm a bit leery of saying that so much of what makes us human is tied to our remarkable ability to truly lie...

Can the mind "feel" things even though nothing has happened? If so how does this work? For example, someone swung a textbook at my head playfully, and even though he did not hit me, I still felt something where he would have hit.

The brain and nervous system "combine" information from different sensory modalities, so it is quite likely that when you visually perceive that you are about to be hit, other parts of your brain respond, including perhaps sensory systems that normally perceive pain in that part of the head and/or motor systems that prepare you to react to such a blow. There is a lot of interesting research showing that the same parts of the brain are active when you imagine performing an action (but don't perform it) as are active when you perform the action--sometimes you can start to feel your body doing something even though you don't move. Your situation might be sort of the reverse of this. The key is to remember that even though "nothing has happened" on the outside, lots can be happening on the inside--that is, in the brain, which of course, is the basis of our minds' feeling things.

How do thoughts interact with the physical universe? Our movements and actions seem to be simple responses to the signals from our brain, but what triggers those neurons? I mean, we *chose* to act. We think "do I want to do this? Yes." Then do it. How is that possible? If it's possible for immaterial things like thoughts with no apparent location in the physical universe to interact with our neurons, then why isn't it possible for imaginary concepts to interact with other physical catalysts?

You are raising really interesting questions that philosophers debate under the headings of "mental causation," "theory of action," and "free will." One way the problem gets generated is by assuming, as you do, that thoughts (including decisions or intentions) are immaterial things. That's what Descartes said, and ever since, the main objection to his view is your question--how could an immaterial thing causally interact with a physical thing like the brain (and vice versa, since on his view the physical world sends information through the brain to the mind which consciously experiences it--how the heck could that happen?).

The main response to this problem is to give up your assumption of "dualism" and instead try to understand how thoughts and conscious experiences can be part of the physical world. That's no easy task. But one way to make the initial move in that direction is to see that the idea of non-physical or immaterial thoughts makes no more sense than the idea of physical thoughts. We simply have no clue what an immaterial soul or mind would be or how it would work, and we have no idea how we might go about studying immaterial things. On the other hand, we are increasingly understanding how the brain works to produce thoughts, experiences, and actions, and we are beginning to understand how we might study consciousness and action in a way that combines first-person reports about how things seem to us with "hard" science approaches, such as neuroscience.

And if we assume that mental states just are brain states (or perhaps mental states "arise from"--without being distinct from--brain states), then we can begin to answer your question of how thoughts cause actions, since they are all part of the physical system.

Hope this helps!

I have just found out today that the man I have been dating for 6 months is mildly autistic. I had no idea about this until just a few hours ago, so this realization left me shocked. I understand autism and that it is nothing like mental retardation, or anything to that extent. But still I feel like I am doing something morally wrong by continuing to date him. Should I end the relationship because it isn't fair to him, seeing as he may not fully understand his feelings or mine? Or should I continue the relationship because his autism is only mild? Please let me know what you think, I am completely torn and cannot figure out whether I am doing something horribly wrong or not.

And... as someone with a close relative who is on the high-functioning end of the autistic continuum, I'd like to add Tony Attwood's website and books to the list of recommendations. But I would agree emphatically with Louise: it's a mistake to think that autistic people are unaware of others' feelings, or incapable of empathy. And I really can't see that you'd be doing anything morally wrong at all by continuing the relationship. Having Asperger's or high-functioning autism doesn't make someone morally defective, and it doesn't mean they can't care deeply about other people. What Louise and Eddy and Peter have said is much more like it.

This isn't to say that autism spectrum conditions can't complicate relationships. But we could say the same things about many traits of personality and character that have nothing to do with autism. Few of us are perfect; people with autism just have a diagnosis.

I would add to Louise's eloquent response one point: autism is the name for a spectrum of mental functioning, and I suspect that if your boyfriend functions in a way that you did not notice anything remarkable, then he is on what they call the "high end" of the spectrum. You should discuss with him his history and his diagnosis. You may also wish to read some of the first-person accounts of people with autism, such as Temple Grandin's.