Update: An interesting article about one of my computer science colleagues on the subject of cheating in chess and touching on the nature of "intelligence" in chess just appeared in Chess Life magazine; the link is here
I'm sympathetic to most of what Professor Heck says, if we consider things from a deontological or even a consequentialist point of view, where the relevant consequences are external to the agent. Fantasy does not violate anyone's rights, and fantasy that never motivates action will not result in actions that harm anyone.

But I think there is a plausible way of looking at things that would still find fault with fantasizing about having sex with children, and that would come from the aretaic (or virtue-theoretic) way of thinking, according to which the primary bearer of value is to be found in characteristics of agents. One who indulges in fantasies about sex with children is doing something that both reflects--and also perhaps perpetuates and sustains--a certain trait of character that we might think is not entirely wholesome or admirable.

To the extent that we can regard one who indulges in such fantasies as having a trait of character that is improvable, we might also think that some attempt to eliminate or at least diminish the inclination to indulge in such fantasies would result in that person having some improvement in character. It may be that habituation can only go so far, and that virtue theorists (such as Aristotle) overrate the extent to which one can habituate better character traits, but it certainly does seem that a virtue theorist could find the character of someone who tends to indulge in such fantasies at least improvable, and this way of looking at things does, I think, put a different face on this kind of case than what Professor Heck has indicated.
As Richard points out, logically, no, it does not follow. Just because two things are both (merely) physical, it does not follow that one of them can do anything that the other can do, not even if both of the (merely) physical things are brains. My pencil is a physical thing, but it can't do everything that my brain can. A cat's brain is physical, but it can't do everything that mine can. (Of course, mine can't do everything a cat's brain can either: I don't usually land on my feet when I jump from a height, and I'm pretty bad at catching mice.)
But I think your question really is simply whether a sufficiently advanced computer can do anything that a human brain can. Even so, we need to be a bit more precise. By "anything", I'm guessing that you really mean "anything cognitive"; so, I think your real question is a version of: Can computers think?
Philosophers, cognitive scientists, and computer scientists disagree on the answer to that question. I think that one of the best ways to think about how to answer it is this: How much of (human) cognition is computable? In other words, how much of the cognitive things we do (like think, reason, use language, plan, learn, remember, and so on) can be done by (or, more weakly, simulated by) a computer?
If the answer is that all of it is computable, then the answer to your question (as I'm reinterpreting it) is "yes". If the answer is that only some of it is computable, then it will be interesting to see which things are not computable, and why they aren't. But a very great deal of human cognitive activity has been shown to be computable (at least in part), so we can be hopeful that the real answer is that all of it is.
There has been a lot written on this topic. A good place to start is with Alan Turing's classic paper, "Computing Machinery and Intelligence"; there's an online version here
To find out what researchers in artificial intelligence (AI) have accomplished, visit the AI Topics website
I don't work in this sort of area myself, but this kind of view has been held. The position is known as mysterianism, and its main proponent is Colin McGinn. Considerations in the same ballpark also fuel the (in)famous arguments against mechanism due to John Lucas.
What certainly does seem clear is that this kind of possibility can't be ruled out a priori. Surely there are some things human minds simply could not ever understand. That's true of all other creatures. Cats, for example, clearly do not have minds complex enough to understand calculus, let alone the nature of their own minds. We all have cognitive limitations. Perhaps we are in a similar position with respect to our minds.
But it is not obvious either that our minds are limited in this particular way. The "self-reflective" aspect of understanding our own minds does not, by itself, show that we couldn't possibly do it. Your references to complexity and the like are suggestive, but there are many ways to measure complexity.
Somewhat in line with Searle's arguments in "Minds, Brains and Programs", I would say that the key is: original intentionality. Intentionality means something like 'aboutness' or 'representation', in the way that the sentence 'Hesperus is a planet' is about Venus, or represents Venus ('Hesperus' being a name for Venus). In some sense the rings on a tree represent its age: one ring per year. In some sense the written wordforms, the mere physical shapes, 'Hesperus is a planet' represent Venus. But our minds seem to represent things in a much deeper and more fundamental way. The tree rings merely correlate with the tree's age in years. The mere wordforms only represent because we take them to do so. The intentionality of the wordforms is derived from us, whereas the intentionality of our thought that Hesperus is a planet is not derived from anything else: it is original intentionality. I would suggest, as a crude first move, that sentience is intentionality.

Searle's thought was that no matter how sophisticated a computer might be, if it was made out of silicon, or was a man running around very, very fast shifting large numbers of bits of paper around (following a program written on a blackboard), it would not be doing anything like genuine thinking or cognition. Suppose the computer was one for making a Chinese meal. It would only be about Chinese food in the derived sense. We could see it as telling us how to cook a meal because we have ways of correlating its activity with things we want to know about Chinese food. But it would not in any deeper or more real sense be about, or represent, Chinese food. Its intentionality would be derived, not original. Searle's view was that only a brain or something suitably like a brain could have original intentionality. While I do not myself agree with Searle that we can be sure that the silicon computer, or the very fast man with his bits of paper, do not have original intentionality, I do agree that we cannot be sure that they do have it.

I don't think we know in virtue of what some physical systems have original intentionality and some do not. But there lies the key to sentience, when we find it.
I'm not sure what's meant by "adequately dealt with", but if it means something like, "Come up with an answer that satisfies a fairly large group of people", then no, I don't think so. But to the other question, whether philosophers today still care about the mind-body problem, the answer is undoubtedly that they do. You might start here: http://plato.stanford.edu/entries/physicalism/. The problem isn't that no-one has any good ideas about what to say about mind and body; it's rather that too many people have too many good ideas, and the problem is fantastically hard. So hard that some philosophers, such as Colin McGinn, have argued that human beings are cognitively incapable of solving it (just as, say, dogs are cognitively incapable of even fairly basic mathematics). I don't say McGinn is right, just that one shouldn't assume the contrary.
I know several people who believe such things, or at least say they do.
One group thinks that there are true contradictions that involve very special cases. The usual example is the so-called liar sentence, "This very sentence is not true". There is a simple argument that the liar sentence is both true and not true, and some people believe just that.
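That simple argument can be written out in a few lines (a minimal sketch, where T(L) abbreviates "the liar sentence L is true" and the T-schema licenses moving between T(L) and what L says):

```latex
% L is the sentence "L is not true", so the T-schema T(L) <-> L becomes:
T(L) \leftrightarrow \neg T(L)
% Suppose T(L). The biconditional then yields \neg T(L), a contradiction;
% so, discharging the supposition, we may conclude \neg T(L).
% But applying the biconditional to \neg T(L) yields T(L).
% Hence both hold:
T(L) \wedge \neg T(L)
```

Those who accept this conclusion take the liar sentence to be a true contradiction.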
Other people, though, think there are contradictions involving much less special cases. An example would be what are called "borderline cases" of vague predicates, like "bald". People often want to say that there are some people who aren't bald and aren't not bald either. But the so-called De Morgan equivalences entail that this is equivalent to saying that the person is both bald and not-bald (or, strictly, both not-bald and not-not-bald).
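The equivalence can be spelled out symbolically (a minimal sketch, with B abbreviating "this person is bald"):

```latex
% "Isn't bald and isn't not bald either":
\neg B \wedge \neg \neg B
% Double negation (\neg\neg B \equiv B) turns the second conjunct into B,
% giving the contradiction:
B \wedge \neg B
% Equivalently, by De Morgan, \neg B \wedge \neg\neg B \equiv \neg (B \vee \neg B),
% i.e. a denial of excluded middle for the borderline case.
```

So describing someone as "neither bald nor not bald" is, classically, just a roundabout way of asserting a contradiction about them.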
People who hold such views are known as "dialetheists". See this article for more.
There are objective scientific tests which show that we don't all see colours the same, such as the Ishihara test for colour vision. Most people don't even see the same "colours" out of both eyes. For many people the left eye might see things more saturated than the right.
The question should also perhaps be refined a bit. Shouldn't it be formulated as whether we see things (objects, surfaces, volumes etc.) in the same colours? "Do we see colours the same?" as it stands seems to mean, "Do you see red as I see red?" But this presupposes that we are both seeing red, and then the question seems to ask whether we see it the same way, for example with the same degree of saturation or exactly as blue.
Downloading such an avatar, assuming it were possible, would probably not result in a "real" person because such an avatar would doubtless be less "complete" than a real person. There are two other discussions besides Velleman's that you might find interesting:
Pollock, John L. (2008), "What Am I? Virtual machines and the mind/body problem", Philosophy and Phenomenological Research 76(2):237-309, online at http://oscarhome.soc-sci.arizona.edu/ftp/PAPERS/Virtual-machines.pdf
and a terrific science-fiction novel by a philosopher:
Leiber, Justin (1980), Beyond Rejection (Del Rey Books); out of print, but available on amazon.com
I think I'm confused. The two girls have two brains---one each. So I don't see any threat here to mind--brain identity.
There is something philosophically interesting about the fact that the girls, together, can ride a bike, etc, using their shared torso, etc, and I'd be interested to hear what people who work on the body would have to say about them. One would really need to know a lot more about them---about what their ability to control "their" legs is like, etc. But I don't see any threat here either to dualism---though dualism does have its own share of problems.