Why are people so skeptical about the notion that a sufficiently advanced computer program could replicate human intelligence (meaning free will insofar as humans have it; motivation and creativity; comparable problem-solving and communicative capacities; etc.)? If humans are intelligent in the way we are because of the way our brains are built, then a computer could be constructed that replicates the structure of our brains (incorporating fuzzy logic, neural networks, chemical analogs, etc.). Worst comes to absolute worst, a sufficiently powerful molecular simulator could run a full simulation of a human brain or human body, down to each individual atom. So there doesn't seem to be anything inherent in the physicality of humans that makes it impossible to build machines with our intelligence, since we can replicate physical structures in machines easily enough. If, however, humans are intelligent for reasons that do not have anything to do with the physical structure of our brains or bodies - if there is some immaterial reason for consciousness, free will or other aspects of our intelligence - then we're essentially talking about souls. And if souls don't just supervene on physical phenomena (which is the entire nature of this fork of the problem - if they did supervene, we'd be back at the first point), then why shouldn't machines, too, be able to have souls? Maybe they already do. The only way to escape this and continue to assert that machines could never possess human intelligence is to say that there is a god, or a group of gods, who decide what gets to have souls and what doesn't, and machines aren't on the list. But outside of theistic circles, this argument can't be expected to carry any weight for as long as people are skeptical about theism in general. So what leads so many people to believe that machines could never replicate a human intelligence?

My colleague and I disagree somewhat here, though perhaps we agree on everything essential to your question.

We both agree that in principle the right kind of "machine" could be every bit as conscious, free, etc. as you and I. And Prof. Nahmias may well be right when he says that if a robot of the C3PO sort acted enough like us, we'd have a very hard time not thinking of it as conscious. I even agree with my co-panelist that people's religious beliefs and the relatively crude character of our actual gadgets may be part of the reason why many people don't think a machine could be conscious.

So where's the residual disagreement? It's on a point that may not be essential, given the way you pose your question. Prof. Nahmias thinks that replicating the functional character of the mind would give us reason enough to think the resulting thing was conscious. I'm not inclined to agree. But that has nothing to do with belief in souls (I don't believe in them and don't even think I have any serious idea what they're supposed to be) nor with the fact that the computers we have are primitive compared to full-fledged people. Interestingly, Prof. Nahmias himself actually identifies -- and agrees with -- the sticking point for folks like me. As he puts it, "we have no theory to explain [how our brains could produce consciousness] and in part because we have no models for how mental properties can be composed of material properties."

Now I don't take this to show that matter appropriately arranged can't be conscious. In fact, I believe that we are just such matter. That is, I agree with folk who think that somehow, I know not how, the right physical goings on make for consciousness. But I don't think a purely functional story will do. And it's not just because I don't know how it would work, but because it seems clear to me that a functional story alone doesn't have the resources.

All this is to say that I take what's often called the "explanatory gap" very seriously. I stay in the materialist camp because there's enough we don't know about matter that I'm cheerfully willing to believe that if we knew more, we might have an explanation for consciousness. As a fall-back, I'm quite willing to go along with Colin McGinn's "Mysterianism": it's matter doing its thing that makes us conscious, but we aren't wired to understand how. But it seems clear to me that not only do we not understand how a purely functional story could fill the gap; we understand enough to know that it couldn't.

On this point I'm cheerfully willing to agree to disagree with Prof. Nahmias; I hope he's willing to do likewise. My point isn't to convince you that he's mistaken, but rather to note that for at least some claims about how matter and mind are related, there are reasons for doubt of a different sort than the ones Prof. Nahmias highlights, though reasons that his own further remarks point to.
