I have a question about "solved" games, and the significance of games to artificial intelligence. I take it games provide one way to assess artificial intelligence: if a computer is able to win at a certain game, such as chess, this provides evidence that the computer is intelligent. Suppose that in the future scientists manage to solve chess, and write an algorithm to play chess according to this solution. By hypothesis, then, a computer running this algorithm wins every game whenever possible. Would we conclude on this basis that the computer is intelligent? I have an intuition that intelligence cannot be reduced to any such algorithm, however complex. But that seems quite strange in a way, because it suggests that imperfect play might somehow demonstrate greater intelligence or creativity than perfect play. [If the notion of "solving" chess is problematic, another approach is to consider a computer which plays by exhaustively computing every possible sequence of moves. This is unfeasible with...

Update: An interesting article about one of my computer science colleagues on the subject of cheating in chess and touching on the nature of "intelligence" in chess just appeared in Chess Life magazine; the link is here

This is a very good question. It is reminiscent of the debate over the so-called "Turing Test", in particular, of an objection to the Turing Test made by Ned Block: his "Blockhead". See the SEP article on the Turing Test for more on this.

In the case of chess, it is generally believed that chess is solvable in principle. There are only finitely many possible moves at any stage, etc. So, in principle, a computer could check through all the possibilities and determine the optimum move at each stage. Practically, this is impossible at present, as there are too many moves.

But if chess had been solved, and if a computer were simply programmed to make the best move at each stage, then it seems quite clear that no "intelligence" would be involved. Of course, this does not by itself show that "intelligence cannot be reduced to any...algorithm", and the question whether it could be is hotly disputed. There are some famous (or infamous) arguments due to Lucas and Penrose that attempt to establish...
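
To make the brute-force idea vivid, here is a minimal sketch in Python. Chess itself is hopelessly too large for this, so a toy game stands in: players alternately take 1 or 2 counters from a pile, and whoever takes the last counter wins. The game, and the function names, are my illustrative choices, not anything from the original question or answer.

    # Exhaustive game-tree search: "solving" a game by checking every line of play.
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def solve(counters):
        """Value for the side to move under perfect play: +1 forced win, -1 forced loss."""
        if counters == 0:
            return -1  # the previous player took the last counter, so the mover has lost
        # A move is winning exactly when it leaves the opponent in a losing position.
        return max(-solve(counters - take) for take in (1, 2) if take <= counters)

    def best_move(counters):
        """What a player of a solved game does: pick any move leaving the opponent lost."""
        return max((take for take in (1, 2) if take <= counters),
                   key=lambda take: -solve(counters - take))

    print(solve(7), best_move(7))  # prints "1 1": with 7 counters, taking 1 forces a win

Once solve has been computed, "perfect play" is just table lookup over the game tree; nothing in it resembles deliberation, which is one way of putting the intuition behind the question.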

Is it wrong to fantasize about sex with children? If a pedophile never acts on their fantasies are they still guilty of having evil thoughts, assuming that their abstinence comes out of a genuine desire not to do harm?

I'm sympathetic to most of what Professor Heck says, if we consider things from a deontological or even a consequentialist point of view, where the relevant consequences are external to the agent. Fantasy does not violate anyone's rights, and fantasy that never motivates action will not result in actions that harm anyone.

But I think there is a plausible way of looking at things that would still find fault with fantasizing about having sex with children, and that would come from the aretaic (or virtue-theoretic) way of thinking, according to which the primary bearer of value is to be found in characteristics of agents. One who indulges in fantasies about sex with children is doing something that both reflects--and also perhaps perpetuates and sustains--a certain trait of character that we might think is not entirely wholesome or admirable. To the extent that we can regard one who indulges in such fantasies as having a trait of character that is improvable, we might also think that some attempt to eliminate or at least diminish the inclination to indulge in such fantasies would result in that person having some improvement in character.

It may be that habituation can only go so far, and that virtue theorists (such as Aristotle) overrate the extent to which one can habituate better character traits, but it certainly does seem that a virtue theorist could find the character of someone who tends to indulge in such fantasies at least improvable, and this way of looking at things does, I think, put a different face on this kind of case than what Professor Heck has indicated.

So far as I can see, there's nothing wrong with fantasizing about sex with children. There's nothing wrong with fantasizing about anything you like. If that seems crazy, then it's probably because you are thinking that someone who fantasizes about something must actually wish to do that thing. But that is just not true.

As Nancy Friday makes very clear in My Secret Garden, her classic and groundbreaking study of female sexual fantasy, fantasy is not "suppressed wish fulfillment". The point runs throughout the book, which you can find on archive.org, but maybe the best statement is on pp. 27-8, though see also the poignant story that opens the book (pp. 5-7). I'd post an excerpt, but the language maybe isn't appropriate for this forum! As Friday's studies reveal, people fantasize about all kinds of things. Some women fantasize about being raped. It's a very common fantasy, in fact. That does not mean these women actually want to be raped, on any level. As Friday remarks, "The message...

If we assume that both computers and the human mind are merely physical, does it follow that a sufficiently advanced computer could do anything that a human brain could do?

As Richard points out, logically, no, it does not follow. Just because two things are both (merely) physical, it does not follow that one of them can do anything that the other can do, not even if both of the (merely) physical things are brains. My pencil is a physical thing, but it can't do everything that my brain can. A cat's brain is physical, but it can't do everything that mine can. (Of course, mine can't do everything a cat's brain can either: I don't usually land on my feet when I jump from a height, and I'm pretty bad at catching mice.)

But I think your question really is simply whether a sufficiently advanced computer can do anything that a human brain can. Even so, we need to be a bit more precise. By "anything", I'm guessing that you really mean "anything cognitive"; so, I think your real question is a version of: Can computers think?

Philosophers, cognitive scientists, and computer scientists disagree on the answer to that question. I think that one of the best ways to think about how to answer it is this: How much of (human) cognition is computable? In other words, how much of the cognitive things we do (like think, reason, use language, plan, learn, remember, and so on) can be done by (or, more weakly, simulated by) a computer?

If the answer is that all of it is computable, then the answer to your question (as I'm reinterpreting it) is "yes". If the answer is that only some of it is computable, then it will be interesting to see which things are not computable, and why they aren't. But a very great deal of human cognitive activity has been shown to be computable (at least in part), so we can be hopeful that the real answer is that all of it is.

There has been a lot written on this topic. A good place to start is with Alan Turing's classic paper, "Computing Machinery and Intelligence"; there's an online version here

To find out what researchers in artificial intelligence (AI) have accomplished, visit the AI Topics website

No, because the mere physicality of the brain does not imply that the brain is any kind of computer. Maybe the brain is capable of various sorts of quantum computations that would allow it to perform tasks that no computer, even in principle, can perform. Who knows? Indeed, some people have argued that we can prove that the human mind can do things no computer can do, and these arguments do not imply that the mind is in any way non-physical. I think those arguments are no good myself, but they make this point anyway.

I hope this makes sense... I've always been curious about attempts to understand the way our minds work. To me, it seems paradoxical and in some ways even hopeless. I suspect that in order for the mind to understand or learn something new, the mind itself (or at least the way it works) needs to be more complex than what it is processing. In other words, the "size" of the new information cannot exceed the "capacity" of the mind itself in order to store it. An example of this would be the way computers work: Let's say I have a PC with an old operating system (Windows 2000) and I wish to run a software CD designed for a more advanced operating system (Windows 8). My old computer will most likely not recognize any of the information on that new CD, either because my old computer requires more free space (capacity of mind) or because the information stored on that CD requires a different kind of technology to decrypt (complexity of idea). Thus, you can use a computer to fully process programs (according to its...

I don't work in this sort of area myself, but this kind of view has been held. The position is known as mysterianism, and its main proponent is Colin McGinn. Considerations in the same ballpark also fuel the (in)famous arguments against mechanism due to John Lucas.

What certainly does seem clear is that this kind of possibility can't be ruled out a priori. Surely there are some things human minds simply could not ever understand. That's true of all other creatures. Cats, for example, clearly do not have minds complex enough to understand calculus, let alone the nature of their own minds. We all have cognitive limitations. Perhaps we are in a similar position with respect to our minds.

But it is not obvious either that our minds are limited in this particular way. The "self-reflective" aspect of understanding our own minds does not, by itself, show that we couldn't possibly do it. Your references to complexity and the like are suggestive, but there are many ways to measure complexity.

What would a robot have to be able to do, or what would it have to be, for us to consider it a sentient being as opposed to a non-sentient automaton? Please note I am using the term "robot" here in a broad sense, including such obviously sentient (fictional) constructs as C-3PO of Star Wars fame. I don't consider "robot" and "sentient being" to be mutually exclusive terms. I'm interested in what fundamentally distinguishes sentient beings from automatons that merely mimic sentience.

Somewhat in line with Searle's arguments in "Minds, Brains and Programs", I would say that the key is: original intentionality. Intentionality means something like 'aboutness' or 'representation', in the way that the sentence 'Hesperus is a planet' is about Venus, or represents Venus ('Hesperus' being a name for Venus). In some sense the rings on a tree represent its age: one ring per year. In some sense the written wordforms, the mere physical shapes, 'Hesperus is a planet' represent Venus. But our minds seem to represent things in a much deeper and more fundamental way. The tree rings merely correlate with the tree's age in years. The mere wordforms only represent because we take them to do so. The intentionality of the wordforms is derived from us, whereas the intentionality of our thought that Hesperus is a planet is not derived from anything else: it is original intentionality. I would suggest, as a crude first move, that sentience is intentionality.

Searle's thought was that no matter how sophisticated a computer might be, if it was made out of silicon, or was a man running around very very fast shifting large numbers of bits of paper around (following a program that was written on a blackboard), it would not be doing anything like genuine thinking or cognition. Suppose the computer was one for making a Chinese meal. It would only be about Chinese food in the derived sense. We could see it as telling us how to cook a meal because we have ways of correlating its activity with things we want to know about Chinese food. But it would not in any deeper or more real sense be about, or represent, Chinese food. Its intentionality would be derived, not original. Searle's view was that only a brain, or something suitably like a brain, could have original intentionality.

While I do not myself agree with Searle that we can be sure that the silicon computer, or the very fast man with his bits of paper, does not have original intentionality, I do agree that we cannot be sure that they do have it. I don't think we know in virtue of what some physical systems have original intentionality and some do not. But there lies the key to sentience, when we find it.

The other classic paper on this issue is Alan Turing's "Computing Machinery and Intelligence", from 1950, which articulates what has come to be known as the "Turing Test". Turing's idea was to set up an experiment. A modern version might use some kind of internet chat program. You are talking with two other "people". One really is a person. The other is a computer. You can talk to them for as long as you like, about whatever you like. Then if you can't tell the difference, Turing says, the computer is intelligent. Obviously, this is, at first blush, what Andrew calls an "epistemological" approach to the problem, but Turing doesn't see it just that way.

Let me mention, by the way, that 2012 is also the "Alan Turing Year", celebrating the 100th anniversary of his birth. Turing had a very interesting, and tragic, life. Not only was he one of the founders of modern computer science, he put his genius to work for the British military during World War II and helped crack the German codes....
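
Turing's experimental setup can itself be stated almost mechanically. Here is a minimal sketch of the protocol in Python; the judge, human, and machine participants, and the interface names (ask, identify_machine), are hypothetical placeholders of my own, not anything Turing specifies.

    import random

    def imitation_game(judge, human, machine, rounds=5):
        """One session of the imitation game; returns True iff the machine goes undetected.

        judge.ask(label, transcripts) poses the next question to respondent 'A' or 'B';
        judge.identify_machine(transcripts) returns the judge's final guess ('A' or 'B');
        human(question) and machine(question) return reply strings.
        """
        labels = ['A', 'B']
        random.shuffle(labels)                        # hide who is behind which label
        hidden = dict(zip(labels, (human, machine)))
        transcripts = {'A': [], 'B': []}
        for _ in range(rounds):
            for label in ('A', 'B'):
                question = judge.ask(label, transcripts)
                transcripts[label].append((question, hidden[label](question)))
        return hidden[judge.identify_machine(transcripts)] is not machine

The point of the sketch is that the test is purely behavioural: nothing in the protocol inspects how the machine produces its replies, which is what makes it look, at first blush, like an "epistemological" approach.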

Has philosophy adequately dealt with the mind-body problem? I am looking for a serious answer from a person who is genuinely passionate about philosophy, not mere deferrals of the question through the cliché stances so abundantly available amongst hobbyist-philosophers. Not to worry: I am not out to justify some sort of theological stance; I am merely curious whether professional philosophers are still concerned by this question or its derivatives. I would be very grateful for a response.

I'm not sure what's meant by "adequately dealt with", but if it means something like, "Come up with an answer that satisfies a fairly large group of people", then no, I don't think so. But to the other question, whether philosophers today still care about the mind-body problem, the answer is undoubtedly that they do. You might start here: http://plato.stanford.edu/entries/physicalism/. The problem isn't that no-one has any good ideas what to say about mind and body; it's rather that too many people have too many good ideas, and the problem is fantastically hard. So hard that some philosophers, such as Colin McGinn, have argued that human beings are cognitively incapable of solving it (just as, say, dogs are cognitively incapable of even fairly basic mathematics). I don't say McGinn is right, just that one shouldn't assume the contrary.

Can we imagine a being who genuinely believes a bald-faced, explicit contradiction (such as that "murder is right, and murder is not right")? Or is there something in the very idea of belief which makes this, not only contingently unlikely, but necessarily impossible?

I know several people who believe such things, or at least say they do.

One group thinks that there are true contradictions that involve very special cases. The usual example is the so-called liar sentence, "This very sentence is not true". There is a simple argument that the liar sentence is both true and not true, and some people believe just that.
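
To spell out that simple argument (a standard reconstruction in my own notation, not the original answer's, writing L for the liar sentence and T for "is true"):

    L \leftrightarrow \neg T(L)     % what the sentence says of itself
    T(L) \leftrightarrow L          % instance of the truth schema "'p' is true iff p"
    \therefore\; T(L) \leftrightarrow \neg T(L)
    % Suppose T(L): then \neg T(L), a contradiction; so \neg T(L).
    % But then the biconditional gives T(L). Hence both T(L) and \neg T(L).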

Other people, though, think there are contradictions involving much less special cases. An example would be what are called "borderline cases" of vague predicates, like "bald". People often want to say that there are some people who aren't bald and aren't not bald either. But the so-called De Morgan equivalences entail that this is equivalent to saying that the person is both bald and not-bald (or, strictly, both not-bald and not-not-bald).
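
Spelled out (again in my notation, with B for "x is bald"):

    \neg(B \lor \neg B)                 % "x is neither bald nor not bald"
    \equiv\; \neg B \land \neg\neg B    % De Morgan
    \equiv\; \neg B \land B             % double negation: an outright contradiction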

People who hold such views are known as "dialetheists". See this article for more.

Is there any objective, scientific way to prove that we all see colours the same? I know it's one thing for two people to point at an object and agree on its colour, even the particular shade, but there's no way that I can tell whether or not the next person in line sees everything in shades of grey, or in negative. We can even study how light interacts with objects and enters our eyes, without truly knowing if one person would see everything the same if he suddenly were able to see through another's eyes. So, is there any proof that we all do see colours the same? Maybe even proof or evidence to the contrary? If that's so, I must say that you're all missing something great from where I can see.

There are objective scientific tests which show that we don't all see colours the same, such as the Ishihara test for colour vision. Most people don't even see the same "colours" out of both eyes. For many people the left eye might see things more saturated than the right.

The question should also perhaps be refined a bit. Shouldn't it be formulated as whether we see things (objects, surfaces, volumes etc.) in the same colours? "Do we see colours the same?" as it stands seems to mean, "Do you see red as I see red?" But this presupposes that we are both seeing red, and then the question seems to ask whether we see it the same way, for example with the same degree of saturation or exactly as blue.

This is a much discussed question, which often appears in the guise of the "inverted spectrum hypothesis": One might wonder whether some other person sees what you see as red the way you see green, etc. It turns out it can't be quite that simple, but one might nonetheless wonder whether we do all see colors the same way. In fact, Ned Block has argued that there is some empirical evidence that we don't all see colors the same way. (See this paper.) It goes without saying that this is very controversial.

If a person were to be created as a virtual reality person (such as a character in a Sims game, one that "reacts" and "grows"), and this person was "downloaded" into an actual body, is that person considered "real"? Were they real before the download, or is a physical body part of the conception of real? Would you even be considered a legitimate person, since all of your "memories" could be considered "fake"?

Downloading such an avatar, assuming it were possible, would probably not result in a "real" person because such an avatar would doubtless be less "complete" than a real person. There are two other discussions besides Velleman's that you might find interesting:

Pollock, John L. (2008), "What Am I? Virtual machines and the mind/body problem", Philosophy and Phenomenological Research 76(2):237-309, online at http://oscarhome.soc-sci.arizona.edu/ftp/PAPERS/Virtual-machines.pdf

and a terrific science-fiction novel by a philosopher:

Leiber, Justin (1980), Beyond Rejection (Del Rey Books); out of print, but available on amazon.com

I have no views about this question at all. But I did recently hear the philosopher David Velleman read a paper on a very similar question. I think it's this one.

Abigail and Brittany Hensel, born 1990, midwest USA. A very rare dicephalus pair, they have separate heads and necks, but share one torso and a pair of legs. Each has her own heart and stomach, and controls the limbs and feels sensation exclusively on her own side. They share three lungs and, below the waist, a single set of organs. Physically they move as one, in perfect co-ordination. Mentally they are independent, with different preferences and abilities. Their parents are opposed to separation, which would be highly dangerous. Even if successful, the girls would be left severely disabled, and unable to enjoy the walking, running, swimming and bike riding which, together, they can do easily. I am a Cartesian Dualist - I think! Does this situation not solve the Mind/Body, Mind/Brain problem?

I think I'm confused. The two girls have two brains---one each. So I don't see any threat here to mind--brain identity.

There is something philosophically interesting about the fact that the girls, together, can ride a bike, etc., using their shared torso, etc., and I'd be interested to hear what people who work on the body would have to say about them. One would really need to know a lot more about them---about what their ability to control "their" legs is like, etc. But I don't see any threat here either to dualism---though dualism does have its own share of problems.
