Suppose that I know all the laws of physics and the position of all the atoms in the universe. I flip a coin. Obviously I will know with 100% certainty what the outcome will be. Now suppose I am a mere mortal: I will only be able to say that there is a 50% chance of heads and a 50% chance of tails. So is probability just a measure of our ignorance? That cannot be right! Probability is something intrinsic to reality. But how can an uncertainty be intrinsic without reference to a knower?

Sometimes probability is a measure of our ignorance. If you give me a quarter with the instruction to hide it in one of my fisted hands while your eyes are closed (and I do as you say), then you'll not know which hand holds the coin. (I will know, I can feel it.) So you can only assign probabilities because you lack knowledge.

In other cases, probability is objective. If current physics is right, then some processes in nature are in principle unpredictable or such that their outcome is uncertain. Yes, this suggests some reference to a knower: it means that it's impossible for there to be someone who can predict or be certain about the outcome. But why should this be problematic? The fact that a black hole emits no light can be expressed by saying that black holes are invisible - and yet the fact is "intrinsic to reality," involves no essential reference to beings with eyes.

A friend posed a problem that according to him reveals an inconsistency in mathematics. There are two envelopes with money in them, and you are given one envelope. One envelope has twice the amount of money as the other, but you don't know which is which. The question is, if you are trying to maximize your money, should you switch to the other envelope if given the chance? One analysis is: let a denote the smaller amount. Either you have a or 2a in your envelope, and you would switch to 2a or a, respectively; since these have the same chance of happening before and after, you don't improve, and it doesn't matter whether you switch. The other analysis is: let x denote the value in your envelope. The other envelope has either twice what is in yours or half of what is in yours. Each of these has probability .5, so .5(2x) + .5(.5x) = 1.25x, which is greater than the x that you started with, so you do improve and should switch. Is there something wrong with...

I'd like to add a little bit to what Thomas has said. Probability problems can be tricky because the answers sometimes depend on small details about exactly what procedure was followed. For example, the problem says that "you are given one envelope." Who gave you the envelope? Did the person who gave you the envelope know which envelope was which? Was he a very stingy person, who might have been more likely to give you the envelope with the smaller amount of money? If so, then the probability that you have the smaller amount might not be 1/2.

But that is clearly not the intent of the problem, so let us assume that the person who gave you the envelope flipped a coin to decide which envelope to give you. Then, as Thomas says, the probability is 1/2 that you have the small amount and 1/2 that you have the large amount. Suppose that you open your envelope and find $100 in it. You now know that the other envelope contains either $50 or $200. Do these two outcomes still have probability 1/2 each? Not necessarily; by opening your envelope you acquired new information, and that information could change the probabilities. The answer depends on what procedure was used to decide how much money to put in the envelopes.

Suppose the person who filled the envelopes used the following procedure: they chose a random integer x from 1 to 100, with each integer being chosen with probability 1/100. Then they put $x in one envelope and $2x in the other. In that case, a short calculation using the laws of conditional probability shows that, yes, the probability is 1/2 that the other envelope contains $50 and 1/2 that it contains $200. The expected value of the amount of money in the other envelope is therefore $125, so you would be well-advised to switch.
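The conditional-probability calculation gestured at here can be checked by brute-force enumeration. The sketch below (my own illustration, assuming the 1-to-100 filling procedure just described and a fair coin deciding which envelope you get) conditions on finding $100:

```python
from fractions import Fraction

# Filling procedure: pick x uniformly from 1..100; envelopes hold $x and $2x.
# You receive one of the two envelopes with probability 1/2 each.
cases = []  # (probability, amount in your envelope, amount in the other)
for x in range(1, 101):
    p_x = Fraction(1, 100)
    cases.append((p_x * Fraction(1, 2), x, 2 * x))   # you hold $x, other is $2x
    cases.append((p_x * Fraction(1, 2), 2 * x, x))   # you hold $2x, other is $x

# Condition on the observation: your envelope contains $100.
consistent = [(p, other) for p, mine, other in cases if mine == 100]
total = sum(p for p, _ in consistent)
posterior = {other: p / total for p, other in consistent}
expected_other = sum(other * p for other, p in posterior.items())

print(posterior)       # $50 and $200 each get probability 1/2
print(expected_other)  # expected value of switching: $125
```

The enumeration confirms the claim in the text: under this filling procedure the two remaining possibilities stay at 1/2 each, and the other envelope is worth $125 on average.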

But now suppose the envelopes were filled by choosing x between 1 and 50, and putting $x in one and $2x in the other. Now, when you find $100 in your envelope, you know for sure that the other envelope contains $50, and you should not switch.

What if we don't know anything about how the envelopes were filled? Now the question is more difficult, but I would be inclined to say that probability theory cannot tell us the probability of the other envelope containing $50 or $200. Probability theory can only tell us how to compute probabilities if we have a well-defined probability distribution for the possible outcomes of the random event under consideration. The choice of this probability distribution is a matter of interpretation; it is not a purely mathematical issue. If we don't have enough information to determine the distribution, then it is not clear that there is a right answer to the probability question.

Wouldn't it be nice if mathematics could be brought down so easily! But, sorry, no cigar this time. It is indeed true that the probability that you are holding the fat or the meager envelope is 50/50. Here are the two cases:

1. If you are holding the meager envelope, then switching gets you from x to 2x, for a gain of x.
2. If you are holding the fat envelope, then switching gets you from x to x/2, for a loss of x/2.

But note that the "x" in these two cases does not signify the same amount of money. In Case 1, x is the smaller amount; in Case 2, x is the larger amount. So in Case 1 your gain from switching is the smaller amount, and in Case 2 your loss from switching is half the larger amount (equal to the smaller amount). The illusion arises because, at first blush, the situation seems similar to another in which someone offers you 50/50 odds on either doubling or halving some fixed amount of money you have. There your reasoning goes through and you are well-advised to accept. Your...
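The point that "x" names different amounts in the two cases can be made concrete by computing the expected gain in terms of the one fixed quantity, the smaller amount. A minimal sketch (the amount is an arbitrary stand-in):

```python
from fractions import Fraction

a = 10  # the smaller amount; any positive value gives the same conclusion
half = Fraction(1, 2)

# Case 1: you hold the meager envelope ($a); switching gains $a.
# Case 2: you hold the fat envelope ($2a); switching loses $a.
expected_gain = half * a + half * (-a)
print(expected_gain)  # 0: switching neither helps nor hurts
```

Written this way, with a single consistent variable, the spurious 1.25x advantage disappears.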

As I understand it, inductive reasoning is considered by most to be a posteriori; yet I learned about induction in a statistics class in much the way one would learn a clearly a priori mathematical theory. Assuming one would accept some conclusions based on induction, are those conclusions a priori or a posteriori? John

You should distinguish here between the inductive method of extrapolating from observed cases to as yet unobserved cases, on the one hand, and particular extrapolations derived by using this method, on the other hand.

Particular extrapolations are a posteriori. They depend on what has actually been observed.

The method, however, has certain a priori elements, esp. in the very “clean” and somewhat artificial stories you will have encountered in your statistics class. One such story might be this. You are faced with a large urn which you know contains many marbles all of which you know to be either white or red. On n occasions one marble was randomly selected from the urn, its color was recorded, and it was then mixed back in. Of these randomly selected marbles, 70 percent were white and 30 percent red. At the end of the story, you are then asked what we can learn from the random drawings about the color composition of the marbles in the urn.

In this sort of story, one can calculate precisely, given the result of the drawings, the probability of various color compositions in the urn. The probabilities will peak near the ratio observed in the drawings and will concentrate in predictable ways as the number of drawings increases. (If the 7:3 ratio holds up over 1000 drawings, for instance, the probability that the real ratio is under 6:4 becomes quite small in a way that can be calculated precisely.) This is the a priori element: The rational way of adjusting one’s expectations is guided -- even determined -- by probability calculations.
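The kind of calculation described can be sketched as follows, under simplifying assumptions not in the original story: a 10-marble urn, a uniform prior over its possible compositions, and 7 white in 10 draws with replacement (all toy numbers):

```python
from math import comb

# Urn with N marbles; each composition (w white, N - w red) gets a uniform
# prior. We observe 7 white marbles in 10 draws with replacement.
N = 10
draws, whites = 10, 7

def likelihood(w):
    # Binomial probability of the observed draws if w of N marbles are white.
    p = w / N
    return comb(draws, whites) * p**whites * (1 - p)**(draws - whites)

# Bayes: posterior over compositions, proportional to the likelihood.
post = [likelihood(w) for w in range(N + 1)]
total = sum(post)
post = [p / total for p in post]

# The posterior peaks at the composition matching the observed 7:3 ratio.
best = max(range(N + 1), key=lambda w: post[w])
print(best)  # 7
```

As the text says, more draws at the same ratio would concentrate this posterior ever more sharply around the 7:3 composition.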

In the real world, however, induction is rarely so neat. Here we need to decide what predicates are useful for extrapolation, we must worry about observations not being independent of one another, we must guard against experimenter effects and biased (theory-guided) observations, and so on.

Consider, for instance, the task of designing and fine-tuning an algorithm for accepting or rejecting mortgage applications on the basis of past repayment experience. There are indefinitely many ways of collecting and coding information about applicants. The information provided may be influenced by the conduct of the bank staff, and its coding by the bank staff’s cognitive and other biases. There isn’t just one rational way of coping with all these complexities, though some banks clearly come up with more successful algorithms than others. (Even such ex post assessments of banks are not unproblematic, however, in that only acceptance errors, not rejection errors, will come to light. We'll never know whether the Smiths, who were denied a mortgage, would have met their debt service obligations had they received one.)

There are a priori elements, to be sure: We can know in advance that certain features will strengthen, or weaken, a method. (For instance, a good method should work so that, the more disproportionate is the default rate of applicants with a certain characteristic, the more weight this characteristic is given as a reason to deny an application.) But much else will depend on more or less lucky guesswork and imprecise “good judgment.”
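As a purely hypothetical illustration of the a priori point (the features and rates below are invented, not drawn from the text), one might weight a characteristic by how far its historical default rate departs from the base rate, for instance via a log odds ratio, so that more disproportionate rates automatically get more weight:

```python
from math import log

# Hypothetical data: overall default rate, and default rates among
# applicants with a given characteristic.
base_default = 0.05
default_rate_given = {"late_payments": 0.30, "stable_income": 0.02}

def weight(rate, base=base_default):
    # Log odds ratio against the base rate: positive counts against the
    # applicant, negative in their favor, and the magnitude grows with
    # how disproportionate the characteristic's default rate is.
    return log(rate / (1 - rate)) - log(base / (1 - base))

for feature, rate in default_rate_given.items():
    print(feature, round(weight(rate), 2))
```

This captures the a priori desideratum; everything else (which features to collect, how to code them) is exactly the guesswork and judgment the text describes.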

Is an event which has zero probability of occurring but which is nonetheless conceivably possible rightly termed "impossible"? For instance, is it "impossible" that I could be the EXACT same height as another person? I take it that the chance of this is zero in that there are infinitely many heights I could be (6 ft, 6.01 ft, 6.001 ft, 6.0001 ft, etc.) but only one which could match that of a given other person exactly; at the same time, I have no problem at all imagining a world in which I really am exactly as tall as this other.

I agree that there's nothing paradoxical here; surprising, perhaps, but not paradoxical.

The only kind of additivity that is usually assumed in probability theory is countable additivity, and there's no violation of that here. But you do have uncountably many non-overlapping outcomes, each with probability zero, such that the probability of at least one of those outcomes happening is one. So uncountable additivity doesn't work.

I would agree that an outcome with probability zero need not be impossible. Consider, for example, flipping a coin infinitely many times. Each infinite sequence of heads and tails has probability zero of occurring, but one of them has to occur, so it wouldn't make sense to say that they're all impossible. (Notice that there are uncountably many possible sequences of heads and tails.)
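The probability-zero claim can be made concrete: any particular length-n prefix of the infinite sequence has probability (1/2)^n, and the probability of the full infinite sequence is the limit of these, which is zero. A small check using exact fractions:

```python
from fractions import Fraction

def prefix_prob(n):
    # Probability that n fair flips come out as one particular sequence.
    return Fraction(1, 2) ** n

print(prefix_prob(10))  # 1/1024
# By n = 1000 the probability is already below 10**-300, and it keeps
# shrinking toward zero as n grows without bound.
assert prefix_prob(1000) < Fraction(1, 10**300)
```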

But of course this is not a realistic experiment--no one can actually flip a coin infinitely many times. The original example also seems unrealistic to me--according to the uncertainty principle, the individual protons, neutrons, and electrons at the top of my head don't have precise positions, so I'm not sure it makes physical sense to think of my height as a precise real number. I'm not sure there is any realistic example of a physical event with outcomes that are possible but have probability zero.

As you make finer and finer measurements in the way you suggest, the probability declines each time by a factor of 10. As you go on and on, it shrinks below any value no matter how small. But, no matter how long you go on, it will never be zero, it will always be more than zero. (This is analogous to how, when you count, you'd eventually surpass any number anyone cares to specify but never reach infinity.) OK, so far no paradox. But mathematics also recognizes numbers whose decimal extensions are infinitely long. And if you express each person's height as such a number, then your paradox does indeed arise. And I am not surprised, as there are other paradoxes involving infinite numbers as well, for instance, that there are as many even numbers as there are natural numbers, as demonstrable by a one-to-one mapping. Still, this is a nice addition (at least to my stock)! Let's see what others think.

Does the word 'chance' (or 'accident', 'luck', or 'random') refer to the absence of causation, or does it express our ignorance of causation? Equally, does the word 'infinite' refer to the unlimited, or to our ignorance of limits?

I think the terms in your first question are generally used in a sense that's relative to our (human) knowledge. But this need not mean that this use reflects our ignorance of causes. For there may be real chance and randomness in nature (here the words "real" and "in nature" indicate that "chance" and "randomness" are used in their less usual sense). The currently accepted view in physics holds that this is in fact the case, at least in regard to subatomic particles. The word "infinite" is generally used to refer to what really is infinite, mostly things in mathematics and geometry (the set of all natural numbers, Euclidean space). The mere fact that we don't know whether a thing has limits does not justify calling it infinite in any normal sense.

Do luck and bad luck exist? Or have they just been imagined in order to create excuses?

One might think that (bad) luck does not exist because the universe is deterministic (running like clockwork according to strict physical laws). I assume this is not your concern. The (bad) luck label might then be attached to things happening to an agent insofar as these things (however causally determined) are better or worse than she could have predicted. In this sense, clearly, luck and bad luck do occur.

To be sure, agents will invoke bad luck as an excuse. But this is no reason to reject the very idea of bad luck. After all, such excuses are sometimes valid -- as when the sole copy of your typescript is destroyed by a fire (something that very nearly happened to John Rawls's A Theory of Justice!). And when such excuses are lousy, this can be shown even when bad luck is accepted in principle: we can point out that the outcome was not really worse than the agent could have predicted, or that the agent failed to take sufficient account of the risk. For example, we can tell the notorious drunk driver that it was not unpredictable that he would cause an accident sooner or later. Or we can tell him that, though he encountered a low-probability challenge and was in this regard unlucky, he is nonetheless not excused, because he ought not to have run the risk of encountering such a challenge while driving drunk.

People sometimes excuse their general failure in life by saying that they are prone to bad luck. Now, it is true enough that, in retrospect, some people have better luck than others. If you roll 6 billion dice ten times each, it is likely that some of them will score a perfect 60 and some an abysmal 10. We should recognize this and excuse people who have had to deal with much more than the average burden of misfortunes. But the causal or explanatory claim suggested by the phrase "prone to bad luck" is false. People have such a proneness no more than coins do. So we should reject the excuse "I am an unlucky person, everything I try to achieve goes wrong, so I won't even try any more." We can answer that whatever bad luck someone may have had in the past will not make her any more likely to have bad luck in the future.
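The claim that past luck does not make future bad luck more likely can be illustrated with a simulation (toy numbers, far fewer than 6 billion dice): players whose first ten rolls were unlucky go on to average the same on their next ten rolls as everyone else.

```python
import random

random.seed(1)

# Many "players" each roll a fair die ten times, in two rounds.
players = 100_000
first = [sum(random.randint(1, 6) for _ in range(10)) for _ in range(players)]
second = [sum(random.randint(1, 6) for _ in range(10)) for _ in range(players)]

# Compare the second-round average of the "unlucky" (low first-round total)
# with the overall second-round average: both sit near the expected 35.
unlucky = [s for f, s in zip(first, second) if f <= 25]
avg_unlucky = sum(unlucky) / len(unlucky)
avg_all = sum(second) / players
print(round(avg_unlucky, 2), round(avg_all, 2))
```

Because the rounds are independent, being "unlucky" in round one carries no information about round two; the retrospective spread in luck is real, the proneness is not.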
