Quantum mechanics seems to suggest that there really is such a thing as a random number, yet all of philosophy and logic point to a reason or cause for everything, perhaps beyond our understanding. Is this notion of a random number just another demonstration of limited human understanding?

I guess I'd have to disagree with the idea that "all of philosophy and logic point to a reason or cause for everything." There's certainly no argument from logic as such; it's perfectly consistent to say that some events are genuinely random. Some philosophers have held that there's a reason (not necessarily a cause in the physical sense, BTW) for everything, but the arguments are not very good.

On the other hand... quantum mechanics is a remarkably well-confirmed physical theory that, at least as standardly interpreted, gives us excellent reason to think that some things happen one way rather than another with no reason or cause for which way they turned out.

An example: suppose we send a photon (a quantum of light) through a polarizing filter pointed in the vertical direction. We let the photon travel to a second polarizing filter, oriented at 45 degrees to the vertical. Quantum theory as usually understood says that there's a 50% chance that the photon will pass this filter and a 50% chance that it won't. But quantum theory itself provides no account whatsoever of which will actually happen. And on the usual interpretation, there is no reason or cause; it's really random.
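At the single-photon level, the 50/50 figure comes from the Born rule (the quantum analogue of Malus's law): a photon polarized at angle θ to the filter's axis passes with probability cos²θ. A minimal sketch, with function names of my own invention:

```python
import math
import random

def transmission_probability(angle_deg):
    # Born rule / Malus's law at the single-photon level: the chance
    # of passing a filter at angle_deg to the photon's polarization
    # is cos^2 of that angle.
    return math.cos(math.radians(angle_deg)) ** 2

def send_photon(angle_deg, rng=random):
    # Quantum theory supplies only the probability; the individual
    # outcome here is just a pseudo-random draw standing in for
    # whatever (if anything) "decides" the real case.
    return rng.random() < transmission_probability(angle_deg)

print(round(transmission_probability(45), 3))   # 0.5: the 45-degree case
print(round(transmission_probability(0), 3))    # 1.0: aligned filters
```

For the 45-degree filter the theory fixes the probability at exactly one half, and on the standard interpretation that probability is the whole story.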

Now the usual interpretation of quantum mechanics could be wrong. There are deterministic interpretations, most notably "many worlds" or Everettian quantum mechanics, and Bohmian mechanics. No one is in a position to rule either of those out; all I can say is that neither of those approaches is to my taste. But even though both of them restore determinism, that's not really their motivation. Most people who work in foundations of physics are not bothered by the very idea of indeterminism and in fact, indeterminism wasn't by any means Einstein's biggest issue with quantum mechanics.

So to sum up: I don't think there are any good general arguments against randomness. I think the concept is coherent, and that it's a plausible fit for our best physical theory. It also happens to suit my own prejudices about quantum mechanics, but that's just icing on the cake. ;-)

Here's a probability question I've been wondering about. Suppose there's a company that has a million customers. It is known that 55% of these customers are male and 45% are female. The task is to guess the sex of the next 100 (of the existing) customers who are going to visit the company. For every right guess a point is awarded. What's the best strategy to get the most correct answers? If we consider the customers one by one, it is a good plan to always guess the most probable answer and therefore guess that all 100 of the customers are male. However, if we take the hundred people as a group, isn't this task analogous to the situation where one litre of seawater in a container has the same salinity as seawater in general? Therefore we could guess that there are 55 males and 45 females among the group of 100 customers. Certainly, if instead of 100 people we took the whole million customers as a group, then a 55%/45% split would be the true and correct answer. My question is this: what changes the way of thinking...

What you say about the individual problems is right: if I get a point for each right answer, then each time someone comes to the site, the best strategy is to guess that it's a man. (At least this is right if knowing the sex of an individual customer doesn't help predict whether s/he will visit the site or not.) This is the best strategy because if each individual visit is like a random selection of a customer from the population, the chance is greater that the selected customer will be a man.
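A quick simulation bears this out (a sketch; the trial count and the shuffled 55/45 strategy are my own stand-ins for the question's setup):

```python
import random

random.seed(0)
P_MALE = 0.55
N, TRIALS = 100, 10_000

def score(guesses, visitors):
    # One point per correct guess.
    return sum(g == v for g, v in zip(guesses, visitors))

total_always, total_split = 0, 0
for _ in range(TRIALS):
    # Each visit is a random selection from the customer population.
    visitors = ['M' if random.random() < P_MALE else 'F' for _ in range(N)]
    # Strategy 1: guess "male" for every visitor.
    total_always += score(['M'] * N, visitors)
    # Strategy 2: guess a 55/45 male/female split, in random order.
    guesses = ['M'] * 55 + ['F'] * 45
    random.shuffle(guesses)
    total_split += score(guesses, visitors)

print(total_always / TRIALS)   # ~55.0 correct per 100
print(total_split / TRIALS)    # ~50.5 correct per 100
```

Guessing "male" every time scores about 55 per 100; the 55/45 split scores only about 50.5 (0.55 × 0.55 + 0.45 × 0.45 per guess), so matching the population ratio is actually the worse strategy for this scoring rule.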

The analogy with seawater is problematic. After all, if I pick one customer, that customer won't be 55% male and 45% female. The salinity of small samples of seawater closely approximates the salinity of the sea (unless we get down to really small samples of a few molecules, at which point your principle breaks down). The make-up of a small sample from a population, by contrast, may depart markedly from the make-up of the population.

What's interesting is that once our samples get to be of even a moderate size, things become more like your seawater analogy. If we take a random sample of 100 from the population of customers, then, putting it roughly, there's a 95% chance that we'll find between 45 and 65 men. Our sample will have a margin of error of about 10%, in other words. If we take a sample of 1000, the margin of error shrinks: it's more like 3%. As our sample gets even larger, the margin of error gets smaller and smaller (although it shrinks not in proportion to the sample size, but in proportion to its square root).
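These figures can be checked with the normal approximation to the binomial (a sketch; 1.96 is the usual multiplier for a 95% interval):

```python
import math

def margin_of_error(n, p=0.55, z=1.96):
    # Half-width of the ~95% interval for a sample proportion,
    # using the normal approximation to the binomial: it shrinks
    # with the square root of the sample size, not the size itself.
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 1_000, 10_000, 1_000_000):
    print(f"n={n}: +/- {100 * margin_of_error(n):.2f}%")
```

For n = 100 this gives about 9.8% (so roughly 45 to 65 men out of 100), and for n = 1000 about 3.1%, matching the figures above; quadrupling the sample only halves the margin.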

The seawater case looks different superficially, but if we think of salinity as a matter of the fraction of molecules in the sample that are salt molecules, then we have to remember that even a drop of seawater contains a fantastically large number of molecules. What we have in the seawater case is a sample so large (in terms of numbers of molecules) that the margin of error isn't worth mentioning.

And so on individuals and groups: in the kind of case you describe, the "group fact"—the 55/45 ratio—is just the summation of the individual facts. There are roughly 550,000 men and 450,000 women in the customer base. In the problem you describe, we are assuming that noting the sex of a customer who comes to the site is analogous to randomly sampling from a list of the customers and noting the sex of the person picked. From there, it's just a matter of using probability and arithmetic (although once the sample gets big enough and limits start taking hold, calculus-based math makes life a lot simpler!)

In a chapter on regression to the mean (Thinking Fast and Slow) Daniel Kahneman resorts to "luck" as an explanation for why one professional golfer shoots a lower score in a round than his/her rivals given that the talent pool is reasonably even. While a "lucky" (or unlucky) bounce can impact one's score, I find luck as a concept a poor explanation for performance. What is the philosophical status of luck, and are there different flavors of luck depending upon the philosophy? Is luck to chance as evidence is to data?

Games typically involve a blend of things that a player can control and things s/he can't. A golfer can work on her backswing; she can't do anything about the moment-by-moment shifts in the wind and the fine-grained condition of the greens. Things like the winds and the lay of the greens or the outcome of a dice-roll are what we might call externalities. It's not that they have no explanations and it's certainly not that they have no bearing on who wins and who loses. But the players don't deserve any blame or credit for how they turned out. In that sense, they're matters of luck. Depending on the game, skilled players may have ways of compensating for them to some extent, but they can produce advantages and disadvantages that are outside the players' control.

With that in mind, I don't take Kahneman's appeal to "luck" to be an explanation. An explanation would call for specifics about conditions and causes, and the mere appeal to luck doesn't provide any of those. I take the appeal to luck to be a way of saying that any detailed explanation will not be primarily in terms of well-chosen actions and displays of skill.

So yes: "luck" as such isn't an explanation. It's a way of alluding to a lack of an explanation. But note: it's not a way of saying that there's no explanation. It's plausible that if we had a sufficiently detailed account of exactly what happened in the golf game, we'd see exactly why Jones ended up with the lowest score. It's a way of saying that there's no explanation of a certain sort—in this case, in terms of things that fall under the control of the players.

As for the philosophical status of luck, that's a big topic. We tend to use the word "luck" when there's something at stake, and when what's at stake is outside our control, whether or not the lack of control is a matter of chance events. There's a large literature on what's called moral luck, and it might be a place to start looking. The Stanford Encyclopedia of Philosophy has a survey article on the topic.

I'm going to ask a somewhat bizarre question concerning causality, probability, and the nature of belief, so bear with me, thanks! Suppose a craps player goes to two casinos in Macau, the first one architecturally built according to feng shui principles and the second one not. Feng shui is an ancient Chinese system of geomancy that modern psychologists tend to discredit. This craps player personally believes in feng shui himself, but only to a moderate extent. He frequents both casinos equally and bets exactly the same way every time, but he usually wins at the first casino and usually loses at the second casino. 1) Does this prove that feng shui is "real," at least for him? 2) Pragmatically, even if feng shui isn't "real" or cannot be proven to be real, isn't it advisable for him to stop going to the second casino? 3) Can psychology really influence probability involving human decisions?

Statistics could give evidence that something about one of the casinos makes it more likely that your gambler will win there. Feng shui could be the explanation, though it would be a funny sort of feng shui that only worked for some of the gamblers, and so if it is feng shui, the casino may not be in business long!

The more general question is whether there could be serious evidence that the gambler is more likely to win in one casino than the other, and the answer to that is yes. It might be feng shui, but other explanations, weird and mundane, would also be possible. (Maybe he's an unwitting participant in a psychology experiment, and the experimenters load the dice in his favor in one of the casinos.) Careful observation and experiment might even home in on the explanation, if there really is a stable phenomenon to be explained.

As for the pragmatic question, why not? If the evidence suggests that he's more likely to win in one casino than the other, he could go with the evidence without committing himself to an explanation.

I'm having a bit of trouble understanding your third question. If you mean "Can psychological factors influence the probabilities of outcomes of decisions?", then the answer is surely yes, but typically for mundane reasons. If I make a decision and I'm confident as I carry it out, for example, that may make it more likely that things will go well. On the other hand, psychological factors aren't the sort of thing we'd expect to influence dice. Could they?

We've now reached a question about parapsychology or something in the neighborhood. On the one hand, there doesn't seem to be much evidence for psychokinesis or other parapsychological phenomena. Furthermore, given our general knowledge about how the world works, it would be surprising if such things were real. But they could be real; no a priori argument can show otherwise. And if they were real, we could have good evidence to believe that they were. But, to keep to the casino theme, it's not the sort of thing you should bet on.

Recently, Nate Silver won acclaim by correctly predicting the electoral results for all fifty states. If one of Silver's predictions had failed, however, would that have shown that he was wrong? I mean, I take it that Silver's predictions amount to assignments of probability to different outcomes. Suppose that I claim that an ordinary coin has a 50% chance of landing head or tails. If a trial is then run in which the coin lands tails three times in a row, we wouldn't take this to mean that I was wrong. Along similar lines, then, would it not have been possible for literally all of Silver's predictions to have failed and yet still be correct?

Right, as Silver himself would be the first to agree. However, we might want to put it a bit differently. The projections could all be mistaken, but not because his methods or premises were incorrect. Here's a way to see the general point.

Suppose we consider 20 possible independent events, and suppose that for each, the "correct" probability that the event will happen is 95%. (I use shudder quotes because there's an interesting dispute about just what "correctness" comes to for probability claims, but it's a debate we can set aside here.) Then for each individual event, it would be reasonable to project that it would occur. But given the assumption that the events are independent, the probability is over 64% that at least one of the events won't occur, and there's a finite but tiny probability (about 1 divided by 10^26) that none of the events will occur. So it's possible for all the projections to be reasonable and all the probabilities that ground them "correct," and yet for some or all of the projections to fail. A statistical prediction/projection doesn't amount to saying that the projected event is bound to happen.
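The arithmetic is easy to verify (a sketch in Python):

```python
p = 0.95   # "correct" probability that each individual event occurs
n = 20     # number of independent events

p_all_occur = p ** n                    # about 0.358
p_at_least_one_fails = 1 - p_all_occur  # about 0.642 -- over 64%
p_none_occur = (1 - p) ** n             # about 9.5e-27, roughly 1 in 10**26

print(p_at_least_one_fails)
print(p_none_occur)
```

So with twenty independent 95% projections, a miss or two is not merely possible but more likely than not, even when every single projection was reasonable.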

In the case of Silver's projections, the details are more complicated. For example: he projected not just who would win each state, but by how much, and he attached margins of error to his projections. But the general point remains: since he was making statistical predictions rather than outright claims about what would happen, any number of his projections could have failed even though his methods were sound. On the other hand, if his projections had failed badly, that would have been good reason to think the problem was in the model rather than the caprice of the election gods. Such is our fallible lot.

Is there such a thing as coincidence? I mean, is it possible that something happens without any purpose or significance?

Suppose you and I are in the same room and we're bored. We start flipping coins. I flip twice; so do you. I get "Heads; Tails," so do you. Sounds like a meaningless coincidence to me. In fact, it would take a lot of argument to make the case that it was anything other than meaningless.

Surely what's just been described is possible, and so meaningless coincidences are possible. But surely it's also the sort of thing that's actually happened countless times, and so meaningless coincidences are more than just possible.

The more interesting question is whether anything has purpose or significance apart from the purpose or significance that creatures like us give it. Put another way, the question is whether there's any significance inherent in the universe itself. Many religious believers would say yes, though they would trace the meaning back to the intentions of God. Carl Jung, the Swiss psychologist, believed in meaningful coincidences that he called "synchronicity." His account of them (as I understand it) made a connection between the outer occurrences and our minds, but it was a connection that didn't amount merely to our imposing meaning on things. And certain sorts of magical views of the world also involve what might be counted as meaningful coincidences.

If there's anything that's common to views called "naturalistic," it's that there's no meaning in things apart from the meaning that derives from the beliefs, purposes, intentions, etc. of creatures with minds. But supposing naturalism is correct, why are there so many cases of strange coincidence?

The answer is that with enough chance events, the chances are very high that the unlikely will happen. We can make do with a simple illustration. Many people have idled time away flipping a coin repeatedly. For any given sequence of ten flips, the chance that all ten outcomes will be "heads" is only about one in a thousand. But untold thousands of people have made this experiment, and so the rules of probability alone make it virtually certain that some of them will see ten heads in a row.
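A couple of lines of arithmetic make the point (a sketch; the 10,000 figure is an arbitrary stand-in for "untold thousands"):

```python
p_ten_heads = 0.5 ** 10    # 1/1024: about one chance in a thousand

def chance_someone_succeeds(num_people):
    # Probability that at least one of num_people, each flipping a
    # fair coin ten times, sees ten heads in a row.
    return 1 - (1 - p_ten_heads) ** num_people

print(chance_someone_succeeds(1))        # ~0.001 for any one person
print(chance_someone_succeeds(10_000))   # ~0.9999 -- virtually certain
```

What is wildly improbable for any one person becomes all but guaranteed across enough people, which is the engine behind most "amazing" coincidences.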

More generally, with enough things happening, the chances are overwhelming that there will be lots of individually improbable coincidences, even though it may not be possible to say which ones in advance. The naturalist's claim is that the level of apparently meaningful coincidences is well within what we'd expect by chance alone, and although it's not easy to make that thought entirely precise, its spirit is plausible enough to be worth taking quite seriously, I think.

Let's say there is some crime committed and that only 5% of similar crimes are committed by someone like Person A (based on demographics, personality type, previous criminal record, etc.). If the police later find evidence suggesting that Person A is the perpetrator of a crime and that there is only a 10% chance that the evidence could exist if Person A is innocent, then does that mean there is a 90% chance that Person A is guilty? Or do we have to factor in the fact that there was only a 5% probability that A was guilty before the evidence was found? Thanks!

What we're trying to get to is the probability, given all the evidence, that A is guilty. Let H be the hypothesis that A is guilty. You're supposing that our initial probability for H is 5%, i.e., p(H) = .05. Then we get a piece of evidence – call it E – and the probability of E assuming that H is false is 10%, i.e., p(E/not-H) = .1. Your question: in light of E, how likely is H? What's p(H/E)?

We can't tell. We need another number: p(E/H). We need to know how likely the evidence is if A is guilty. And we can't infer that from p(E/not-H). Why not? Well, suppose the evidence is that the Oracle picked A's name out of a hat with 10 names, only one of which was A's. The chance of that if A is not guilty is 10%, but so is the chance if A is guilty (assuming Oracles don't really have special powers). In this case, the "evidence" is actually irrelevant.

The crucial question is this: what's the ratio of p(E/H) to p(E/not-H)? Intuitively, does H do a good job of explaining E? And knowing only one of p(E/H) or p(E/not-H) leaves that under-determined. So to answer your original question, p(E/H) = .1 does not imply p(H/E) = .9.

-------
Here's some technical detail. The relevant bit of math is something called Bayes' Theorem. Although the formula is a bit unintuitive, it's this:

p(H/E) = [p(H)p(E/H)]/[p(H)p(E/H) + p(not-H)p(E/not-H)]

So as you can see, p(H) does matter. Other things equal, the higher the prior probability, the higher the probability given the evidence. But the ratio of p(E/H) to p(E/not-H) is the thing to watch. If p(E/not-H) = .1, as you've assumed, then the highest this ratio can possibly be is 10. As such things go, that's not very high. In fact, if the prior probability of H is .05, as you assumed, then with p(E/not-H) = .1, the highest value possible for the hypothesis given this bit of evidence is only about .345, quite a bit lower than 90%. And even if the original probability that A did it were 50%, with p(E/not-H) = .1, we still couldn't get p(H/E) higher than about 91%.
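Plugging the numbers into the formula (a sketch; the helper function is my own):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' Theorem:
    # p(H/E) = p(H)p(E/H) / [p(H)p(E/H) + p(not-H)p(E/not-H)]
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# Best case for the hypothesis: the evidence is certain if A is
# guilty, i.e. p(E/H) = 1, with p(E/not-H) = .1 as assumed.
print(round(posterior(0.05, 1.0, 0.1), 3))   # 0.345 with the .05 prior
print(round(posterior(0.50, 1.0, 0.1), 3))   # 0.909 even with a 50/50 prior
print(round(posterior(0.05, 0.1, 0.1), 3))   # 0.05: the Oracle's draw
                                             # leaves the prior unchanged
```

The third line is the Oracle case: when p(E/H) equals p(E/not-H), the evidence does no work at all, no matter how "surprising" it looks.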

What's going on? Intuitively, the idea is that if there's a 10% chance of seeing this evidence even if A isn't guilty, then it's too likely that the evidence is, so to speak, a false positive for it to count very heavily.

Suppose that you had two bags, each with an infinite number of blue marbles. Suppose you also had another bag with an infinite number of red marbles. If you mixed those three bags, what are your odds of getting a red marble? Obviously this isn't a realistic experiment, but is it 1 in 3 or 50%?

I'd suggest that there needn't be a determinate answer without adding more detail. In particular, the notion of "mixing" the three collections would need to be spelled out. Suppose the "mixing" works this way: take 10 marbles from the red bag and one from each of the blue bags. Put them in an infinite vat and stir. Repeat ad infinitum. (We could imagine the first operation is performed in 1 minute, the second in half a minute, the third in a quarter of a minute…) The intuitive thought is that a "random" draw is then most likely to give you a red marble (10 chances out of 12).

This may seem contrived, but only because we have some other loose, unspecified idea of mixing that we're comparing it to. The point is simply that the problem, as stated, doesn't determine the answer.
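For what it's worth, the stipulated mixing procedure can be simulated (a sketch; the finite step and draw counts are stand-ins for the infinite process):

```python
import random

random.seed(1)

# Each mixing step adds 10 marbles from the red bag and one from
# each of the two blue bags, so after any number of steps the vat
# holds red and blue in a fixed 10:2 ratio.
vat = []
for _ in range(1_000):
    vat.extend(['red'] * 10 + ['blue'] * 2)

draws = [random.choice(vat) for _ in range(100_000)]
print(draws.count('red') / len(draws))   # ~0.833, i.e. 10 chances in 12
```

A different stipulated procedure (say, one marble from each bag per step) would pin the answer at 1 in 3 instead, which is exactly the point: the probability is fixed by the mixing procedure, not by the bare description of the bags.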

Suppose I agree with theists that "God exists" is a necessary proposition, and so is either a tautology or contradiction. That seems to indicate that the probability of "God exists" is either 1 or 0. Suppose also that I don't know which it is, but I find the evidential argument from evil convincing, and so rate the probability of "God exists" at, say, 0.2. But if the probability of "God exists" is either 1 or 0, then it can't be 0.2 - that would be like saying that "God exists" is a contingent proposition, which I've accepted it isn't. How then can I apply probabilistic reasoning to "God exists" at all? If I can, then how should I explain the apparent conflict?

Interesting points. I take it that the most reasonable reply for a defender of the ontological argument to make is to claim that Professor Smith's world is not in fact possible. If one can make a case for abstracta (properties or propositions that exist necessarily), then there cannot be a world where only a single pencil exists. For a good case for such a Platonic position, see Roderick Chisholm's Person and Object. R.M. Adams also has a good discussion of the difficulty of imagining / conceiving of God's non-existence. I take this up in a modest book, Philosophy of Religion: A Beginner's Guide (Oneworld Press, Oxford), and in more detail in a discussion of Hume and necessity in Evidence and Faith: Philosophy and Religion since the Seventeenth Century (Cambridge University Press).

I'd like to offer a rather different take on this than my co-panelist. Many theists don't think that "God exists" is a necessary proposition. However, some famously do. St. Anselm is the most well-known example, but he's not the only one. The contemporary philosopher Alvin Plantinga apparently does as well.

Now we can grant that it's not obviously a contradiction to say that the world contains only a single pencil, but people who think God exists necessarily may not think that metaphysical necessity is the same as logical necessity. If I understand Plantinga correctly, he doesn't think it's a contradiction to say "God doesn't exist," though he does think that God's existence is metaphysically necessary.

All of that is throat-clearing. We could make a similar point in a different way. Mathematical truths are necessary if true at all, or at least so we'll suppose. But it's famously hard to argue that mathematical truth is the same as logical truth. So the more interesting question is this:...

We all know coincidences happen. But at what point should a person who discovers one coincidence after another (numbers, names, colours, all linking together) turn and say: there must be more behind these coincidences, and I shall find out what it is all about?

There's no simple answer to this question, but there is a caution: both common experience and a good deal of psychological work suggest that we have a strong tendency to project patterns onto random events. We also tend to notice things that interest us and ignore things that don't. And remember that it is overwhelmingly probable that some improbable events or other will occur. A single run of ten heads in a row on flipping a fair coin has a chance of 1 in 1,024. But if lots of people perform the same experiment, it becomes nearly certain that someone will get 10 heads.

Still, some apparent coincidences do seem to call out for explanation. Without offering a full-blown story of how this should work, here are some thoughts. First, do you have a hypothesis in mind? Casting around blindly for an "explanation" may not get you very far. Second, would your hypothesis really make what you noticed that much less surprising? Or is what you noticed the sort of thing that might well have happened by chance anyway? Third, has your hypothesis been jerry-rigged to fit the data? If so, it's no surprise that the data "confirm" it. Fourth, would your hypothesis really explain the data? Saying that the "explanation" for the facts is magic or ESP doesn't really give us much insight. Finally, how wild is the hypothesis? A low "prior probability" for the hypothesis means that it has to make the data quite a bit more likely than the alternative before the data do much to confirm it. That's why conspiracy theories are usually not credible. Ordinary human screw-ups call for far fewer moving parts.
