Descartes sought certainty because he thought that if we know something with certainty, then it must be true. And he was right, if only because 'S knows that p' implies p, so that in 'We know with certainty that . . .' the phrase 'with certainty' is redundant; there is no such thing as uncertain knowledge. I suspect that the sense of your question may be Cartesian: is it the case that certainty implies truth? There are several concepts to sort out here: 'We know for sure, or for certain, or with certainty that . . .', 'I am certain (sure) that . . .', 'I feel certain, sure, that . . .', 'It is certain that . . .' (but not 'It is sure that . . .'). There is a very useful paper by G.E. Moore called "Certainty" that might be helpful here, which is sensitive to distinctions of this kind. Sean is right in his response above that psychological certainty or "feeling certain" may not be a mark of truth, though I wonder whether anyone has troubled to test the correlation empirically in humans, and whether it makes sense to think about testing it in animals. On the other hand it also seems correct that if something is indeed certain, e.g. that 7×9 = 63, then '7×9 = 63' is true.
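The point that 'S knows that p' implies p is what standard epistemic logic calls factivity (axiom T). A minimal sketch of the inference, using the conventional operator K for 'S knows that':

```latex
% Factivity (axiom T) of the knowledge operator K:
%   K\varphi \rightarrow \varphi
% From "S knows that p" we may therefore infer p:
\begin{align*}
  &1.\quad Kp                  && \text{(premise: S knows that } p\text{)}\\
  &2.\quad Kp \rightarrow p    && \text{(factivity, axiom T)}\\
  &3.\quad p                   && \text{(modus ponens, 1, 2)}
\end{align*}
```

This is why 'uncertain knowledge' is a non-starter: whatever is known is thereby true, so certainty adds nothing to the truth-claim itself.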
I read the question rather differently: can any amount of past and present evidence falsify a claim about the future, insofar as it still remains the future? Of course, past and present evidence can give us ample reason to doubt certain claims that might be made about the future: but could it ever demonstratively disprove such claims? I'm not at all sure that it could. An instance of an F that isn't G can falsify the proposition that all Fs are currently G, but it can't similarly falsify the proposition that all future Fs will be G. Current evidence tells us about what is currently true or false, and to project this onto the future for the purposes of falsification is as problematic -- no more so, but also no less so -- as projecting it onto the future for the purposes of verification. And, as David Hume showed us more than 250 years ago, there are genuine grounds for concern about the latter. The 'problem of induction' suggests that there is a certain logical circularity in any such attempt at projection, be it positive or negative.
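The asymmetry claimed here can be put in first-order terms. A sketch, with Fx and Gx as in the text and a hypothetical predicate Future(x) restricting attention to future instances:

```latex
% A presently observed F that is not G falsifies the present-tense
% generalization:
%   \exists x\,(Fx \wedge \neg Gx) \;\vdash\; \neg\,\forall x\,(Fx \rightarrow Gx)
% But it does not falsify the future-tense generalization, since the
% counterexample fails the restriction to future instances:
%   \exists x\,(Fx \wedge \neg Gx \wedge \neg\mathrm{Future}(x))
%     \;\nvdash\; \neg\,\forall x\,\bigl((Fx \wedge \mathrm{Future}(x)) \rightarrow Gx\bigr)
```

The present counterexample simply lies outside the scope of the future-restricted quantifier, which is why no amount of current evidence can demonstratively disprove the future-tense claim.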
But then, let's not forget how Karl Popper presented his doctrine of falsificationism. It was supposed to be an alternative to inductivism, one that didn't actually need to rely (at least not overtly) on any problematic assumptions concerning the uniformity of nature. So I don't think Popper would be especially concerned about the challenge that you raise. He could acknowledge that such a supposition isn't falsifiable -- and consequently would presumably conclude that it isn't scientific -- but he could then just carry on regardless. I'm not sure that, within the terms of his own programme, this would qualify as a problem at all.
If I read you correctly, your point is this: if you're prepared to assert P, you should be prepared to assert that you know that P. And the converse is even clearer: if you are willing to assert that you know that P, you're willing to assert that P is true. That's an interesting and important observation, but it doesn't show that the standard analysis of knowledge is circular. Suppose I'm prepared to assert that P. Do I actually know that P? That depends. Even if I'm prepared to make a sincere assertion -- and hence believe that P -- I might not really be justified or P might not actually be true. In either case, the classical analysis says, I don't actually know that P. The analysis of knowledge doesn't make any reference to what people are prepared to assert. On the contrary: it points out how there can be a gap between what we're prepared to assert and what we actually know.
We could turn this into a slogan: saying it's so is saying you know, but that doesn't mean you do.
I've been racking my brains over this one -- it's a tricksy little question! -- and I'm still not sure what the answer should be. Of course Nicholas Smith would be correct, if the question was about the proposition that I am being systematically deceived. But it isn't. I take it that the question is how we know that it is possible that I am being systematically deceived.
Admittedly, Descartes himself does ultimately conclude that this isn't even so much as possible: but he reaches this conclusion via a rather idiosyncratic and unconvincing argument, resting on the nature of God; and, in any case, even he acknowledges that it certainly does seem to be possible. He sets up his methodological scepticism in the First Meditation (as I'm sure you know), pointing to things like optical illusions, dreams, and the possibility of an evil demon. Many of the same points could be made about each of these arguments: but, for simplicity's sake, I shall just take the one about illusions.
So, for instance, he says that, when we look at a tower in the distance, we might take it to be round. But we then get closer, and we now find it to be square. So it turns out that we've been in error in at least one of our judgments, presumably the earlier one. But then how can we be so sure that we are not similarly in error in our current judgments too, and not only here but right across the board? The thing that is driving the argument is the fact that a certain object is appearing to be round, but the very same object is also appearing to be not-round. And it would be a contradiction for one and the same thing actually to be both round and not-round. (We certainly know that a priori.) So we can conclude that some of our perceptions or judgments must be false; and that's what opens the door to global scepticism. If some of them are false, then perhaps all of them are false.
But is it really that simple? The tower looked round at time t1, and it looks square at time t2. But maybe it was round at t1, and is square at t2. How can we be so sure that it hasn't simply changed with the passage of time? If we can't be sure of that, then we can't be sure that either of these judgments was erroneous after all. And, if we can't be sure that any of our judgments are erroneous, then it's not clear how we could know that any of them even could be erroneous. And, if we can't know that, then it seems that perhaps we can't know, after all, that it's possible that all of them could be erroneous together. If we can't establish the reality and (a fortiori) the possibility of error, then we'll have nothing to extrapolate globally to establish the possibility of systematic deception. Now, for my part, I do feel entirely confident that we can indeed know that error is possible. But what I'm much less confident about is precisely how we can know this. Simple sense-perception alone certainly won't be enough to establish it.
Well, that's about as far as I've got. We can only know that systematic deception is possible, if we can know that error is possible. That is to say, we can only know that all of our beliefs might be false, if we can know that some of them might be false. And we could know that some of them might be false, if we knew that some of them actually were false. But how do we know that? If I'm forced to give an answer, I guess I'd plump for 'empirical' over 'a priori'. But, ultimately, I just don't know what to think about this. Nice question!
It should be noted that most philosophers who are interested in skepticism aren't themselves skeptics: they see skepticism as raising a challenge that must be met by an adequate account of human knowledge, and insofar as they try to defuse skepticism, they manifest considerable skepticism about its truth. However, attention to ancient skepticism reveals a divide in views about skepticism: Pyrrhonian skeptics were skeptical about skepticism, because one aim of Pyrrhonism was to avoid dogmatism about any and all beliefs; Academic skeptics, by contrast, seem to have maintained that skepticism was true, and consequently were sometimes called 'negative dogmatists'. (I say that Academic skeptics seem to have been negative dogmatists because it is a matter of scholarly debate whether the Academics were indeed negative dogmatists, and also whether there were other negative dogmatists.) One deep question is whether the Pyrrhonian or the Academic has the more coherent attitude to skepticism: after all, how can one know that skepticism is true if skepticism is meant to undercut the very basis of knowledge itself? Consideration of this question, which, I think, admits of various answers, and was engaged with in antiquity, would, I think, go far towards illuminating the nature of skepticism itself.
Your question cannot be answered without some specification of what knowledge is--what counts as knowledge. This topic is extremely controversial among epistemologists. But I think one aspect of your question allows at least a part of an answer to it.
Epistemologists may not agree on the entire analysis of knowledge, but most agree that whatever is known must be true, and most agree that in order to know something you at least have to believe it. The real controversies tend to begin when epistemologists debate what is now often called the "warrant" condition, which is the purposely vague expression used to denote whatever else is needed for knowledge, other than true belief--or to put it slightly differently, whatever it is that distinguishes knowledge from other species of true belief.
Now think a little bit about the (relatively uncontroversial) belief condition. What does it mean to believe something? One thing belief is often supposed to include is a dispositional component. Part of what it means for me to believe there is a truck coming down the road towards me is that I am disposed to step off the road surface to get out of its way. If I am not so disposed (assuming I am not seeking to be killed or injured by the truck), then we might wonder if I really believe the truck is coming at me.
Your case tempts me to respond that even though the person has received the information that his father is dead, he does not yet believe it, since at least some of the dispositions in accordance with which we would expect him to act in certain ways appear not (yet) to be present. It is one thing to have (access to) certain information, and another thing actually to believe that information. Being in a state of denial seems to me to be an example of at least impaired cognitive function at the level of belief. One can't know something without believing it, at least in the dispositional sense. So in your case, it looks like it can't count as knowledge until, as you put it, it "hits him."
Whether or not this is really the correct diagnosis of the case, however, will depend upon just how much we build into our account of belief in terms of relevant dispositions. That seems to me a matter of likely controversy, and so it could be that another place to attack this case would be in terms of the warrant condition. As I said, there are lots and lots of different accounts of what warrant consists in (for just a few examples: being completely justified, having one's belief generated or sustained by reliable true-belief-forming processes, or having the belief produced by reliable cognitive processes that are functioning properly within an environment to which they are well suited). One might say that the person's justification cannot be complete unless and until the person recognizes all of the evidence as evidence, and perhaps the initial under-reaction shows that he has not yet achieved that level of justification. Or perhaps the person's cognitive functions are not fully adequate (do not count as functioning properly) until their representations of the fact to him are sufficient to qualify as "hitting him."
Just to muddy the waters a little further, it also seems quite possible to me that someone can know something at a different time than he or she manages to respond to what he or she knows at an emotional level. Just because I am stunned into a kind of non-response to something at first does not necessarily indicate a lack of knowledge; it might instead indicate a lack of readiness to respond or react to what I know at the emotional level. So I think this is also another way to see your case.
I guess if you want the gist of my response in a nutshell, it seems to me that the case is somewhat underdetermined, as presented, which is why it seems to me there are several reasonable responses that can be made to it.
In order to determine what role, if any, religion generally should play in knowledge about "how things are or came to be," it is essential first to know just what 'things' are at issue. For example, it seems to me that if the 'things' in question are truths about morality, then religion generally may well have a role to play; by contrast, it seems to me that if the 'things' in question are truths about the nature of the physical world, say, then it's not clear to me that religion has any role whatsoever to play in helping us to gain knowledge of such truths. (I write here not from any particular standpoint on the issue: indeed, even the great seventeenth-century French philosopher and theologian Nicolas Malebranche, who famously believed in the truth of occasionalism, the view that God is the only real cause in the universe, and, hence that all changes in the universe were effected by God's causal power, did not think that appeals to God were relevant in the context of giving scientific explanations. "One would make oneself ridiculous," Malebranche writes in his first, and to my mind, greatest and most philosophically significant work, The Search After Truth, "if one said, for example, that it is God who dries roads, or who freezes the ice in rivers. It must be said that air dries the earth, because....and that the air or the subtle matter freezes the river in winter, because in this time....In a word, the natural and particular cause of the effects in question must be given, if it is possible.")
Why mention the Holocaust example specifically? Any worries about the "certainty" of historical knowledge would equally apply to every single piece of historical knowledge. Of course, what makes the Holocaust example stand out is that it does get challenged -- by people who typically have deeper agendas -- so perhaps what you should be asking is this: whenever you read about any historical event, and whenever you find people challenging conventional historical accounts, can you distinguish what is driven by "agenda" and what is driven by actual consideration of the available "facts"? (An excellent general book on the subject is the recent book "Voodoo Histories", which is a study of various conspiracy theories (including Holocaust denial and others), trying to articulate how/when people with agendas choose to selectively apply ordinary standards of reason and evidence...)
Thank you for your nice question. We normally think of our beliefs as things that ought to be responsive to evidence, and only to evidence. So for instance, most of us would agree that it is not a proper reason for thinking that smoking is not harmful to my health, that it makes me feel better to think so. Rather, most people would probably criticize me for thinking something on the basis of what I want to be true rather than in light of how the world is. Again, we would probably criticize an adult (though perhaps not a child) for believing in the Tooth Fairy, if his or her reason for so believing was that doing so makes her feel better. After all, wishing something to be so doesn't usually make it so. (Possible exceptions to this rule have to do with our own behavior: wishing to go outside to enjoy the weather might induce me to go outside to enjoy the weather, but this sort of case seems far removed from the topic of your question.)
So wishful thinking seems to be what some philosophers might call an "epistemic failure"--a failure to use our minds properly in aid of finding out how the world is.
You might reply: Well, in the case of faith, doesn't such "wishful thinking" stand a chance of making our lives better? For instance, doesn't believing in God give our lives meaning and give us direction for how to act?
To this many epistemologists might reply: Even if it's in your best interest to believe something, this fact doesn't increase the chances that it is true. On the other hand, this practical argument for theism might show that it's in one's best interest to be a theist. But to do this, it will have to make a lot of assumptions about people. For instance, it will have to assume that being a theist is the only way, or the only feasible way, to achieve the comfort in question (such as giving our lives meaning, and so on). But that is a heavy assumption, and might make one wonder whether there might be another way of achieving this comfort (such as re-thinking the assumption that our lives have to have meaning) without buying the theism.
As you think through these issues, you might also consider how your question relates to Pascal's Wager.