Your question: "Is it moral?", can be asked about the conduct and the person. As you describe the case, the conduct is moral (i.e., morally above reproach), but the person arguably is not because he has no concern for the rights, needs and interests of other people. What does it matter, you ask, if the results are the same? Just think about living with someone who genuinely cares about you versus with someone who behaves the same way out of fear that, if she is not nice to you, she will be punished by losing out on the benefits of your mother's fortune. Or think about a whole world in which any consideration people show one another is motivated solely by a selfish concern over rewards and punishments. The value of human civilization cannot lie exclusively in right conduct -- robots could be programmed to produce that more reliably than human beings -- it must lie, in large part at least, in the nobility of human motivations.
Here are a few classics that shaped the debate when it was at its peak:
J.J.C. Smart and Bernard Williams: Utilitarianism: For and Against (Cambridge UP 1973).
Amartya Sen and Bernard Williams (eds.): Utilitarianism and Beyond (Cambridge UP 1982), esp. Rawls's essay.
Ronald Dworkin: “What is Equality? Part II: Equality of Resources,” in Philosophy and Public Affairs 10/4 (1981), 283-345 (also in Ronald Dworkin: Sovereign Virtue (Harvard UP 2000)). http://links.jstor.org/sici?sici=0048-3915%28198123%2910%3A4%3C283%3AWIEP2E%3E2.0.CO%3B2-3
G.A. Cohen: “Equality of What? On Welfare, Goods, and Capabilities,” in Martha Nussbaum and Amartya Sen (eds.): The Quality of Life (Oxford UP 1993).
Amartya Sen: “Evaluator Relativity and Consequential Evaluation,” in Philosophy and Public Affairs 12/2 (1983), 113-32. http://links.jstor.org/sici?sici=0048-3915%28198321%2912%3A2%3C113%3AERACE%3E2.0.CO%3B2-U
Amartya Sen: Choice, Welfare, and Measurement (Harvard UP 1997; first published 1982), chapters 4 and 16.
Amartya Sen: On Ethics and Economics (Blackwell 1987).
Well, with the symmetrical argument you could conclude the opposite:
If causing happiness is good and if life is in part happiness, then procreating is good.
Both conclusions seem inadequately supported. It matters how much suffering and how much happiness one's offspring is likely to face. And there are other valuable and disvaluable things besides happiness and suffering: knowledge, culture, art, science, sports and love may all be good things a future person will experience -- good even if they are unaccompanied by happiness. And there are also contributions this future person will make to the lives of others -- good and bad contributions. So the question whether it's bad to procreate requires a more complex weighing up of considerations than is suggested by your argument.
Not necessarily. The fact that the adult in question feels OK with the treatment she receives does not show that her feeling is based on a sound judgment. Not long ago, black citizens of the US were routinely treated as inferior creatures: they were scolded for drinking from the whites-only water fountain, for sitting in the front of the bus, and so on. Some blacks, deeply conditioned to feel inferior in a white society, may have felt OK with such abuse ("oh gosh, yes, I am so sorry, I didn't realize this is the whites-only water fountain"). But this hardly shows that there was no abuse going on when blacks were told off in the ways described.
We often do nothing in such situations without recognizing this failure to act as a choice, and perhaps even without really making a choice not to act (see Hilary Bok: Acting without Choosing). Bad faith is manifested not in doing nothing, but in the failure to acknowledge our responsibility for doing nothing. Whether or not we make a conscious choice, we have a choice: we are fully responsible for what we do, and we ought to face up to this responsibility -- or so I understand Camus' point here.
Doing nothing in the face of an impending bad/evil is sometimes the right thing to do (for example, when intervening would afford little protection to those under threat relative to the costs and dangers to which it would expose third parties) and sometimes at least permissible (for example, when the dangers and cost of intervention to oneself would be high relative to the protection one might afford to those under threat). If, in La Chute, Clamence would have run a serious risk of dying by jumping into the river after the suicidal woman, then his doing nothing would have been morally permissible. Again, on my understanding, what undermines Clamence is not his conduct but his later recognized failure to take responsibility for this conduct.
Philosophers often draw a distinction between acts and omissions and make this distinction morally significant: other things (particularly: what is at stake for the agent and those affected by her agency) being equal, it is worse wrongly to prefer one's own good to that of others in an action than in an omission. For example, taking two bagels from a blind man in order to save a dollar is worse than saving a dollar by refusing to buy a blind man two bagels (assuming that the two bagels are equally important to the blind man in both scenarios). While this seems quite right in many straightforward cases, it is notoriously hard to draw the distinction in a precise way while preserving its alleged moral significance (see Jonathan Bennett: The Act Itself). Cases like the ones you have in mind are difficult in this way. One can easily describe Clamence's conduct as an omission, adding perhaps that when he kept walking he did what he would have done if the woman had not been there. But one can also describe it as an action, pointing out that she might not have jumped had he not walked past her with such indifference to her palpable distress. Not responding to a fundraising commercial on the TV is a pretty clear case of an omission, but turning away a needy person is a pretty clear case of an action. This gets us to a different meaning (not in focus in Camus) of "doing nothing [sometimes!] means also doing something": in some cases, doing nothing is tantamount to an active refusal and thereby becomes more wrong, if it is wrong.
It is true that you often cannot know the outcome of alternative courses of conduct beforehand. But you can typically assign reasonably accurate probabilities, at least a short way into the future. We do this all the time when we make decisions -- between two holiday destinations, perhaps, or about whether to accept a job offer or have a child. Utilitarians ask us to do the same sort of thing but then to evaluate in terms of the happiness of all those affected (including oneself). We are to choose the course of conduct that we have good reason to believe will produce the highest probability-weighted expected happiness. The probability-weighted expected happiness produced by a particular course of conduct (C) is calculated this way: one identifies the possible outcomes of C, evaluates each outcome in terms of happiness, multiplies each outcome's happiness by that outcome's probability (conditional on one's having chosen C), then sums the products.
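The calculation just described can be sketched in a few lines of code. This is only an illustration; the outcome descriptions, happiness values, and probabilities are invented for the example, since utilitarians give no canonical numbers.

```python
# A minimal sketch of the probability-weighted expected-happiness
# calculation described above. All numbers are hypothetical.

def expected_happiness(outcomes):
    """Sum, over an action's possible outcomes, of
    (happiness of outcome) * (probability of outcome given the action)."""
    return sum(happiness * probability for happiness, probability in outcomes)

# Course A: a safe option with modest happiness; Course B: risky, high upside.
# Each entry is (happiness value, probability); probabilities sum to 1.
course_a = [(10, 0.9), (-5, 0.1)]
course_b = [(40, 0.3), (-10, 0.7)]

# A utilitarian agent would pick the course with the higher expected value.
best = max([course_a, course_b], key=expected_happiness)
```

Here course A's expected happiness (10·0.9 − 5·0.1 = 8.5) exceeds course B's (40·0.3 − 10·0.7 = 5), so the utilitarian would choose A despite B's higher possible payoff.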
You are right that one can do this with tolerable accuracy only if one confines attention so as to exclude most remote and indirect effects. But one can typically do this on the reasonable assumption that these remote and indirect effects will be about the same for the various candidate courses of action. To be sure, this may often be a false assumption. Still, if one makes this assumption consistently, then one will still do better in terms of happiness than one would if one were to choose randomly -- and this makes the assumption a reasonable one.
Admittedly, this is not very reassuring. Most intelligent utilitarians do a little better than the rest of us at promoting happiness, but they still do vastly worse than they could do with perfect foresight into the effects various courses of conduct would have in future decades, centuries, and millennia. But is this a problem with utilitarianism? One might deny this on the ground that there is no a priori assurance that what we morally ought to do is easy or achievable. It's very difficult for those who seriously work to end world poverty to succeed -- yet it does not follow from this discovery that there is no value in ending world poverty. Utilitarians are committed to their goal for reasons that are independent of how successful human beings can be in the pursuit of this goal.
If the choice is to impose the cost either on the infectors or on the infected, the former rule seems preferable because it gives suitable incentives to potential infectors to find out whether they have a contagious disease and, if so, to avoid infecting others. This reason might be overcome in special cases where the infectors are very much poorer than those they infect.
There is a third option, probably better, namely to cover such costs through universal health insurance. This saves the research and litigation expenses associated with determining who the infector is. One could then still pursue cases of gross negligence through the criminal justice system.
We might distinguish three levels of help by reference to the cost or risk to the helper. There is the help you are legally required to give, for instance under the "Good Samaritan" laws of your state. Then there is the help you are morally required to give, which would typically go beyond the first category. Finally there is the help you are not legally or morally required to give, help "beyond the call of duty" or supererogatory help.
The law typically requires only help that imposes little cost or risk on the helper. Standing safely on the deck of a boat, you may be legally required to toss a life preserver to a drowning swimmer, for example. Here the law will not excuse you if you fail to help on the ground that you believe the drowning swimmer to be a murderer. This is indeed not a judgment you should be making, but one you should leave to the trial process, as you say.
In regard to supererogatory help, on the other hand, you are surely free to discriminate. If trying to save a drowning swimmer requires that you jump into treacherous waters where you yourself might well drown, you are morally permitted not to run this risk -- and morally permitted to run it. This means that you may risk your life for someone you truly like or admire even while you would not do the same for someone you dislike or suspect of being a murderer.
This leaves the middle category: help that is morally but not legally required. Here I would think that what you are morally required to do would vary with the person in need. Suppose, for example, that you owe your life to Susan's supererogatory rescue, a few days ago, at considerable risk to herself. In this case, you are morally required, I would think, to be prepared to bear greater cost or risk to save her life than you would be required to bear for the sake of saving the life of a stranger. The same may be true if Susan had earlier risked her life to save not yours but that of someone else. This, too, makes her more deserving of rescue and thereby increases, at least slightly, the cost you ought to be willing to bear to save her life.
If deservingness indeed matters in this way, then undeservingness would seem to matter symmetrically. Suppose Slop did not make a morally required rescue effort for you when you were almost killed by a rip tide last week. In this case, it would seem that the cost or risk you ought to be prepared to bear for the sake of rescuing him is smaller than the cost or risk you would be morally required to bear for the sake of rescuing a stranger (and definitely smaller than the cost or risk you would be morally required to bear for the sake of rescuing Susan).
In the cases of Susan and Slop, their (un)deservingness is closely related to the decision you face. One might say then that the amount of cost or risk you are required to bear for the sake of rescuing Susan or Slop contains an element of reciprocity. Those who are more (less) altruistic in their rescue behavior deserve more (less) altruism from potential rescuers when they themselves are at risk of drowning.
Does the relevance of desert hold up when its basis is quite different and perhaps also distant in time? One problem facing an affirmative answer is that you are unlikely to know more than a tiny fraction of all the good and bad things some endangered swimmer has done. Still, you may have some rough overall judgment of the swimmer's character, and I would think that this judgment should influence you in deciding how much cost or risk you ought to be prepared to bear to rescue this person. The cost or risk you ought to be prepared to bear for the sake of saving the life of someone you, with good reason, believe to be a murderer is then less than the cost or risk you ought to be prepared to bear for the sake of saving the life of a stranger, which in turn is less again than the cost or risk you ought to be prepared to bear for the sake of rescuing someone you, with good reason, believe to be a benefactor of humanity.
The question of cost arises in different accounts of morality in different ways.
Consequentialist accounts may center on the moral imperative to act so as to make the outcome best. Here cost is factored in by considering how a candidate course of conduct will affect various people, including the agent. I shouldn't try to help a stranger, for example, if the cost to me and third parties of doing so is larger than the benefit to the stranger and other beneficiaries. And I should not help this stranger if doing so came at the expense ("opportunity cost") of something even better I might do instead.
Duty-focused moralities often say very little about cost. They may issue the moral imperative not to lie, for example, without addressing the cost that such abstention might impose on the agent and on others. Kant was famously taken to task for this by Benjamin Constant (see also Sartre's much later story "The Wall"). Some duty-focused moralists have addressed the question of cost by delimiting the relevant duties in such a way that excessive costs cannot arise. For example, one is assigned a duty "not to lie unless the harm the lie will prevent is very large and much larger than any harm the lie produces".
An interesting and severely understudied question is that of moral cost. What should a duty-focused morality say about an agent's compliance with her duties when such compliance foreseeably leads to a great deal of non-compliance by others with their duties? And, similarly, what should a virtues-focused morality say about an agent's devotion to developing her own moral excellences when this devotion foreseeably leads to a great deal of vice on the part of other people?
Moralities that command people to comply with simple moral imperatives regardless of consequences are not plausible in a world where such conduct can really make the heavens fall. Economists and psychologists can and do talk about costs, of course, and often in very illuminating ways. But they cannot solve the philosopher's task of formulating a plausible, conduct-guiding morality that is sensitive also to the remoter effects our conduct has on ourselves and others. Economists can model how a homo œconomicus would rationally take certain costs into account in a strategic environment, and psychologists can help us understand how people actually tend to do so. But all this still leaves us with the philosophical question of whether it is morally right to act like a homo œconomicus or to think, feel and act the way people generally tend to do -- the question of what we morally ought to do in situations where otherwise morally attractive conduct options are foreseeably associated with substantial moral or prudential costs for the agent and/or others.
I think it should be noted that Professor Pogge's reply invokes, or alludes to, the controversial Doctrine of Double Effect (or else something close to that doctrine). For an account of the doctrine and of some of the controversy surrounding it, see this SEP article. One problem for the doctrine emerges when we consider a reply that B2 might make: "I didn't intend the trolley to hurt the fat man. I intended only that he stop the trolley (although of course I foresaw that his being harmed would be a side-effect of his stopping the trolley). So I didn't intend harm any more than B1 did." B2's reply seems lame, no?