A doctor who changes his last name to a Jewish-sounding name in order to attract more patients is taking advantage of stereotypes about Jews and doctors in order to be more successful. This tends to reinforce the stereotype (unless the doctor is obviously a terrible doctor), and it thereby increases any prejudice against non-Jewish doctors. This is one reason why the change of name is, in my opinion, unethical. It is also unethical because it is deliberately deceptive (even though it does not involve explicitly telling a lie). Physicians are held to high standards of truth-telling because they are in positions of trust.
This argument seems to be one against tolerating gay athletes in locker rooms (not on sports teams). And if the argument is correct, we'd need a lot of locker rooms: two for avowed heterosexuals (with some cutoff for bisexual attraction) and one each for everyone else! I think it is far more comfortable and respectful for us simply to tolerate any discomfort we might feel undressing and showering in front of someone who might view us with sexual interest. Or perhaps those who experience the discomfort can have their own private locker rooms. In any case, the reason for having separate-sex locker rooms is not (merely?) to separate groups that are likely to be sexually attracted to one another; it may be in part a response to fears of rape (sexual violence). People don't get as upset about women being in men's locker rooms as they do about men being in women's locker rooms.
Students who live in dorms with co-ed bathrooms manage their various sexual attractions just fine.
This is a timely question, since medical residencies typically begin on July 1, so we will soon have some new MDs starting the learning curve! If we don't permit the inexperienced to treat patients we will not be able to train the next generation. To keep saving lives, then, we will have to tolerate some harm. Computer and other kinds of simulation used for training purposes can avoid some of this harm. Patients keep going to training hospitals, presumably because the learning curve harm is compensated for by the skilled supervision that residents get from attending physicians. But yes, physicians in training (and even those already qualified) do have to accept that they will cause some harm. A classic book to read about this is Charles Bosk's "Forgive and Remember."
I like your response very much--you are aiming to keep the maximum number of people alive by rotating time in the lifeboat. This is a consequentialist approach that demands selflessness from the people already on the lifeboat. They would have to agree to take their turn and jump back into the water. And it would have to be possible to get people in and out of the water without danger (which is probably unlikely).
Other approaches may also be relevant. For example, Flintheart should probably be the first to give up his place, since it is the duty of the captain to take care of the passengers. Unless his presence is necessary for navigating the small boats.
In ethical reasoning there is not usually one "right answer"; rather there are several reasonable answers that use moral reasoning.
Great question! At first blush, "self-plagiarism" seems absurd, like forging one's own signature or stealing from oneself. But we can imagine odd circumstances in which even these seemingly absurd acts might be attempted: imagine I have amnesia and have forgotten that I am Charles Taliaferro, thinking instead that my real name is John Doe; I go to a bank and pretend to be Charles Taliaferro, signing a check with that name, and I then break into Charles Taliaferro's apartment, take everything, and sell it on the black market under the name John Doe.
On inspection, though, self-plagiarism is actually less odd than the strange adventures of John Doe. It usually consists in re-using work you have published elsewhere, and it raises copyright concerns. Self-plagiarism occurs when, say, you have an essay published in The Journal of Philosophy but then use 90% of the article to form a new essay, with a new title, in a book published by, say, Princeton University Press, without crediting the original publisher or paying any copyright fees. While extensive re-use of one's own previously published writing (without citation or payment of fees) may be illegal or in breach of contract, it is more often frowned upon than subject to legal action. In fact, it is very difficult not to repeat points you have made elsewhere if you are advancing the same argument in multiple contexts, and this can easily lead to using the same language. For example, the prominent philosopher Alvin Plantinga has advanced an important argument that naturalism is self-refuting in many different debates, conferences, anthologies, journal articles, and books. Though I have not checked on this, I think it is highly unlikely that there is no repetition of phrases and language, and such repetition is neither undesirable (imagine he had to change the terms of his argument on each occasion!) nor dishonorable (in fact, it might be more honorable and helpful to both his critics and supporters if there were some constancy of language across these multiple publications).
NB: I must add that I am NOT claiming that Plantinga has done the above or is guilty of self-plagiarism; I use his case as a hypothetical one of a highly respected philosopher who has advanced a substantial position (which I happen to think is convincing) in more than one publication.
I think it is unethical for psychologists to lie to their clients about such test results. It would be best practice for a psychologist to ask their client why they want to take such a test and what they think the result means, as part of the process of consent for taking the test. Apparently this was not done in your case, and this is regrettable, since you seem to think that the test has great significance.
A well known book you might enjoy is Howard Gardner's "Frames of Mind" which discerns seven different kinds of intelligence.
The claim that something "symbolically supports tyranny..." is not a claim about the act in itself, but a claim about the meaning of the act. Your vegan friend may see troublesome meanings in the act of eating artificial bacon-flavored chips or wearing fake fur. But not everyone does, and there is much ambiguity and complexity about what things mean. To take another example, I have friends who will not marry because they think that "marrying symbolically supports an institution that oppresses women." I don't doubt that marriage has historically oppressed women, but I think marriage has multiple meanings (commitment and family, for example), and any decision whether to marry or not needs to take all these meanings into account. Back to the fake fur: Personally I prefer fake fur that looks fake, so that I don't make unnecessary enemies or set a bad example. Your friend is so passionate about veganism that he focuses on one set of meanings.
You do ask a big question, and an important one. I can offer a couple of thoughts to use in exploring the issues:
- happiness is not the only measure of human wellbeing, important though it may be. (life expectancy, increased knowledge, quality of life experiences, etc. also matter)
- there is no need to show that "overall" technological progress is good or bad. We don't need overall measures. We just need to assess each technology in its proposed context of use.
- you assume that it is a moral requirement to choose the job that can best advance human wellbeing. (A utilitarian might argue this, but many other moral theorists would say that it is praiseworthy but not morally required to do so)
You ask a good question that I have wondered about myself. The classic examples of immoral work in science are the Nazi experiments on human physiology and the Tuskegee syphilis study. Neither was up to current methodological standards, but both were acceptable science for their time. In a way it would be more convenient if these cases were bad science as well as immoral science, because then no questions would need to be asked about whether it is permissible to use the results. Perhaps it is difficult to acknowledge that science can be used successfully in ways that are immoral. But I think we learned this lesson with Hiroshima and Nagasaki.
You make an insightful observation. Perhaps one reason is that there is a close coincidence between lying (which is often although perhaps not always morally wrong) and telling falsehoods. Perhaps another is that we sometimes regard the search for the truth (in science or other fields) as morally praiseworthy, which might lead to thinking of falsehoods as improper conclusions to inquiry. In any case, I think you are correct to distinguish what philosophers call epistemic correctness from moral correctness.