Where can I find literature within philosophy on the "good judgment" that Miriam Solomon (June 5, 2014) describes as essential to philosophy? Is this "good judgment" something like the "tacit knowledge" explored by Michael Polanyi? If so, what is the current philosophical status of his work (as opposed to its status within cog-sci)?

There is a vast literature on reasoning and rationality that tries to understand what "good judgment" consists in. It includes Michael Polanyi's work, which is still very much respected among some philosophers, especially those influenced by Ludwig Wittgenstein. But the literature starts (in the Western tradition) with Plato and Aristotle's ideas about knowledge and continues through contemporary work on scientific methodology.

I'm sure that, for almost any position I take on a controversial political issue, there is an expert out there who has investigated it more than I have and, as a result, rejected my position -- or who *would* reject my position *if* they investigated it more thoroughly. (Take, for example, the question of whether Obamacare is good public policy.) This humbles me and makes it difficult for me to be fully confident in my conclusions and work up the motivation to fight for what seems like the right thing. More generally, careful reflection on how I could be wrong often removes or severely diminishes the passion I might have originally felt about a political issue. My reflection breeds a sort of apathy. Is that inappropriate? How do philosophers who passionately fight for political causes deal with the uncertainty that they could be wrong, or with the fact that there is (or could be) someone out there who is more of an expert *and* has the opposite view?

The quick answer to your question is that most people have more self-confidence--even arrogance--about their opinions than you seem to have, especially if they are "experts." So, they might be wrong, but they don't worry about it the way you do (as my husband the surgeon says about surgeons, "sometimes wrong but never in doubt"). A more thoughtful answer, which draws on epistemological ideas, is that so-called experts--just like non-experts--are susceptible to various kinds of bias, such as confirmation bias (evidence for one's position is weighed more heavily than evidence against it) and salience bias (one's personal experiences are weighed more heavily than experiences one has merely heard about). And so-called non-experts can in fact be more knowledgeable than so-called experts about their own experiences of, for example, what it feels like to be poor. So you shouldn't defer to the experts, although you can sometimes learn from them.

There is now a philosophical literature on "peer disagreement" which covers the question of what you are supposed to do if your "epistemic peers" (other experts if you are an expert) disagree with you on a topic.

What is the value of knowledge?

Knowledge may be valuable in itself, i.e. for its own sake. When you ask "What is the value of knowledge?" you may be asking what else it is valuable for. There is hardly any human activity that is not aided by relevant knowledge. Medicine and technology are the result of applying scientific knowledge to a vast range of human needs.

Suppose someone brings John a glass of tap water, which John watches being poured from an entirely normal tap. Yet suppose that the water from that particular tap was somehow laced with poison. When asked what the glass contains, John, not knowing of the poison, says "That's water." Let's put aside the issue of whether witnessing tap water being poured is sufficient grounds for knowledge that the substance is in fact tap water, and assume that, were the water not poisoned, John would have a justified true belief about the contents of the glass. Presented with the poisoned water, does John have knowledge about the contents of the glass? I ask because, normally, our tap water contains a great deal of things besides water, yet we would not intuitively say that calling the stuff that comes from taps "water" is incorrect. But if some of the stuff was poison, it suddenly seems that John's belief that the glass contains water is incorrect (despite, in a sense, being obviously true), because if he were to...

Questions should be understood contextually. In your story about John, we are led to assume that John is about to drink the contents of the glass, and not, for example, use it in a chemistry experiment requiring high levels of purity. The suggestion is that it is water and not, e.g., orange juice or beer. A small amount of harmless impurities doesn't make any difference to its drinkability-as-water. A tiny amount of cyanide, however, makes all the difference in the world to its drinkability-as-water. It's not the amount of impurity that matters; it's the difference the impurity makes to our intended use of the water. (This is a case that shows the pragmatic functions of language. Sometimes you miss things if you take language "too literally," i.e. devoid of context.)

There was something that I wanted so badly for so long. Now I have it, but I am not as excited as I thought I would be. How can we know what we want (our goal) in life?

Some recent papers by the psychologist Daniel Kahneman suggest that we are not very good at predicting what will make us happy. It is a good idea to read these to get a feel for human fallibility.

Philosophers often argue that reflecting rationally on our values and goals can lead us to pursue what we "really" want, and thereby lead to greater satisfaction. You might try this and see whether it helps.

Some Buddhists, and some psychologists, argue that pursuit of a goal is more exciting than achieving it. They suggest focussing on the activity rather than the desired result.

If a person does not believe P, where P is some proposition, is it fair to say that they then positively believe not-P?

Here's an example to help make the question concrete:

Let P be "It is raining."

I don't believe that it is raining. However, it does not follow that I believe that it is not raining. (I don't have any beliefs about whether it is raining at the moment.)

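One way to make the logical point precise is in the notation of doxastic logic, writing Bp for "I believe that p" (a rough sketch of the standard formalism, not anything the questioner supplied):

    ¬Bp    (I do not believe that it is raining)

does not entail

    B¬p    (I believe that it is not raining).

Not believing p is compatible with suspending judgment, i.e. with neither Bp nor B¬p holding; believing not-p is a further, positive doxastic commitment.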

Are there any other ways of arguing against the apparent abilities of Mediums other than by pointing towards the alternative naturalistic explanations for their 'results'? I'm getting very tired of having to provide answers to the 'well, how else do you explain X?' response (and pointing out that even if I can't explain X, it doesn't make X true by default). Perhaps there's a way of showing how the idea of disembodied souls is flawed in the first place, or of debunking a similar aspect of the background theory. I'm getting very concerned that my mother-in-law's fascination with the Mediums on TV and in the books she reads may lead to her wasting lots of money she can't really afford.

If Mediums did get "results," then it would be rational to go to them for advice, i.e. it would not be a waste of time!

So I think the thing to focus on is the results, and to ask whether the amazing pronouncements are just coincidences and/or wise, vague sayings. There's no shortcut to answering this--it is an empirical question. Remember, of course, that Mediums, TV and authors don't always report reliably.

It is well understood that we are prone to have certain "biases" in our perception of the world, which are caused by some form of highly relative sociocultural conditioning or another. With that in mind, how could we be sure that we can trust our perception of objective reality? Wouldn't that "complete perspective" always be out of reach, because with all our sophisticated science and knowledge we are still just human subjects experiencing the world as members of a particular set of social conditions? Also, what about the bias of human perception itself, compared to animal forms of perception, which might rely on completely different systems of space and time? Consider the fly that lives for only a day or two, or the giant squid that lives at extreme temperatures and pressures for 100 years or more. The point is that it seems like a truly comprehensive system for understanding and categorizing objective reality into workable concepts would have to account for the fact that we are limited to our experience as...

It is true that we can only see/interact/cognize as human beings do. But it does not follow that knowledge is generally "biased." Particular biases (which we can discover through cognitive psychology, social analysis, etc., and check for) may lead to specific faulty knowledge claims. We do our best: we check for the biases that we know humans make. Animals or extraterrestrials might see/interact/cognize differently. We can learn about them, and see how their knowledge might differ from ours. There is no perspectiveless point of view (what some philosophers call a "God's eye" point of view).
