In several answers on AskPhilosophers, philosophers say that some uttered words express emotions, feelings, sensations and the like (you always use the word "express"), and that this is not the same as some words saying or stating that such an emotion (etc.) occurred. So you draw a big distinction between expressing and saying (or perhaps stating). For instance, "ouch" expresses pain, while "I am feeling pain" states that such pain exists. Sometimes you say that expressing cannot be true or false, but statements can. It is very difficult for me to understand this difference. I understand that "ouch" is much more immediate than "I am feeling pain", and that "ouch" is slightly humorous, and there may be other differences, but basically these two sentences just say the same thing. They convey the same basic information and both can be used to give false information. Would you be so kind as to explain to me what the difference is between expressing and saying (stating) in cases where what is expressed can be...

A very interesting question touching on complicated territory! Probably the best response I can give is to recommend the Stanford Encyclopedia of Philosophy (SEP) entry on "Pragmatics," which is available online. I think you'll find it contains lots of information highly relevant to your question.

Could someone explain in layman's terms the difference between truth conditions and assertability conditions, and what is at stake between them? Thanks for your time.

Truth conditions are often held to be independent of assertability. Thus, the claims that 'snow is white' and that '6 is the smallest perfect number' are true regardless of whether anyone is warranted in asserting them. The reason why some philosophers might object to this picture is that it appears to open the door to a radical skepticism, e.g. it may be claimed that there are truths that elude our best cognitive powers. Such philosophers thus advance what may be called an epistemic understanding of truth that would make it incoherent to think there are truths that outstrip our warranted assertability. Although I am not a radical skeptic, I am inclined to think that a wide-ranging skepticism is at least coherent -- why limit truth to what we have (or ideally might have) justification in asserting?
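
As a quick aside on the arithmetical example (my own worked check, not part of the panelist's answer): a perfect number is a positive integer equal to the sum of its proper divisors, and the claim that 6 is the smallest one can be verified directly, whatever anyone is warranted in asserting:

$$
s(1)=0,\quad s(2)=1,\quad s(3)=1,\quad s(4)=1+2=3,\quad s(5)=1,\quad s(6)=1+2+3=6,
$$

where $s(n)$ is the sum of the proper divisors of $n$ (a label of my own choosing). Only 6 equals its own proper-divisor sum, so 6 is the smallest perfect number, and its being so does not depend on anyone's warrant for asserting it.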

You might find the work of Roger Trigg of interest in such matters, e.g. Reality at Risk.

Quine put forward several arguments against the analytic/synthetic distinction in the paper "Two Dogmas of Empiricism" (I have not read the paper myself), one of the arguments being that there is no non-circular definition of "analytic." While I take issue with Quine on that, I do not find it to be a problem, since I have no reason to think that circular definitions are a problem: definitions are ultimately circular (the definition of a word relies on the use of other words), so if circularity were disqualifying you would have to reject the use of language altogether (which is absurd, since you have to use language to come to that conclusion). Why are circular definitions bad definitions?

You should certainly read "Two Dogmas of Empiricism" -- it's one of the best-known papers in analytic philosophy and can be said to have set a large part of the agenda for Anglo-American philosophy since its publication in 1951. Better to read the paper itself, anyway, than to read things about it, as you evidently have.

Circularity doesn't itself play that much of a role in Quine's paper. His attack on the analytic-synthetic distinction is waged on two broad fronts (to oversimplify a bit), a logical one and a linguistic one. The logical one is largely implicit in "Two Dogmas" but gets more attention in some later writings of Quine's. It derives from Gödel's first incompleteness theorem, which showed that there can be (under certain conditions) true sentences in an axiomatically defined language not provable from its axioms (i.e. not "analytic" as that is understood by e.g. Frege). So if you want a distinction between sentences that are simply an artifact of the language you've chosen and those that actually convey some empirical information about the world, you'll need a criterion other than provability. That's hard, and no one has come up with anything nice and simple that applies across the board and not just in special cases. For Quine, that was reason enough to give up on the distinction altogether. Others (including scientists such as Einstein) thought the distinction was absolutely critical to science and weren't too concerned that it couldn't be pinned down by a precise criterion.
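
For readers who want the result being alluded to spelled out, here is one standard modern statement of Gödel's first incompleteness theorem (my paraphrase of the textbook formulation, not Quine's or Frege's wording): if $T$ is a consistent, effectively axiomatized theory containing a modest amount of elementary arithmetic, then there is a sentence $G_T$ of $T$'s language such that

$$
\mathbb{N} \models G_T \qquad\text{and yet}\qquad T \nvdash G_T ,
$$

i.e. $G_T$ is true of the natural numbers but not provable from $T$'s axioms, and so not "analytic" in the provable-from-the-axioms sense mentioned above.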

Quine's other attack was on a quasi-empirical front; in ordinary language, he claimed, you can't find any such analytic-synthetic distinction, you can't find a built-in "criterion of analyticity." (This also gets more attention in his book Word and Object a few years later.) It's a distinction that has to be imported into ordinary language from our constructed logical and mathematical languages. However, Rudolf Carnap, against whose ideas "Two Dogmas" was mainly directed, had no problem with that, and this part of Quine's critique no longer gets so much attention.

Your final question about circular definitions is hard to answer briefly. There are some philosophers who think that the point of philosophical analysis is not "reductive" analysis, whereby you analyse (or define) all concepts in terms of more basic ones and so get down to some foundation or small set of basic concepts (as in Bertrand Russell, as he's usually seen), but rather "connective" analysis, whereby we figure out how the various concepts we use in our language fit together. For those philosophers (Peter Strawson, for instance, who coined the term "connective analysis"), circularity in some wide sense is not a bad thing.

In most of science and mathematics, on the other hand, circularity is obviously a defect because a circular definition does no work. To say that table salt is composed of two elements, sodium and chlorine (each of which we know a lot about), in a certain electrochemical combination of units or atoms (one of each), is informative and lets you predict things about table salt that you couldn't otherwise predict, whereas to say that table salt is table salt, or that it is salty by virtue of having a tendency to saltiness (an extreme version of a circular definition), gets you nowhere and predicts nothing.

Hello. My question is: what makes a swear-word/curse/cuss offensive? I submitted to a friend that in order for a word to be offensive three criteria have to be met. 1) The speaker must utter the word with the intention to offend. 2) The speaker and hearer must both be aware of the background context of the word as an offensive word. 3) The hearer must hear the word and react, taking offence. The justification for this is that a word is just a sound and that many languages use sounds that in another language are curses. It is irrational to take offence at a sound if the speaker is ignorant of its vulgar connotations. Without a shared contextual understanding of a word's history as offensive, a speaker seeking to offend through uttering a word (without using other signs of contempt or emphasis) is just making a sound to the hearer, which has no offensive connotations to them. The hearer, upon hearing the word, reacts, consciously or unconsciously actively taking offence. A person intending to offend...

I'd suggest that we need to keep three things separate: 1) whether the word is offensive, 2) whether offense was intended, and 3) whether the hearer was offended. Since each of these can hold or fail independently of the others, there are 2 × 2 × 2 = 8 combinations, and all eight possibilities are real. To take the most relevant, a word might be offensive, and yet the person using it might not have intended to offend and the hearer might not be offended.

For example: suppose someone who's not a native speaker uses a deeply racist term to refer to someone. The speaker is not at all a racist and would be deeply mortified if she knew how the word is normally used. She intended no offense. But that's because she didn't know that the word is an offensive word.

The person she was speaking to, meanwhile, is a racist. The speaker doesn't know that; she's just met him. He's not offended, but only because of his racism. On the contrary: he thinks he's met a kindred spirit.

There's no mystery here. The word is offensive because of its history, its usual meaning, and the way people typically respond to it. None of that changes if the speaker is unaware of this or the hearer, for whatever reason, doesn't have the usual reaction.

You're right, of course, that if I come to learn that a speaker didn't realize the full connotations of his words, it might be unreasonable to hold on to my offense. But if I tell the speaker "You might want to know: that word is actually a very offensive one," I could be right even if in light of the full situation I'm not offended.

If something can’t be defined, can it exist? And vice versa?

Some things can be defined that cannot exist, such as "a square circle in two-dimensional space" or "2+2=1" -- and some things can be described that do not exist but could have existed or might come to exist (unicorns). And, I suggest, there may be indefinitely many things that exist for which we do not have any successful definition. "Consciousness" might be a candidate, insofar as some philosophers are right in thinking we may never have a good or at least problem-free definition.

As an aside, your question raises the need for a good definition of definitions. I will not attempt such a philosophy of definitions here, but you might check out the Stanford Encyclopedia entries bearing on philosophy of language for further, useful material. Paradoxically, if nothing can exist that cannot be defined, and we have no definition of being defined, we all might be in trouble.

Thinking further: I suspect you may be principally concerned with the problem of affirming that something (X) exists, and whether this affirmation is meaningful if we lack a definition of X. On the face of it, there would be a problem with someone claiming: "Call the reporters. There is something I will refer to as 'N,' but I have absolutely no idea or definition of what 'N' might be. It could be an animal or a number or a time of day, for all I know." Such a claim would be as bizarre as what we find in Alice in Wonderland. Even so, I suggest that we should distinguish claims about meaningful speech from claims about what does or does not exist. Even if we cannot make claims about what does or does not exist without (at least vague) definitions, it is another thing to claim that the only things that exist are things we can make meaningful claims about. Sadly, we can imagine the whole human species perishing from some force which we cannot comprehend (and thus cannot define). Since that is too grim a thought to end this reply on, let me change the example: we can imagine that cancer and depression might be eradicated by a force that we human beings cannot comprehend or define.

Does Quine's critique of the analytic-synthetic distinction also apply to circular definitions? For example: a 'bachelor' is an 'unmarried male' seems analytic, and 'bachelor' and 'unmarried male' are synonyms. But consider: 'condescension' means a 'patronizing' attitude. Of course, 'condescension' and 'patronizing' are defined in terms of each other. Are all definitions that are circular in this way still susceptible to Quine's critique of the analytic-synthetic distinction, because they trade on the synonymy of definiens and definiendum?

This question reflects what I think is a widespread conception of Quine's critique, which is that it applies to ordinary colloquial language. Quine actually went much further than that. He was fundamentally skeptical of synonymity as well, and thought he could cast doubt even on the idea that you could stipulate synonymity, by setting up, say, an axiom system or, on a less formal basis, local "meaning postulates." You can regiment all you like, but you can't control what becomes of your regimentations; the most eloquent recent articulation of this view, in endlessly fascinating scientific detail, is Mark Wilson's work (see esp. his book Wandering Significance). So the answer to the question is "yes."

Quine didn't think in the local "circularity" terms in which the question is posed; he considered all human knowledge, starting with the most elementary common-sense knowledge and reaching to the most abstract representations of theoretical physics, to be one gigantic reciprocally-supportive circle.

I think it's hard to argue with Quine's case where colloquial language is concerned, even if you take away the behaviorist viewpoint he brings to bear, most notoriously in his demand for a "behavioral criterion" of synonymity. But where stipulations are concerned I'm less convinced. Almost any contract, for instance, contains a list of defined terms, and these stipulated synonymities are universally upheld by courts in the sense that they are practically never questioned. The legal profession might thus be held, by Quine's criteria, to accept an analytic-synthetic distinction along with a robust form of stipulated synonymity, and indeed to require such a distinction as a constitutive tool of its practice, just as Einstein said, in his lecture on "Geometry and Experience," that it was constitutive of his discovery of relativity.

I am sometimes struck by how we use language in an exaggerated manner. We often say "That is SO GOOD!" when it is not that good; we say "it has been a pleasure to talk to you" simply out of convention, regardless of whether we derive any pleasure from the conversation. I am troubled by this because, first, when I hear people say those words I cannot help doubting their sincerity. Second, those words become devalued: when I want to express my genuine praise by saying "this is really good," it just sounds like what everybody else will say no matter what. So how should we view those uses of words?

If I'm writing a letter to someone I don't know very well, I might begin it "Dear _____" and end it "Yours truly." But nobody is under the slightest impression that the recipient really is dear to me, nor that I'm declaring any sort of fealty.

I said "nobody," but of course that's not quite right. Nobody who's even noddingly familiar with the conventions of letter writing will be confused, though someone from a very different culture might be. What someone means by using certain words isn't just a matter of what you find when you look the words up in a dictionary.

Or suppose I run into a nodding acquaintance by chance. I hug them and say "Good to see you." Is the hug an expression of intimacy? Am I really pleased to see this person? Maybe or maybe not, but at least in my part of the world, this is how people greet one another. I don't make judgments about people's overall sincerity based on interactions like this, because in following the conventions of polite greeting, sincerity isn't the issue.

Do conventions like this really undermine the usefulness of words like "good"? I'm not convinced. There are all kinds of contextual cues that help us figure out what people mean, and typically we pick up the cues more or less automatically. For example: if I'm having dinner at a mutually-agreed-on restaurant with a friend and he spontaneously says "This risotto is really very good!" it's a fair bet that he means it.

Is it always easy to tell? No. Are people sometimes insincere in social situations? Yes. Is this a bad thing? Not necessarily and certainly not always. We have to interact with people we like and people we don't like. I may not like John, but there may be no good reason to rub his nose in that fact. None of us likes being snubbed, and often there's nothing to be gained by putting our true feelings on display.

We use words to state facts, but we use words for many other things as well. Social conventions and forms of politeness do something important: they help us get along, sometimes by papering over differences. By and large, getting along is good. Often it's at least as important as saying exactly what we think.

I have a question about verificationism. As I understand it, verificationists criticise theists whose beliefs aren't verifiable. How would they respond to the following scenarios? (1) A theist determines her belief based on a single coin toss. It came up heads, thus verifying her belief in God. She went into the test accepting it could come out either way and saying she would genuinely disbelieve if it came out tails and genuinely believe if it came out heads. (2) She repeats this process every morning, and thus ends up believing on some days and not on others. Or, something different: (3) A particular believer believes Christ will return in 10,000 years. Thus his belief is meaningful and verifiable; one need only wait a very long time. Would they say he should remain in a suspension of belief? I have heard of the theory of eschatological verification; did verificationists disregard this too? On what grounds?

Verificationists typically say that for a claim to be meaningful it must be empirically testable. Tossing a coin might test claims about gravity, mechanics, or the symmetry of the coin, but it does not test an unrelated claim.

It is probably meaningful to believe that Christ will return in 10,000 years (so long as we're specific about what "Christ" and "return" mean), but that does not mean it is plausible.

In thinking about what it is rational to believe, we need to consider both meaningfulness and plausibility.
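
One way to make the first of these points precise, though it is my own gloss rather than anything the verificationists or the panelist say, is probabilistic. On the natural assumption that the outcome of the toss is independent of whether God exists, and writing $H$ for "God exists" and $E$ for "the coin lands heads", Bayes' theorem gives

$$
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\;=\; \frac{\tfrac{1}{2}\,P(H)}{\tfrac{1}{2}\,P(H) + \tfrac{1}{2}\,P(\neg H)} \;=\; P(H).
$$

Observing heads leaves the probability of $H$ exactly where it was, which is one way of spelling out why a coin toss can test claims about the coin (whose outcomes do depend on its properties) but not an unrelated theological claim.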

I'm grateful for Allen Stairs' response to question 5821, but he, like Richard Heck and Stephen Maitzen when answering question 5792, ASSUMES that words like "all" have the same meaning in everyday English as they have when used by logicians. That's what seems very strange to me. At least, everyday "all" is ambiguous. Professors Stairs, Heck and Maitzen believe that "all the strawberries he has" always means "all the strawberries he may have", and never "all the strawberries he does have". But look at the latter example ("does have"): you're still using the word "all", but here it is clearly said that he has some strawberries. Why can't that happen (in the right context) with "all the strawberries he has"? By the way, in several Romance languages, there is a difference between (e.g., in Portuguese) "todos os morangos que tem" (indicative) and "todos os morangos que tenha" (subjunctive). Both can be translated as "all the strawberries s/he has", but the first sentence indicates that he (or she) does have...

I'm not convinced that your expression "all the strawberries he does have" is a recognized way of disambiguating the expression that you say is ambiguous: "all the strawberries he has." When would we use the expression "all the strawberries he does have"? As far as I can see, only in special contexts such as this one: "He doesn't have all the strawberries in the county. But all the strawberries he does have are organic." In that example, "does" isn't used to signal the indicative mood; instead it's used merely to emphasize a contrast.

Nor am I convinced that "does" + infinitive always carries existential import (i.e., implies the existence of at least one thing satisfying the verb phrase). Consider:

(P) "All the intelligent extraterrestrials our galaxy does contain are extraterrestrials."

Again, P will sound awkward except in a context such as this:

(Q) "Our galaxy may not contain any intelligent extraterrestrials. But all the intelligent extraterrestrials our galaxy does contain are extraterrestrials."

Whether or not you believe our galaxy contains intelligent extraterrestrials, it would be wrong to deny the second sentence in Q, wouldn't it?
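
For what it's worth, the logician's reading at issue in this exchange can be written out explicitly; the predicate letters below are my own illustrative choices. On the standard rendering, "All the strawberries he has are organic" becomes a universally quantified conditional:

$$
\forall x\,\big((Sx \land Hx) \to Ox\big),
$$

where $Sx$ = "$x$ is a strawberry", $Hx$ = "he has $x$", and $Ox$ = "$x$ is organic". So read, the sentence is vacuously true if nothing satisfies $Sx \land Hx$, i.e. if he has no strawberries at all, which is also why the second sentence of Q above is true even if our galaxy contains no intelligent extraterrestrials. The reading on which "all" carries existential import amounts to adding a conjunct:

$$
\forall x\,\big((Sx \land Hx) \to Ox\big) \;\land\; \exists x\,(Sx \land Hx),
$$

and the disagreement in the question is, in effect, over which of these two forms everyday English "all the strawberries he has" expresses in a given context.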
