Truth conditions are often held to be independent of assertability. Thus, the claims 'snow is white' and '6 is the smallest perfect number' are true regardless of whether anyone is warranted in asserting them. The reason why some philosophers might object to this view is that it appears to open the door to a radical skepticism: it may be claimed that there are truths that elude our best cognitive powers. Such philosophers thus advance what may be called an epistemic understanding of truth, one that would make it incoherent to think there are truths that outstrip our warranted assertability. Although I am not a radical skeptic, I am inclined to think that a wide-ranging skepticism is at least coherent --why limit truth to what we have (or ideally might have) justification in asserting?
You might find the work of Roger Trigg of interest in such matters, e.g. Reality at Risk.
You should certainly read "Two Dogmas of Empiricism" -- it's one of the best-known papers in analytic philosophy and can be said to have set a large part of the agenda for Anglo-American philosophy since its publication in 1951. Better to read the paper itself, anyway, than to read things about it, as you evidently have.
Circularity doesn't itself play that much of a role in Quine's paper. His attack on the analytic-synthetic distinction is waged on two broad fronts (to oversimplify a bit), a logical one and a linguistic one. The logical one is largely implicit in "Two Dogmas" but gets more attention in some later writings of Quine's. It derives from Gödel's first incompleteness theorem, which showed that there can be (under certain conditions) true sentences in an axiomatically defined language not provable from its axioms (i.e. not "analytic" as that is understood by e.g. Frege). So if you want a distinction between sentences that are simply an artifact of the language you've chosen and those that actually convey some empirical information about the world, you'll need a criterion other than provability. That's hard, and no one has come up with anything nice and simple that applies across the board and not just in special cases. For Quine, that was reason enough to give up on the distinction altogether. Others (including scientists such as Einstein) thought the distinction was absolutely critical to science and weren't too concerned that it couldn't be pinned down by a precise criterion.
Quine's other attack was on a quasi-empirical front: in ordinary language, he claimed, you can't find any such analytic-synthetic distinction, you can't find a built-in "criterion of analyticity." (This also gets more attention in his book Word and Object a few years later.) It's a distinction that has to be imported into ordinary language from our constructed logical and mathematical languages. However, Rudolf Carnap, against whose ideas "Two Dogmas" was mainly directed, had no problem with that, and this part of Quine's critique no longer gets so much attention.
Your final question about circular definitions is hard to answer briefly. There are some philosophers who think that the point of philosophical analysis is not "reductive" analysis, whereby you analyse (or define) all concepts in terms of more basic ones and so get down to some foundation or small set of basic concepts (as in Bertrand Russell, as he's usually seen), but rather "connective" analysis, whereby we figure out how the various concepts we use in our language fit together. For those philosophers (Peter Strawson, for instance, who coined the term "connective analysis"), circularity in some wide sense is not a bad thing.
In most of science and mathematics, on the other hand, circularity is obviously a defect because a circular definition does no work. To say that table salt is composed of two elements, sodium and chlorine (each of which we know a lot about), in a certain electrochemical combination of units or atoms (one each), is informative and can predict certain things about table salt that you couldn't otherwise, whereas to say that table salt is table salt, or is salty by virtue of having a tendency to saltiness (extreme version of a circular definition) gets you nowhere and predicts nothing.
I'd suggest that we need to keep three things separate: 1) whether the word is offensive, 2) whether offense was intended, and 3) whether the hearer was offended. All eight possibilities are real. To take the most relevant, a word might be offensive, and yet the person using it might not have intended to offend and the hearer might not be offended.
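As a side note on the arithmetic: three independent yes/no questions yield 2 × 2 × 2 = 8 combinations. A minimal Python sketch (my own illustration, not part of the original argument) enumerates them:

```python
from itertools import product

# The three independent yes/no dimensions distinguished above.
dimensions = ["word is offensive", "offense was intended", "hearer was offended"]

# Every combination of truth values: 2 x 2 x 2 = 8 possibilities.
combos = list(product([True, False], repeat=3))
assert len(combos) == 8

for combo in combos:
    print(", ".join(
        f"{dim}: {'yes' if val else 'no'}"
        for dim, val in zip(dimensions, combo)
    ))
```

The case discussed next in the example is the triple (yes, no, no): an offensive word, no intent to offend, no offense taken.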
For example: suppose someone who's not a native speaker uses a deeply racist term to refer to someone. The speaker is not at all a racist and would be deeply mortified if she knew how the word is normally used. She intended no offense. But that's because she didn't know that the word is an offensive word.
The person she was speaking to, meanwhile, is a racist. The speaker doesn't know that; she's just met him. He's not offended, but only because of his racism. On the contrary: he thinks he's met a kindred spirit.
There's no mystery here. The word is offensive because of its history, its usual meaning, and the way people typically respond to it. None of that changes if the speaker is unaware of this or the hearer, for whatever reason, doesn't have the usual reaction.
You're right, of course, that if I come to learn that a speaker didn't realize the full connotations of his words, it might be unreasonable to hold on to my offense. But if I tell the speaker "You might want to know: that word is actually a very offensive one," I could be right even if in light of the full situation I'm not offended.
Some things can be defined that cannot exist, such as "a square circle in two-dimensional space" or "2+2=1" --and some things can be described that do not exist but could have existed or might come to exist (unicorns). And I suggest that there may be indefinitely many things that exist for which we do not have any successful definition. "Consciousness" might be a candidate, insofar as some philosophers are right in thinking we may never have a good or at least problem-free definition of it.
As an aside, your question raises the need for a good definition of definitions. I will not attempt such a philosophy of definitions here, but you might check out the Stanford Encyclopedia entries bearing on philosophy of language for further, useful material. Paradoxically, if nothing can exist that cannot be defined, and we have no definition of being defined, we all might be in trouble.
Thinking further: I suspect you may be principally concerned with the problem of affirming that something (X) exists, and whether this affirmation is meaningful if we lack a definition of X. On the face of it, there would be a problem with someone claiming: "Call the reporters. There is something I will refer to as 'N,' but I have absolutely no idea or definition of what 'N' might be. It could be an animal or number or time of day, for all I know." Such a claim would be as bizarre as what we find in Alice in Wonderland. Even so, I suggest that we should distinguish claims about meaningful speech from claims about what does or does not exist. Even if we cannot make claims about what does or does not exist without (at least vague) definitions, it is another thing to claim that there exist only things we can make meaningful claims about. Sadly, we can imagine the whole human species perishing from some force which we cannot comprehend (and thus cannot define). Since that is too grim a thought to end this reply with, let me change the example: we can imagine that cancer and depression might be eradicated by a force that we human beings cannot comprehend or define.
This question reflects what I think is a widespread conception of Quine's critique, which is that it applies to ordinary colloquial language. Quine actually went much further than that. He was fundamentally skeptical of synonymity as well, and thought he could cast doubt even on the idea that you could stipulate synonymity, by setting up, say, an axiom system or, on a less formal basis, local "meaning postulates." You can regiment all you like, but you can't control what becomes of your regimentations; the most eloquent recent articulation of this view, in endlessly fascinating scientific detail, is Mark Wilson's work (see esp. his book Wandering Significance). So the answer to the question is "yes."
Quine didn't think in the local "circularity" terms in which the question is posed; he considered all human knowledge, starting with the most elementary common-sense knowledge and reaching to the most abstract representations of theoretical physics, to be one gigantic reciprocally-supportive circle.
I think it's hard to argue with Quine's case where colloquial language is concerned, even if you take away the behaviorist viewpoint he brings to bear, most notoriously in his demand for a "behavioral criterion" of synonymity. But where stipulations are concerned I'm less convinced. Almost any contract, for instance, contains a list of defined terms, and these stipulated synonymities are universally upheld by courts in the sense that they are practically never questioned. The legal profession might thus be held, by Quine's criteria, to accept an analytic-synthetic distinction along with a robust form of stipulated synonymity, and indeed to require such a distinction as a constitutive tool of its practice, just as Einstein said, in his lecture on "Geometry and Experience," that it was constitutive of his discovery of relativity.
If I'm writing a letter to someone I don't know very well, I might begin it "Dear _____" and end it "Yours truly." But nobody is under the slightest impression that the recipient really is dear to me, nor that I'm declaring any sort of fealty.
I said "nobody," but of course that's not quite right. Nobody who's even noddingly familiar with the conventions of letter writing will be confused, though someone from a very different culture might be. What someone means by using certain words isn't just a matter of what you find when you look the words up in a dictionary.
Or suppose I run into a nodding acquaintance by chance. I hug them and say "Good to see you." Is the hug an expression of intimacy? Am I really pleased to see this person? Maybe or maybe not, but at least in my part of the world, this is how people greet one another. I don't make judgments about people's overall sincerity based on interactions like this, because in following the conventions of polite greeting, sincerity isn't the issue.
Do conventions like this really undermine the usefulness of words like "good"? I'm not convinced. There are all kinds of contextual cues that help us figure out what people mean, and typically we pick up the cues more or less automatically. For example: if I'm having dinner at a mutually-agreed-on restaurant with a friend and he spontaneously says "This risotto is really very good!" it's a fair bet that he means it.
Is it always easy to tell? No. Are people sometimes insincere in social situations? Yes. Is this a bad thing? Not necessarily and certainly not always. We have to interact with people we like and people we don't like. I may not like John, but there may be no good reason to rub his nose in that fact. None of us likes being snubbed, and often there's nothing to be gained by putting our true feelings on display.
We use words to state facts, but we use words for many other things as well. Social conventions and forms of politeness do something important: they help us get along, sometimes by papering over differences. By and large, getting along is good. Often it's at least as important as saying exactly what we think.
Verificationists typically say that for a claim to be meaningful it must be empirically testable. Tossing a coin might test claims about gravity, mechanics, or the symmetry of the coin, but it does not test an unrelated claim.
It is probably meaningful to believe that Christ will return in 10,000 years (so long as we're specific about what "Christ" and "return" mean) but that does not mean it is plausible.
In thinking about what is rational to believe we need to consider both meaningfulness and plausibility.
I'm not convinced that your expression "all the strawberries he does have" is a recognized way of disambiguating the expression that you say is ambiguous: "all the strawberries he has." When would we use the expression "all the strawberries he does have"? As far as I can see, only in special contexts such as this one: "He doesn't have all the strawberries in the county. But all the strawberries he does have are organic." In that example, "does" isn't used to signal the indicative mood; instead it's used merely to emphasize a contrast.
Nor am I convinced that "does" + infinitive always carries existential import (i.e., implies the existence of at least one thing satisfying the verb phrase). Consider:
(P) "All the intelligent extraterrestrials our galaxy does contain are extraterrestrials."
Again, P will sound awkward except in a context such as this:
(Q) "Our galaxy may not contain any intelligent extraterrestrials. But all the intelligent extraterrestrials our galaxy does contain are extraterrestrials."
Whether or not you believe our galaxy contains intelligent extraterrestrials, it would be wrong to deny the second sentence in Q, wouldn't it?