My answer is a little different from Oliver's. Why do so many scholars and intellectuals think that language is necessary for thought? Answer: Because it really is easier to think about definite rather than indefinite things. But indefinite and formless things also have to be thought about. It takes more of an effort, of course, to think in a pathfinding sort of way about something new, and one may or may not be thinking "in" language, whatever that means (muttering to oneself, sub-vocally?). If one is trying to come to an understanding of some hard and new logical or mathematical matter, it may be more like shaping forms in one's mind, and then moving them, and less like chattering in French. If one insists on calling "shaping forms", or whatever the metaphor is, "a kind of language", then of course the claim is drained of any content, and with that of any interest. People often say that mathematics is a language, or a "language", something like a language. But it has a function and a status very different from those of a language.
I don't see what is so good about brevity in language. What is wrong with lots of synonyms? You then get to choose which word to use. Perhaps it seems that it does not matter, since the choice is between equivalents. Well, they may mean the same thing but they don't sound the same or look the same.
It is worth trying to avoid a Gradgrind theory of semantics!
I really dislike doing this, but variations on your questions have been asked before, and some good answers put up. Please see:
Other questions and answers pertain to your broader question, which is not about works of fiction, but about 'speakers and writers' more generally.
But I'll add, re your nicely absurd J. K. Rowling interpretation, that even this would have SOME bearing on how we read the Harry Potter books. We might suspect that Rowling was clinically insane, and scour the books for further evidence; we might suspect that she was a prankster, and again scour the books looking for meta-fictional jokes; we might suspect she was writing a time-travelling cyborg novel and looking to promote it, and again we might then return to Harry Potter, using it to help us imagine what the new novel would be like. The point is, the author clearly has some connection to the book(s). Either, then, we ENTIRELY discount the author when interpreting -- 'moderate, plausible interpretations' regardless -- or we do not -- in which case anything the author says could in principle serve as evidence.
Sure it is. Obviously, one would have to defend any such claim with specific examples in mind, but here's one that is now famous among philosophers.
For a long time, people made and used decorative items from "jade." But then, chemical analysis showed that what people were calling "jade" was actually two distinct materials, which are now called "jadeite" and "nephrite." Both are still generically called "jade," but sophisticated buyers now know well that there are differences between these two materials, and these differences may have an impact on the value of artifacts made from each material.
Here's the point: a language that simply has the term "jade" in it will not be as effective for describing the world as a language that has both "jadeite" and "nephrite" in it. So there's your answer.
As someone who teaches ancient Greek philosophy in translation (almost all the time, at any rate), I have worried a lot about questions like yours. I have also been a translator of some of the texts I and others teach, and so I have also encountered the problem from that side, too. It's a thorny one, for sure. The easiest answer is the purist one: translations are simply never adequate. But in the end, I also think this is far too easy an answer, to the point of actually being worthless.
Here's why: What happens when some student decides he or she really wants to avoid the pitfalls of working from someone else's translation? Well, he or she must learn the original language. OK, good choice. But wait: do the teachers of that language themselves somehow manage to avoid the cultural and social aspects of the culture(s) of teacher and student so completely or effectively that the process of learning the new language is not itself just as likely to continue whatever misunderstandings the student was trying to avoid? I hope you see the conundrum: learning a different language (not one's native language, in other words) itself generally takes place within a social and cultural context other than the one native to the language learned. This is why teachers of modern languages emphasize in-culture learning as something that is very important to mastery. But such opportunities don't exist for "dead" languages like Greek or Latin, or even the older versions of still-living languages. (I'm assuming that time machines don't exist, of course!)
So...some of the problems you are worried about are simply not removed by learning the original languages. But here's the deal: Good translators know this very well, and when they provide their translations, one of the challenges they are alert to is that of cultural distance.
There is a great quote from Aristotle that I love to use with my students to make this very point. I will give it (for obvious reasons) in translation. In Nicomachean Ethics Book I chapter 7, Aristotle says (in Martin Ostwald's translation): "To call happiness the highest good is perhaps somewhat trite." The Greek word translated as "happiness" here (now in transliteration) is "eudaimonia." As I say to my students, forget "happiness" for a minute and just think about what word we could put into the blank where it now appears--"to call ________ the highest good is perhaps somewhat trite" and make the sentence something true in English, since Aristotle thought that what he was saying (in Greek to a Greek-speaking audience) was so obvious as to be "trite." There is no word that will make the sentence true in English, I claim, because English speakers do not have a shared common view about what word they would apply to "the highest good." But the Greeks, it seemed--though they may have had some disagreements about how best to understand or analyze more closely what the word meant--did have a word for this that was commonly shared and accepted as appropriate. So that really compels recognition that there is a cultural difference at work here. So what do I do about this, as a teacher, or as a translator? The answer is, alert to the problem, I at least put in a footnote or find some way to add explanation and words of caution about my own translation decisions and how a simple substitution of English for Greek creates potential misunderstanding.
But once I have done this song-and-dance, I frankly do not see why my own students, working from translation, are not just as alert to the questions that apply to this part of the original text (for us) as are those who have learned the original language. In other words, alert and well-assisted users of translation are in a much better position than what I earlier called the purist view seems to acknowledge.
I strongly doubt that all definitions are human-made. Given the staggering number of stars that astronomers say exist, it seems highly likely that intelligent life has arisen in at least one other place -- intelligent life capable of creating languages and capable of creating explicit definitions for at least some of the items in those languages. All the definitions we currently know of are human-made, but the region of spacetime we've sampled is exceedingly small compared to what's out there.
It is a rhetorical flourish, designed to show how open minded and liberal one is, but as you suggest, it is really equivalent to saying there are no rules. Of course, if there are no rules about anything then one wonders how the statement could be understood, since presumably it depends on rules of grammar.
The most obvious reason why counterfactual talk is taken seriously by philosophers is that it's virtually impossible to avoid it. We constantly find ourselves asking -- for good reason -- what would happen in certain circumstances, and so understanding more deeply what that sort of talk might amount to seems to be a reasonable project.
You offer a dilemma. We consider a counterfactual "If A were the case, then C would be the case." You then give us a choice between determinism and indeterminism. So suppose determinism is true. Then even if 'A' is false as things are, the deterministic story you're imagining can still be applied in a hypothetical case in which A is true. After all, we do that sort of thing all the time when we solve physics problems! If the result of applying the theory is that C also turns out to be true, then it's true as things actually are that if 'A' were true, 'C' would be true as well. Why is that vacuous? It's certainly not trivial; otherwise physics itself would be trivial.
On the other hand, if assuming 'A' rules out 'C,' then it's false as things actually are that if 'A' were true, 'C' would also be true. That's not vacuous, and it doesn't make the counterfactual a contradiction. Keep in mind: the laws of nature are contingent truths.
But in fact, we're over-simplifying. Suppose the world is deterministic. Suppose Johnny is about to strike a match. Will it light? Our two assumptions don't answer the question. Whether the match would light depends not just on the laws but also on the background conditions, as you're aware. But notice: when I say "If Johnny were to strike a match, it would light" I'm saying (on a Lewis/Stalnaker-type account) that in the non-actual situations that most closely resemble how things actually are except that Johnny strikes a match, the match lights. That's something I could well be wrong about, or right about, consistently with determinism and with the actual laws of the world. Whether Johnny's match lights in the nearest possible situations where he strikes isn't just obvious. It depends (among other things) on all sorts of contingent facts about the actual world, and these are facts about which I might well be mistaken. The point of spelling out truth conditions is to give an account of what being right or wrong would amount to.
Things don't change if we consider indeterministic worlds. One reason is that even if things aren't fully deterministic, there would still be true counterfactuals. Some aren't so interesting. For example: if I were 6' tall, I'd be over 5'10". That's true, even if it's true as a matter of logic/mathematics. Others wouldn't have to be so trivial. There could be cases where strict causal relations hold even if not all events have strict causes. But suppose everything is, so to speak, loose and separate. Then it might be that all counterfactuals that aren't true as a matter of logic or math are false. (False, by the way; not indeterminate.) That would be a big deal, but it wouldn't make the non-logical counterfactuals vacuous and it wouldn't make them contradictory. It would just make them false. It would also leave us with a lot of true "might"-counterfactuals. For example: if Johnny were to strike the match, it might light, and it might not. Lewis's account spells out truth conditions for "might" counterfactuals, and also allows us to state truth conditions for "would" counterfactuals in terms of "might." From "If it were the case that A then it might be the case that not-C," it follows on Lewis's account that "If it were the case that A then it would be the case that C" is false.
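The Lewis-style truth conditions sketched above can be made concrete with a toy model. Everything here is invented for illustration: the account itself uses a qualitative similarity ordering on worlds, not the numeric "distances" used below, and the worlds and facts are made up. The point is just to show how "would" quantifies universally, and "might" existentially, over the closest antecedent-worlds, so that a true "might not" defeats the corresponding "would."

```python
# A toy model of Lewis-style counterfactual semantics over finitely many
# possible worlds. Worlds, facts, and numeric distances are all invented
# for illustration; Lewis's own account uses a similarity ordering.

# Each world is a dict of atomic facts; "dist" = dissimilarity from actuality.
worlds = [
    {"strike": False, "light": False, "dist": 0},  # the actual world
    {"strike": True,  "light": True,  "dist": 1},  # a close world: match lights
    {"strike": True,  "light": False, "dist": 1},  # equally close: it doesn't
    {"strike": True,  "light": False, "dist": 2},  # a more remote strike-world
]

def closest(antecedent):
    """The antecedent-worlds most similar to the actual world."""
    a_worlds = [w for w in worlds if antecedent(w)]
    if not a_worlds:
        return []
    best = min(w["dist"] for w in a_worlds)
    return [w for w in a_worlds if w["dist"] == best]

def would(antecedent, consequent):
    """'If A were the case, C would be': C holds at ALL closest A-worlds."""
    return all(consequent(w) for w in closest(antecedent))

def might(antecedent, consequent):
    """'If A were the case, C might be': C holds at SOME closest A-world."""
    return any(consequent(w) for w in closest(antecedent))

strike = lambda w: w["strike"]
light = lambda w: w["light"]

# Two equally close strike-worlds disagree about whether the match lights:
print(would(strike, light))                   # False
print(might(strike, light))                   # True
print(might(strike, lambda w: not light(w)))  # True: this defeats the "would"
```

Note how the last line illustrates the duality mentioned above: since "if Johnny were to strike the match, it might not light" comes out true, "if Johnny were to strike the match, it would light" comes out false.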
As for rigor: the everyday use of counterfactuals may lack rigor in various sorts of ways, but this isn't as bad as it might sound. The everyday use of language in general lacks various sorts of rigor, but that doesn't make the study of semantics pointless. And it's also worth keeping in mind: Lewis saw it as a virtue of his theory that it can take straightforward account of certain kinds of lack of rigor. You say there's no one answer to questions about which possible worlds are nearest? Lewis would agree. He'd point out, however, that once you're settled on the criterion of closeness that fits your purposes, you can apply his apparatus.
A closing thought: suppose the reply to my comments is that I still haven't addressed the issue about applying rigor to the non-rigorous. (It's not clear to me that your original worry amounts to this, but no matter.) Even to apply Lewis's apparatus contextually goes beyond anything we can do with absolute rigor, or so it could be argued. But now the criticism proves too much. We're almost never in a position to apply physics (or any other science, for that matter) with the sort of rigor that criticism has in mind.
Theories in philosophy are often like theories in science, or so I'd suggest: they're more or less useful intellectual tools. My own take on what Lewis and Stalnaker have bequeathed us is that this intellectual tool has more than proved its usefulness. That's not to say it's beyond criticism or will never be replaced. But it's a considerable accomplishment.
You might find it useful to think of a question as a set of propositions. E.g. 'Is Paris the capital of France?' would correspond to the propositions: Paris is the capital of France and It is not the case that Paris is the capital of France. 'What is the capital of France?' would correspond to a large set of propositions of the form x is the capital of France. An answer to a question would be a proposition that rules out some or all of the propositions that make up the question. A request or command, in the abstract, could just be a proposition of the same kind as that expressed by a declarative: 'Shut the door' addressed to individual x would just be the proposition that x will shut the door. The idea of request or command would come in at the level of the language expressing the proposition rather than the kind of proposition expressed. For example, one could think of the meaning of 'Shut the door' in terms of fulfillment conditions rather than truth conditions.
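The picture above can be sketched in a few lines of code. This is only an illustrative toy, with invented worlds and a hypothetical `rules_out` helper: propositions are modeled as sets of possible worlds, a question as a set of such propositions, and an answer as a proposition that rules out (is incompatible with) some of the question's propositions.

```python
# Toy sketch: propositions as sets of worlds, a question as a set of
# propositions, answerhood as ruling propositions out. The three worlds
# and the helper names are invented for illustration.

worlds = {"w1", "w2", "w3"}  # w1: Paris is capital; w2: Lyon; w3: Marseille

# A proposition = the set of worlds where it is true.
capital = {
    "Paris is the capital of France":     {"w1"},
    "Lyon is the capital of France":      {"w2"},
    "Marseille is the capital of France": {"w3"},
}

# 'What is the capital of France?' as the set of those propositions.
question = {frozenset(p) for p in capital.values()}

def rules_out(answer, question):
    """The question's propositions that are incompatible with the answer."""
    return {p for p in question if not (answer & p)}

# The answer 'Paris is the capital of France' rules out the other two.
answer = capital["Paris is the capital of France"]
print(len(rules_out(answer, question)))  # 2
```

On this sketch a complete answer is one that rules out every proposition in the question except those it is compatible with, which matches the idea that an answer is a proposition ruling out "some or all" of the question's members.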