
Why should movies get the science right?

Why should movies get the science right? I have long heard that some or many sci-fi movies get the science wrong, and I just sit there thinking, "Well, what's wrong with that?" I've managed to construct a few bad reasons as to why they should get it right, but most of these are somewhere along the lines of "it might mislead people." Your help will be much appreciated.

I don't think there's any general injunction about getting the science right, but sometimes getting it wrong can be a distraction. One example that's been discussed by various critics comes from Lord of the Flies. Piggy's glasses are used to focus sunlight and start a fire. But Piggy is nearsighted; his lenses would be concave rather than convex and couldn't be used to start a fire. (Thanks to John Holliday for this example, which he discusses in his dissertation.) Many readers won't realize the problem, but the glasses and Piggy's nearsightedness aren't just an incidental plot element. This is the sort of detail that Golding could have gotten right, and once you know that it's wrong, you may never be able to read those scenes in the same way.
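
To make the optics concrete, here is a minimal sketch (my addition, not part of the answer above) using the thin-lens relation 1/f = 1/d_o + 1/d_i: with the sun effectively at infinite distance, the rays meet at the focal length f, and only a lens with positive f (a convex, converging lens) produces a real focus that can concentrate heat. The function names and the plus-or-minus 3 diopter prescriptions are illustrative assumptions.

    # Sketch only: why a nearsighted prescription (concave, diverging lens)
    # cannot start a fire, while a convex lens can. Power values are made up.

    def focal_length_m(power_diopters: float) -> float:
        """Focal length in metres from lens power in diopters (f = 1/P)."""
        return 1.0 / power_diopters

    def can_focus_sunlight(power_diopters: float) -> bool:
        """With the object (the sun) effectively at infinity, the image forms
        at the focal length; only a positive focal length gives a real focus."""
        return focal_length_m(power_diopters) > 0

    print(can_focus_sunlight(+3.0))   # True: convex lens, real focus about 0.33 m away
    print(can_focus_sunlight(-3.0))   # False: concave lens (myopia), rays diverge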

Needless to say, this doesn't show that getting the science right always matters. It surely doesn't. It's also plausible that these things will be matters of degree. The more esoteric the bit of science, and the less central to the story, the less it's likely to matter whether the author gets it right. Also, if we couldn't reasonably expect the author to get it right (say, because the relevant bits of science weren't known when the story was written), we will be more forgiving, though even then we may need to make an effort not to let ourselves be jarred by the inaccuracy.

We could add that it's not just science that matters. I remember as a boy reading a Hardy Boys story in which part of the action took place in eastern Canada, in "St. John's, New Brunswick." This annoyed and distracted me; there is no St. John's, New Brunswick. There is a St. John's, Newfoundland. There is a Saint John, New Brunswick (and yes, the spelling matters). The author could easily have gotten this detail right with a minimum of research. The story, of course, is a fiction. But like most stories, it's a fiction intended to bear a certain relationship to the real world. Imagine, for instance, an author who set a scene in Faneuil Hall in Boston, Connecticut. I'd need a pretty good reason to forgive the author for that oversight. I'd also need a pretty good reason to forgive an author who wrote a story with a physics professor character who said that electrons are bosons.

So even though fictions are, well, fictional, getting the facts right can make an aesthetic difference, and scientific facts can be among the ones that matter.


What insight can babies in scientific experiments provide philosophy?

What insight can babies in scientific experiments provide philosophy? If we really are born with blank slates, how does that explain why many babies will choose to look and gesture at the side-by-side photo of the model instead of the photo of the grandma? I really think philosophy alone will answer this, rather than neuroscience.

I don't have a clear fix on the question, but insofar as I do, I don't see how philosophy alone could answer it. You seem to be saying that there's a real-world, repeatable phenomenon: babies in certain situations behave this way rather than that. That may be true—is true, as far as I know. But if it's true, there's nothing a priori about it; the opposite behavior is perfectly conceivable and might have been true for all we could have said in advance. I don't see how philosophical analysis could tell us why things turned out one way rather than another. At least as I and many of my colleagues understand philosophy, it doesn't have any special access to contingent facts. A philosopher might come up with a hypothesis, but insofar as the hypothesis is about an empirical matter, it will call for the usual sort of empirical investigation that empirical claims call for.

As for blank slates, philosophy can't tell us by itself whether we're born with blank slates as minds, but as a matter of fact, there seems to be reason to think we aren't. The mind seems to come pre-wired in certain ways. Understanding what that amounts to calls for doing some science, whether it be cognitive science or neuroscience or whatever. Philosophers may have contributions to make to clarifying relevant notions and questions, sorting out methodological issues and the like, but what they can't do is sit in their studies and settle the answers by themselves.


I am a scientist with a very strong desire for personal growth.

I am a scientist with a very strong desire for personal growth. I acknowledge the undeniable practical value of science in making a better world. However, I am wondering how being a scientist would contribute to my own growth and self-actualization (regardless of the financial or social gains of being a scientist). Also, is it worthwhile to devote my life to practicing science, which mostly involves a very narrow research area? I mean, is putting so much time and energy into such a tiny bit of knowledge really good and in accordance with my ultimate goal of being self-actualized?

I think the best place to start is by asking yourself what "self-actualization" is supposed to be and why it's so important. The phrase "self-actualized" has a sort of aura about it, but I'm not sure it's a helpful one for thinking about how we should live. One of my problems with the phrase is that as it's often used, it seems to mean something that has to do with a rather narrow sense of bettering oneself.

Wanting to live a good life is a noble goal. Part of living a good life has to do with making good use of the gifts one has been given, to borrow language from the religious tradition. And I sense that that's part of your concern. One doesn't want one's life to be devoted to trivial things. But most of us have to make a living, and making a living by doing routine science doesn't seem ignoble—not least since one can never be sure what the larger consequences will be. So if you find satisfaction in doing science and do it well and conscientiously, I'd say you have nothing to be ashamed of.

But on the larger question that I think may concern you, I have a lot of sympathy with broadly Aristotelian ideas of what "self-actualization" might amount to: the cultivation of virtue. I don't mean this in some prim and proper sense. I mean that there really are traits of character that we think of as virtuous: kindness, courage, fairness, honesty, generosity, and a great many others. On this view, how well a person is living is measured by the extent to which they lead a virtuous life. This doesn't amount to living the life of a prig. The people we often admire have traits like humor, appropriate irony, adventurousness, and various others that make them into what we often describe as well-rounded people.

To return to your specific question, all of this is quite compatible with making science the center of your working life, if that's what you want to do. Of course, if you feel that being a scientist leaves you unsatisfied, it's obviously just fine to consider what else you might do. But if you like being a scientist, that leaves ample room for living a life worth emulating; no need to feel guilty.


Look at what I've just read on the Internet Encyclopedia of Philosophy about laws of nature.

Look at what I've just read on the Internet Encyclopedia of Philosophy: "There are no laws of nature that hold just for the planet Earth (or the Andromeda Galaxy, for that matter), nor are there any that hold just for the Eighteenth Century or just for the Mesozoic Era." I agree that this looks absolutely true, but why is it so? I suppose science cannot prove that there is no fundamental law of physics that holds only in a small part of the universe or only during some short period. Sure, such a law would be unexplainable, at least scientifically unexplainable, but aren't ALL fundamental laws of physics unexplainable? That's why they are fundamental. If the above quotation is only stipulating some meaning of "laws of nature", isn't it arbitrary? Thank you.

I just wanted to add to Allen's remarks (with which I largely agree).

First, the claim that there are no laws of nature that hold just for (e.g.) the planet Earth may require the qualification "no fundamental laws of nature". After all, if it is a law of nature that like electric charges repel, then it is a law of nature that like electric charges on planet Earth repel. The latter is a derivative law, however. So there could easily be non-fundamental laws that hold just for the planet Earth.

Second, on Lewis's own version of the Best System view, the laws of nature must all be truths. There is no trade-off between "complete and perfect truth" and "greater generality." Of course, a modified version of Lewis's account might be more liberal.

Third, it could be that all fundamental laws of physics have no explanations (that's what makes them fundamental, as you say), and yet there is a reason why all fundamental laws of physics cover all of space-time and (to put it roughly) say the same things about every spatiotemporal region. It could be a meta-law of nature that all fundamental first-order laws of physics cover all of space-time and say the same things about every spatiotemporal region. This meta-law would explain why all first-order fundamental laws of physics have these properties. After all, if all of the first-order fundamental laws of physics are like this, then it seems like this fact would not be a coincidence. There would be a common "cause" for all of the first-order laws' being like this.

Finally, notice that we have moved here from laws of nature to laws of physics. Perhaps there are laws of "special sciences" that are restricted in space or in time.

It's a good question and I don't think it has an easy answer. On the one hand, if laws aren't truly "global" (i.e., could hold only at particular times and/or places), then we have a potential problem of arbitrariness. I'm pretty sure this is a true generalization: All men born in Canada and typing an answer on December 27, 2014 in the city of Washington DC to a question about laws on askphilosophers are wearing cotton sweaters. On the other hand, I'm quite sure that it's not a law of nature and I can't imagine why anyone would think otherwise. You could just stipulate that all true generalizations are laws of nature, but that seems truly arbitrary, and in particular it seems to ignore all the reasons we think it's worth looking for laws of nature. So from a certain point of view, requiring that laws of nature can't be restricted to particular places or times seems like a way of avoiding rather than introducing arbitrariness. That said, it hardly follows that we would never have...

When does successful prediction provide strong evidence?

When does successful prediction provide strong evidence?

Here's a sort of rule-of-thumb answer that I find useful. Roughly, we should ask ourselves how surprising the evidence would be if the hypothesis were not true. Suppose the question is whether Harvey robbed the bank. Our evidence for Harvey being the thief is that a witness saw him outside the bank around the time of the robbery. If Harvey really is the robber, this isn't unlikely, but suppose Harvey works in the barber shop on the block where the bank is, and the time he was seen was a few minutes before opening time for the barber shop. Then seeing him outside the bank wouldn't be surprising even if he wasn't the robber. It's not strong evidence.

On the other hand, suppose the evidence is that a search of Harvey's apartment turns up a large bag of bills whose serial numbers identify them as the ones that were stolen. Then things look bad for Harvey. If he wasn't the robber, it would be surprising to find the money in his apartment. (Of course, this isn't conclusive proof. Maybe someone has planted the money to frame Harvey.)

That's the rough version. What really matters is the ratio of two probabilities: the probability of the evidence assuming the hypothesis is true (write that as p(E|H)), and the probability of the evidence assuming the hypothesis is false (write that as p(E|not-H)). The ratio

    p(E|H) / p(E|not-H)

is called the likelihood ratio. The higher the likelihood ratio, the stronger the evidence.

There's a good deal more to be said, but the little test sketched here is especially useful in its negative form. If the evidence isn't surprising by this test, then it's not strong.
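
As a quick illustration of the rule of thumb, here is a small sketch of my own. The probabilities are made-up numbers chosen to mirror the Harvey example, not values from the text, and the function name is just an illustrative label.

    # Sketch only: the likelihood-ratio rule of thumb, with invented numbers.
    # Higher p(E|H) / p(E|not-H) means stronger evidence for the hypothesis H.

    def likelihood_ratio(p_e_given_h: float, p_e_given_not_h: float) -> float:
        """Return p(E|H) / p(E|not-H)."""
        return p_e_given_h / p_e_given_not_h

    # Evidence 1: Harvey seen outside the bank. He works next door, so this is
    # likely whether or not he is the robber -- the ratio is close to 1.
    print(likelihood_ratio(0.9, 0.8))    # about 1.1: weak evidence

    # Evidence 2: the stolen bills found in his apartment. Very unlikely unless
    # he is the robber (barring a frame-up) -- the ratio is large.
    print(likelihood_ratio(0.8, 0.001))  # 800: strong evidence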


Is there a good definition of magic which does not rule out the existence of magic, but also does not imply that magic actually exists?

Is there a good definition of magic which does not rule out the existence of magic, but also does not imply that magic actually exists? Magic cannot be "the ability to do impossible things", since this is a contradiction. I wonder if we could define magic as "the ability to violate the laws of physics". The problem is that if we discovered, for instance, that uttering "abracadabra" was a good way to make rabbits appear inside hats, we would have found a new law of physics, wouldn't we? And is it possible to argue that there is no magic without implying that most religions are false? My feeling is that the concept of magic has a reasonable sense only if we accept some religion: magic would be something like the wrong use of entities posited by that religion.

It's an interesting question, and I think it's best considered in the context of times and settings in which the idea of magic was taken seriously. I also doubt that there's a lot to be gained by looking for a full-blown definition, but we can learn something by looking at broad commonalities.

First, on the bit about magic words and rabbits. If it turned out that saying the right words in the right way could make rabbits appear in hats, then we would have discovered a new regularity in the world, though whether we had discovered a new law of physics is a lot more doubtful. After all, the regularities of the special sciences aren't usually classed as laws of physics, even though physics has to be consistent with them.* We might want to say that this regularity is "natural" because all the events take place in nature (saying the words, the rabbit appearing...) but it wouldn't follow that it wasn't magical. Older notions of magic explicitly included a concept of natural magic.

What counted as "natural magic?" There's no tidy answer, but part of the background was the idea of an "occult quality." "Occult" here means "hidden." The behavior of lodestones (magnets) would have counted as a case of natural magic on some views. From the point of view of Renaissance thinkers, the operation of the lodestone depended on hidden properties. It also acted over distances, which tended to be characteristic of things that were labeled magical.

Neoplatonic thought had room for a concept of magic. The reason was that everything was in contact with everything else by virtue of everything being contained within/infused with the World Soul.

Was this a religious idea? There's no easy answer. It wasn't associated with any particular religion, but it clearly had a strong kinship with ideas that we think of as religious.

Some particularly important magical ideas were bound up with astrological beliefs. Belief in "astral influences" was very common in the ancient world and in the Renaissance. Some of these influences were considered benign. The Renaissance neo-Platonist Marsilio Ficino wrote quite charmingly about the things one needed to do to capture these beneficial astral influences—particularly the solar influences. But there was nothing especially "religious" about these beliefs. They were part of a broadly accepted view of how nature worked.

Some astral magic was more problematic. It called for commanding not-merely-human beings to do one's bidding. This was "demonic" magic. Were the demons supernatural? There's no good, simple answer. They were one of the kinds of things taken to populate the world, but their realm was the super-lunar—beyond the moon. The Church certainly objected to demonic magic, but one could believe in the existence of the beings themselves whether or not one was a Christian.

A good deal of what we might call folk magic didn't have much in the way of theory about it at all. My mother told a story of having had a wart removed from her hand by having it "charmed." In one version of the wart charm, the charmer would "buy" the wart for a penny. Many people believed that this worked without any particular view about how it worked. But they would likely have been willing to call it magic.

All this is just the tip of a large and very fascinating iceberg. But the most important lesson to draw is that there neither is nor ever was a single, unified conception of "magic." Magic is an excellent example of a "family resemblance" notion. Furthermore, many magical ideas existed against a background of broader views about the cosmos that have either faded entirely or exist only in attenuated form among contemporary, educated people. But even here we need to be careful. There are thoughtful, intelligent twenty-first century people who would tell you that they believe that there's such a thing as magic. Such people tend to believe more broadly that the mind can influence matter in ways that you and I might reject. However, many of these people wouldn't see any necessary conflict between their own views and science. Their particular views about science might be mistaken (for example, might include misunderstandings of quantum theory) but it wouldn't be because what they believe is somehow essentially incompatible with science.

Occasionally I'll hear philosophers trying to make claims about the concept of magic. My experience tends to be that what they say is crude, ahistorical and far too simple. If you want to get an idea of what it would be like to be a contemporary believer in magic, I'd highly recommend Tanya Luhrmann's book Persuasions of the Witch's Craft. It's a rich ethnographic study by a philosophically-informed anthropologist. And if you'd like a better understanding of magical ideas in the Renaissance, you might want to have a look at Frances Yates' The Occult Philosophy in the Elizabethan Age, among her other works. Yates' scholarship certainly has its critics, but it's hard to read her work without getting a glimpse of a much richer idea about what "magic" might once have meant.

--------------

*Notice, by the way: I didn't say that the upper-level laws need to be consistent with physics. If we discovered a new psychological regularity that didn't fit with what physics tells us, then if the regularity really was stable and robust, it would represent a problem for physics, not for psychology. The regularities are what they are.


Am I guilty of some kind of inconsistency if I reject scientific consensus about evolution, global warming, the big bang, etc., but still make regular use of modern technology?

Am I guilty of some kind of inconsistency if I reject scientific consensus about evolution, global warming, the big bang, etc., but still make regular use of modern technology?

No inconsistency. Your computer works whether you accept the quantum story that explains its microprocessors or not. You might run a risk of unreasonableness; the evidence for the things you mention is pretty good, and the same might hold for whatever gets included in your "etc." And if some of the things falling under your "etc." are routine parts of the science we use to produce the technologies you rely on, someone might wonder whether the success of those technologies doesn't give you good reason to accept the science. But consistency is not a very high bar, and though inconsistent views are arguably unreasonable, unreasonable views can be consistent.


What is the fundamental difference between science and non-science?

What is the fundamental difference between science and non-science? Aware of Popper's theory of falsification, I am still unsure how a theory can only be scientific if it can be proven false. This seems rather contradictory: what if a scientific theory has been rigorously tested so much that it is in fact true and cannot be proven false? Thanks in advance :)

I'm not sure there's a fundamental difference between science and non-science. But the point about falsifiability isn't that a true theory can be proven false. It's that scientific theories can be tested, and we know what sorts of results would count against the theory in principle.

Keep in mind that even a theory that's survived a long string of rigorous tests might still be overthrown. The point of the falsifiability requirement is that we know what sorts of results would count against the theory - whether or not they ever turn up.

One more point, though. It's one thing to say we know what would count against a theory. It's another to say that some particular bit of evidence would refute a theory conclusively. Things are seldom if ever that simple.


Is religion the true enemy of freedom in a democratic society, or is science?

Is religion the true enemy of freedom in a democratic society, since it teaches us that we have to think a certain way, or is science, since it teaches us that nobody is truly free but a product of deterministic forces?

Or another mode of reply: First, suppose that science DOES suggest determinism. How would anything be different in our lives? Wouldn't democratic processes work precisely the same way as they have been? (After all, our behavior has been deterministic all along, so why would discovering/proving/merely believing that it is deterministic change anything?) Or, since 'freedom' seems to be the larger concern for you, again, what would be different? In all the cases where we've held people responsible for their behaviors, we still would hold them responsible, wouldn't we? We'd still lock up bad people, teach our children to be good, etc. So it isn't clear to me why scientific results would threaten anything, really. Ditto for religion: if we think religions are in the business of generating true claims about the world, then, where they succeed, we should be happy to endorse their claims (assuming we want the truth). Whichever dogmatic religions you're thinking of ARE dogmatic because they believe they have the truth, which, I suppose, isn't necessarily a bad thing. Of course, greater humility about knowledge is probably more appropriate -- but then very little stops most people from believing their religious beliefs along WITH the humility of recognizing they may be wrong -- so it isn't religion itself which 'suppresses freedom (of thought)', but dogmatic bossy people (some of whom are religious, but many of whom are not).

hope that's useful! ...

ap

How about neither? Let's start with religion, about which only a few words. Some forms of religion are dogmatic and deeply invested in doubtful beliefs, but it's a mistake to think all religion is like that, contrary to the persistent insistence of some apologists for atheism. And "science" writ large hasn't settled whether everything is a product of deterministic forces, let alone what that would imply if it were true. On the first point: it's open to serious doubt whether quantum processes are deterministic. And it's simply not true that the macro-world would be sealed off from all quantum indeterminism. More important, it's simply not settled that determinism has the dire implications you suppose it has. Most philosophers, I'd guess, accept some version of compatibilism, according to which physical determinism and human freedom can coexist. A bit of searching around this website will find various discussions. Here's one that might be helpful. Of course, it might be that the...

Isn’t it true that ultimately all truth is conventional?

Isn’t it true that ultimately all truth is conventional? The system of logic, the inferences we accept, our physics, our views on reality: aren't these all grounded in our presuppositions? To be intellectually honest, there is no argument for objectivity. We have to retreat to commonsense realism and agreement among people and communities... so truth in reality is a matter of consensus! Even though none of us wishes to cop to that label. So are logic, physics, and science all rhetoric, the art of convincing others of our views? Even if we hold that there is one God and His truth is absolute and objective - this is still a convention one must accept?

The way you begin your question hints at a problem we'll get to below, but before that, let me suggest a distinction. It's one thing to presuppose or assume something; it's another thing for it to be a matter of convention.

There's a lot to be said on the matter of convention; there's not just one idea under that umbrella. But let's take an example from philosophy of space and time. Adolf Grünbaum argued many years ago that given our usual view of space and time, lengths are a matter of convention. The gist of the idea was this: if space and time are continuous, then any two lines contain the same number of points. In Grünbaum's view, that meant there is nothing in space and time themselves to ground the difference between different possible standards for assigning lengths. We have to pick one (think of it as deciding what counts as a ruler) and only after we've done that do questions about lengths have answers. If Grünbaum were right (I'm not convinced, but that's not our issue), then there would be no hard fact of the matter about whether two non-overlapping lines have the same length. There would be no such fact because there'd be nothing in the world to ground it. But notice: Grünbaum's argument has nothing to do with what we presuppose or assume. It's an argument about how much structure the world really has. Grünbaum believed that it's not a matter of convention whether space and time are continuous. That's a fact about the world itself.
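
To spell out the "same number of points" step, here is a small worked equation of my own (not Grünbaum's notation): any two intervals can be put into one-to-one correspondence, so counting points alone cannot ground comparisons of length.

    % Illustrative only: a bijection between arbitrary intervals [a,b] and [c,d]
    % (with a < b and c < d), showing they contain "the same number" of points.
    \[
      f\colon [a,b] \to [c,d], \qquad f(x) = c + \frac{d-c}{b-a}\,(x-a)
    \]
    % Since f is one-to-one and onto, cardinality cannot distinguish a short
    % segment from a long one; on Grünbaum's view a further standard (a "ruler")
    % must be chosen before questions about length have answers.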

You might ask how we could know such a thing. That's a perfectly good question, and it might not be easy to answer. Furthermore, at some point in sorting out our beliefs, we'll have to assume that some things are true without being able to prove them (otherwise we end up in circularity or regress.) But whether I assume something without proof and whether it's true aren't the same question. (And for that matter whether I assume something without proof and whether I know it aren't the same thing. Knowing might be a matter of being connected to the facts in a reliable way. I could be connected that way whether or not I could prove it.)

This might be a good time to have a look at the way you pose your question: "Isn’t it true that ultimately all truth is conventional?" Unless I'm missing something, you're trying to show that it's true, full stop, that all truth is conventional; not that this is a convention we adopt (after all, we don't) nor something that we assume or presuppose (on the contrary: most people assume no such thing) but that this is how things really are. But if that's right, then there's at least one truth that isn't merely conventional. And in that case, the claim that all truths are conventional is false.

This might seem trivial; I don't think it is. Radical conventionalism is a thought we don't have a way to think. Once that's clear, you might start to suspect that there's a good deal more in the way of truths that we don't make.

