Over the last few weeks, if anybody has been reading these, I’ve been talking about free will. The conclusion I reached was that the important question is not simply “do we have it?”, because that leaves open the prior question of what free will actually is. There are senses in which we have it and senses in which we don’t. The usual arguments hinge on a definition of free will that not only trivially doesn’t exist, but also wouldn’t do what we’d want it to do if it did. What this highlights to me is the vital importance in philosophy, and in rational inquiry in general, of asking the right questions. Closely related is the value of using appropriate and transparent definitions, which I have also written about before.
Today, I’d like to turn the discussion to the concept of knowledge itself. What does it mean to know something? This is the field of epistemology.
For some people, saying you know something is just making a distinction between saying you believe it, and saying you really believe it. But that’s not an appropriate use of the word; the difference between knowledge and belief, we feel, should be qualitative, and an appropriate definition needs to capture that. Yes, knowledge seems to imply certainty, and it definitely involves belief, but there’s something else it requires too. You can believe something that isn’t true. You can’t know something that isn’t true.
But a true belief isn’t necessarily knowledge either. Democritus, in the fifth century BCE, believed that the material world was composed of tiny, indivisible atoms whose varying properties and relationships gave rise to different substances. His theory was, as far as it goes, more or less correct. Certainly it is more true than its contradiction. But we wouldn’t say he knew that the world was composed of atoms, no matter how certain he felt about it, and no matter how true it is that the world actually is composed of atoms. Why?
Because there wasn’t a good enough reason for him to assert it. Nobody had ever seen an atom, and Democritus was more or less speculating. Not only did he have no physical evidence for his theory, but he also had no idea what evidence favouring it over its rivals might even look like; and, on top of that, he didn’t particularly mind. The formulation of the scientific method, in any recognisable form, was centuries off. He was going on intuition, and he arrived at his ideas in the same way as his contemporaries arrived at rival (and much less correct) metaphysical theories.
This is not to say that there was no such thing as knowledge at all prior to the development of the scientific method, of course. Just because you couldn’t see evidence pointing to atomic theory didn’t mean you couldn’t see the stars, or predict the seasons, or have confidence that your new bridge wouldn’t fall down. But all these beliefs, like more rigorous scientific ones, were justified. Your prediction, one spring day, that the Sun would rise earlier tomorrow than yesterday, was justified by the fact that you had always observed it to be so in the spring. And so Plato in his turn was justified when he laid down the definition of knowledge that philosophers accepted for over two millennia: that it is no more or less than justified true belief.
And then Edmund Gettier supposedly came along and spoilt all that. You may note the past tense “accepted” there. Gettier gave examples of justified true beliefs that were not knowledge. Doing this, however, required him to use a definition of justification that allows justified beliefs to be false.
Most epistemologists take Gettier’s examples as proof that whatever knowledge really is, it isn’t justified true belief. But this is nonsense. If we actually try to derive an adequate definition of justification, the best candidate is one that does not allow for a belief to be both entirely justified and false.
That definition is the following. A belief in a proposition is justified if and only if the belief is causally descended from (or, in less wanky language, caused by) the truth of the proposition itself. Someone who believes he sees a barn in a field (to take another traditional example), and believes it to be a real barn, is justified in that belief only if what he saw was in fact a real barn, and not a false façade erected by a prankster to fool passing epistemologists. In the façade case, his belief in a real barn is not caused by the objective existence of one.
But this, of course, merely removes the problem from the object level to the meta level. How could such a person know whether his belief was justified, if false façades and real barns looked identical? He couldn’t. And in turn, if he cannot know whether his belief is justified, he cannot know whether it counts as knowledge at all. This is why Gettier uses a weaker definition of justification that allows for justified beliefs to be false: the stronger, more accurate causal definition rules out knowledge altogether.
This is why I believe epistemology requires a normative approach. “What do/can we know?” is the wrong question. The actual question we’re trying to answer when we do epistemology is instead “What ought we to believe?” And this also takes away the requirement for complete certainty. After all, there are very few things of which we can be completely certain. You can derive a lot of assertions from “I think, therefore I am”, but most of those assertions don’t come with complete mathematical certainty.
Framing epistemology in normative terms, although it does away with the necessity of defining knowledge at all, still gives us a more useful, workable definition than the Platonic justified true belief. JTB is useless because of the meta problem: you can never know whether your beliefs count as knowledge. A better definition is this: you know something when your confidence in it is sufficient to justify acting upon it. So although we don’t, and can’t, ever have complete, formal mathematical certainty about whether vaccines work, or the climate is changing, or God exists, we can legitimately say that we know they do, it is, and he doesn’t respectively, because the balance of evidence in each case is so overwhelming that it would be unjustifiable to act as though it were otherwise.
As a secondary matter, this way of looking at epistemology neatly sidesteps the fundamental bootstrapping problem of ethics: that you can’t derive a normative statement from a descriptive one, or in simpler terms, you can’t get an ought from an is. By framing the validity of assertions in a normative way, we already have them in the form of oughts.