On Responsibility.

I recently got into a discussion about how a big problem with “modern leftists”, whatever that means, is that their ideology paints people as responsible for things they had no role in bringing about. I’m a straight white cis man, therefore I’m somehow culpable for the oppression of people less privileged. This is not a part of any mainstream modern leftist ideology I’m familiar with, but it is a common misconception (or deliberate misrepresentation), so I think it deserves an explanation.

On the one hand, we’re only directly responsible for making amends for wrongs we ourselves have done. On the other hand, leftist ideals like human rights, empathy, compassion, and so on impose on us some sense of responsibility (commensurate with our means) for fixing harms wherever they occur and whomever they befall. The misconception is that, because such ideals hold us responsible for fixing these harms, they must also be claiming that we are somehow responsible for causing them, perhaps by virtue of unwittingly benefiting from unequal power structures or being raised in casually bigoted societies.

The key is in the word responsibility. It’s one of those annoying words that are very easy to equivocate on; here, it’s being used in at least three different senses. The first is that of causal responsibility: if you hit someone while driving, you’re causally responsible for their injuries, which means that (and only that) you had a causal role in bringing them about.

The second is moral responsibility, which we can also refer to as culpability: if you hit someone while driving, you can be morally responsible for injuring them — although, if they were being reckless, your culpability may be lessened. Causal responsibility is a necessary condition for moral responsibility — you can’t be culpable for something you had no role in causing — but not a sufficient one.

The third sense is that of liability or duty, which we can refer to as normative responsibility, or more prosaically responsibility to as opposed to responsibility for. If you’re morally responsible for hitting somebody while driving, then you can be held liable for covering their medical bills, and perhaps some compensation over and above that. If, on the other hand, they deliberately hid and jumped in front of your vehicle (say if they were feeling suicidal), as your moral responsibility is lessened, so is your liability.

It will be seen that each of these senses supervenes upon the one before it. You can’t have normative without moral, and you can’t have moral without causal. But normative responsibility can also arise from things we don’t usually consider when we think about the first two (although in strict ethical terms, they do count). We can incur liability from things other than moral transgressions. We can be liable for repayments by entering into a loan contract, for example. We can be liable for sales taxes as a result of making purchases. We take on many diverse responsibilities when we choose to have children.

The responsibilities we have to the unfortunate, to alleviate poverty and inequality, to create a better and fairer society — these are responsibilities we incur through the basic social contract, and we would have them no matter how little we were (as individuals) causally responsible for the situation. All that said, though, it is the case that we can be held culpable for actively doing things that make the situation worse, and that those in positions of particular power to make it better — the wealthy, the privileged, the politically powerful, the media — have a bigger obligation to do so.

A Delicate Balance.

I’ve been sitting on this post for a while. I wasn’t sure what I ought to do with it: each paragraph could easily be a whole post in its own right. But I think now is the right time to put these ideas out there. Many of them are more than a year old in my mind, and I have been thinking about them a lot in the past year; yet as I look back, I realise that I haven’t really gotten them out in a coherent, accessible form. They’ve mostly appeared as ephemeral comments on Facebook or Tumblr, and it’s high time I put them in one place. The ideas discussed below also follow a common theme to which I find myself returning again and again, and I could easily find more ideas to which it applies.

My beliefs are not typical of any particular orthodoxy. On the whole, I think this is something to be proud of. One should not believe a thing just because other people do, but rather because the thing stands on its own merits. That said, you simply can’t investigate everything fully enough to be justified in believing it from first principles; but in cases where you can’t afford the effort, it’s not appropriate to just throw up your hands or to believe what you like. It’s imperative to accept the expert consensus, if there is one, or the null hypothesis, if there isn’t. This is precisely an example of the sort of nuanced balance I’m talking about. It’s not that both sides of an issue necessarily have a point, so much as that it’s very rarely the case that any given “side” has things entirely right; and that one side has a given thing wrong is no guarantee that the other doesn’t also.

Some other examples:

I’m a feminist, and I think both that Germaine Greer is a bigot for thinking trans women aren’t women and that Caitlyn Jenner doesn’t deserve most of the praise she gets just for transitioning — she’s still a privileged conservative and wholly unrepresentative of most trans people’s experiences.

I’m an environmentalist, and I think nuclear power is not nearly as objectionable as fossil power. I believe this for environmentalist reasons: it releases less carbon dioxide (and, for that matter, less radioactivity), and it causes orders of magnitude fewer deaths per gigawatt-hour generated. As such, it would be far preferable to use it as a stop-gap rather than keep burning coal and oil until renewables take over. The only reason I’m not advocating more fervently for its use is that we don’t need it as a stop-gap any more — we can already make the switch to renewables in the time it would take to replace fossil power with nuclear.

I’m a Bayesian rationalist, and accordingly don’t believe in things like gods or the utility of death. But I think a lot of the things that are popular in the “rationalist” community, such as strict utilitarianism, advocacy of cryonics or the idea that the many-worlds interpretation of quantum mechanics is the only coherent one, are utter bunk. And the common disdain for “politics” among rationalists generally only serves as a mask for libertarianism or even neoreactionary beliefs, which are hardly rational.

I’m in favour of many fundamental structural reforms to the way society is run, such as a universal basic income funded by very high externality and rent taxes on harmful activities like mining, polluting, or being Rupert Murdoch. I’m equally certain that no matter how revolutionary some of these reforms may be, revolutions are almost always terrible means of achieving them. (This particular dichotomy is one of the main themes of my forthcoming novel, of which I wrote what I expect will be the final line a few weeks ago, although there do remain a few crucial chapters still incomplete.)

I am a keen believer in the usefulness of having a standardised language, especially for such a broad lingua franca as English, which most of its speakers use as a second or third language. And I don’t think this contradicts my similarly firm belief that those who speak nonstandard dialects shouldn’t have to unlearn them in order to be taken seriously. Similarly, I don’t like linguistic prescriptivism as a principle, but I abhor misuses like alternate for alternative that only serve to blur distinctions the language can usefully make. (I’d like to think that these dichotomies and others like them make me a good editor; having solid reasons for supporting certain prescriptivist practices makes it much easier to let go of the prescriptivist instinct in the cases where the reasons don’t apply. Conversely, it also makes it much easier to explain my work objectively in the cases where they do, and providing quality feedback is one of the most important parts of an editor’s job.)

This does make it difficult to explain myself concisely, and it’s very easy to be misinterpreted. People assume that because I hold one belief I subscribe to an entire ideology of which it is a part, when that is seldom if ever meaningfully true. In acknowledging that Islam deserves much of the blame for acts of Islamist extremism, for example, I don’t want to be taken as condemning the rest of the Muslim population, because they are not to blame — much less as condoning violent and misguided retaliation.

Asking the Right Questions.

Over the last few weeks, if anybody has been reading these, I’ve been talking about free will. The conclusion I reached was that the important question is not simply “do we have it?”, because that presupposes we already know what it actually is. There are senses in which we have it and senses in which we don’t. The usual arguments hinge on a definition of free will that not only trivially doesn’t exist, but also wouldn’t do what we’d want it to do if it did. What this highlights to me is the vital importance, in philosophy and in rational inquiry in general, of asking the right questions. Closely related is the value of using appropriate and transparent definitions, which I have also written about before.

Today, I’d like to turn the discussion onto the concept of knowledge itself. What does it mean to know something? This is the field of epistemology.

For some people, saying you know something is just making a distinction between saying you believe it, and saying you really believe it. But that’s not an appropriate use of the word; the difference between knowledge and belief, we feel, should be qualitative, and an appropriate definition needs to capture that. Yes, knowledge seems to imply certainty, and it definitely involves belief, but there’s something else it requires too. You can believe something that isn’t true. You can’t know something that isn’t true.

But a true belief isn’t necessarily knowledge either. Democritus, in the fifth century BCE, believed that the material world was composed of tiny, indivisible atoms whose varying properties and relationships gave rise to different substances. His theory was, as far as it goes, more or less correct. Certainly it is more true than its contradiction. But we wouldn’t say he knew that the world was composed of atoms, no matter how certain he felt about it, and no matter how true it is that the world actually is composed of atoms. Why?

Because there wasn’t a good enough reason for him to assert it. Nobody had ever seen an atom, and Democritus was more or less speculating. Not only did he have no physical evidence for his theory, but he also had no idea what evidence favouring it over its rivals might even look like; and, on top of that, he didn’t particularly mind. The formulation of the scientific method, in any recognisable form, was centuries off. He was going on intuition, and he arrived at his ideas in the same way as his contemporaries arrived at rival (and much less correct) metaphysical theories.

This is not to say that there was no such thing as knowledge at all prior to the development of the scientific method, of course. Just because you couldn’t see evidence pointing to atomic theory didn’t mean you couldn’t see the stars, or predict the seasons, or have confidence that your new bridge wouldn’t fall down. But all these beliefs, like more rigorous scientific ones, were justified. Your prediction, one spring day, that the Sun would rise earlier tomorrow than yesterday, was justified by the fact that you had always observed it to be so in the spring. And so Plato in his turn was justified when he laid down the definition of knowledge that philosophers accepted for over two millennia: that it is no more or less than justified true belief.

And then Edmund Gettier supposedly came along and spoilt all that. You may note the past tense accepted there. Gettier gave examples of justified true beliefs that were not knowledge. To do this, however, he had to use a definition of justification that allows for justified beliefs to be false.

Most epistemologists take Gettier’s examples as proof that whatever knowledge really is, it isn’t justified true belief. But this is nonsense. If we actually try to derive an adequate definition of justification, the best candidate is one that does not allow for a belief to be both entirely justified and false.

That definition is the following. A belief in a proposition is justified if and only if the belief is causally descended from, or in less wanky language caused by, the truth of the proposition itself. Someone who believes he sees a barn in a field (to take another traditional example), and believes it to be a real barn, is justified in that belief only if what he saw was in fact a real barn, and not a false façade erected by a prankster to fool passing epistemologists. In the case of a false façade, his belief in a real barn is not caused by the objective existence of one.
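To make the causal criterion concrete, here is a minimal sketch in Python. It treats events as a directed graph of cause-and-effect links and counts a belief as justified only if it is causally downstream of the fact it is about. The event names and graph structure are my own invention for the barn example; this is an illustration of the idea, not a serious formalisation.

```python
# Toy model of the causal account of justification: events form a directed
# graph of cause -> effect edges, and a belief counts as justified only if
# it is causally descended from the fact it is about.

def causally_descended(graph: dict[str, list[str]], origin: str, effect: str) -> bool:
    """Is `effect` reachable from `origin` by following cause -> effect edges?"""
    stack, seen = [origin], set()
    while stack:
        node = stack.pop()
        if node == effect:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return False

# Case 1: a real barn causes the sighting, which causes the belief.
real_barn_world = {
    "a real barn exists": ["traveller sees a barn-shaped object"],
    "traveller sees a barn-shaped object": ["traveller believes there is a barn"],
}

# Case 2: a prankster's facade causes an identical sighting and an identical
# belief, but no real barn appears anywhere in the causal chain.
facade_world = {
    "a facade exists": ["traveller sees a barn-shaped object"],
    "traveller sees a barn-shaped object": ["traveller believes there is a barn"],
}

belief, fact = "traveller believes there is a barn", "a real barn exists"
print(causally_descended(real_barn_world, fact, belief))  # True: justified
print(causally_descended(facade_world, fact, belief))     # False: not justified
```

From inside either world, of course, the traveller sees exactly the same thing, which is just the problem the next paragraph raises.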

But this, of course, merely moves the problem from the object level to the meta level. How could such a person know whether his belief was justified, if false façades and real barns looked identical? He couldn’t. And in turn, if he cannot know whether his belief is justified, he cannot know whether it counts as knowledge at all. This is the reason Gettier used a weaker definition of justification, one that allows for justified beliefs to be false — the stronger, more accurate causal definition rules out knowledge altogether.

This is why I believe epistemology requires a normative approach. “What do/can we know?” is the wrong question. The actual question we’re trying to answer when we do epistemology is instead “What ought we to believe?” And this also removes the requirement for complete certainty. After all, there are very few things of which we can be completely certain. You can derive a lot of assertions from I think, therefore I am, but most of them don’t come with anything like complete mathematical certainty.

Framing epistemology in normative terms, although it does away with the need for a definition of knowledge, also gives us a more useful, workable one than the Platonic justified true belief. JTB is useless because of the meta-level problem of never knowing whether your beliefs qualify; a better criterion is whether you have enough confidence in a belief to act upon it. So while we don’t — and can’t — ever have complete, formal mathematical certainty about whether vaccines work, or the climate is changing, or God exists, we can legitimately say that we know they do, it is, and he doesn’t, respectively, because the balance of evidence in each case is so overwhelming that it would be unjustifiable to act as though it were otherwise.
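As a rough illustration of what “enough confidence to act upon” might look like in Bayesian terms, here is a toy sketch in Python. The prior, the likelihood ratios and the 100:1 action threshold are all arbitrary numbers I have made up for the example; nothing about them is principled.

```python
# Toy odds-based picture of normative "knowledge": a belief's strength is
# tracked as posterior odds, accumulated from independent pieces of evidence
# via Bayes' rule, and we treat acting against it as unjustifiable once the
# odds clear a chosen threshold. All numbers are illustrative.

def posterior_odds(prior_odds: float, likelihood_ratios: list[float]) -> float:
    """Multiply prior odds by each piece of evidence's likelihood ratio,
    i.e. P(evidence | hypothesis) / P(evidence | not hypothesis)."""
    odds = prior_odds
    for ratio in likelihood_ratios:
        odds *= ratio
    return odds

def warranted_to_act(odds: float, threshold: float = 100.0) -> bool:
    """Acting as if the belief were false becomes unjustifiable once the
    odds in its favour reach the (arbitrary) threshold."""
    return odds >= threshold

# A sceptical prior of 1:10 against, then three independent studies, each of
# which is ten times likelier under the hypothesis than under its negation.
odds = posterior_odds(0.1, [10.0, 10.0, 10.0])
print(odds)                    # 100.0
print(warranted_to_act(odds))  # True
```

The point is not the particular numbers but the shape of the question: not “is this certain?” but “is the evidence strong enough that acting otherwise would be unjustifiable?”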

As a secondary matter, this way of looking at epistemology neatly sidesteps the fundamental bootstrapping problem of ethics — that is, that you can’t derive a normative statement from a descriptive one, or in simpler terms, you can’t get an ought from an is. By framing the validity of assertions in a normative way, we already have them in the form of oughts.

On Definitions: Planet.

In 2006, the International Astronomical Union voted for the first time to approve a formal scientific definition for the word planet. Until then, astronomers had gotten along perfectly fine without one; the question of what was and wasn’t a planet had hardly seemed worth asking. But the discovery of a number of objects similar to Pluto, which really ought to be planets if Pluto was one, necessitated the move. The IAU defines a planet as an object that:

  1. is in orbit around the Sun;
  2. has sufficient mass to assume hydrostatic equilibrium (ie. a spheroid shape); and
  3. has cleared the neighbourhood (ie. is the dominant gravitational force) around its orbit.
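
To make the three criteria concrete, here is a minimal sketch in Python of how the resulting classification plays out. The Body type and its field names are my own invention for illustration; this is not anything the IAU itself publishes.

```python
# A toy version of the IAU's three-step test, purely for illustration.
from dataclasses import dataclass

@dataclass
class Body:
    orbits_sun: bool               # criterion 1
    hydrostatic_equilibrium: bool  # criterion 2: massive enough to be spheroidal
    cleared_neighbourhood: bool    # criterion 3: gravitationally dominant in its orbit

def classify(body: Body) -> str:
    """Apply the three criteria in order."""
    if not body.orbits_sun:
        return "not covered by the definition (e.g. a satellite)"
    if not body.hydrostatic_equilibrium:
        return "small Solar System body"
    if not body.cleared_neighbourhood:
        return "dwarf planet"
    return "planet"

# Pluto orbits the Sun and is round, but shares its orbital neighbourhood
# with a swarm of other Kuiper belt objects, so it fails criterion 3.
pluto = Body(orbits_sun=True, hydrostatic_equilibrium=True, cleared_neighbourhood=False)
print(classify(pluto))  # dwarf planet
```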

This made the news because the IAU’s definition reclassified Pluto as a dwarf planet, alongside Ceres (previously designated the largest asteroid), Eris, and a handful of other objects in the outer Solar System. Five dwarf planets are presently recognised, but estimates of how many actually exist range from around 100 to 10,000, so there’s clearly plenty of work still to be done.

Pluto was initially classed as a planet because it was thought to be much bigger than it really is. It was discovered in the course of a search for a planet thought to be perturbing the orbits of Uranus and Neptune, but is far too small to have done so; its discovery was a mere coincidence. (Discrepancies in Uranus’ orbit had previously led to the discovery of Neptune.) Shortly after its discovery, Pluto was believed to be comparable in size to the Earth. Over the decades since, its estimated size has been repeatedly revised downwards; it has been known to be roughly two thousandths of the Earth’s mass since the 1970s. In mass it falls short of even the smallest major planet by more than an order of magnitude, and several of the larger moons (including our own) outweigh it. Its classification as a planet had arguably been anomalous for decades before the IAU came along with its formal definition.

But for some reason, people didn’t like their precious Pluto not being a planet any more. They had learned a list of nine planets in school, and they didn’t like the idea of shortening it. A “popular vote” last year revived the argument, and was widely (and erroneously) reported as reclassifying Pluto as a planet again.

Unfortunately for such sentimentality, science and tradition don’t mix very well. You can’t learn new facts if you insist on sticking to what you grew up with. While it might be argued that a formal definition of the word is not particularly necessary (we got along just fine without one until 2006, after all), that call really isn’t the layman’s to make. And without removing Pluto from the catalogue of planets, we’d have to add at least four and possibly (eventually) thousands of objects to it in order to be consistent, so either way the list you learned in school would have to change. And precisely because of things like gravitational dominance and neighbourhood-clearing, the differences between the major planets and the dwarf planets are significant enough to warrant a formal distinction.

This all happened before, by the way. In the early 19th century, when the first asteroids were discovered, they were generally reported as new planets. They were in stable, roughly circular orbits, so they weren’t comets, and at the time there was no other category to put them in. It was quickly realised, however, that they were a new sort of object (being, again, orders of magnitude smaller than the previously-known planets; this was what had kept them undiscovered for so long in the first place, of course). Furthermore, it was decided that adding every new asteroid to the list of planets would quickly become unwieldy, and so they were given a new category of their own. Nobody complains these days about Juno or Vesta having been stripped of their planetary status.

Now, as I said yesterday, having fuzzy or variable or mildly controversial definitions in ordinary everyday language is not hugely problematic. Language evolves. But in science, you need to be consistent in the way you talk about things, and you need to draw distinctions where they occur in the real world; and any definition of the word planet that includes only the nine “traditional” ones is simply going to be too arbitrary to be of any practical use. Does this mean that astronomers who study Mars or Saturn are going to be looking down their noses at Plutologists for studying a “lesser” world? Does it make me any less excited in anticipation of the probe New Horizons’ arrival at Pluto later this year and of what it might discover? Of course not. Because they (and I) are more interested in learning new things than in preserving what might be wrong.

More than anything, though, the insistence that Pluto remain a planet in the face of scientific consensus is indicative of an arrogant and ignorant mindset. In astronomy, perhaps, it is mostly harmless; but it is the same sort of thinking that does real damage in the form of denialism about all sorts of uncomfortable facts, ranging from the efficacy of vaccines to the causes of global warming to the historicity of the Holocaust.

Nobody’s going to kick in your door and drag you off for calling a thing something it isn’t. But you’re flaunting your fundamental scientific illiteracy by doing so, and that’s not a thing to be proud of.

On Definitions: Capitalism.

Recently on my tumblr I was drawn into a couple of (unrelated) discussions about the usefulness of definitions. I think my thoughts on the matter warrant an extended discussion in their own right here.

It’s pretty clear that, much as many of us would like it to be otherwise, most words we use don’t have clear-cut, singular, unambiguously definable meanings. Ludwig Wittgenstein’s famous attempt to pin down a definition for the word game — a simple, uncontroversial, common word, after all — is the archetypal example here. Not all games are competitive, not all involve winning, and not all are entertaining or amusing. (In fact, an example of a game that is none of these things will make an appearance later in this post.) And conversely, what makes dance, or politics, or sex not games? You simply can’t capture the word in a tidy definition. That doesn’t mean it has no real meaning, only that a neat definition can’t encapsulate it.

And now imagine what trouble we run into when it comes to words which are emotionally loaded, or politically controversial, or just plain complicated. Two such words came up in tumblr discussion yesterday: capitalism and planet. This post grew a bit long in the writing, so I’ll deal with the first of these for now. The second will have to wait for next time.

The word capitalism is applied very broadly; the Wikipedia article on the subject lists over a hundred subtopics in its opening sidebar. The discussion was over whether solarpunk — a new aesthetic movement revolving around sustainable technology, organic design and empowering communities — was, or must necessarily be, anti-capitalist, which led in turn to debate over whether capitalism was inherently exploitative. My comments, I think, bear repeating here:

Capitalism, as in a system under which goods are exchanged on markets and the means of production are not centrally planned, is not inherently exploitative; indeed, the alternative is more likely to be so. Capitalism, as in a system under which human rights are exchanged on markets and the means of production are owned by absentee shareholders, is very probably inherently exploitative. Therefore: taboo the word capitalism. Talk about the issues of this form of ownership or that type of exchange without using the word.

The concept of tabooing a controversial word is derived from the party game Taboo (similar to Charades or Articulate), in which players must get their partners to guess a certain word without using the word itself (or a handful of related ones). I went on to explain:

Someone who objects to the latter thing may end up in a fruitless argument with someone who wishes to defend the former thing, and without eliminating the word from their discussion, they may not realise where the misunderstanding arose from. Tabooing a word, especially one like capitalism that has many implications and ideological flavourings and nuances and takes in a whole range of real-world phenomena … is a well-established and very useful method of cutting straight to the core of an argument.

As a tool for argument, the game of taboo was popularised in its modern form in an article by Eliezer Yudkowsky, although the basic idea predates him. Rather than prefacing your arguments with gerrymandered or hyper-specific or just very complex definitions, see if you can get away with simply not using the words you were trying to define.

There are, however, cases where crisp, specific definitions are very useful, and where deliberately deviating from them is obfuscatory, unhelpful and can sometimes mask something quite sinister. I’ll turn to these cases when I discuss the matter of planet tomorrow.

Faith is not a dirty word.

I recently got into a discussion on the subject of faith, and it occurred to me that when it comes to arguments between religious believers and nonbelievers, there is often a serious lack of understanding of this concept.

This is, as is very common in such instances, because there are two closely related meanings of the word. To have faith in something can simply mean that you trust it; or it can serve as a reason to believe it.* In the first case, you have a belief, in which you place your faith; in the second, you have faith, out of which you form a belief.

A scientist’s faith does impinge on her beliefs, in that her faith in her experiments is a necessary condition for her to accept their results; but that faith didn’t arise ex nihilo, out of nothing. It came from her prior beliefs, which ultimately — if she’s doing science right — came from raw observations made without preconceptions or faith in anything. (This is how science, properly done, avoids an infinite regress, but that’s a topic for another day.) For the unquestioning religious believer, on the other hand, the faith is supposed to come first. We might call the scientist’s faith informed faith, while the proselyte has blind faith.

The faith my wife and I have in each other is of the first sort. We trust each other to be considerate of each other’s interests, to not stray, and all the other things we promised. This is entirely reasonable; and it doesn’t mean that my trust in her is placed blindly. If either of us didn’t have good reason to believe the other would be worthy of our faith, we’d never have married in the first place. I have faith in her because I have reason to.

Science has faith in its methods and results, also precisely because it has reason to trust them: they have always worked before. They haven’t been entirely without error; but in cases of error, the methods still tell us what to do. This is a bit of a sticking point for some rationalists, especially those who grew up being taught by a religion that having faith in something (ie. God) meant not questioning it. It’s quite understandable to be uneasy about the word in this situation. But this is only because — sometimes deliberately — religious advocates blur the difference between blind and informed faith.

To be fair, most theists actually don’t have blind faith in their God; the belief itself comes from other sources. This sort of faith, in and of itself, is fine. (I think there is always an error in the processes that inform the belief itself, but that’s another separate matter.) The problem only arises when, as does sometimes happen (and disproportionately among the more vocal, argumentative believers), the cart of faith is put before the horse of belief.

*A reason to believe in this sense is not necessarily a good reason. But it’s dishonest to say that bad reasons, in terms of how people actually think, are qualitatively different from good ones.