On Responsibility.

I recently got into a discussion about how a big problem with “modern leftists”, whatever that means, is that their ideology paints people as responsible for things they had no role in bringing about. I’m a straight white cis man, therefore I’m somehow culpable for the oppression of people less privileged. This is not a part of any mainstream modern leftist ideology I’m familiar with, but it is a common misconception (or deliberate misrepresentation), so I think it deserves an explanation.

On the one hand, we’re only directly responsible for making amends for wrongs we ourselves have done. On the other hand, leftist ideals like human rights, empathy, compassion, and so on impose on us some sense of responsibility (commensurate with our means) for fixing harms wherever they may arise and whomever they may befall. The misconception is that since, on such ideals, we are responsible for fixing these harms, those ideals claim we are somehow responsible for causing them, perhaps by virtue of unwittingly benefiting from unequal power structures or being raised in casually bigoted societies.

The key is in the word responsibility. It’s one of those annoying words that make equivocation easy; here, it’s being used in at least three different senses. The first is that of causal responsibility: if you hit someone while driving, you’re causally responsible for their injuries, which means that (and only that) you had a causal role in bringing them about.

The second is moral responsibility, which we can also refer to as culpability: if you hit someone while driving, you can be morally responsible for injuring them — although, if they were being reckless, your culpability may be lessened. Causal responsibility is a necessary condition for moral responsibility — you can’t be culpable for something you had no role in causing — but not a sufficient one.

The third sense is that of liability or duty, which we can refer to as normative responsibility, or more prosaically responsibility to as opposed to responsibility for. If you’re morally responsible for hitting somebody while driving, then you can be held liable for covering their medical bills, and perhaps some compensation over and above that. If, on the other hand, they deliberately hid and jumped in front of your vehicle (say if they were feeling suicidal), as your moral responsibility is lessened, so is your liability.

It will be seen that each of these senses supervenes upon the one before it. You can’t have normative without moral, and you can’t have moral without causal. But normative responsibility can arise from things we don’t usually consider when we think about the first two (although in strict ethical terms, they do count). We can incur liability from things other than moral transgressions. We can be liable for repayments due to entering into a loan contract, for example. We can be liable for sales taxes as a result of making purchases. We take on many diverse responsibilities when we choose to have children.

The responsibilities we have to the unfortunate, to alleviate poverty and inequality, to create a better and fairer society — these are responsibilities we incur through the basic social contract, and we would have them no matter how little we were (as individuals) causally responsible for the situation. All that said, though, it is the case that we can be held culpable for actively doing things that make the situation worse, and that those in positions of particular power to make it better — the wealthy, the privileged, the politically powerful, the media — have a bigger obligation to do so.


A Delicate Balance.

I’ve been sitting on this post for a while. I wasn’t sure what I ought to do with it: each paragraph could easily be a whole post in its own right. But I think now is the right time to put them out there. Many of these beliefs are more than a year old in my mind, but I have been thinking about them a lot in the past year; and as I look back, I realise that I haven’t really gotten them out in a coherent, accessible form. They’ve mostly appeared in the form of ephemeral comments on Facebook or Tumblr, and it’s high time I put them in one place. The ideas discussed below also follow a common theme to which I find myself returning again and again, and I could easily find more to which it is applicable.

My beliefs are not typical of any particular orthodoxy. On the whole, I think this is something to be proud of. One should not believe a thing just because other people do, but rather because the thing stands on its own merits. That said, you simply can’t investigate every claim fully enough to be justified in believing it from first principles; but in cases where you can’t afford the effort, it’s not appropriate to just throw up your hands or to believe what you like. It’s imperative to accept the expert consensus, if there is one, or the null hypothesis, if there isn’t. This is precisely an example of the sort of nuanced balance I’m talking about. It’s not that both sides of an issue necessarily have a point, so much as that it’s very rarely the case that any given “side” has things entirely right; and that one side has a given thing wrong is no guarantee that the other doesn’t also.

Some other examples:

I’m a feminist, and I think both that Germaine Greer is a bigot for thinking trans women aren’t women and that Caitlyn Jenner doesn’t deserve most of the praise she gets just for transitioning — she’s still a privileged conservative and wholly unrepresentative of most trans people’s experiences.

I’m an environmentalist, and I think nuclear power is not nearly as objectionable as fossil power. I believe this for environmentalist reasons, such as that it releases less carbon dioxide (and, for that matter, radioactivity), and is orders of magnitude less dangerous per GWh. As such, it would be far preferable to use it as a stop-gap rather than keep burning coal and oil until renewables take over. The only reason I’m not advocating more fervently for its use is that we don’t need it as a stop-gap any more — we can already make the switch to renewables in the time it would take to replace fossil power with nuclear.

I’m a Bayesian rationalist, and accordingly don’t believe in things like gods or the utility of death. But I think a lot of the things that are popular in the “rationalist” community, such as strict utilitarianism, advocacy of cryonics or the idea that the many-worlds interpretation of quantum mechanics is the only coherent one, are utter bunk. And the common disdain for “politics” among rationalists generally only serves as a mask for libertarianism or even neoreactionary beliefs, which are hardly rational.

I’m in favour of many fundamental structural reforms to the way society is run, such as a universal basic income funded by very high externality and rent taxes on harmful activities like mining, polluting, or being Rupert Murdoch. I’m equally certain that no matter how revolutionary some of these reforms may be, revolutions are almost always terrible means of achieving them. (This particular dichotomy is one of the main themes of my forthcoming novel, of which I wrote what I expect will be the final line a few weeks ago, although there do remain a few crucial chapters still incomplete.)

I am a keen believer in the usefulness of having a standardised language, especially for such a broad lingua franca as English, which for most of its speakers is a second or third language. And I don’t think this contradicts my similarly firm belief that those who speak nonstandard dialects shouldn’t have to unlearn them in order to be taken seriously. Similarly, I don’t like linguistic prescriptivism as a principle, but I abhor misuses like alternate for alternative that only serve to muddy the language’s ability to make often useful distinctions. (I’d like to think that these dichotomies and others like them make me a good editor; having solid reasons for supporting certain prescriptivist practices makes it much easier to let go of any prescriptivist instinct in the cases where the reasons don’t apply. Conversely, it also makes it much easier to objectively explain my work in the cases where they do, and providing quality feedback is one of the most important parts of an editor’s job.)

This does make it difficult to explain myself concisely, and it’s very easy to be misinterpreted. People assume that because I hold one belief I subscribe to an entire ideology of which it is a part, when that is seldom if ever meaningfully true. In acknowledging that Islam deserves much of the blame for acts of Islamist extremism, for example, I don’t want to be taken as condemning the rest of the Muslim population, because they are not to blame — much less as condoning violent and misguided retaliation.

Asking the Right Questions.

Over the last few weeks, if anybody has been reading these, I’ve been talking about free will. The conclusion I reached was that the important question is not simply “do we have it?”, because that leaves open the prior question of what it actually is. There are senses in which we have it and senses in which we don’t. The usual arguments hinge on a definition of free will that not only trivially doesn’t exist, but also wouldn’t do what we’d want it to do if it did. What this highlights to me is the vital importance in philosophy, and in rational inquiry in general, of asking the right questions. Closely related is the value of using appropriate and transparent definitions, which I have also written about before.

Today, I’d like to turn the discussion onto the concept of knowledge itself. What does it mean to know something? This is the field of epistemology.

For some people, saying you know something is just making a distinction between saying you believe it, and saying you really believe it. But that’s not an appropriate use of the word; the difference between knowledge and belief, we feel, should be qualitative, and an appropriate definition needs to capture that. Yes, knowledge seems to imply certainty, and it definitely involves belief, but there’s something else it requires too. You can believe something that isn’t true. You can’t know something that isn’t true.

But a true belief isn’t necessarily knowledge either. Democritus, in the fifth century BCE, believed that the material world was composed of tiny, indivisible atoms whose varying properties and relationships gave rise to different substances. His theory was, as far as it goes, more or less correct. Certainly it is more true than its contradiction. But we wouldn’t say he knew that the world was composed of atoms, no matter how certain he felt about it, and no matter how true it is that the world actually is composed of atoms. Why?

Because there wasn’t a good enough reason for him to assert it. Nobody had ever seen an atom, and Democritus was more or less speculating. Not only did he have no physical evidence for his theory, but he also had no idea what evidence favouring it over its rivals might even look like; and, on top of that, he didn’t particularly mind. The formulation of the scientific method, in any recognisable form, was centuries off. He was going on intuition, and he arrived at his ideas in the same way as his contemporaries arrived at rival (and much less correct) metaphysical theories.

This is not to say that there was no such thing as knowledge at all prior to the development of the scientific method, of course. Just because you couldn’t see evidence pointing to atomic theory didn’t mean you couldn’t see the stars, or predict the seasons, or have confidence that your new bridge wouldn’t fall down. But all these beliefs, like more rigorous scientific ones, were justified. Your prediction, one spring day, that the Sun would rise earlier tomorrow than yesterday, was justified by the fact that you had always observed it to be so in the spring. And so Plato in his turn was justified when he laid down the definition of knowledge that philosophers accepted for over two millennia: that it is no more or less than justified true belief.

And then Edmund Gettier supposedly came along and spoilt all that. You may note the past tense accepted there. Gettier gave examples of justified true beliefs that were not knowledge. To do this, however, required him to use a definition of justification that allows for justified beliefs to be false.

Most epistemologists take Gettier’s examples as proof that whatever knowledge really is, it isn’t justified true belief. But this is nonsense. If we actually try to derive an adequate definition of justification, the best candidate is one that does not allow for a belief to be both entirely justified and false.

That definition is the following. A belief in a proposition is justified if and only if the belief is causally descended from, or in less wanky language caused by, the truth of the proposition itself. Someone who believes he sees a barn in a field (to take another traditional example), and believes it to be a real barn, is justified in that belief only if what he saw was in fact a real barn, and not a false façade erected by a prankster to fool passing epistemologists, because in the case of a false façade, his belief in a real barn is not caused by the objective existence of one.

But this, of course, merely removes the problem from the object level to the meta level. How could such a person know whether his belief were justified, if false façades and real barns looked identical? He couldn’t. And in turn, if he cannot know whether his belief is justified, he cannot know whether it counts as knowledge at all. This is the reason Gettier uses a weaker definition of justification that allows for justified beliefs to be false — the stronger, more accurate causal definition rules out knowledge altogether.

This is why I believe epistemology requires a normative approach. “What do/can we know?” is the wrong question. The actual question we’re trying to answer when we do epistemology is instead “What ought we to believe?” And this also takes away the requirement for complete certainty. After all, there are very few things of which we can be completely certain. You can derive a lot of assertions from I think therefore I am, but most of those assertions don’t come with complete mathematical certainty.

Framing epistemology in normative terms, although it does away with the necessity of a definition for knowledge, does give us a more useful, workable definition than the Platonic justified true belief. JTB is useless because of the meta problem of never knowing whether your beliefs count. Rather, a better definition would be one on which knowledge is a belief in which you have enough confidence to act upon it. So while we don’t — and can’t — ever have complete, formal mathematical certainty about whether vaccines work, or the climate is changing, or God exists, we can legitimately say that we know they do, it is, and he doesn’t respectively, because the balance of evidence in each case is so overwhelming that it would be unjustifiable to act as though it were otherwise.
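This confidence-based picture has a natural Bayesian formulation. As a toy sketch (the function name and the numbers are mine, purely illustrative): each independent piece of evidence multiplies the odds of a hypothesis, so an overwhelming balance of evidence drives confidence arbitrarily close to — but never exactly to — certainty:

```python
def posterior(prior, likelihood_ratios):
    """Update a prior probability by a sequence of independent
    likelihood ratios, using Bayes' theorem in odds form:
    posterior odds = prior odds x likelihood ratio."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Even a sceptical prior is overwhelmed by several strong,
# independent pieces of evidence -- yet the posterior never
# reaches exactly 1, mirroring the absence of formal certainty.
p = posterior(0.01, [10] * 6)  # six tenfold updates
assert 0.999 < p < 1.0
```

The point of the sketch is only that “knowledge” in the normative sense is a threshold of confidence, not the unreachable limit of probability 1.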

As a secondary matter, this way of looking at epistemology neatly sidesteps the fundamental bootstrapping problem of ethics — that is, that you can’t derive a normative statement from a descriptive one, or in simpler terms, you can’t get an ought from an is. By framing the validity of assertions in a normative way, we already have them in the form of oughts.

On Free Will: Philosophy.

The first two posts in this series have effectively treated free will as an unanalysable concept. This is not necessarily a bad thing; you can get away with unanalysable concepts if they’re simple enough, unambiguous and not the subject of the controversy at hand. But that’s unfortunately not the case here. So toward the end of part two, I began to move away from this, when I referred to free will as entailing that the actions of free agents must be irreducibly unpredictable: that is, unpredictable even with mathematical probabilities. Most arguments about the subject, even when they aren’t framed in quite those terms, rest on whether or not our actions are irreducibly unpredictable, so this is a good definition to use when analysing those arguments.

I believe, and will attempt to explain why below, that science does not disprove the existence of free will as usually understood. But as usually defined, as irreducible unpredictability, it does. The mind is a function of the brain, albeit a supremely complicated one, and the brain obeys the same (predictable or at least predictably random) laws of physics as the rest of the universe. Even if there were literally some form of soul or spirit of a separate substance to the brain, we can be pretty certain that it leaves no evidence of itself in the physical world — which is just another way of saying that it has no effect on the physical brain and its actions. If it has no effect on what we do, then it cannot be a mechanism to explain anything about the mind, free will or otherwise. There is, quite simply, no room for irreducible unpredictability in the brain.

Irreducible unpredictability makes for some fun arguments. But those arguments, for all their flair and rigour, are not actually about free will, in terms of what we actually want the concept to do once we’ve wrung it out. We don’t want to think we have free will so as to reassure ourselves that we (or other people) are fundamentally unpredictable. Rather, we want to think so in order to feel we have some legitimate basis on which we can take credit for our actions and blame others for theirs. Or, more rarely, blame ourselves and praise others. On an even more fundamental level, it seems necessary in order for us to have a sense of ownership over our thoughts, as well as our actions. And it’s not at all clear how defining free will as (entailing) irreducible unpredictability could help us do this.

We don’t think people are free to the extent to which they’re unpredictable. We don’t think a completely unpredictable person is normal; we think he’s a loony. We might consider a particularly predictable person to be dependable (if we like them) or boring (if we don’t), but we don’t think they’re less of a whole person for it.

We want our actions, and those of others, to be determined. We explain villains in terms of their tragic past, while acknowledging that this does nothing to excuse their actions. Our minds are determined entirely by what goes into them. You can’t imagine a colour that you haven’t seen in the same way as one you have. This can be very direct. If I tell you not to think about elephants, you immediately think of them. You can’t help it. But does that really mean you don’t have free will at all?

It only means that the freedom of your will is not unlimited. But of course it isn’t. You can’t calculate the value of π to a million digits in your head just by willing it. You can’t picture a colour you’ve never seen in the same way as you can picture the blue of the sky on a clear day. Most people can’t even consciously choose which gender they are, or which flavour of ice cream to prefer, or who they fall in love with. But we can choose what to do about those things, and to that extent, we have free will. Having murderous impulses doesn’t make you a murderer; acting upon them does. There is an extent to which we can deliberately affect our internal states: consciously trying to become more generous of spirit, for example, or to genuinely appreciate a genre of art we find challenging. But such attempts too rely on a deterministic conception of the mind, as something that can be affected by things and affect things in its turn, including itself. This reflexivity is important, and I’ll come back to it later.

It is, to be sure, a more limiting conception of free will than the incompatibilist version used by both sides of the traditional argument. But it’s both more accurate in terms of describing the real world, and closer in its effects to what we actually want the concept of free will to do for us. We don’t need a less limiting conception of free will, any more than we need one that says we can fly.

And that point about what we want the concept to do for us is very important. The first question you should ask when you encounter an abstract concept like free will is what you want the concept to achieve. When you ask “are human actions irreducibly unpredictable?”, you’re not asking the same thing as “do people have free will?” But often people don’t realise that, because they load a preconceived definition of “free will” into the question. And their preconceived definition has nothing to do with the question they really want the answer to, that is, whether we can own our own thoughts and take credit or blame for our actions.

Some opponents prefer to describe this compatibilist definition of free will as autonomy, as it emphasises that we have free will to the extent we’re not entirely controlled by other wills. But that’s purely a semantic quibble. And as I’ve demonstrated, the concept in question — whatever you prefer to call it — does everything we could reasonably ask of the concept of free will, and does it better than the alternatives, which aren’t really free will at all.

There remain a couple of points I want to discuss. The first comes back to the predictability of the mind. There is a very good reason why people think about free will in terms of irreducible unpredictability, and it has nothing to do with being stupid or wanting to simplify the problem. Rather it has to do with the reflexive, or introspective, nature of consciousness. Consciousness involves having a privileged view of your self. You alone are aware of what your thoughts really are, in a qualitative way that nobody else could be. It’s difficult, if not impossible, to imagine what it would even be like to see inside someone else’s mind in the way you see inside your own. How could that happen without you effectively becoming that person? And conversely, how could anyone or anything outside your mind predict your thoughts without effectively becoming you?

If human minds are, even in theory, determinable, then it follows that everything that happens in your mind, including this sense of self-awareness, could be predicted from outside. You don’t yourself know what you’ll be thinking about an hour from now, but someone with a perfectly accurate Prediction Machine™ could tell you. Surely, if it’s possible to know what you’ll do before you decide to do it, you’re not free in any meaningful sense — you don’t own your own thoughts, decisions, or actions after all.

In fact, we don’t need to go so far as to imagine a Prediction Machine™ so accurate it could never exist except in theory. A neurologist with a brain scanner can tell what decision you’re going to make — for example, whether to raise your arm — whole tenths of a second before you’re aware you’ve made it. Such experiments have been repeated many times, and there’s little doubt as to their veracity; they’ve been hailed by scientists and philosophers alike as the final nail in the coffin of free will.

Not so. A blip on a brain scanner is evidence of your decision to raise your arm in the same way that your arm ascending is — no more, no less. The blip is caused by your decision to raise your arm, not the other way around. The few tenths of a second it takes for you to be fully cognisant of that decision is analogous to the few tenths of a second it takes for you to process images from your eyes or sounds from your ears. The fact that a camera’s sensor registers an image faster than the vision centres of the brain hardly dealt a blow to the theory of human perception. The fact that the brain scanner can detect brain states quicker than the brain itself should be similarly untroubling.

Even if things were determinable on scales much longer than a tenth of a second, on a scale of minutes or days or even decades, all that this would mean would be that the results of any such determination would be effects, not causes, of what was going to happen in your brain. But the catch here is that we’re back in the territory of a Prediction Machine™ again, and any Prediction Machine™ able to even remotely accurately predict (rather than merely observe, as the brain scanner does) a person’s thoughts must be powerful enough not just to perfectly simulate their brain, but also the brain of every person who causally interacted with them (which on a long enough timescale adds up to everyone in the world, due to the chaotic nature of causation).

And it also, and here’s the kicker, would have to be able to fully simulate its own operations, on top of all that, in order to know the effects of its predictions on the people it was simulating. In other words, while I couldn’t say exactly or even roughly how complex it would have to be, I can say with complete certainty that it would have to be more complex than itself. Such a machine is not just practically but logically impossible. And even then, its predictions would be determined by what was going to happen inside the minds it was simulating — not the other way around. We should be thankful that it’s impossible, not because it preserves our autonomy, but because the logic of time and causality would be violated if it weren’t.
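The feedback problem — that the machine’s prediction affects the very minds it is predicting — can be sketched with a toy example, in the spirit of the classic diagonal arguments (the agent and function names are mine): an agent who hears the announced prediction and simply does the opposite guarantees that no announced prediction can ever be correct.

```python
def contrarian(predicted_action):
    # An agent who, on hearing the prediction, does the opposite.
    return not predicted_action

def announce_and_check(guess, agent):
    # The machine announces its guess; the agent then acts.
    # Returns True only if the announced guess was correct.
    actual = agent(guess)
    return guess == actual

# Whatever the machine announces, the prediction feeds back into
# the process it is trying to predict, and so defeats itself.
assert not announce_and_check(True, contrarian)
assert not announce_and_check(False, contrarian)
```

This doesn’t capture the full complexity-of-self-simulation point, but it shows the same structural impossibility in miniature: the predictor would need to account for its own output, which its output then changes.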

On Free Will: Science.

Right, that’s the theologians ticked off. For my next trick: scientists.

Unlike theology, of course, we can’t dismiss scientific facts as untrue, no matter how inconvenient we might find them. But the implications of scientific discoveries get misinterpreted all the time.

Newtonian classical mechanics describes a mechanistic universe, one in which everything happens according to fairly simple, predictable, mathematical laws. If we know the exact state of everything at a given time, we can in principle know all future states, just by working out the maths. Such a universe is, in philosophical terms, determinist. But if people have free will, the argument went, it should be impossible even in theory to predict what they will do. If it’s possible to predict them, then they can’t be said to have any control over what they do, because it’s already determined before they decide to do it.

This was traditionally dealt with by means of substance dualism: the theory that the human mind was fundamentally separate from the body in such a way that preserved its freedom. This separate thing might be the soul, or the spirit — the important thing was that while it could affect, and be affected by the body, its internal processes were not bound by those effects. Descartes’ (in)famous I think therefore I am was part of an attempt to prove substance dualism.

Classical mechanics, of course, took a hell of a beating about a century ago, from relativity and quantum theory. Crucially, quantum theory is indeterminist. The most it can give us on the atomic scale, even in theory, is the probability of a thing happening. This is why we speak of the half-lives of radioactive materials, for instance: over any given interval, there’s a certain probability of a given atom decaying, and these probabilities add up over the billions of billions of atoms in a sample of plutonium to predict how much will be left after a certain amount of time. On a macroscopic level, then, the universe still appears deterministic, but fundamentally it isn’t.
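The half-life arithmetic here is simple: the expected fraction of a sample remaining after time t is (1/2)^(t / half-life). A minimal sketch, assuming plutonium-239’s half-life of roughly 24,100 years:

```python
def fraction_remaining(elapsed, half_life):
    # Expected fraction of a radioactive sample left after
    # `elapsed` time units: N(t)/N0 = (1/2) ** (t / half_life).
    return 0.5 ** (elapsed / half_life)

# After two half-lives of Pu-239 (~48,200 years), a quarter
# of the original sample remains.
assert abs(fraction_remaining(48_200, 24_100) - 0.25) < 1e-12
```

Each individual atom’s decay is irreducibly probabilistic; only the aggregate over billions of billions of atoms behaves this predictably.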

Dualism has also, deservedly, fallen well out of favour, to the point where in some philosophical circles it’s more of a term of abuse, like creationism among biologists. If a theory entails or requires dualism, that’s all you need to dismiss it. There are a handful of (mostly theistic) philosophers who stick by it, but they’re a small minority; and, much more importantly, they’re simply wrong.

The indeterminism of quantum physics, however, doesn’t actually help us out of the apparent implications of determinism for free will. Firstly, the human brain mainly works on a macroscopic level; quantum effects are relevant to its electrochemistry just as they are to artificial electronics, but only to the extent that they are predictable. But even if the brain were entirely or mostly governed by quantum probabilities, it’s not at all clear how that would entail, or even allow for, free will in the usually understood sense of being free from determinism. We’d still be just as subject to the laws of chance as to those of physics.

Quantum effects and “free will” are supposed to be unpredictable, although in both cases they add up to predictable effects on large enough scales through the aggregation of probabilities. But the unpredictability of quantum effects isn’t caused by the free action of some unseen thing; it’s inherent in the fabric of reality. It’s no more indicative of intent than the result of a random dice roll, and there’s no reason to think it allows for free will as generally understood. Indeed, if it did, one of two things would be the case, neither of which we observe in reality.

The first would be that different probabilities would apply to quantum effects depending on whether they were inside a brain or not, or that quantum effects inside brains obeyed no predictable probabilities. That would be strong evidence for some sort of dualism, but no such thing has been observed — I don’t know if anyone’s tried, or how easy or obvious it might be; but it would be the story of the year in both physics and neurology if anyone had succeeded, and it’s not the sort of thing we can assume without checking. The alternative would be that everything in the universe has a conscious will. That would be interesting as a thought experiment (it shows up in Philip Pullman’s His Dark Materials fantasy series as a productive metaphor for a lot of ideas), but like the first option, it begs the question in the real world. We’d have to have independent evidence for pan-sentience, which, again, we don’t.

So physics doesn’t seem to allow for truly unpredictable free will. Indeed, the fact that the actions of groups of people can often be more predictable the larger the group would seem to rule it out on its own. But that whole line of argument rests on a mistake. The essence of the problem, in one sentence, is that we’re looking for evidence of free will in the wrong place. We assume the evidence must take the form of irreducible unpredictability, that is, the sort that can’t be expressed even in terms of mathematical probabilities. There is an answer, but I’m trying to keep these posts to a manageable length, so I’m afraid it will have to wait for part three.

At least, that’s the deterministic explanation. The free-will explanation is that I like cliffhangers.

On Free Will: Theology.

Free will. It’s supposed to be one of those Big Questions™, isn’t it. It’s supposed to be the solution to the Problem of Evil (i.e. Why Does God Let Bad Stuff Happen?), but it’s not often explained how this is supposed to work. Science, with its atheistic, mechanistic view of the world, is repeatedly said to have disproven it. Philosophers argue about whether there’s even any coherent definition of it at all, or at least one which can help with what we’re trying to achieve by using the concept.

I’ll take these one at a time.

As for theology, it’s a simple matter of fact that there’s no such thing as God and never was, so the Problem of Evil is really a non-starter. But so much has been said on the matter that would be thoroughly wrong even if there were a God, and it can occasionally be helpful to speculate about what one might or should be like. Omnibenevolence, after all, is hardly a bad model for us if we’re trying to be merely benevolent.

The basic idea is that if there were no evil in the world, we would have no free will to choose good instead. This applies both to “natural” evil, or misfortune, and to “human” evil, or what we usually think of when we see the word. No natural disasters means no heroes to save people from them. No disease means no superlative doctors. And so on. And if nobody ever did bad things, not only would it mean the rest of us couldn’t distinguish ourselves as Better Than Them by choosing not to do them, but it would also mean that nobody could have even freely chosen to do them. Not only would there be no point in recognising or rewarding good behaviour, but there’d be no intrinsic merit in it either. You can’t hold a person responsible for their actions if they couldn’t have done otherwise.

Incidentally, the usual interpretation of Jesus as partaking in his dad’s omnibenevolence paints him this way. If a being is omnibenevolent, that means that they always, by definition, must do the right thing. They effectively have no choice in the matter. They have no free will, and therefore when they do the right thing, there’s no merit for them in doing it. If Jesus is an omnibenevolent god, then his sacrifice at the end of the story is meaningless even if you ignore the part where he gets better (spoiler alert). Same goes for big-G God himself, for the same reason. A mere mortal who does even the slightest shred of good off his own bat is more worthy of praise than a divine creator who, if you believe the stories, does all the good in the world because he can’t help it.

But that’s all nonsense, really. If a real person maimed, infected, and killed his children on a regular basis, in the name of seeing who among them was really virtuous, we’d lock him up. If a real person had the power to heal the sick with a snap of his fingers, we’d get him to do it — and if he’d been the one making them sick in the first place, we’d think him a monster, no matter what his reasons. There are real people who advocate a complete absence of laws, or at least of their mandatory enforcement, but the sane majority of us rightly consider them to be loonies.

A Hybrid Ethical System.

There are generally understood to be three major schools of ethical thought: consequentialism, on which the morality of an action is judged based on its (expected or actual) consequences (measured by amount of suffering, pleasure, welfare or other difficult-to-quantify metric); deontology, which deals in rights and obligations and people-as-ends-in-themselves; and virtue ethics, whose focus is on what sort of person it is good to be, rather than what sort of things it is good to do.

There are variants and hybrid systems. Rule consequentialism, for example, is deontological in form — it lists rights and obligations which it is morally imperative to observe — but it derives its rules by determining what rules, if followed and/or imposed, would have the best overall consequences.

I believe the reason the three systems have survived alongside each other is that they do not, or should not, actually compete, but rather complement each other. In the course of researching and constructing arguments for my thesis (and forthcoming book) on reproductive ethics, I stumbled on a natural, intuitive way of delineating appropriate domains for consequentialism and deontology. In brief, it has to do with the nature of personal identity, particularly as described by Derek Parfit in Reasons and Persons (1984) and On What Matters (2011). By analogy, a similar delineation can be made for a domain for virtue ethics.

Explaining how the three systems can (and should) apply to complementary domains would be a book in itself, which I intend to sit down and write once the reproductive ethics one is finished. But I recently found myself having to defend the model in conversation, and I thought it appropriate to share the brief description of it I gave here.

It’s my belief that a comprehensive moral theory should be a hybrid system, which would involve consequentialist policy, deontological rules, and virtue education.

So if you’re deciding what policies to adopt, or what sort of society to have, at the scale of large numbers of people, you use a consequentialist approach. Public policymakers have to use some sort of scale on which lives are commensurable, or else they’d get nothing done; there’s an argument in my book as to how this can work, but the basic principle is illustrated very clearly in Caspar Hare’s 2007 paper “Voices From Another World”.

But such a system is inappropriate on the level of interpersonal interactions, for a number of reasons: information asymmetry, limited processing ability, the action/inaction distinction, and so on. Both the laws that apply to such interactions and the informal/personal/moral rules people follow in them should instead have a structure based around people-as-ends, rights and responsibilities, clear-cut obligations, and the like.

This should all, of course, be taught to the members of such a society, but even more important on the intrapersonal level are the notions of virtue ethics — what kind of person is it good to be? What kind of life is good to lead? There are crucial senses in which each person must answer these questions for herself, and each person’s answers will be different; but a good moral education will give her the mental scaffolding necessary to do so, on top of teaching her about the other two systems described above.