The Multiversal Sigh

Edit (3-25-14): added a summation at the end for clarity.

I ended up giving a lot of thought to Elisa’s post (spurred by James’s argument) on the possibility of morality in the multiverse during a long bus ride on Friday, and I finally got around to putting fingers to keyboard.  I originally intended to post a comment on her blog, but I got carried away and decided to publish my book here instead.

Like the other commenters, I’m not actually bothered by James’s concern; in fact, I think I’m psychologically incapable of being bothered by it, because the notion of the “multiverse” he appeals to is so removed from my experiences and nature, and it is my experiences and nature that are the primary bases of my ethical judgments.  Even if I were convinced of the validity of James’s concern, I wouldn’t change any of my behavior, and I wouldn’t lose any sleep over it.  I intuitively care about this universe because I experience it, and no philosophical argument can possibly convince me of something as counterintuitive as the position that I am making a mistake if I care about the consequences of any of my actions.  (I’m not saying philosophical arguments never have practical weight, just that there are certain intuitions that are too powerful for them to override, especially in the realm of ethics, where intuition is the ultimate arbiter.)

That said, I’m going to argue that many of the responses to James are overly dismissive and prove too much.

I want to start by making what I think is the intuitive case for the validity of James’s concern, assuming the existence of his version of the multiverse.  It seems that his concern is based on the principle that an action can only have moral significance if it has an expected or actual net effect on the world.  I’m not well-versed in ethical theory, but this strikes me as a pretty intuitively sensible principle.  Morality, after all, is practical – it is meant to guide our judgments and actions – so how can an action possibly have moral significance if it can’t have any impact on the world?  And if an action must have an expected or actual net effect on the world in order to have moral significance, it follows trivially that in a world in which all possibilities are necessarily realized (i.e. a Jamesian multiverse), no action can have moral significance.

I see two basic ways of dealing with this claim, which are analogous to the “hard determinist” and “compatibilist” approaches to the problem of justifying our moral practices in a deterministic/random world (the problem being that we feel compelled to praise or blame people for certain actions despite acknowledging that these actions are the result of the same physical laws that govern the wind).  One is to simply accept that no action can possibly have moral significance (though we may nevertheless feel compelled to treat actions as having such).  Based on my limited familiarity with the literature, I think this is what hard determinists are up to with respect to moral responsibility.  I take it they maintain that any worthwhile and intuitively satisfactory notion of moral responsibility inescapably rests on the assumption of metaphysical, you-could-have-done-otherwise free will, which unfortunately is ruled out by physics.  Needless to say, this puts hard determinists in a bind: they either have to treat their moral practices as bunk (good luck), or they have to live with the belief that these moral practices, which they find compelling, are bunk.

In contrast, compatibilists avoid this bind by maintaining that moral responsibility is possible, because the moral judgments we feel compelled to make do not actually require the assumption of free will of the impossible metaphysical variety; rather, they rest on physically possible preconditions, such as freedom from external compulsion, and any sense we have that they require something more is, as a descriptive matter, baseless.  (I’ve heard that Dennett’s book Elbow Room persuasively argues this point: that metaphysical free will is not something we actually care about when deciding to assign praise and blame.)  And some compatibilists further argue that to the extent one’s moral judgments do rest on the assumption of metaphysical free will, one should revise her conception of moral responsibility, because it’s less useful than a compatibilist one.  (I’m not sure how such arguments are supposed to have traction.)  Essentially, compatibilists reject the premise that one must have metaphysical free will in order to be morally responsible for his actions.  Similarly, a compatibilist-style response to James would involve rejecting his premise that an action must have an expected or actual net effect on the multiverse in order to have moral significance.

It strikes me that Elisa, among others, is enthusiastically taking a hard-determinist-style approach to dealing with James’s concern, and I think this approach suffers from similar drawbacks to hard determinism itself.  I think Elisa’s “resolution fallacy” argument is tantamount to a hard determinist saying: “sure, at the high resolution of physics, metaphysical free will is impossible, but we experience having metaphysical free will, and morality is only concerned with the low resolution of experience, so it’s cool for us to treat metaphysical free will as real when making moral judgments.”  Elisa herself actually compared James’s concern about the multiverse to worrying about determinism when making decisions.  She tweeted: “Do you let free will arguments muck with your internal moral compass?” – as if to suggest that free will arguments simply aren’t relevant to using one’s moral compass, even if one’s moral compass is sensitive to free will.  That’s a pretty big bullet to bite.  As outlined above, the compatibilist position strives to dodge this bullet by explaining that one’s moral compass is not, or should not be, muck-uppable by determinism, because one’s moral compass does not, or should not, rely on anti-determinist assumptions.  
Compatibilism confronts determinism head-on; it grabs the purported dilemma by the horns, looks it in the eye, and says, “you’re not really a dilemma.”  It does not say, in the vein of Elisa’s response: “I acknowledge that determinism undermines my moral compass, but I’m going to continue to rely on my moral compass as if nothing’s the matter, because I still experience it as working.”  There is a certain intuitive appeal to this response, but it raises two serious problems: (i) why is it okay for morality to tolerate a high res/low res distinction, such that it does not hold up under “high res scrutiny” but somehow nevertheless has obligatory force under “low res scrutiny”; and (ii) even if this is acceptable, where do you draw the line between the two levels of scrutiny? 

I think another one of Elisa’s analogies further illustrates these problems.  She compares James’s concern about the multiverse to worrying about using Newtonian mechanics when making decisions about the macro world.  But it’s not clear to me that we should be comfortable with analogizing our moral practices to Newtonian mechanics, because that would entail conceiving of a correct moral argument as one that is false (like Newtonian mechanics) but happens to direct one to act in accordance with what morality actually requires (like Newtonian mechanics providing the same results as our better theories in the macro world).  We can justify our reliance on Newtonian mechanics because although it works on elephants and not on electrons, it provides the same results as our theories that work on both elephants and electrons.  In a very real sense, the problems with Newtonian mechanics are irrelevant if all we care about is measuring an elephant.  But in what sense could certain problems with our moral practices be similarly irrelevant?  And what better theory can we appeal to to justify the use of false moral practices?  (And why wouldn’t we just use that theory?  It presumably, unlike quantum mechanics, wouldn’t involve a lot of complicated math.)  I don’t think there are good answers to these questions.

Thus, I’m inclined to favor a compatibilist-style approach to addressing James’s concern.  As noted above, this approach entails rejecting James’s premise that an action must have an expected or actual net effect on the multiverse in order to have moral significance.  However, it’s not clear that this premise should go.  (So maybe the existence of the Jamesian multiverse really would undermine our moral theories.)  Perhaps it should be rejected on the grounds that while it makes sense to have a net-effect requirement for moral significance in our universe, it doesn’t make sense to apply this requirement across universes.  Perhaps the causal independence of the separate universes is somehow relevant. 

I’m not sure where to go from here (other than to greener pastures), but I hope I’ve established that James’s concern – at least on a purely theoretical level, and on the assumption that his version of the multiverse exists – is not obviously silly (at least not more so than plenty of other philosophical worries!).  I think the easy ways out that people have advocated draw untenable distinctions and/or render the moral playing field even more of a free-for-all than it inherently is.

In sum:
1. Our moral practices seem to endorse the principle that an action can only have moral significance if it has an expected or actual net effect on the world.
2. Assuming the Jamesian multiverse exists, no action can possibly have a net effect on the world, so arguably no action can possibly have moral significance.
3. One way of dealing with this is to simply continue engaging in our moral practices, even if they rely on the net-effect requirement, on the grounds that morality is only concerned with our experiences, and nothing feels any different if it turns out we live in the multiverse.  Elisa and many of her commenters seem to have initially made responses along these lines.  I think this sort of approach is misguided, because it embraces unprincipled incoherence.  It effectively says: so what if it turns out one of the principles underlying our moral practices can't be satisfied, our moral practices still feel compelling, and that's good enough.  It's analogous to a hard determinist qualmlessly dishing out praise and blame, on the grounds that praising and blaming feel justified, while maintaining the praising and blaming aren't actually justified (because they assume that people have impossible metaphysical free will).
4. Another response, which I prefer, is to challenge the net-effect requirement, either by saying it's completely bunk (as I imagine, e.g., a Kantian would do), or by saying it only makes sense with respect to this universe, not across universes (as Elisa and James do below).  (This response is akin to the compatibilist approach to moral responsibility, which rejects the notion that our practices of praising and blaming rely on the assumption that people have metaphysical free will.)  I'm sympathetic to the latter move, but, as I argue in this comment below, it also seems to raise real problems.
5. Thus, it's not obvious that living in the multiverse wouldn't shake the foundations of our moral practices.


  1. "Moral significance" is an interesting word choice, in this context. How much net effect counts as significant? Even just in our universe, or our galaxy, solar system, planet, etc? What if it has a positive net effect in your neighborhood or in America but a negative net effect if you take the whole ecosystem or Western world into account? Etc etc? The further out you zoom, the more difficult those moral decisions/calculations become (and/or the less significant they seem).

  2. I agree with that, but the power of the multiverse hypo is that actions can't possibly have any net effects in that world. So whatever threshold for moral significance you believe in, if it's net-effect based, it can't possibly be satisfied in the multiverse. Hence my position that you either have to say (i) so what, morality doesn't really care about these sorts of "high res" arguments (which I think is problematic and hard to distinguish from accepting that actions can't have moral significance), or (ii) the net effect requirement has got to go; an action can have moral significance even if it can't possibly have any net effect on the multiverse (which I think is counterintuitive but preferable).

    1. I don't see any compelling reason to say your actions have to have a net effect across ALL POSSIBLE UNIVERSES versus the one you happen to be in.

      I actually think both i and ii can make sense, it just depends on your framework. You seem to be suggesting that there can be some kind of ultimate, absolute, uber-morality that doesn't care about frameworks? I don't buy that at all. Even if you believe that utilitarianism is possible, you have to define values for various outcomes (e.g. human life/death) and those values aren't objective. Our subjective human values for various outcomes are meaningless in the scope of the solar system, much less the multiverse.

    2. Maybe there's not a compelling reason to say that, but I think there's a highly intuitive one, which is that if we assume all the other universes actually exist in the same way this one does, then all of the life and death and pleasure and pain in those universes are just as real as in this one, even though we can't perceive or interact with them. Maybe those differences are relevant, and we can therefore privilege this universe in our moral calculus, but I don't think it's obvious why. The moral arguments we buy seem to appeal to universe-independent principles such as reducing net suffering, on the grounds that suffering is suffering and is bad regardless of where it takes place. A drowning baby isn’t any less of a bad thing if it’s 100 miles away as opposed to 100 feet, so why is it okay to treat it as less bad if it’s another universe away? I don’t think you can simply say, “that was in another universe, so it doesn’t matter that the wench is dead.”

      There may be a good answer to this (one that doesn’t simply say, “screw the other universes”), perhaps along the lines of the one James and you endorse below, but I think James is right to note that it relies on radically reconceptualizing morality. Not being able to reduce net death or net suffering really changes things! Instead of evaluating the morality of actions on such grounds, we have to resort to very different ones, which really changes the nature of morality (maybe not if we didn’t actually care about net death or net suffering to begin with, but who really doesn’t?).

      I don’t think anything I’ve said relies on the notion of there being one true moral system. I definitely agree that any attempt to identify and systematize moral principles is doomed to failure. Every moral theory yields results that are incoherent or sufficiently counterintuitive to undermine it (notable example). However, in order for moral debate to not be a complete free-for-all, there must be some governing principles, some common ground. For example, there must be meta-ethical principles to distinguish moral from non-moral arguments. For purposes of this debate, I figured that the net-effect requirement for moral significance was sufficiently intuitively appealing to provide the necessary common ground (note that it’s a lot less demanding than assuming, e.g., a particular kind of utilitarianism) – I figured it’s a principle that most of my audience would be inclined to accept, if not totally on board with. I understand that there’s nothing I can really say to someone who claims he doesn’t accept it and insists that his moral practices are consistent with not accepting it. But I think such people are few and far between. There are probably more atheists in foxholes than there are actual Kantians. (Best thing I’ve read about Kant.)

    3. You ignore the other universes because you can't (even theoretically) affect what happens in them. It seems pretty intuitive to me that moral decisions should only involve variables and outcomes that you can actually affect. Occam's razor.

    4. To use another analogy, the past contains a whole lotta suffering, but you can't possibly change or affect it, so it doesn't affect your moral decisions. Neither are you paralyzed, from a moral standpoint, knowing that all past pain/suffering and all future pain/suffering will add up to infinite pain/suffering in the long view (since the universe is expanding and time will go on forever, just in this universe).

    5. I worry that the "can't possibly affect" distinction proves too much. Suppose you know that every time you save a drowning baby, a supervillain (let's call him Diceman) will necessarily drown a baby so as to cause suffering equal to the suffering you prevented by saving the baby. I don't think this obligates you to stop saving babies, but it certainly seems to shake up the moral calculus. I think we ordinarily regard saving babies as right because we expect it to reduce net suffering; under this hypothetical we have to appeal to some other principle.

      I don’t see why the same can’t be said about the multiverse hypo – it strikes at the foundation of our consequentialist practices and forces us to articulate some other basis for caring about our actions (like the one James sets forth below, which he correctly points out is quite different from a traditional moral justification).

      My friend who’s a philosopher encouraged me to read this short excerpt from a David Lewis book, which he said takes an approach to this issue similar to mine. I haven’t read it yet (or anything else by Lewis), but I’ve heard good things about him.

    6. "I think we ordinarily regard saving babies as right because we expect it to reduce net suffering." I'd argue that isn't true: we regard saving babies as right BECAUSE we regard saving babies as right. We have no idea if it will reduce net suffering. The baby could grow up to suffer its whole life, in which case it would have been a mercy killing (or mercy-not-dying). It could grow up to be a baby-murderer, etc. Most of our "moral" actions probably DO NOT reduce net suffering, and over time it all evens out. Your time would have been better spent inventing a cure for malaria than saving one baby ... but then overpopulation is leading to a probable collapse of the global economy, so how does that figure in?

      In other words I think you're starting with a version of morality that doesn't make sense even in a non-multiverse.

    7. Sorry, I meant mercy-letting-it-dying


    8. Those sorts of complications are why I said expect it to reduce net suffering, which I think is a reasonable, albeit far from certain, expectation in an ordinary drowning-baby scenario. If we had good reason to believe that a drowning baby would grow up to be a mass-murderer, I think the case for saving it would be a lot weaker, maybe nonexistent.

      The fact that you seem to feel the same way suggests that you, too, recognize that there is more of a principle at issue here than simply "saving babies is right because saving babies is right." (If that were the only principle, then it seems we'd have to regard saving a baby in the Diceman hypo as equally right as saving the baby under ordinary circumstances!) I think it's "suffering is bad" that is the irreducible principle, and is what enables us to take positions like "saving babies is ordinarily right, but not when we expect it to cause sufficiently greater suffering." Without appealing to such a principle, how could we distinguish between different drowning-baby scenarios?

      To be clear, I'm not saying the one true moral principle is that we should always act so as to reduce expected net suffering; I don't need to say anything that strong. I'm just saying that some net-suffering-based principle seems to be implicit in both your and my thinking -- it's what we ultimately care about.

    9. I don't think saving babies is obviously, objectively right, I think it's "right" in our current moral framework, which breaks down in a high-res multiverse.

      I'm saying I think you and me and everyone we know operate using a framework that necessarily leaves a lot of variables out. We're already leaving out a LOT. Other universes fall way beyond stuff we're already leaving out of our calculations.

    10. I don't think it's obvious that "[o]ther universes fall way beyond stuff we're already leaving out of our calculations," if we assume that other universes exist and are just as real as this one (the assumption I've been operating under all along).

      The high res stuff we leave out when we, e.g., take the "intentional stance" and model people as agents for practical purposes such as making moral judgments is stuff that (i) isn't really helpful to understanding people's behavior and (ii) would be really inefficient to include. In contrast, the stuff from other universes that I'm arguing we maybe shouldn't leave out is the same kind of stuff we care about in ours -- drowning babies, other versions of ourselves, etc.

      This is why I was bothered by your use of "resolution fallacy" to characterize caring about other universes -- caring about low res stuff in other universes is very different from caring about high res stuff in this universe. The arguments people make against each type of caring reflect this. People say high res stuff in this universe is morally irrelevant because it doesn't help understand people's behavior, whereas people say low res stuff in other universes is morally irrelevant because we can't affect it. I'm on board with the first argument, but the second doesn't seem to cut it (as illustrated by my Diceman hypo -- you can't affect him, but I think his existence would definitely be morally relevant). So we either need a better account of why we shouldn't care about other universes, or we need to accept that the existence of the multiverse would muck with our moral practices (because we could no longer appeal to net-effect rationales; we could no longer say x is bad because it increases net suffering, we would instead have to reconceive of morality, as James explains in his comment below).

    11. I feel like you're ignoring pertinent arguments I'm making and insisting upon stuff that doesn't make any sense to me so I'm going to bow out of this discussion. Tweets may be short but the slow-mo nature of this is equally frustrating to me.

    12. No worries. Definitely trying to argue in good faith, but I may just be coming at this from too different a place, in part because I haven't read any of the stuff that informs your thinking. (I'll get around to it.)

    13. Just noticed your summation of the post-- I think you're right that we're starting with completely different assumptions. For example I don't agree with your definition of morality in #1. I think we are unable to judge net effects on the world so we give far, far more weight to our immediate circumstances (how will this benefit me, my wife, my kids, my extended family, America, etc.). One aspect of this is making choices we are more comfortable living with (choosing to be in that universe in James's formulation). So even if the baby dies 2 hours later, you save the baby if you know you could live with yourself if you didn't save the baby. (Or if you think people would shun you if they knew you didn't save the baby.) It improves your own situation even if it doesn't ultimately help the baby or the rest of the world at all.

    14. Ugh. I mean: you save the baby if you know you *couldn't* live with yourself if you didn't save the baby.

    15. Yeah, I needed to start with an assumption like #1 in order for the multiverse to be morally problematic (if you don't think the rightness of saving babies is due to its saving net lives or reducing net suffering, then the multiverse ain't no thang), but I didn't realize how controversial this assumption is.

      And I think the reason is that, based on my Philosophy 101-level knowledge of ethical theory, I find consequentialism persuasive, despite the various powerful objections to it. (I guess I think it's the worst moral theory except for all the others.) I don't actually believe that "minimize net suffering" is the one true moral principle or anything like that (all it takes to undermine a grand principle is one sufficiently counterintuitive or incoherent application, and there are plenty), but I've tended to find consequentialist arguments compelling (e.g. I think I should be a vegetarian on Peter Singer grounds). And these arguments rest on the notion that what makes something good or bad is its net effects: these may be impossible to precisely calculate, but they are what really matter, because minimizing their suffering/maximizing their enjoyment is what people (and animals) ultimately care about. So if you're a consequentialist, the multiverse really shakes things up, because suddenly actions don't have net effects, and if you want to justify morally caring about your actions, you must find some other grounds.

      Of course, if you're not a consequentialist, this problem doesn't arise. If morality comes down to standards such as being able to live with yourself, then you can simply keep on saving babies, even if you know you’re not actually making a difference.

      The thing is, I want to believe that, if pushed, even people who act as if their immediate circumstances have far, far more weight would acknowledge that they have no good reason for prioritizing, or at least prioritizing so heavily, these circumstances, just as they might acknowledge that they have no good reason for ignoring or minimizing the interests of animals.

      But, of course, wanting doesn’t make it so. Just because I was convinced by Peter Singer in high school doesn’t mean that everyone is, in their heart of hearts, a consequentialist. And perhaps if I read more about this stuff, I’d even change my mind.

      Until then, I rest my case.

    16. I would for sure find it compelling if I thought it was possible for everyone to live that way, even in principle. But I think to aim for something close to that (minimal net suffering) you're forced to define a realm in which you're measuring that effect, and that realm is bound by both time and space.

    17. Yeah I think the new-to-me thing about your resolution argument is that it is an epistemological objection to some varieties of consequentialism; it seems broadly correct to me that you cannot assign enough credence to a world that you can't sense _in principle_ for it to make sense to care about suffering in that world, and that a workable-in-principle consequentialism should only consider measurable-in-principle consequences. Alan does not seem to like this line of argument but I don't quite see why.

    18. Thanks: "a workable-in-principle consequentialism should only consider measurable-in-principle consequences" is a nicely succinct way to put it.

    19. I didn't realize that was the point of the resolution argument. Sorry if I was being thick, but I was never quite clear on the analogy between ignoring stuff like atoms when modeling people's behavior (actually high-res stuff), and ignoring people in other universes when making moral judgments (not high-res stuff, but removed in a different sense).

      Anyway, if Sarang's framing sums up the argument, then I think I understand. I take it the idea is that because other universes are causally independent from this one, any knowledge we have of those universes is not "measurable-in-principle," because we, by definition, can't obtain any data from them. Rather, we can only "know" about them theoretically, which isn't epistemologically sufficient to factor them into a consequentialist theory. I'm out of my depth here, but that doesn't sound crazy to me.

      It occurred to me that there have been two distinct lines of argument in this debate, and it would have been good to keep them separated (which may have happened if I had been clearer in my original post):
      A. Whether my assumption that our moral practices care about net effects is reasonable (the argument over whether our moral judgments are generally consequentialist or not).
      B. Whether, even granting my assumption #1, the multiverse is morally irrelevant (the resolution argument and the like).

      I was hoping to avoid A and focus on B, because I think B is the more original and interesting issue, but I understand how A can come under attack if it's not clearly laid out as an assumption (which I guess I didn't do until I added the summation). Oh well, at least we all seem to know where we're coming from (if not where we get off) now.

    20. I wouldn't say that was the whole point of the argument, but feel free to look at it that way if it makes more sense.

      I suspect that B is not the case even granting you A, but I was unwilling to grant you A, for the reason that A seems impossible even if you leave the multiverse out. Sorry! My bad.

  3. So in one version of the concept, you can't change the multiverse, you can only select your path through it. (Depending on how you conceive of identity, this may be incoherent, but bear with me.) So sure, there will be a universe in which the child drowns, but you will live out your life in the universe in which the child is saved. (This is what it means to choose to save the child. Another "you" lives in the other universe, but you you are in this one. Again, bear with me.)

    On that view, morality might come to look much more like aesthetics, or a branch of it. I want to live in a universe in which I saved the child. I want to live in a universe in which I don't steal candy bars. There's nothing "moral" about it in the traditional sense - the child still dies (in some other universe), the candy bar still gets stolen, etc. But there is a feeling that comes with doing the right thing, that is impossible to replicate - and I want to live in a universe where I have that feeling.

    This has the advantage of lining up pretty well with my subjective experience of morality. It has the disadvantage of introducing a bunch of very difficult questions.

    1. James, that's exactly the right way to look at it if you find that compelling. (I thought from your argument that you didn't.) You're never going to encounter that other "you."

      You should read the Brian Greene book. The version of the multiverse you're talking about isn't necessarily the result of any kind of "branching."

    2. Yeah, I guess Twitter isn't the ideal venue for these discussions. Yea, what may not have come across is that to me these ideas are fun to think about but not compelling at all. I am actually a hectoring moralist. I am like Richard III, despising everyone around me for having fun.

      I actually thought up the multiverse morality thing as a theory for a character in a story I was (thinking about) writing - this guy would justify his adultery with these airy philosophical theories, and then his world would come crashing down around him.

      But it turns out people had already thought of all of this - sadness. Still, as I said, I think it's kind of a fun idea to play with.

    3. (That "Yea," was unintentional but I kind of like it.)

    4. It was very Richard the Thirdian.

      LOL at "fun to think about but not compelling at all"