
Tensions between moral anti-realism and effective altruism

I believe I’ve identified a philosophical confusion among people who state that they are both moral anti-realists and Effective Altruists (EAs), and I’d be really interested in getting your thoughts on it. Fortunately, I think this flaw can be resolved (I’m working on an essay about how), but I’d like to be sure the flaw is really there first (hence why I’m asking for your feedback now)!

People that this essay is not about

Some Effective Altruists believe that objective moral truth exists (i.e., they are “moral realists”). They think that statements like “it’s wrong to hurt innocent people for no reason” are the sort of statements that can be true or false, much like the statement “there is a table in my room” can be true or false.

I disagree that there is such a thing as objective moral truth, but I at least understand what these folks are doing – they believe there is an objective answer to the question of “what is good?” and then they are trying to figure out that answer and live by it. 

This usually ends up being some flavor of utilitarianism, plus perhaps some moral uncertainty that gives weight to other theories, such as protecting rights. In the 2019 EA survey, 70% of EAs identified with utilitarianism (though the survey did not distinguish between those who believe in objective moral truth and those who don’t but hold utilitarian ethics anyway). I think this group of EAs who believe in objective moral truth is mistaken but coherent. They are the first group listed in the poll I took below, and they are NOT the group I am focusing on in this post.

The flaw I see:

The group I am focusing on is represented by the second bar in the poll above. Many (most?) Effective Altruists deny that there is objective moral truth or think that objective moral truth is unlikely. And yet I still hear quite a number of such EAs say things like:

• “We should maximize utility.”

• “The only thing I care about is increasing utility for conscious beings.”

• “The only thing that matters is the utility of conscious beings.”

• “The only value I endorse is maximizing utility.”

(Note that by “utility” here, they mean something like happiness minus suffering, not “utility” in the economics sense of preference satisfaction [unless they are preference utilitarians] or in the sense of the von Neumann–Morgenstern theorem.)
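
As a rough formalization of the distinction being drawn in that note (my own gloss, not something any of the EAs quoted above wrote down): the hedonistic notion of “utility” is an aggregate of experiences, summed over all conscious beings i (where h_i and s_i are the happiness and suffering of being i),

U_{\text{hedonic}} = \sum_{i} (h_i - s_i),

whereas a von Neumann–Morgenstern utility function is just any function u that numerically represents one agent’s preferences over lotteries A and B,

A \succsim B \iff \mathbb{E}_{A}[u] \ge \mathbb{E}_{B}[u],

and by itself says nothing about anyone else’s happiness or suffering.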

I find these statements by Effective Altruists very strange. If I try to figure out what they are claiming, I see a few possible disambiguations:


Possibility 1 – Contradictory beliefs: they could believe that maximizing utility is objectively good even though they don’t believe in objective moral truth – which seems to me to be a blatant contradiction in their beliefs. Similarly, they could be claiming that while they have other intrinsic values, they think they SHOULD only value utility (and should value all units of utility equally). But then, what does the word “should” mean here? On what grounds “should” you do anything if there is no objective moral truth?


Possibility 2 – Misperception of the self: they could be claiming that while there is no objective answer to what’s good, the only intrinsic value they have (i.e., the only thing they value as an end in and of itself, not as a means to an end, that matters to them even if it gets them nothing else) is the utility of conscious beings (and that all units of utility are equal). In other words, they are making an empirical claim about their mind (and what it assigns value to).

Here I think they are (in almost every case, and perhaps in every single case) empirically wrong about their own mind. This is just not how human minds work.

If we think of the neural network composing the human mind as having different operations it can do (e.g., prediction, imagination, etc.), one of those operations is assigning value to states of the world. When people do this and pay close attention, they will realize that they don’t value the utility of all conscious beings equally and that they value things other than utility. While I can’t prove that there is literally no person on earth whose only intrinsic value is utility, even the most utilitarian people I’ve ever met turn out, when I question them, to have values other than utility.

And it stands to reason that human minds (being products of evolution) are not the sort of things likely to value only utility, or to value the utility of all beings equally. For instance, just about everyone I’ve ever met would be willing to sacrifice at least 1.1 strangers to save one person they love (even if they think that person wouldn’t have a higher-than-average impact or a happier-than-average life). I certainly would, and I don’t feel bad about that!

One very strong intrinsic value I see in the effective altruism community is that of truth – many EAs think you should try never to lie and are suspicious even of marketing. They sometimes try to justify this on utilitarian grounds (and indeed, it can often be beneficial from a utilitarian perspective not to lie). But this sometimes seems like rationalization – a utilitarian agent would lie whenever lying produces a higher expected value of utility (though potentially only if it was using naive Causal Decision Theory (CDT) – H/T to Linchuan Zang for pointing this out), whereas many EAs adopt a hard-and-fast rule against lying (saying you should try to NEVER lie). This is easily explained as EAs having an intrinsic value of truth that they don’t want to accept as an intrinsic value (and so try to explain in terms of the “socially acceptable” value of utility).
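
To spell out the expected-value reasoning being gestured at here (my own sketch of the naive-CDT case, not a rule any particular EA has endorsed): such an agent would lie exactly when

\mathbb{E}[U \mid \text{lie}] > \mathbb{E}[U \mid \text{tell the truth}],

and would tell the truth otherwise; a hard-and-fast “never lie” rule does not follow from this inequality alone.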

As a side note, I find it upsetting when EAs try to justify one of their (non-utility) intrinsic values in terms of global utility because they think they are only supposed to value utility. For instance, an EA once told me that the reason they have friends is that it helps them have a great impact on the world. I did not believe them (though I did not think they were intentionally lying). I interpreted their statement as a harmful form of self-delusion (trying to reframe their attempts to produce their intrinsic values so that they conform to what they feel their values are “supposed” to be).


Possibility 3 – Tyranny of the analytical mind: they could be saying that while they may have a bunch of intrinsic values, their analytical mind only “endorses” their utility value. But what does “endorse” mean here? Maybe they mean that, while they feel the pull of various intrinsic values, the logical part of their mind only feels the utility pull. But then why should their analytical mind have a veto over the other intrinsic values? Maybe they believe their other intrinsic values are “illogical,” whereas the utility value is logical. But on what grounds is that claim made? If they could prove logically that only utility mattered, wouldn’t we just be back at Possibility 1 – the claim that there is objective moral truth – which they don’t believe?

Intrinsic values are just not the sort of thing that can be logically proven, and if they are not that sort of thing, then why give preference to just that one part of your mind? I’m genuinely confused.


Possibility 4 – Maybe they mean something else that I just don’t see. What else could they mean? I’d love to know what you think (or if you’re one of these people)!

It’s certainly possible that there are very sensible interpretations for their claims that I’m just not seeing.


In conclusion: Effective Altruists who think there is objective moral truth are, I believe, wrong, but I understand what they are doing (this post is not about them). Those who don’t believe in objective moral truth (which I think is the majority?) seem to me to be making some kind of mistake when their sole focus is utility. Of course, I could be wrong.

My personal philosophy – which I call Valuism (and which I am working on an essay about) – attempts to deal with this specific philosophical issue (in a limited context).

But in the meantime, I’d love to hear your thoughts on this topic! What do you think? If you are an EA who doesn’t believe in objective moral truth, but you’re convinced that only utility matters, what do YOU mean by that? And even if you don’t identify with that view, what do you think might be happening here that I might have missed or misunderstood?

Thanks for reading this and for any thoughts you are up for sharing with me!


Summarizing responses to this post


Edit (1 September 2022): after I posted an earlier draft of this post on social media, it received hundreds of comments, some of which tried to explain why the commenter is utilitarian despite being an anti-realist, or presented alternative possibilities not delineated in the original post.

One thing that’s abundantly clear is that there is absolutely no consensus on how to handle the critique in the above post. People offered a really wide variety of explanations for why they identify with utilitarianism despite not believing in objective moral truth.

Here are some of the most common types of responses given:


1. Responses related to Possibility 1 (i.e., addressing “contradictory beliefs”)

      1.1 Accepting contradiction: many people have contradictory beliefs (and contradictory beliefs may be no more common in moral anti-realist EAs than in other people), and some people are willing to lean into them. As one commenter put it: “many sets of intuitions are *wrong* if you take coherence as axiomatic.” Some people are just okay with self-contradiction.

      1.2 Beliefs that aren’t actually contradictory: my explanation of Possibility 1 might interpret “we should maximize utility” differently from how some people who say that phrase mean it. Here are some potential interpretations on which that statement might actually be consistent with anti-realist views:

           1.2.1 Personal preference: some people do not intend for statements like “we should maximize utility” to be representative of moral truth but instead mean it as an expression of a personal preference that they have for maximizing utility, or an expression of the fact that they will avoid feeling reflexively guilty if they aim to maximize utility, or a statement that they will have a positive emotional response if they focus on maximizing utility. However, these responses still seem to fall victim to another critique from the post, which is the arbitrariness of giving preference to certain feelings/preferences over other ones.

           1.2.2 Metaethical constructivism: this is defined as “the view that insofar as there are normative truths, they are not fixed by normative facts that are independent of what rational agents would agree to under some specified conditions of choice” (source). Some say this is “best understood as a form of ‘expressivism’”. Constructivism seems compatible with both moral anti-realism and utilitarianism, but it’s unclear to me how many effective altruists would hold this view (I think very few).

           1.2.3 Valuing a different kind of utility: some people may mean “we should maximize utility” in reference to a different kind of “utility” than the classic hedonistic utilitarian interpretation of the word. For example, “utility” is sometimes used to mean a “mathematical function serving as a representation of whatever one cares about.” On such an interpretation, someone who says they are trying to maximize utility is presumably referring to maximizing their own utility function (rather than some objective one) – and so they are not the focus of this post.


2. Responses related to Possibility 2 (i.e., “misperception of the self”)

      2.1 Second-order desires: people might not be misperceiving themselves at all but might instead be talking about second-order desires or desires about desires. As one commenter put it: “It might be that, though someone empirically does NOT possess desires consistent with maximising the utility of conscious beings, they possess the desire to possess these desires. They want to be the sort of person who does have a genuine utilitarian psychology, even if they don’t possess one now. This may explain the motivation to act as a utilitarian (most of the time) [despite being a moral anti-realist].” Though in this case, it’s unclear why they would want to, or think they should, give those second-order desires preference over their first-order desires.

      2.2 Unshakable realist intuitions: people might be acting and/or feeling as if utilitarianism is true while also believing (upon reflection) that moral realism isn’t true. One person commented that “many of our intuition[s] are based on a realist world even when rationally we do not believe in one, so it is easy to accidentally make arguments that work only in a realist world, and then try to rationalize the argument afterwards to somehow work anyway.”

      2.3 Mislabeling one’s metaethics: instead of misperceiving what they value, some people might be mislabeling themselves as moral anti-realists even though they aren’t. In other words, some people who call themselves anti-realists might actually be moral realists without realizing it (e.g., because they haven’t reflected on it). One commenter thought that this would be a common phenomenon: “They are expressing a real, but subjective, truth ‘It is true to me that everyone should maximize utility’…I think that ‘deep down’ you will find that in fact most effective altruists and indeed most people are moral realists but under-theorized ones. Even the anti-realists tend to act as if they were moral realists.”

      2.4 Choosing one’s own values: some argue that you can choose your values for yourself (though it’s unclear by what process one would make such a choice, or whether such a choice really can be made – it may hinge on what is meant by “values”). As one of the commenters put it: “It seems like you are assuming in [Possibility 2] that there is an objective answer to what a mind values, e.g. based on how it behaves. For one thing, it’s not clear that that is right in general. But a particular alternative that interests me here: one could have a model where one can decide what to value, and to the extent that one’s behavior doesn’t match that, one’s behavior is in error.” In other words, according to this view, maybe an individual themselves is the only person who can define their intrinsic values, and there is no objectively correct opinion for them to hold about this. But then, by what criteria (or based on what values) is a person deciding on what their values should be?


3. Reasons why Possibility 3 (i.e., “tyranny of the analytical mind”) may not be a confused approach

      3.1 Identifying with the analytic part of the mind: some people feel that choosing to endorse a particular framework (and choosing to endorse some values over other ones) is part of who they are – part of (or even the most important part of) their self-concept. In other words, the reflective part of them making that choice feels to them like it is “who they are” more so than other parts of them that have other preferences. Here’s how one person explained it: “For my part, the part of my mind that examines my moral intuitions and decides whether I want to act on them feels about as ‘me’ as anything gets.” Another person thought that endorsing some values over others makes sense because many people think that their “best” self would live “in accordance with the judgments they make based on arguments and thought experiments.” Another proposed explanation for people being guided by the analytic mind is that being guided in this way might be a normal feature of human psychology (which at least one person saw as needing no further explanation). Yet another explanation put forward was that some people can have a completely arbitrary “personal taste” for giving their analytical mind a veto over other parts of their mind (and, according to this argument, those people don’t need a further justification beyond their arbitrary taste).

      3.2 Simplicity and coherence meta-values: having fewer intrinsic values or having fewer intrinsic values that one allows to dictate their behavior can (some argue) be justified by having an intrinsic value of coherence, simplicity, or consistency. As one commenter put it: “I genuinely think I just have utilitarian intrinsic values. [It seems] relevant here that I also value coherence (in a non-moral sense, probably as an epistemic virtue or something), so if I find myself thinking something that is incoherent with another value of mine, I can debate & discard the less important one.” 


Possibility 4: Moral uncertainty

      4.1 Meta-moral uncertainty – believing that realism might be true: people who don’t identify as moral realists might still feel there is some possibility that moral realism is correct and might act as if it was correct (at least to some degree – say, in proportion to how much weight they give this possibility compared to other action-guiding beliefs). As one commenter put it: “Why do I keep donating (and doing other EA things), albeit to a lesser extent [since switching from moral realism to moral anti-realism]? The main reason is (meta) moral uncertainty: I still feel that it is possible that moral realism is correct, and so I think it should have some say over my behavior.”

      4.2 Misinterpreting moral uncertainty as anti-realism: People who think that their own beliefs are not necessarily objectively true (due to moral uncertainty) might conclude that they must be moral anti-realists, but they might be mistaken in calling themselves that. As one commenter explained it: “believing in moral objectivity is different from believing we are actually able to parse the true moral weights in practice.”


Possibility 5: Precommitment and cooperation arguments

      5.1 Benefiting from pre-committing to impartiality: some argue that acting as if classical utilitarianism is true might be justified on grounds related to resolving collective action problems (without having to believe that moral realism is true). For instance, one commenter wrote: “Being impartial between oneself (and one’s friends / family) vs. random people isn’t something that any human naturally feels, but it’s a ‘cooperate’ move in a global coordination game. If we’d all be better off if we acted this way, then we want a situation where everyone makes a binding commitment to act impartially. It’s hard to do that, but we can approximate it through norms. So EAs might want to endorse this without feeling it.” Though presumably, if this were the justification for utilitarianism, they would then switch to a different moral theory if they thought it better solved collective action problems (e.g., if they came to believe virtue ethics better solved collective action problems).

      5.2 Benefiting from pre-committing to preference utilitarianism: some commenters pointed out that preference utilitarianism could also be justified on self-interested grounds (this post was not intended to be about other forms of utilitarianism such as preference utilitarianism, but it was edited to clarify that only after some people had started commenting). As one commenter put it: “If we’re viewing morality as playing a counterfactual game with others, we should take actions to benefit them in a way essentially identically to preference utilitarianism. That doesn’t require any objective morality, it only requires self-interest and buying into the idea that you should pre-commit to a theory of morality that, if many people embraced it, would increase your personal preferences.” Though in such cases (if they were actually optimizing for self-interest), it seems strange that they would choose a moral theory in which their interests count equally with those of people they will never encounter and never be in collective action problems with. (Some might argue that this would make more sense if the person endorsed a form of multiverse-wide cooperation via superrationality, though it’s unclear how this resolves more concrete/real-life collective action problems).


Possibility 6: Social forces – as Tyler Alterman put it (when I was discussing this post with him – he’s named here with permission): “[I felt] that [for some EAs] their actual beliefs were at odds with the cultural norms of other smart people (EAs) that they felt alignment with, so they stopped paying attention to their actual beliefs. I think this is what happened to me for a while. There was an element of wanting to fit in. But then there is an element of – there are so many smart people here [in EA]… EA is full of Oxford philosophers – they must have figured this out already; there must be some obvious answer for my confusion. So I just went along with the obligation and normative language and lifestyle it entailed.” Social forces can be powerful, and in some cases, an explanation for human behavior can be as simple as: the other people around me who I respect or want the approval of do this thing or seem convinced this thing is true, so I do this thing and am convinced it is true.


This essay was first written on August 14, 2022, first appeared on this site on August 19, 2022, and was edited (to incorporate a summary of people’s responses) on September 1, 2022, with help from Clare Harris.



Comments



  1. I’d like to suggest another possibility. I think that some people who don’t identify as moral realists might nevertheless assign *some non-zero probability* to moral realism. (I am one of those people – I don’t think there’s any way to prove that objective moral truth exists, but neither is there any way to disprove it, so I can’t rule out the possibility.)

    If there’s a non-zero probability that there’s such a thing as moral truth, it follows (in my view) that one should try to live morally to the extent that that’s possible. That’s what I aspire to do, and for that reason, the general principles of effective altruism happen to be the principles I’ve been aspiring to live by for years now.

    So, in my case, even though the label of moral realist doesn’t fit, the reason that I nevertheless feel very strongly motivated to create moral value (and to reduce disvalue) – to the best of my ability, anyway – is my belief that there’s a non-zero probability (even if it’s small in magnitude) of there being such a thing as moral truth. (This reasoning doesn’t help me to decide *which* things are morally valuable, of course! But it does motivate me to be very curious about what *might* be morally valuable.)

  2. I don’t identify with the combination of views that is at issue here (utilitarian effective altruist and anti-realist), but also don’t think that it’s an inconsistent or illogical stance. Here’s how I imagine an anti-realist utilitarian EA would (or could) justify their views:

    “I derive morality (‘what you should do’) from my subjective values. When I sit down to reflect on my values, I notice that I care deeply about the utility of conscious beings and that I rank that more highly than any other abstract value. It is true that I have all sorts of preferences that pull me strongly towards one or another action in any given situation, but when I have time and energy to think about these preferences, I find that either they pale in comparison to the more fundamental value encapsulated in utilitarianism [a desire to have a fancy car might be an example of such a preference], or that I actually only value them instrumentally [e.g., I think many utilitarian EAs will say that they care about inequality because of the suffering endured and not because of something intrinsic to the inequality]. I want to live in accordance with the values I find myself caring about in my more reflective moments, and I want to do so consistently.
    This doesn’t mean that only utility maximization is a logical value and is therefore superior to the others; rather, it is the fact that I care so deeply about utility maximization upon reflection which makes me rank this above any other conceivable value.”

    Another thing, which can but doesn’t have to go together with the above, is related to your possibility 2: My impression is that EAs, including anti-realist ones, mostly don’t claim that everything ought to be subsumed to the utility maximization imperative. Several of them, in my experience, consider utility maximization sufficiently important (given their subjective values) to do many EA-ish things but not so important that they will give up on all of their other pursuits and preferences.

  3. Is this an accurate representation of your view?: Moral anti-realists can’t endorse any value or set of values other than the complete set of values they actually have. They cannot use reason and logic to choose a different axiology than what they actually have. If a moral anti-realist has two values which contradict or are incompatible, the anti-realists must endorse both anyways and cannot choose between them. If they use reason and logic in any way to choose between values, then they are in fact moral realists.

  4. Huh. When I say “the only thing I care about is maximizing utility” I am referring to a social construction of utility, not something objective about the world.

    There’s no territory, but it’s useful for humans to have a shared map.

  5. Many of your responses to the possibilities seem to hinge on the thing under discussion being arbitrary. However, I don’t see why this is a problem. Unless one believes in moral truth, what reason could there be for one’s basic values (which themselves are often reasons for action)? I’m not quite sure what precisely you mean by intrinsic values, but it seems you mean ‘those things which your brain assigns terminal value to’. Given this, one could for example train one’s brain to stop (or reduce) such assigning. If I grow up in a sexist household, I may assign terminal value to men, but would hopefully wish to train my brain out of this habit. Perhaps my choice to do so really is ultimately arbitrary, but what does this matter? If I’m not aligning with moral truth when I train my brain towards a more utilitarian psychology, then maybe I’m just making an arbitrary choice to do so – but what philosophically would the issue with this be?