Valuism and X: how Valuism sheds light on other domains – Part 5 of the sequence on Valuism

By Spencer Greenberg and Amber Dawn Ace 

Image created using the A.I. DALL•E 2

This is the fifth and final part in my sequence of essays about my life philosophy, Valuism – here are the first, second, third, and fourth parts.

In previous posts, I’ve described Valuism – my life philosophy. I’ve also discussed how it could serve as a life philosophy for others. In this post, I discuss how a Valuist lens can help shed light on various fields and areas of inquiry.

Valuism and Effective Altruism

Effective Altruism is a community and social movement about “using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.”

Effective Altruists often operate from a hedonic utilitarian framework (trying to increase happiness and reduce suffering for all conscious beings). But Effective Altruism can alternatively be approached from a Valuist framework.

You can think of Valuist Effective Altruism as addressing the question of how to effectively increase the production of your altruistic intrinsic values within the time, effort, and focus you give to those values (as opposed to your other intrinsic values). If you’re an Effective Altruist, chances are two of your strongest intrinsic values relate to reducing suffering (or increasing happiness) and seeking truth.

For people with certain intrinsic values, Effective Altruism is a natural consequence of Valuism. To see this, consider a Valuist whose two strongest values are the happiness (and/or lack of suffering) of conscious beings and truth-seeking. Such a Valuist would naturally want to increase global happiness (and/or reduce global suffering) in highly effective ways while seeing the world impartially (e.g., by using reason and evidence to guide their understanding). This aligns closely with the mission of Effective Altruism.

For more on the relationship between Effective Altruism and Valuism, see this post.

Valuism and existential risk

Potential existential risks (such as threats from nuclear war, bioterrorism, and advanced A.I.) are a major area of focus for many Effective Altruists. By the lights of most people’s intrinsic values, existential catastrophe would be incredibly bad: existential risks threaten many of the things that humans value (happiness, pleasure, learning, achievement, freedom, longevity, legacy, virtue, and so on). So for most people, Valuism is compatible with caring about existential risk reduction (depending on one’s estimates of the relevant probabilities).

Valuism and utopias

Utopias are hard to construct. Sure, we pretty much all want a world without poverty and disease, but it’s hard to agree on the specific details beyond avoiding bad things. If we go all-in on one intrinsic value, we end up with a world that seems like a dystopia to many. For instance, a utopia according to hedonic utilitarianism might look like attaching each of our brains to a bliss-generating machine while we do nothing for the rest of our lives, or like filling the universe with tiny algorithms that experience maximal bliss per unit of energy. Of course, these are horrifying outcomes for many people.

If we maximize utopia according to one or a small set of intrinsic values, it will very likely seem like a dystopia according to someone with other intrinsic values. To construct a utopia that is not a dystopia to many, we should make sure that it includes high levels of a wide range of intrinsic values, keeping these in balance rather than going all-in on a small set of values.

Put another way, if we preserve a wide range of different intrinsic values when constructing potential utopias, we protect ourselves against various failure modes (see the toy sketch after this list). For instance:

  • The intrinsic value of avoiding suffering protects us from a world with a great deal of pain and suffering.
  • The intrinsic value of freedom protects us from a failure mode of forced wireheading.
  • The intrinsic value of truth protects us from a failure mode where we’re all unknowingly in the Matrix (e.g., being used for a purpose unknown to us) or living under an authoritarian world government that keeps the populace happy through delusion.
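
To make the point concrete, here’s a toy sketch (my construction, not from the essay; the value names, numbers, and aggregation rules are all invented) contrasting an aggregator that simply sums value with one that scores a world by the intrinsic value it provides least of:

```python
# Hypothetical illustration: why "going all-in" on one intrinsic value
# can yield a dystopia by the lights of other values.

ALLOCATIONS = [
    {"pleasure": 100, "freedom": 0, "truth": 0},   # all-in on one value
    {"pleasure": 34, "freedom": 33, "truth": 33},  # balanced across values
]

def total_value(world):
    """Simple sum: this aggregator is indifferent to balance."""
    return sum(world.values())

def worst_value(world):
    """Maximin: score a world by the intrinsic value it provides *least* of."""
    return min(world.values())

for world in ALLOCATIONS:
    print(world, "| sum:", total_value(world), "| worst-off value:", worst_value(world))

# Both worlds have roughly the same total, but the all-in world scores zero
# on maximin: a dystopia for anyone who intrinsically values freedom or truth.
```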

Valuism and worldviews

Worldviews usually come with a set of shared intrinsic values. These are the strong intrinsic values that most (though not all) people with that worldview have in common. Of course, in most cases, in addition to these shared intrinsic values, each individual will also have other intrinsic values that are not shared by most people with their worldview. You can learn more about the interface between worldviews and intrinsic values in our essay on worldviews here.

Valuism and mental health 

Mental health may have interesting connections to intrinsic values. For instance, here’s an oversimplified model of anxiety and depression that I find usefully predictive (I developed this in collaboration with my colleague Amanda Metskas):

Anxiety occurs when you think there is a chance that something you intrinsically value may be lost. Anxiety tends to be worse when you perceive the chance of this happening as higher, when you perceive the threatened value as more important, or when the potential loss is nearer in time.

Depression occurs when you’re convinced you can’t create sufficient intrinsic value in your future. This could be because you think the things you value most are lost forever, because you see yourself as useless at achieving what you value, or for other reasons.
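
Read as a rough formula, the model might look something like the sketch below. The multiplicative form, the 0-to-1 scales, and the threshold are my assumptions; the essay only says that anxiety rises with each factor and that depression involves expecting insufficient future intrinsic value.

```python
# A minimal sketch of the anxiety/depression model described above.
# Functional forms and scales are assumptions, not the authors' claims.

def anxiety(p_loss: float, importance: float, proximity: float) -> float:
    """Anxiety rises with the perceived chance of losing something you
    intrinsically value (p_loss, 0-1), how important that value is to you
    (importance, 0-1), and how near in time the potential loss is
    (proximity, 0-1)."""
    return p_loss * importance * proximity

def depressed(expected_future_value: float, sufficient_value: float) -> bool:
    """On this model, depression sets in when you're convinced your future
    can't produce enough of what you intrinsically value."""
    return expected_future_value < sufficient_value

# e.g., a likely, imminent threat to something you care about deeply:
print(anxiety(p_loss=0.8, importance=0.9, proximity=0.9))  # high anxiety
print(depressed(expected_future_value=0.1, sufficient_value=0.5))  # True
```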

Valuism and animals

What do animals care about? Some animals (e.g., some insects) may not be conscious (i.e., there may be nothing it is like to be them), in which case it may not matter what they care about. But for conscious animals, it may be important to understand what they intrinsically value so that we know how to treat them ethically.

An intrinsic value perspective on animal ethics is that we should not deprive animals of the things they intrinsically value (and we should help them get the things they intrinsically value, at least when they are easy to provide). So, for instance, we can ask how much a chicken that lives almost its whole life in a small cage (as many chickens raised for food in the U.S. do) is able to have its intrinsic values met. The answer is probably very little.

But what are the sorts of things that animals may intrinsically value? I suspect there are a wide variety of animal intrinsic values and that they depend on species, but here are a few that may be especially common in mammals:

  • Pleasure
  • Not suffering
  • Not experiencing large amounts of fear, stress, and anxiety
  • Surviving
  • Agency (e.g., the ability to choose)
  • Bonding with other animals
  • Protection of their offspring

Valuism and economics

Economics often operates under the assumption that each person has a “utility function”: a function that maps states of the world to how good or bad the person takes those states to be, and that describes the choices people make. On this frame, if a person chooses A over B, their utility function assigns a higher value to A than to B. For example, if I buy a Mac rather than a PC at the same price, that must mean I predict the Mac will give me more utility (according to my utility function).

Valuism, on the other hand, says that when A is more intrinsically valuable to us than B (and equivalent along other dimensions, such as price), we will often choose A over B because A produces more of what we intrinsically value. Sometimes, though, we choose B instead: we confuse instrumental value with intrinsic value, we’re in the habit of doing B, we feel social pressure to do B, and so on.

In other words, choosing something is not the same as intrinsically valuing it, and ideally we want to construct a society where people get more of what they intrinsically value, not merely more of what they would choose.
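
Here’s a hedged sketch (my construction, not the author’s; the bias terms and numbers are invented) contrasting the economist’s revealed-preference reading of a choice with the Valuist one:

```python
# Economic frame: choosing A over B just *means* utility(A) > utility(B).
def chooses_a_over_b(utility_a: float, utility_b: float) -> bool:
    return utility_a > utility_b

# Valuist frame: the choice reflects intrinsic value *plus* distorting
# factors (habit, social pressure, confusing instrumental for intrinsic
# value), so the chosen option can produce less intrinsic value.
def valuist_choice(intrinsic_a: float, intrinsic_b: float,
                   habit_bias_b: float = 0.0,
                   social_pressure_b: float = 0.0) -> str:
    return "A" if intrinsic_a > intrinsic_b + habit_bias_b + social_pressure_b else "B"

# A is more intrinsically valuable, but habit and social pressure tip
# the actual choice to B -- choice and intrinsic value come apart:
print(valuist_choice(intrinsic_a=0.7, intrinsic_b=0.4,
                     habit_bias_b=0.2, social_pressure_b=0.2))  # "B"
```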

A classic example where intrinsic value and choice come apart is addictive products like cigarettes or video games with upsells: people sometimes choose to pay for them and use them way past the point of benefit, according to their own intrinsic values.

A similar issue comes up when people slip into treating every dollar of GDP, or each unit reduction of “deadweight loss,” as equally valuable. Imagine that an influencer gets all the hottest celebrities to start wearing the hair of a rare species of sloth, and the buzz convinces millions of people that it’s really cool, so consumers spend billions of dollars on sloth hair pieces. Unfortunately, the sloth hair is ugly, uncomfortable, and expensive, and making clothes out of it requires torturing the sloths. This will probably increase GDP, yet (on net) intrinsic value will almost certainly have been destroyed. There is no good reason to care about GDP for its own sake, but intrinsic values are precisely the things we care about for their own sake. While increasing GDP may often be aligned with producing more of what people intrinsically value (both now and in the future), in cases where GDP and the long-term production of intrinsic values come apart, I would argue that GDP is no longer a good measure of societal benefit.

Going back to the sloth hair example: a free market for sloth hair would, according to simple economic theory, reduce “deadweight loss” (relative to restrictions on its sale). And yet, producing the sloth hair will likely be net destructive to what people intrinsically value. We can imagine a multi-faceted accounting of how society is doing that takes productivity and wealth into account but goes beyond them to consider the extent to which people are producing their intrinsic values; productivity and wealth would be viewed as being in the service of intrinsic value production.

As a complement to GDP, we can think about measuring how well the people of a society get the things that they intrinsically value. For instance, attempting to measure:

  • How happy are they? 
  • To what extent are they accomplishing their goals? 
  • How free are they?
  • How meaningful are their relationships? 
  • How much are they suffering?

This is related to the Human Development Index, though that index includes items that are not intrinsic values, and it doesn’t cover all intrinsic values.

If we had such an accounting, different people would naturally rank societies differently (in terms of how good they are overall) because they value these intrinsic values to different extents.
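
As a hypothetical sketch of such an accounting (the dimension names mirror the list above, but the scores and weights are entirely invented), different weightings over the same measurements can legitimately produce different rankings:

```python
# Invented per-dimension scores (0-1) for two hypothetical societies.
SOCIETIES = {
    "A": {"happiness": 0.9, "goal_achievement": 0.5, "freedom": 0.4,
          "relationships": 0.7, "low_suffering": 0.8},
    "B": {"happiness": 0.6, "goal_achievement": 0.8, "freedom": 0.9,
          "relationships": 0.6, "low_suffering": 0.7},
}

def score(society: dict, weights: dict) -> float:
    """Weighted average over intrinsic-value dimensions. Different people
    supply different weights, so rankings differ."""
    return sum(weights[k] * society[k] for k in weights) / sum(weights.values())

hedonist = {"happiness": 3, "goal_achievement": 1, "freedom": 1,
            "relationships": 1, "low_suffering": 3}
freedom_first = {"happiness": 1, "goal_achievement": 1, "freedom": 3,
                 "relationships": 1, "low_suffering": 1}

for name, society in SOCIETIES.items():
    print(name, round(score(society, hedonist), 2),
          round(score(society, freedom_first), 2))

# The hedonist weighting ranks A above B; the freedom-heavy weighting
# ranks B above A -- same measurements, different orderings.
```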

As this post shows, a Valuist perspective may have something to say about many other areas, giving us a different way to look at topics like Effective Altruism, existential risk, utopias, worldviews, mental health, animal ethics, and economics.

You’ve just finished the fifth and final part in my sequence of essays on my life philosophy, Valuism – here are the first, second, third, and fourth parts.



Comments



  1. According to this philosophy, one should care about animal welfare (or any other ‘altruistic’ value) only if one has a corresponding intrinsic value. In other words, if a person is indifferent to animal suffering, it would not be a problem (for that person specifically) to torture animals.

    This is why I see this life philosophy as a good framework for living life in general, but I think moral realism is better for moral dilemmas: https://www.amazon.com/Dialogues-Ethical-Vegetarianism-Michael-Huemer/dp/1138328294

    One excellent way I see to defend moral realism is by asking yourself “Is it wrong to torture an innocent child?”

    I can’t grasp how, for some people, the answer to this question could be “not wrong,” or how this question could lack an objective answer.
    It is like asking yourself whether 3 > 2.

  2. Great article! Understanding how Valuism connects to various concepts has deepened my grasp of Valuism itself. I’m quite curious about how Valuism interacts with ethics and morals. What happens if I value something that is deemed unethical (e.g., hurting people, stealing, etc.)?