Suppose that humans one day (whether 20 or 200 years from now) create far more advanced technology, with power sufficient to reshape what the world is like, and even to determine what humanity becomes.
If that comes to pass, we’re going to be in trouble unless we also reach a better understanding of our own values and resolve the inconsistencies in them. We don’t have sufficient clarity on our values to say what the ideal world looks like. So if we ever have the power to truly redesign the world and ourselves, we can’t be sure what we should work toward. The options would be incredibly different (even potentially insane or nightmarish to some) depending on personal philosophies and values.
WHAT WE CAN AGREE ON
Suppose that (as is the case today) we have only a little power to change the world for the better: say, only enough to fix a handful of problems in our world. While humanity is very complex, with many differing values, there are several changes that we can pretty much all agree would make the world better: the “easy to agree on” portion of world optimization.
For instance, nearly everyone can agree that, all else being equal, it’s better if:
- fewer people go hungry
- there is less disease
- murder rates fall
- fewer people are depressed
- more people know how to read
- people who want work can find work to do
While people certainly disagree about how best to bring these changes about, we’d likely all celebrate them.
WITH GREAT POWER COMES GREAT CONFUSION
If incredibly advanced technology gives humans enormous power to reshape our environment (e.g., somehow tapping into nearly endless supplies of energy) and what we are like (e.g., brain modification or more impactful education and training), we will easily tackle the above issues. But once the world has transitioned from its current state to one where these mutual priorities have been addressed, it’s unclear where to go from there. In other words, there are endless valuable optimizations to the world, but values differ so much that we lack clear direction. What seems an optimization to one person may be the opposite to another.
POSSIBILITIES
Consider these possibilities for an “optimal” world, each of which holds up a different human value as its optimization goal:
1. Utilia: what truly matters is that the beings that exist are as happy as possible, so we should have a really small population and throw all resources into making each being happy by whatever means necessary (e.g. even if it means massively restricting the freedom of those beings, for instance preventing nearly all procreation, or keeping each being in a never-ceasing state of bliss that it can’t get out of).
2. Utilon: what truly matters is the total sum of all happiness, so making the ideal world involves maximizing population growth while making sure that living beings are just happy enough so that the number of beings times the happiness per being is as large as possible (even if this means having incredibly large numbers of barely happy or minimally conscious but still slightly happy beings, or having all beings live highly restrictive existences in happiness-generating machines).
3. Libertia: what truly matters is individual freedom, so what we should optimize for is giving every being the maximal choice for self-determination (so long as it doesn’t come at the expense of other beings’ freedom to self-determine). This means, presumably, that we should create technology for each person to fully alter their own environment, body, face, personality, and values. It also means, presumably, that people should ideally be free to choose what they spend all their time doing, including choosing to do things that are very harmful to themselves or persuading others to (freely) choose to do things that are very harmful to those others.
4. Honoria: what truly matters is that people act in virtuous ways, so we should optimize the world to produce future generations that are maximally kind, honest, wise, courageous, fair, forgiving, humble, etc. Furthermore, if you think it’s not enough to be virtuous in a world with no temptation (since with no temptation anyone can be virtuous), and therefore that temptation is necessary for virtue to count, then we should also optimize the world to have numerous opportunities for sin, but train or modify humanity to be so virtuous that almost no one ever takes these opportunities to act un-virtuously. We may also have to purposely build adversity into the world, so that certain virtues (e.g. courage or empathy) can be demonstrated.
5. Deus: what truly matters is obeying the commandments of God (or gods, for polytheists), so everyone should be trained to follow God’s commandments as perfectly as possible, and to love/worship God maximally (if the commandments include loving or worshipping God). All temptations to violate God’s commandments should be removed, unless the religion views the act of resisting temptation as good in and of itself (rather than just acting in accordance with the commandments).
6. Kantar: what truly matters is that everyone lives according to principles that can be willed as universal rules for all people to follow, and that no one uses anyone else merely as a means to an end (but always also as an end in themselves). Hence, some set of universal, non-self-defeating rules should be developed (rules that never involve using other people merely as means), and all people in the future should be trained to adhere to these rules as closely as possible, while still having numerous (but almost never taken) opportunities to violate these maxims.
7. Angesia: what truly matters is preventing suffering, so the most important thing to do is to eliminate suffering for all beings (e.g. by redesigning genetics or our brains so that the moment we’re about to experience suffering we experience some distinct but neutral feeling instead – for instance replacing suffering with the experience of hearing the sound of a bell, with loudness proportional to how unpleasant that pain would have been, and the bell sound seeming to emanate from whatever part of us is in pain).
8. Natura: what matters is adherence to natural law, that is, the pre-existing order of nature. Hence, nature and the animal world should be preserved as pristinely as possible (as they existed in one of their more stable equilibria before humans came along), and humans should be prevented from engaging in any “unnatural” behaviors.
9. Equalia: what truly matters is fairness, so the world should be created so that all beings are exactly equal, with exactly the same training and opportunities. To maximize equality, everyone should presumably look the same, or maybe even be born from the same genetic material (with mutations minimized), and with identical resources apportioned to each person at birth.
10. Evol: evolution itself is what matters, so humanity should compete to optimize genetic fitness (i.e. survival and reproduction of genes), and whichever beings can replicate genes or copies of themselves at the fastest rate will (and should) dominate until perhaps even “fitter” organisms eventually overtake them.
11. Democrus: what truly matters is that people get to collectively choose the way the world is, so all aspects of the world should be put to a vote, and whatever people vote for should determine what world we all live in, even if that world is terrible for the minority (but benefits the majority), and even if some people are highly uninformed about what they are voting into existence.
12. Intellig: what truly matters is maximizing intelligence, hence we should bring into existence the most intelligent beings possible (e.g. by massively changing our neurology, or by building super intelligent A.I.) even if these more intelligent beings end up with values that bear little resemblance to our own, or optimize for goals very different from our own, and even if humanity (as we now know it) forever loses control of the future once these beings exist.
13. Desir: what matters is the satisfaction of human desire, so every person should be able to conjure up any (presumably simulated?) experience they want at any moment (e.g. to experience the taste of a particular food, or to experience what it’s like to believe you’ve just won the Nobel Prize in Literature, or to experience intense pleasurable states).
14. Humia: what truly matters are the things that modern human intuitive morality says matter, so we should create a world where people respect authority and social hierarchies, and where bad people are punished and good people rewarded, and where people are taught very effectively to be loyal to those who are loyal to them, and where people act purely (avoiding behaviors that many people’s intuitions would say are degrading or disgusting), and where people experience a minimal (but probably above zero) amount of suffering.
15. Minimi: we don’t know what world to create, so we should err on the side of caution and only take care of changing the stuff we can mostly agree on (e.g. reduce hunger, disease, depression, etc.), and then we should just stop trying to optimize the world after that, and hope that whatever equilibrium it happens to fall into by chance is a really great one (by virtue of us having taken care of the mutual priorities).
16. Earth: we leave the world as it is, in approximately the same terrible state that it’s in now.
If none of these worlds really seem all that compelling to you, and quite a few of these sound terrible or terrifying, then you and I are on the same page.
The upside of this is that any group that wields sufficiently great power (i.e. power to remake the world and to dictate what humanity is like) can’t help but consider philosophy. But philosophy is hard. If you pick any one thing that seems “good” and optimize the world for that, you are likely to end up with something pretty strange that many people might consider a dystopia rather than a utopia.
A real utopia would likely involve a mix of many of the aspects from the list above…but which aspects, and in which combinations? In other words: which world do we actually want humanity to eventually live in?
This post was inspired by discussions with Holden Karnofsky about similar topics.