Disclaimer 1: I don’t know any of the longtermism figureheads (or others I refer to below) personally, so if I misrepresent anyone’s views or words, please let me know. I will gladly amend my mistakes.
Disclaimer 2: I prefer big picture, broad stroke ideas and I don’t feel comfortable enough (yet?) in my knowledge about the longtermism view(s) to assume that I have any ‘specialist’ insights. Take what follows as starting points for further inquiry.
Longtermism 101
Think of longtermism as a spin-off of effective altruism. Effective altruism is the movement that - what’s in a name? - tries to do good as effectively as possible. If you’ve got $100 to donate, an effective altruist will use reason and evidence to figure out how that $100 will help the most people. Best return on investment, so to speak.
Longtermism is the idea that, because humanity is relatively young in evolutionary terms, there could be (a lot) more future people than present people. These future people, longtermists say, are just as morally important as today’s people. Best return on investment? Ensure the well-being of the many billions of people that might inhabit the future of humanity; might being the keyword. The method to do so often involves reducing existential risks - calamities that can wipe out humanity. Think AI value alignment (so that our future overlords are nice to us), moon and Mars bases (so that earth becoming unlivable doesn’t mean the end of our species), and so on.
This view has recently gotten a lot of attention, not least thanks to books such as The Precipice by Toby Ord and What We Owe the Future by William MacAskill. Longtermism has also piqued the interest of billionaires and philanthropists. But it’s not immune to vociferous (and often valid) criticisms. Some critics attack an extreme form of longtermism - check out this great Vox article that distinguishes between weak, strong, and galaxy-brain longtermism - but I think there are points that warrant attention regardless of which longtermism variant we’re looking at. Let’s call them the three challenges. They weave together in several ways, but people like lists, so here we go:
Challenge 1: value pluralism
Most of the longtermism community is white and male. In itself, that doesn’t discredit their work and ideas. (At least I hope it doesn’t, I’m white and male too.) What it does mean, though, is that the longtermist vision of a desirable long-term future (inasmuch as there is a single one) might be based on a narrow set of perspectives that doesn’t capture the full diversity of human lived experiences. Let’s also not forget that longtermism - and, to some extent, effective altruism - is based on a very utilitarian framework. Well-being, happiness, joy… all become variables in an equation. Who decides what to measure? Can we even measure these adequately?
Longtermists can double down here and say: “we’re not trying to build utopia, but to prevent extinction.” Fair point, even if preventing extinction in itself does not guarantee a bright future. Necessary, but not sufficient, as philosophers like to say. Additionally, shuttling large amounts of funding to longtermist goals can shift philanthropic allocations away from current problems - as Christine Emba points out in this opinion piece.
Depending on how you crunch the numbers, making even the minutest progress on avoiding existential risk can be seen as more worthwhile than saving millions of people alive today. In the big picture, “neartermist” problems such as poverty and global health don’t affect enough people to be worth worrying about — what we should really be obsessing over is the chance of a sci-fi apocalypse.
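To see how that number-crunching plays out, here is a toy expected-value comparison in Python. The figures are illustrative assumptions pulled out of thin air (not estimates from Ord, MacAskill, or anyone else); the only point is that multiplying a vast hypothetical future population by even a tiny probability swamps any present-day figure.

```python
# Toy expected-value comparison. All numbers are made-up illustrations,
# not anyone's actual estimates of future populations or risks.

future_people = 1e15       # assumed potential future population
risk_reduction = 1e-6      # assumed reduction in extinction probability
lives_saved_today = 1e7    # saving ten million people alive today

expected_future_lives = future_people * risk_reduction

print(f"Expected future lives 'saved' by x-risk work: {expected_future_lives:.0e}")
print(f"Present lives saved by direct aid:            {lives_saved_today:.0e}")
print("X-risk intervention 'wins'?", expected_future_lives > lives_saved_today)
```

Change the assumed future population or probability by a few orders of magnitude and the conclusion flips, which is precisely why the choice of numbers matters so much.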
I think this is less of an issue for ‘weak longtermism’. Still, is saving a million future lives worth a few current lives? That’s hard utilitarian logic for you. Is a potential life as morally important as an actual one? I don’t know. I’m not convinced.
Fortunately, there are some encouraging signs that the diversity of views in the effective altruism community is increasing.
Finally, non-human lives are rarely addressed in this context. We might disagree on the extent to which they morally matter, but I think most of us agree that they do matter. This Forbes article by Brian Kateman points out that longtermism - as in encouraging the growth of humanity at all costs - is probably not a good thing for our fellow earth dwellers. We are very gifted habitat destroyers, polluters, and factory farmers. How does that factor into the equation? MacAskill does mention non-human animals in What We Owe the Future, where he suggests that the lives of farmed animals are terrible but getting better thanks to improvements in animal welfare regulations. For wild animals, he writes:
But if we assess the lives of wild animals as being worse than nothing on average, which I think is plausible (though uncertain), then we arrive at the dizzying conclusion that from the perspective of the wild animals themselves, the enormous growth and expansion of Homo sapiens has been a good thing.
Uncertain, that’s for sure. Even if we agree on an anthropocentric view (I assume we’re among humans here), I’d make the point that the presence of (some) wild animals and minimally disturbed habitats is good for human well-being. This goes beyond the mental benefits of green spaces and the life they contain. Think of the pest and population control that results from interactions between wild animals, or the prevention of algal blooms by certain fish species. And let’s not forget the perspective wildlife offers us. We are part of an ecosystem. Taking out key parts of that ecosystem - such as certain wild animals - can have unpleasant and unforeseen consequences for the human animals in that ecosystem. Exactly what longtermism wants to avoid.
Challenge 2: temporal focus and acceptable bottlenecks
In some ways, longtermism reminds me of the paleo diet. The latter bases its core assumptions on an imaginary past, the former on an imaginary future. Another commonality is that the timeframe is quite fuzzy. Are longtermists looking at centuries from now? Millions of years? When do we start counting, and when do we stop? Humanity won’t be around forever, after all. We’ll be erased or replaced. (I might be biased as an evolutionary biologist. Life is change; species are never static.)
Humanity’s successors morally matter too, might be the reply, and so the timeframe is open-ended. Okay, sure, post-human beings matter too because of their capacity for consciousness, joy, suffering, or sentience (take your pick). How sentient/human-like does a species have to be to be plugged into the equations? How do we even measure that?
Let’s return to a slightly nearer future and do a thought experiment.
We fail to mitigate climate change. A 7°C rise in average temperature becomes a reality. Many, many people die and suffer. However, a small group (let’s say a few million?) survives thanks to their access to resources, bunkers, seawalls, and so on. That group eventually gives rise to billions of humans that colonize the moon and Mars. For some longtermists, this might be okay. After all, if we focus on the long-term future, the eventual outcome of the morality/goodness/desirability equation is a net positive.
Remember, this is a thought experiment, and I might have pushed it a little bit to make a point. The point is this: a sharp, sudden drop in population is not a good idea. As a biologist, I know such bottlenecks severely reduce genetic diversity, and it takes effort and luck to bounce back from that. But this is not simply about genetic diversity (with a few million survivors, we’d be okay there). Cultural and social diversity will take a serious blow because, let’s be honest, survivors of a catastrophe à la massive climate change will likely be mostly rich and Western. Their perspectives and views will shape the future. I’m not suggesting that that’s necessarily bad, but it might be an impoverished set of perspectives to build on (see challenge 1).
Perhaps certain large changes in population size (both severe bottlenecks and overshooting carrying capacity) should be recognized as (semi-)existential risks?
Challenge 3: epistemic overconfidence
This third challenge aligns very closely with this highly recommended, more detailed essay by philosopher David Kinney and this essay by philosopher Christian Tarsney. Let me summarize both in five words: we can’t predict the future. When a butterfly meets a hurricane, no one knows what’s going to happen. We can try to prevent/prepare for potential risks, but the worst ones will be those we don’t see coming. That doesn’t mean we should give up, of course, but it might be worthwhile to not only focus on specific doom scenarios but also consider building an overall more robust and/or antifragile society, which, to be fair, is on some longtermist radars.
One of Kinney’s recommendations is to focus on models instead of forecasts. I don’t think I fully agree (although my mathematical modeling skills are sufficiently degraded that I refrain from a definite pro or con). Models can work. For specific scenarios. For which we have decent enough estimates of the key parameters. I am unconvinced that we have those for all known existential risk scenarios, let alone the unknown ones. Similarly, for broader scenarios - e.g. building a robust enough society - I wouldn’t even know how to start defining the relevant parameters. (Here too, I’m sure other people have thought much more deeply about this, so educate me.)
All this parametrization leads me to the moral mathematics I’ve alluded to several times. In a utilitarian framework, it makes sense to try to quantify everything. I’m not sure we can. I see two problems here, and I probably miss several:
If economics has taught us anything, it’s that humans are not rational agents. Even if we were able to arrive at a perfectly beautiful equation that tells us what to do morally, most people would still do what they feel like. Both reasoning and emotion likely play a role in moral judgment.
If sociology has taught us anything, it’s that it’s very hard to quantify qualitative phenomena. How do we score happiness? How do we tally wellbeing? How do we do this at a population level? (There could be options here, such as the Gini coefficient or QALYs, but population metrics don’t necessarily guarantee individual wellbeing.)
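As a small illustration of that last parenthesis, here is a sketch (my own toy numbers, nothing more) of two populations with the same average ‘wellbeing score’: the aggregate looks identical, while the Gini coefficient reveals that in one of them several individuals fare terribly.

```python
# Toy illustration: identical mean "wellbeing", very different individual outcomes.
# The wellbeing scores below are arbitrary made-up numbers.

def gini(values):
    """Gini coefficient: 0 = perfect equality, approaching 1 = maximal inequality."""
    n = len(values)
    mean = sum(values) / n
    abs_diffs = sum(abs(x - y) for x in values for y in values)
    return abs_diffs / (2 * n * n * mean)

equal_pop = [5, 5, 5, 5, 5, 5, 5, 5]        # everyone does okay
unequal_pop = [10, 10, 10, 9, 1, 0, 0, 0]   # same total, but some do terribly

for name, pop in [("equal", equal_pop), ("unequal", unequal_pop)]:
    print(f"{name:8s} mean = {sum(pop) / len(pop):.1f}   Gini = {gini(pop):.2f}")
```

The mean alone would call these two populations equally well off; only a distributional metric flags the difference, and even then it says nothing about which individuals are suffering or why.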
Thinking ahead
Confession time: I find several aspects of longtermism quite appealing (that’s my bias for you to take into account). This newsletter is called Thinking Ahead, after all. I also think that most (all?) of the members of the longtermist community truly want to do good - many of them already substantially do so via initiatives such as Giving What We Can and dedicating their time and efforts to ‘doing good better’.
It is worthwhile to think about the longer term. To return briefly to the temporal focus from challenge two: for most people, that focus is quite short. As I’ve written here:
…people who think about the far future (as in 30 years or more from now) are actually quite rare. For over a third of people, even 10 years out is a stretch. On the flip side, 10% of people (in the linked, US-based survey) think ahead for more than 30 years several times per week. In other words, there is quite a bit of variability in temporal focus, with the ‘think far ahead’ people at one tail end of the spectrum.
It is only human to think a few decades ahead at most. We want a golden old age, and we want our kids to do well. Maybe grandkids, if we are so lucky. Consider climate change again. We’ve known we were messing things up for quite some time. It’s only in the last few years that we seem to have sprung into action (and even that is debatable). Why? Because we’re starting to see detrimental effects now and in the coming years. Our own lives and our kids’ lives, in other words. What would have happened with a longer temporal focus and the motivation to act on it?
Given my bias, I don’t consider the core tenets of longtermism to be very controversial (although I am allergic to the word obligation, even in terms such as ‘moral obligation’). The issue is more about how we act on these tenets, and how much importance we give them compared to other moral considerations.
Future people matter? Yes (but to what extent?). We should try to build a better future. Yes (but at what current cost?). And we should start yesterday. Yes (but how can we possibly assess the success of our efforts?).
Where do we go from here? Even though people across history have penned views on the long-term future of humanity (including wildly imaginative efforts by fiction writers), longtermism has only recently been formalized as a moral view. It is still in a formative stage. It will grow and evolve, hopefully by incorporating different views and critiques. Here are two suggestions:
Diversity: both in terms of sociocultural perspectives as well as the voices that will shape longtermism and desirable futures. I would go as far as to suggest that substantial losses of human diversity (biological and cultural) are a long-term risk to actively avoid. (As well as large losses in biodiversity, but that’s the biologist talking.)
Alignment: sometimes short-term and long-term goals are in apparent conflict (see the climate change thought experiment above). I see no reason to assume that it is a priori impossible to align short-term and long-term goals. We can make the world a better place now, which can increase the odds that the people and perspectives that will make a meaningful contribution to a better future will thrive. (This is very similar to Kinney’s first recommendation; see challenge 3.)
Let’s go.