I haven't read MacAskill's book yet, but I have read Toby Ord's book <em>The Precipice</em>, in which Ord makes his case for longtermism. Ord is MacAskill's colleague at the Future of Humanity Institute.
Toby Ord estimates that we face a confluence of manmade risks that, in aggregate, add up to a 1 in 6 chance of an existential catastrophe in the next 100 years. He defines existential catastrophe as extinction, unrecoverable collapse, or unrecoverable dystopia. These risks include (among others) nuclear war, climate change, other environmental damage, pandemics, unaligned A.I., nanotechnology, and politically locked-in dystopia. That is an intolerable risk, and even if we scrape by over the next hundred years, we won't be able to roll the die over many more centuries before it is game over.
The author of this piece says:
> The impenetrable wall of uncertainty that assails us as we attempt to peer into the distant future overshadows the entire project, rendering much of longtermism unfortunately vacuous.
But Ord's longtermism isn't about attaining visibility into the distant future. If people do exist in 1,000, 10,000, 100,000, or 1,000,000 years, then yes, they're going to have to deal with their own problems. The point Ord makes is that they likely will be able to deal with them if we don't destroy the world in the next 100 years. That's why he calls the next 100 years the Precipice. He's saying that if we care about the longtermers, then we have to care about how we conduct ourselves over the next 100 years, because it could make or break the world for them.
The piece doesn't seem to engage with this point at all.
I noticed the same thing, but if you look at the book description and back cover here, you'll see it mentions billions.
So I assume the author is considering future possibilities for the human race beyond the point when the aging sun renders the earth inhospitable (which, you're right, is estimated at ~1 billion years).
I probably should have linked the book instead to make that clearer. Thank you.
> It may not work for other things because we disagree on what values should be eternalized. Some people want future values to be alien to us, as they don't like current values.
This is true, but it's not really relevant to the longtermism I'm familiar with. Admittedly, I haven't yet read MacAskill's book (which is what everyone is talking about), but I have read Toby Ord's book <em>The Precipice</em>, in which Ord makes his case for longtermism. Ord is MacAskill's colleague at the Future of Humanity Institute.
Toby Ord estimates that we face a confluence of manmade risks that add up to a 1 in 6 chance of an existential catastrophe in the next 100 years. He defines existential catastrophe as extinction, unrecoverable collapse, or unrecoverable dystopia. These risks include (among others) nuclear war, climate change, other environmental damage, pandemics, unaligned A.I., out-of-control nanotechnology, and politically locked-in dystopia. A 1 in 6 chance is an intolerable risk, and even if we scrape by over the next 100 years, we won't be able to roll the die over many more centuries before it is game over.
But Ord's longtermism isn't about our agreeing on any particular set of values other than one: we want to leave future generations a chance to build a good world, and so we don't want to utterly destroy the world and eliminate that opportunity for them. If people do exist in 1,000, 10,000, 100,000, or 1,000,000 years, then yes, they're going to have to deal with their own problems and work out their own values.
The point Ord makes is that they likely will be able to work things out if we overcome the risks we have created (or are creating) in the next 100 years. That's why he calls the next 100 years the Precipice. He's saying that if we care about the longtermers, then we have to care about how we conduct ourselves over the next 100 years, because it could make or break the world for them.
The only ways I can see that you could have a problem with Ord's longtermism are (1) you're an antinatalist, or (2) you just don't feel much concern for the wellbeing of longtermers, for any or no reason.
But if you're already an environmental longtermist, I don't see how you talk yourself out of endorsing Ord's more comprehensive longtermist project.