If I scatter broken glass on the ground and someone else walks over it and cuts their feet, does it matter “when” they cut their feet? That’s the thought experiment at the start of the philosopher William MacAskill’s forthcoming book, What We Owe The Future.

MacAskill’s argument is that harm is harm, whether my littering causes cut feet later today, next week or in 10,000 years. He believes that we should consider harm to future people as equal in severity to that inflicted upon the living. And because the potential number of future people is far greater than those who are currently alive, this should change how we think about problems and risks in the present day.

MacAskill wants to make the case for “longtermism”: to guard against catastrophic risks that may either eliminate human life or permanently reduce human flourishing. If we consider the rights and safety of future people, how we think about risk in the shorter term changes. A 1 per cent chance in any given year of a catastrophic event — such as our climate hitting an irreversible tipping point or a full-blown nuclear exchange — may feel like an acceptably low level of risk, but when we take into account the risk to future generations, it becomes intolerable. Or so the theory runs.

But does it work? One inevitable problem here is reproductive freedom. MacAskill rules out limiting access to abortion, much less mandating that people have children. But from an actuarial standpoint, it’s hard to argue that my choice to remain childless so that my partner and I can fritter away our disposable income on fancy restaurants, Arsenal Football Club matches or nice holidays is anything other than immoral when you consider the potential benefits to future generations of our having children.

Surely every top-rate taxpayer ought to be required to adopt or have children themselves, given that, statistically speaking, those children will be given better opportunities, and the benefits of those opportunities will outlast my alternative life plan?

That MacAskill does not draw this conclusion tells us something important about the usefulness of his thought experiment. We should, of course, care about long-term risks. But the problem with MacAskill’s approach is that we know it doesn’t work very well. Many people hear that there is, say, a one in six chance of catastrophe and they either think they’ll take those odds or they sink into despair. Comparatively few people hear it as a call to arms. Far from being inspired by a greater sense of human potential, the prospect of centuries of possible catastrophes can make people feel as if they might as well give up here and now.

Longtermism’s intellectual ancestors are utilitarianism and so-called effective altruism. MacAskill’s thought experiment recalls the work of the Australian philosopher Peter Singer, who aimed to show that mere distance should not influence our concerns about doing harm. But the success of effective altruism has been not in persuading people that they ought to give to charity or to care about harm, but in convincing them that, if they are giving money to charity, they will do more good by handing over their cash for things that work, such as malaria nets or deworming. Thinking long-term, however, inevitably involves being more open to gaps in information and accepting that we can’t know for sure what will work and what won’t.

Furthermore, to secure the long-term future requires persuading people who don’t already subscribe to the belief that it matters. It’s true to say that the world’s current trajectory on climate change is a lot like playing Russian roulette: the longer you play, the more likely you are to lose. But perhaps a better way to get people to stop playing Russian roulette is to explain that there is a good chance they will blow their heads off today rather than that there is an even better chance that they will blow their heads off eventually.

Climate change is a problem for future generations, yes, but it is also a problem here and now for many people all over the world, near and far. The long-term risks we can actually do more to address are, almost by definition, the ones whose contours are most obvious to us in the present. Giving greater weight to the rights of those yet to be born doesn’t illuminate these problems any better than illustrating the real risks they carry today.

A better way to convince people to tackle long-term problems is to point out the short-run risks — not try to sell them on a thought experiment that even its author doesn’t wholly endorse.

stephen.bush@ft.com
