The moral inefficacy of carbon offsetting (with Tyler M. John & Amanda Askell)
Australasian Journal of Philosophy (forthcoming).
Abstract: Many real-world agents recognise that they impose harms by choosing to emit carbon, e.g., by flying. Yet many do so anyway, and then attempt to make things right by offsetting those harms. Such offsetters typically believe that, by offsetting, they change the deontic status of their behaviour, making an otherwise impermissible action permissible. Do they succeed in practice? Some philosophers have argued that they do, since their offsets appear to reverse the adverse effects of their emissions. But we show that they do not. In practice, standard carbon offsetting does not reverse the harms of the original action, nor does it even benefit the same group as was harmed. Standard moral theories hence deny that such offsetting succeeds. Indeed, we show that any moral theory that allows offsetting in this setting faces a dilemma between allowing any wrong to be offset, no matter how grievous, and recognising an implausibly sharp discontinuity between offsettable actions and non-offsettable actions. The most plausible response is to accept that carbon offsetting fails to right our climate wrongs.
Philosophical Quarterly (forthcoming).
Abstract: Can it be rational to be risk-averse? It seems plausible that the answer is yes—that normative decision theory should accommodate risk aversion. But there is a seemingly compelling class of arguments against our most promising methods of doing so. These long-run arguments point out that, in practice, each decision an agent makes is just one in a very long sequence of such decisions. Given this form of dynamic choice situation, and the (Strong) Law of Large Numbers, they conclude that those theories which accommodate risk aversion end up delivering the same verdicts as risk-neutral theories in nearly all practical cases. If so, why not just accept a simpler, risk-neutral theory? The resulting practical verdicts seem to be much the same. In this paper, I show that these arguments do not in fact condemn those risk-aversion-accommodating theories. Risk aversion can indeed survive the long run.
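The long-run argument's appeal to the (Strong) Law of Large Numbers can be illustrated with a toy simulation (all payoffs and probabilities here are hypothetical, chosen only for illustration): over many independent gambles, the realised average outcome concentrates around its expectation, so the riskiness of any single choice washes out.

```python
# Toy illustration of the long-run argument (hypothetical numbers).
# Over many independent gambles, the average outcome concentrates
# around its expectation, per the (Strong) Law of Large Numbers.

import random

random.seed(0)

def gamble():
    """A single risky choice: 0.5 chance of +10, 0.5 chance of -6."""
    return 10 if random.random() < 0.5 else -6

expectation = 0.5 * 10 + 0.5 * (-6)  # = 2.0

n = 100_000
average = sum(gamble() for _ in range(n)) / n

# With very high probability, the realised average lies close to 2.0,
# so a risk-averse and a risk-neutral agent fare almost identically
# over the whole sequence.
print(expectation, average)
```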
Longtermism in an infinite world (with Christian Tarsney)
In Greaves, H., D. Thorstad, & J. Barrett (eds). Essays on Longtermism (Oxford University Press, forthcoming).
Abstract: The case for longtermism depends on the vast potential scale of the future. But that same vastness also threatens to undermine the case for longtermism: if the universe as a whole, or the future in particular, contains infinite quantities of value and/or disvalue, then many of the theories of value that support longtermism (e.g., risk-neutral total utilitarianism) seem to imply that none of our available options are better than any other. If so, then even apparently vast effects on the far future cannot in fact make the world morally better. On top of this, some strategies for avoiding this problem of “infinitarian paralysis” (e.g., exponential pure time discounting) yield views that are much less supportive of longtermism. In this chapter, we explore how the potential infinitude of the future affects the case for longtermism. We argue that (i) there are reasonable prospects for extending risk-neutral totalism and similar views to infinite contexts and (ii) many such extension strategies will still support the case for longtermism, since they imply that when we can only affect (or only predictably affect) a finite, bounded part of an infinite universe, we can ignore the unaffectable rest of the universe and reason as if the finite, affectable part were all there is.
A critical approach to critiquing economics (with Geoffrey Brennan)
In Róna, P., L. Zsolnai, & A. Wincewicz-Price (eds). Virtues and Economics 4 (Springer, forthcoming).
Australasian Journal of Philosophy 101.2 (2023): 340-59.
Abstract: For aggregative theories of moral value, it is a challenge to rank worlds that each contain infinitely many valuable events. And, although there are several existing proposals for doing so, few provide a cardinal measure of each world's value. This raises the even greater challenge of ranking lotteries over such worlds—without a cardinal value for each world, we cannot apply expected value theory. How then can we compare such lotteries? To date, we have just one method for doing so (proposed separately by Arntzenius, Bostrom, and Meacham), which is to compare the prospects for value at each individual location, and to then represent and compare lotteries by their expected values at each of those locations. But, as I show here, this approach violates several key principles of decision theory and generates some implausible verdicts. I propose an alternative—one which delivers plausible rankings of lotteries, which is implied by a plausible collection of axioms, and which can be applied alongside almost any ranking of infinite worlds.
Philosophy & Public Affairs 50.2 (2022): 202-38.
Abstract: Our actions in the marketplace often harm others. For instance, buying and consuming petroleum contributes to climate change and thereby does harm. But there is another kind of harm we do in almost every market interaction: market harms. These are harms inflicted via changes to the goods and/or prices available to the victim in that market. (Similarly, market benefits are those conferred in the same way.) Such harms and benefits may seem morally unimportant, as Judith Jarvis Thomson and Ronald Dworkin have argued. But, when those harms or benefits are concentrated on the global poor, they can have considerable impacts on wellbeing. For instance, in 2007-2008, commodity traders invested heavily in wheat and other staple foods, caused a dramatic price rise, and thereby pushed 40 million people into hunger. In such cases, intuition suggests that the traders act wrongly. In this paper, I argue that market harms and benefits are morally equivalent to harms and benefits imposed through other means (contra Thomson and Dworkin). I also demonstrate that, in practice, these harms and benefits are often great in magnitude. For many common products, buying that product results in a considerable financial loss for one group and a considerable gain for another. For instance, for every $10 we spend on wheat, we cause the global poor to lose between $5 and $67 (in expectation) and the global rich to gain the same amount. In light of these effects, I argue that we have moral duties to adopt certain consumption habits.
Ethics 132.2 (2022): 445-77.
Abstract: Consider a decision between: (1) a certainty of a moderately good outcome, such as one additional life saved; (2) a lottery which probably gives a worse outcome, but has a tiny probability of some vastly better outcome (perhaps trillions of blissful lives created). Which is morally better? By expected value theory (with a plausible axiology), no matter how tiny that probability of the better outcome, (2) will be better than (1) as long as that better outcome is good enough. But this seems fanatical. So we may be tempted to abandon expected value theory.
But not so fast: denying all such verdicts brings serious problems. First, we must reject either the transitivity of moral betterness or a weak tradeoffs principle. Second, we must accept judgements that are either ultra-sensitive to small probability differences or inconsistent over structurally identical pairs of lotteries. And, third, we must sometimes accept judgements which we know we would reject if we learned more. Better to accept fanaticism than these implications.
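The arithmetic driving the fanatical verdict can be made vivid with a small sketch (the probabilities and values below are hypothetical, chosen only so the numbers are easy to follow):

```python
# Illustrative expected-value comparison behind the fanaticism worry.
# All numbers are hypothetical.

def expected_value(outcomes):
    """Expected value of a lottery given as (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Option 1: a certainty of one life saved (value 1).
option_1 = [(1.0, 1.0)]

# Option 2: almost certainly nothing, but a tiny chance of a vastly
# better outcome (say, value 10**12 for trillions of blissful lives).
p = 1e-10
option_2 = [(1.0 - p, 0.0), (p, 1e12)]

ev_1 = expected_value(option_1)  # 1.0
ev_2 = expected_value(option_2)  # roughly 100.0

# Expected value theory prefers option 2 whenever p * value > 1,
# however small p is: that is the fanatical verdict.
print(ev_1, ev_2, ev_2 > ev_1)
```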
Abstract: Aggregative moral theories face a series of devastating problems when we apply them in a physically realistic setting. According to current physics, our universe is likely infinitely large, and will contain infinitely many morally valuable events. But standard aggregative theories are ill-equipped to compare outcomes containing infinite total value, so, applied in a realistic setting, they cannot compare any outcomes a real-world agent must ever choose between. This problem has been discussed extensively, and non-standard aggregative theories have been proposed to overcome it. This paper addresses a further problem of similar severity. Physics tells us that, in our universe, how remotely in time an event occurs is relative. But our most promising aggregative theories, designed to compare outcomes containing infinitely many valuable events, are sensitive to how remote in time those events are. As I show, the evaluations of those theories are then relative too. But this is absurd; evaluations of outcomes must be absolute. So we must reject such theories. Is this objection fatal for all aggregative theories, at least in a relativistic universe like ours? I demonstrate here that, by further modifying these theories to fit the physics, we can overcome it.
Philosophical Studies 178.6 (2021): 1917-49.
Abstract: How might we extend aggregative moral theories to compare infinite worlds? In particular, how might we extend them to compare worlds with infinite spatial volume, infinite temporal duration, and infinitely many morally valuable phenomena? When doing so, we face various impossibility results from the existing literature. For instance, the view we adopt can endorse either (1) the claim that worlds are made better if we increase the value in every region of space and time, or (2) the claim that they are made better if we increase the value obtained by every person. But it cannot endorse both claims, so we must choose. In this paper I show that, if we choose the latter, our view will face serious problems, such as generating incomparability in many realistic cases. Opting instead to endorse the first claim, I articulate and defend a spatiotemporal, expansionist view of infinite aggregation. Spatiotemporal views such as this do face some difficulties, but I show that these can be overcome. With modification, they can provide plausible comparisons in the cases that we care about most.
PhD dissertation (2021)
Abstract: Suppose you found that the universe around you was infinite—that it extended infinitely far in space or in time and, as a result, contained infinitely many persons. How should this change your moral decision-making? Radically, it seems, according to some philosophers. According to various recent arguments, any moral theory that is 'minimally aggregative' will deliver absurd judgements in practice if the universe is (even remotely likely to be) infinite. This seems like sound justification for abandoning any such theory.
My goal in this thesis is simple: to demonstrate that we need not abandon minimally aggregative theories, even if we happen to live in an infinite universe. I develop and motivate an extension of such theories, which delivers plausible judgements in a range of realistic cases. I show that this extended theory can overcome key objections—both old and new—and that it succeeds where other proposals do not. With this proposal in hand, we can indeed retain minimally aggregative theories and continue to make moral decisions based on what will promote the good.
Abstract: Two key questions of normative decision theory are: (1) whether the probabilities relevant to decision theory are evidential or causal; and (2) whether agents should be risk-neutral, and so maximise the expected value of the outcome, or instead risk-averse (or otherwise sensitive to risk). These questions are typically thought to be independent: our answer to one bears little on our answer to the other. But there is a surprising argument that they are not. In this paper, I show that evidential decision theory implies risk neutrality, at least in moral decision-making and at least on plausible empirical assumptions. Take any risk-aversion-accommodating decision theory, apply it using the probabilities prescribed by evidential decision theory, and every verdict of moral betterness you reach will match those of expected value theory.
Abstract: Various philosophers accept moral views that are impartial, additive, and risk-neutral with respect to moral betterness. But, if that risk neutrality is spelt out according to expected value theory alone, such views face a dire reductio ad absurdum. If the expected sum of value in humanity's future is undefined (if, e.g., the probability distribution over possible values of the future resembles the Pasadena game or a Cauchy distribution), then those views say that no option is ever better than any other. And, as I argue, this holds in practice: our evidence supports such a probability distribution. Indeed, it supports a probability distribution that cannot be evaluated even if we adopt one of the various extensions of expected value theory proposed in the literature. Must we therefore reject all impartial, additive, risk-neutral moral theories? It turns out that we need not. I develop an alternative solution: a new method of extending expected value theory, which allows us to deal with this distribution and to salvage those moral views. I also examine how this solution affects one of the most notable implications of those views, namely longtermism.
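The Pasadena game mentioned above shows how an expectation can be undefined. In its standard construction from the literature, a fair coin is flipped until the first heads; if that occurs on toss n (probability 2**-n), the payoff is (-1)**(n-1) * 2**n / n, so the expectation's terms are (-1)**(n-1) / n. That series converges only conditionally, so reordering its terms changes the sum, and no order is privileged. A quick numerical sketch:

```python
# The Pasadena game's expectation terms form the alternating harmonic
# series, which is only conditionally convergent: its sum depends on
# the order in which the terms are added.

import math

def natural_order_sum(n_terms):
    """Partial sum of the expectation terms in their natural order."""
    return sum((-1) ** (n - 1) / n for n in range(1, n_terms + 1))

def rearranged_sum(n_blocks):
    """Same terms, reordered: two positive terms, then one negative."""
    total = 0.0
    for k in range(1, n_blocks + 1):
        total += 1 / (4 * k - 3) + 1 / (4 * k - 1) - 1 / (2 * k)
    return total

# The natural order tends to ln 2; the rearrangement tends to
# (3/2) ln 2, even though exactly the same terms are summed.
print(natural_order_sum(100_000), math.log(2))
print(rearranged_sum(100_000), 1.5 * math.log(2))
```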
Abstract: Consider longtermism: the view that, at least in some of the most important decisions facing agents today, which options are morally best is determined by which are best for the long-term future. Various critics have argued that longtermism is false—indeed, that it is obviously false, and that we can reject it on normative grounds without close consideration of certain descriptive facts. In effect, it is argued, longtermism would be false even if real-world agents had promising means of benefiting vast numbers of future people. In this paper, I develop a series of troubling impossibility results for those who wish to reject longtermism so robustly. It turns out that, to do so, we must incur severe theoretical costs. I suspect that these costs are greater than those of simply accepting longtermism. If so, the more promising route to denying longtermism would be by appeal to descriptive facts.
Abstract: Various decision theories share a troubling implication. They imply that, for any finite amount of value, it would be better to wager it all for a vanishingly small probability of some greater value. Counterintuitive as it might be, this fanaticism has seemingly compelling independent arguments in its favour. In this paper, I consider perhaps the most prima facie compelling such argument: an Egyptology argument (an analogue of the Egyptology argument from population ethics). I show that, despite recent objections from Russell (2023) and Goodsell (2021), the argument's premises can be justified and defended, and the argument itself remains compelling.
Abstract: Our universe is both chaotic and (most likely) infinite in space and time. But it is within this setting that we must make moral decisions. This presents problems. The first: due to our universe's chaotic nature, our actions often have long-lasting, unpredictable effects, and this means we typically cannot say which of two actions will turn out best in the long run. The second: due to the universe's infinite dimensions, and the infinite population therein, we cannot compare outcomes by simply adding up their total moral values; those totals will typically be infinite or undefined. Each of these problems poses a threat to aggregative moral theories. But, for each, we have solutions: a proposal from Greaves lets us overcome the problem of chaos, and proposals from the infinite aggregation literature let us overcome the problem of infinite value. But a further problem emerges. If our universe is both chaotic and infinite, those solutions no longer work: outcomes that are infinite and differ by chaotic effects are incomparable, even by those proposals. In this paper, I show that we can overcome this further problem. But, to do so, we must accept some peculiar implications about how aggregation works.