The latest (April 2025) issue of Mind includes an interesting paper (Sung 2025) discussing the impact of our time-based biases on morality. In it, the author argues that our inherent near-future bias regarding our own interests necessarily implies that we should prioritise others’ near-future interests over our own long-term interests. This, as she notes, has interesting implications for charitable giving, but also a wider political impact (for example, on what the morally ‘right’ response to climate change is).
Her arguments are presented in two strands, and in an almost-mathematical way. I, being a physicist, ended up re-stating them slightly more formally (although not too formally, since I’m not a mathematician) in order to understand the argument better. I figured I might as well tidy up my notes and turn them into a blog post.
Impartial Near-Future Bias
The first strand of argument considers a person, \(A\), who is agent-independent (i.e., impartial) and exhibits a near-future bias with respect to their own interests. Their agent-independence necessarily implies that they exhibit the same near-future bias with respect to others. In fact, Sung implies that they not only exhibit near-future bias for others, but exhibit the exact same level of concern over time for others as they do for themselves.
Sung uses various sketch-graphs to illustrate this. We can formalise these by supposing that a person’s level of concern can be represented as a real number (this is a big assumption) and defining a ‘self-concern’ function, \(C_A : \mathbb{R}^{+} \to \mathbb{R}\).
This represents the level of concern that the person has, at \(t=0\), for their own well-being at some later time \(t\). From Sung’s illustrations, it can be inferred that, since we are near-future biased, she considers that \(C_A\) must be a monotonically decreasing function.
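Spelling out the near-future bias formally (I read ‘monotonically decreasing’ in the strict sense, which is what the later arguments rely on):

\[ t_1 \lt t_2 \implies C_A(t_1) \gt C_A(t_2). \]

To have something concrete to refer back to, a simple hypothetical family of such functions (my own illustration; nothing in Sung’s argument depends on this form) is an exponential decay towards a floor,

\[ C_A(t) = c_\infty + (c_0 - c_\infty)\,e^{-rt}, \qquad r \gt 0, \quad c_0 = C_A(0) \gt c_\infty, \]

where \(c_\infty\) is the level of self-concern that \(A\) tends towards in the far future.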
We can then consider the level of concern \(A\) has for another person, \(B\). We will call this level-of-concern function \(C_{AB} : \mathbb{R}^{+} \to \mathbb{R}\). If \(A\) is perfectly impartial, their level of concern for another person, \(B\), is the same as their level of concern for themselves (i.e., \( C_{AB} = C_A\)). However, Sung allows for imperfect impartiality, whereby \(C_{AB}\) is offset from \(C_A\) by a constant amount, \(\Delta C_{AB}\):

\[ C_{AB}(t) = C_A(t) + \Delta C_{AB}. \]
Sung notes that \(\Delta C_{AB}\) is often negative (i.e., we care less for others than we do for ourselves), but there are situations in which it can be positive (e.g., in the case of a parent \(A\) and their child \(B\)). I, following Sung, assume that \(\Delta C_{AB} \lt 0\) for the remainder.
Sung then analyses two cases: the first is one where there exists a time \(T_n\) such that \(C_A(T_n) = C_{AB}(0)\). The second is one where no such time exists.
\(\exists T_n\)
If there is a time \(T_n\) such that \(C_A(T_n) = C_{AB}(0)\), then it follows from the fact that \(C_A\) is monotonically decreasing that for all later times, \(t > T_n\), \(C_A(t) \lt C_{AB}(0)\). This means that \(A\)’s self-concern level after \(T_n\) is less than their concern for \(B\) now. Put another way, \(A\) cares more about \(B\)’s present than they do about their own future after \(T_n\). This is a relatively straightforward statement.
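In the hypothetical exponential family above (again, my own illustration rather than anything in the paper), this case can be exhibited explicitly. There, \(C_{AB}(0) = c_0 + \Delta C_{AB}\), and a crossing time exists precisely when the floor lies below it, \(c_\infty \lt c_0 + \Delta C_{AB}\); solving \(C_A(T_n) = C_{AB}(0)\) gives

\[ T_n = \frac{1}{r} \ln\!\left( \frac{c_0 - c_\infty}{c_0 + \Delta C_{AB} - c_\infty} \right), \]

which is positive because \(\Delta C_{AB} \lt 0\).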
\(\nexists T_n\)
On the other hand, if there is no such time \(T_n\), then there must be an asymptote, \(C_A(\infty) \geq C_{AB}(0)\), such that

\[ \lim_{t \to \infty} C_A(t) = C_A(\infty). \]
I believe this follows from noting that if \(\nexists T_n\) then \(\forall t : C_A(t) \gt C_{AB}(0)\) (assuming \(C_A\) is continuous: it starts above \(C_{AB}(0)\), since \(\Delta C_{AB} \lt 0\), and cannot get below that level without passing through it). Since \(C_{AB}\) is monotonically decreasing, \(\forall t \gt 0 : C_{AB}(0) \gt C_{AB}(t)\). Thus, \(\forall t : C_A(t) \gt C_{AB}(t)\). In particular, \(\mathrm{Im}(C_A)\) is bounded below (by \(C_{AB}(0)\)). But if \(C_A\) is monotonically decreasing, the only way for it to be bounded below is for it to tend to an asymptote. Furthermore, the asymptote can be no lower than \(\max(C_{AB})\) and, as noted, since \(C_{AB}\) is monotonically decreasing, \(\max(C_{AB}) = C_{AB}(0)\).
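Put in slightly more standard terms, this is just the fact that a monotonically decreasing function that is bounded below converges to its infimum:

\[ \forall t : C_A(t) \gt C_{AB}(0) \quad\implies\quad C_A(\infty) := \lim_{t \to \infty} C_A(t) = \inf_{t \geq 0} C_A(t) \geq C_{AB}(0). \]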
Remark: This asymptote represents the minimum you ever care about your future self. Of course, it is possible for the \(\exists T_n\) scenario to also have an asymptote \(C_A(\infty) \lt C_{AB}(0)\), but it is not necessary, unlike in this \(\nexists T_n\) scenario. This means that if there is a lower bound on our self-concern we could be in either situation, but if there is no lower bound it must necessarily be true that \(\exists T_n\).
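A quick illustration of that last point (my own, hypothetical): a self-concern function with no lower bound, say the linear decay \(C_A(t) = c_0 - kt\) with \(k \gt 0\), eventually drops below any fixed level, so a crossing time always exists:

\[ C_A(T_n) = C_{AB}(0) \quad\text{at}\quad T_n = \frac{c_0 - C_{AB}(0)}{k} = \frac{-\Delta C_{AB}}{k} \gt 0. \]

(Whether an ever-more-negative level of concern is meaningful is another question, but the codomain \(\mathbb{R}\) allows it.)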
Sung argues that, in spite of \(\nexists T_n\), ‘you still ought to be … less concerned about your distant-future well-being relative to the present well-being of strangers’. This is argued on the basis that

\[ C_A(\infty) - C_{AB}(0) \lt C_A(0) - C_{AB}(0), \]

which follows from the fact that \(C_A\) is monotonically decreasing. However, this is a difficult inequality to interpret. Sung argues that:
… even if there is no point in time at which you are morally required to be indifferent between an increase in your own well-being and an increase in the distant stranger’s well-being, you should at least be willing to sacrifice some unit of your distant-future well-being in order to greatly increase the stranger’s present well-being. And the size of the unit that you should be willing to sacrifice is greater than it would be if it were your present well-being that was being sacrificed.
This seems to argue that the levels of concern \(C_A\) and \(C_{AB}\) are really per-unit well-being levels of concern. Really, there is a kind of ‘gross’ level of concern, \(\mathfrak{C}_A(t) = C_A(t)\ \Delta W_{A}(t)\), where \(\Delta W_{A} : \mathbb{R}^{+} \to \mathbb{R}\) is the potential change in \(A\)’s well-being at \(t\) (and similarly \(\mathfrak{C}_{AB}(t) = C_{AB}(t)\ \Delta W_{B}(t)\) for the potential change in \(B\)’s well-being). Then, as long as \(\Delta W_{B}(0)\) is sufficiently larger than \(\Delta W_{A}(\infty)\), it follows that \(\mathfrak{C}_{AB}(0) \gt \mathfrak{C}_A(\infty)\).
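Making the ‘sufficiently larger’ condition explicit (on the assumption, which seems implicit, that the concern levels and potential well-being changes are all positive):

\[ \mathfrak{C}_{AB}(0) \gt \mathfrak{C}_A(\infty) \iff \Delta W_{B}(0) \gt \frac{C_A(\infty)}{C_{AB}(0)}\,\Delta W_{A}(\infty). \]

Since in this \(\nexists T_n\) case \(C_A(\infty) \geq C_{AB}(0)\), the ratio on the right is at least one: the stranger’s present gain must exceed your own foregone distant-future gain by at least that factor.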
Partial Near-Future Bias
The second strand of argument takes \(A\) to have no temporal bias with respect to their level of concern about \(B\). That is, \(C_{AB}\) is constant: \(C_{AB}(t) = C_A(0) + \Delta C_{AB}\) for all \(t\). The same two cases are analysed, and essentially the same conclusions are reached. (Note that the fact that \(C_{AB}\) was monotonically decreasing was only used once, to show that \(C_{A}\) must be bounded below in the \(\nexists T_n\) case. This works just as well if \(C_{AB}\) is constant.)
However, as noted by Sung, one can reach a stronger conclusion in the \(\exists T_n\) case in this strand. If \(\exists T_n\), then \(C_A(T_n) = C_{AB}(0)\). But \(C_{AB}\) is constant, so this means that \(C_A(T_n) = C_{AB}(T_n)\) as well. Since \(C_A\) is monotonically decreasing, we have not only that \(\forall t \gt T_n : C_A(t) \lt C_{AB}(0)\) but also that \(\forall t \gt T_n : C_A(t) \lt C_{AB}(t)\). That is, beyond \(T_n\), \(A\) cares about their own future well-being not just less than they care about \(B\)’s present well-being, but also less than they care about \(B\)’s well-being at that same future time.
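As a one-line chain, for any \(t \gt T_n\):

\[ C_A(t) \lt C_A(T_n) = C_{AB}(0) = C_{AB}(t), \]

where the first step is the (strict) monotonic decrease of \(C_A\), the second is the definition of \(T_n\), and the third is the constancy of \(C_{AB}\) in this strand.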
Sung interprets the difference between the two strands as relating to the rationality of the near-future bias. If it is rational, then we are obliged to have it for others too; this is covered in the first strand of argument. If it is not rational, then we are not obliged to have it for others; this is covered in the second strand of argument.
In either case, it follows that in the \(\exists T_n\) case, we should care more about a stranger’s present well-being than about our own well-being beyond a certain time. If the bias is irrational, then we should also care more about the stranger’s well-being at those later times than about our own.
The \(\nexists T_n\) case I find harder to interpret without an additional theoretical framework. Once you start adding in changes in well-being, it seems to me that you may as well do the full utilitarian treatment. This is more or less what the modern subject of economics is about (see, e.g., Jehle 2010).
A Slight Generalisation
I think the only actual requirements on the level of concern functions are that \(C_A\) is monotonically decreasing and that \(C_A(0) \gt C_{AB}(0)\). That is, \(A\) is near-future biased and cares more about themselves in the present than they do about \(B\) in the present. The actual form of \(C_{AB}\) is irrelevant: \(A\) can be temporally indifferent with regard to \(B\), or near-biased, or even far-biased.
The main difficulty is in interpreting the criterion that \(\exists T_n\). I personally do not find particularly convincing the utilitarian argument advanced by Sung that the difference between the two cases is minimal, as this seems to just be kicking the can down the road by defining a new pair of level-of-concern functions for which \(\exists T_n\).
The criterion is not the same as the question of whether \(C_A\) has an asymptote or not. Furthermore, whilst the question of whether \(\exists T_n\) can be reframed as the question of whether \(C_A(\infty) \lt C_{AB}(0)\) or not, this re-framing relies on the monotonicity of \(C_A\), and so it is more of a conclusion than an independent criterion.
This means we are left with two nice statements: (1) that \(A\) is selfish (\(C_A(0) > C_{AB}(0)\)), and (2) that \(A\) is near-future biased (\(C_A\) is monotonically decreasing); and one abstract statement (\(\exists T_n\)). If all three are true, then it follows that \(C_A(\infty) \lt C_{AB}(0)\) (i.e., \(A\) cares more about \(B\) now than they do about future \(A\)). Otherwise, if \(\nexists T_n\), \(A\) always cares more about themselves (notwithstanding utilitarian arguments about incremental well-being).
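To condense the whole structure into one line (my own summary of the argument as formalised here):

\[ C_A(0) \gt C_{AB}(0), \quad C_A \text{ strictly decreasing}, \quad \exists\, T_n : C_A(T_n) = C_{AB}(0) \;\;\implies\;\; \forall t \gt T_n : C_A(t) \lt C_{AB}(0), \]

while if no such \(T_n\) exists (and \(C_A\) is continuous), \(C_A(t) \gt C_{AB}(0)\) for all \(t\), and the comparison only tips in the stranger’s favour once incremental well-being is brought into it.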