Diet, changing desires, and dementia
Cross-post from the Uehiro Centre's Practical Ethics blog
Last week saw the launch of a campaign (run by the group Vegetarian for Life) that seeks to ensure that older people in care who have ethical commitments to a particular diet are not given food that violates those commitments. This is, as the campaign makes clear, a particularly pressing issue for those with some form of dementia, who may not be capable of expressing their commitment.
Those behind the campaign are quite right to note that people’s ethical beliefs should not be ignored simply because they are in care, or have a cognitive impairment (see a Twitter thread where I discuss this with a backer of the campaign). But the idea that one’s dietary ethics must be ‘for life’ got me thinking about a better-established debate about Advance Directives. (I should stress that what I say here should not be taken to be imputing any particular motivation or philosophical commitments to those behind the campaign itself.)
Briefly, one way of thinking about advance directives is that they are a way for people, while they retain the relevant cognitive capacities, to direct what happens to them at a future point when they lack capacity. Ronald Dworkin (in Life’s Dominion) draws a distinction between two kinds of interests. Some interests are purely experiential – it is in my interest to enjoy this piece of cake, for instance, merely because of how it tastes in the moment. But other interests are critical: they lend coherence and shape to our lives (he acknowledges that people may have such interests to a greater or lesser degree).
When it comes to certain kinds of advance directives, these interests may come into conflict. For instance, someone may write an AD which says that if they are suffering from severe dementia, and have lost the capacity to do various things that currently give their life meaning (e.g. writing; certain kinds of interactions with loved ones), they should not be resuscitated if they fall ill. Their critical interest in shaping their life (exercised initially while still possessing capacity) may come into conflict with their later experiential interests if their quality of life is still good at the time when doctors are deciding whether to resuscitate. While dementia can of course negatively impact quality of life, many individuals with dementia lead good, enjoyable lives, full of valuable relationships, pastimes and experiences.
In Dworkin’s view, it is the critical interests that should win out. Up until now I have been convinced by a response to Dworkin from Rebecca Dresser. Dresser argues that it is far from clear that critical interests should always take precedence. Dresser raises a number of important issues, but a particularly compelling example of hers, to my mind, is that of pain relief: we can imagine a person whose religious or philosophical commitments led them to value the experience of physical pain, perhaps seeing it as more authentic than using pain relief. But imagine such an individual later on, as a patient suffering dementia and having forgotten all about these commitments. It is hard to see the value in nonetheless respecting her past critical interests, and refusing to give a confused, frightened patient pain relief when doing so would benefit her immensely.
In such a case, we allow a person’s current experiential interests to override our respect for her past critical interests. And it seems to me that we do so quite defensibly; indeed, I think it would be indefensible to insist on respecting her past commitments which have no resonance with her now. To my mind, the case of life-prolonging care is of the same kind: if a patient with dementia no longer has any sense of the commitments that led her to request that her life not be extended if she were to suffer from such a condition, and her continuing life is of a good quality, then our moral obligation is to save her life.
The Vegetarian for Life campaign has, I admit, given me pause for thought. As someone with ethical dietary commitments, I am certainly unsettled by the thought that I might one day unwittingly violate those commitments, even if I no longer explicitly hold them. So I will end by offering a couple of thoughts about what might justify a difference in approach between diet on the one hand, and pain relief and the extension of good quality life on the other.
1. No harm done: If I am denied pain relief, I am directly harmed. Although philosophically more controversial, many of us believe that we can also be harmed by being deprived of good quality life. In many cases, though, there is no harm done to someone by feeding them, say, a vegan diet. As such, the experiential costs of respecting the critical interest are lower.
Such a defence will, of course, be limited. If a patient’s tastes change so that they will only eat previously abhorred foods, then there would be a serious cost to maintaining their past commitment.
2. Evidence: We typically get fairly direct evidence that someone is in pain, and that the patient in pain wants it to stop. Judgements of whether an individual life is overall good are more difficult, but if a patient seems happy on a day-to-day basis, we have little reason not to judge that they would benefit from continuing to live. It is perhaps more controversial to say that we can attribute to a patient with dementia a desire to go on living; but we can typically attribute a desire to carry on various pleasurable activities, for which living is of course a requirement. In both these cases then, we have evidence of the patient’s individual experiential preferences.
Contrast this with dietary preferences. In general, the evidence we could get that a patient’s preferences had changed would not come about organically (as with the preference against pain, and for the continuation of life), but through a conscious decision by carers to change the patient’s diet. But there is usually no good reason to make that change.
3. Moral wrong vs. ideal life: Even amongst people with particular dietary values, reasons vary. My own reasons are other-regarding moral ones of a fairly standard kind (animals have rights, and it’s thus wrong to kill them for food). What’s more, I think that it would still be wrong of me to eat meat even if I changed my mind.
This contrasts to some extent with at least one way of viewing Dworkin’s example of critical interests. Imagine someone who values abstract philosophy, who knows he will have no interest in that sort of thing once he has dementia. He currently sees his life as valuable in a very specific way. But he may not (and, I would suggest, should not) see other kinds of life as without value, at least for other kinds of people. What’s more, even if he does see a life without philosophy as containing some kind of mistake, it would be odd to see it as a moral mistake. Is there anything to be said for the idea that a person’s deep moral commitments have a greater claim to being respected (so long as they are reasonable) than other kinds of evaluative commitments? I expect many people will want to say no!
Of course, my varying intuitions may just be due to personal value prejudice. While I have a strong commitment to my dietary values, I have no similar commitments that would make me want to prematurely end my life if I were to suffer from dementia (it is not exactly that I welcome dementia as Charles Foster does, but that I recognise that my life may well have significant personal value, albeit of a different kind than it does now). And I certainly don’t have any commitments against avoiding pain (authenticity schmauthenticity, I say). So it may be that I’m just being parochial, and insisting that while the things I care about in a critical sense are deeply important and should be respected, the things others care about aren’t, and shouldn’t.
Arbitrariness as an ethical criticism
This is a cross-post from the Uehiro Centre's Practical Ethics blog:
We recently saw a legal challenge to the current UK law that compels fertility clinics to destroy frozen eggs after a decade. According to campaigners, the ten-year limit may have had a rationale when it was instituted, but advances in freezing technology have rendered the limit “arbitrary”. Appeals to arbitrariness often form the basis of moral and political criticisms of policy. Still, we need to be careful in relying on appeals to arbitrariness; it is not clear that arbitrariness is always a moral ‘deal-breaker’.
On the face of it, it seems clear why arbitrary policies are ethically unacceptable. To be arbitrary is to lack basis in good reasons. An appeal against arbitrariness is an appeal to consistency, to the principle that like cases should be treated alike. Arbitrariness may therefore seem to cut against the very root of fairness.
However, there are at least two ways a policy’s arbitrariness is not a knock-down argument for scrapping or changing it. The first of these is when a policy is in fact based on arbitrary grounds, but where there are good alternative grounds for it. Consider, for instance, the furore around a decade ago over the firing of government health advisor David Nutt. Nutt was sacked for contradicting government guidelines over drug safety. Nutt publicly appealed to the idea of arbitrariness in his criticism of government policy. In particular, he said, it was arbitrary to support the continued legality of alcohol and tobacco while opposing the legalisation of less harmful drugs such as cannabis, LSD and ecstasy.
It was certainly worrying for the government to sack an advisor simply for disagreeing with them. And Nutt was right to say that government policy should be based on evidence, and probably wasn’t. So, government policy was arbitrary. But he was wrong to imply that because alcohol is more harmful than LSD, it is essentially arbitrary and thus wrong to keep the former legal while criminalising the latter.
The reason is that this arbitrariness is ingrained in society. What we want to know when facing the question of whether to legalise or criminalise a particular drug is not only how it compares to other legal and illegal drugs, but how much good or harm will come from changing its legal status. It is possible, then, that although alcohol is worse than ecstasy, we should keep alcohol legal and ecstasy illegal because:
1. criminalising alcohol, given how deeply embedded it is in our social life, would now do more harm than good; and
2. legalising ecstasy would do more harm than keeping it illegal.*
Here’s the other way a policy can be arbitrary and yet justified. A particular limit (e.g. a speed limit of 20mph outside schools; or an age limit of 18 to vote) can be arbitrary in the sense that any particular limit would be arbitrary, and yet a limit is still required. In this case, the question of whether something is arbitrary is comparison-sensitive. If we ask why the speed limit should be 20mph rather than 100, we can give good reason. But if we ask why it should be 20 rather than 21 or 19, the answer looks less clear. Similarly, we can explain why we should allow people to vote at 18 rather than 5, but may struggle when presented with articulate, intelligent and politically motivated 16-year-olds to say why the limit should not be dropped. Nonetheless, it seems plausible that in both cases we do need some limit. And any limit that attempts to cover a general population will either face exceptions (dropping the voting age to 16 will face some intelligent, articulate 15-year-olds), or be open to symmetrical challenges (just as there is no reason to prefer 20mph to 19mph, so too is there no reason to prefer 19 to 20).
These cases come in two kinds. One is where an easy-to-measure feature tracks, to some degree, a feature that is far more difficult to get a handle on. Age tracks emotional maturity, politically-relevant knowledge and independence, but it does so very imperfectly (some teenagers outdo some adults on all of these). The other is where we have good reason for preferring something in a particular range, but not for preferring any individual within that range. It really is best to have a speed limit of around 20 near schools, but it really does seem impossible to justify 20 precisely.
Faced with such cases, we have three choices. We can scrap the limit altogether as unavoidably arbitrary. In the first kind of case, we can attempt to impose limits based on non-arbitrary features (e.g. replacing a voting age with epistocracy, where voting rights are based on knowledge and intelligence). Or we can accept that while an arbitrary limit is imperfect, it’s the best we’ll get. Like the drugs debate, this latter view requires attention to the pragmatic consequences of adopting any particular policy. For instance, if the idea of epistocracy looks less attractive than moderate arbitrariness (and I think it looks much, much less attractive), we will need to accept that our voting limit is always going to exclude some people unfairly. Nonetheless, we might think that a particular age limit (not necessarily 18) will do the best available job of excluding as few people as possible, without opening us up to a mass of uninformed, clueless voters (as might be the case if we simply dropped an age limit altogether).
These cases point to two further features required for an arbitrary limit to be justified (the first being that it does not violate or deny basic rights): that the area in which the limit is imposed genuinely requires some limit, and that no better criteria for setting it are available.
Strikingly, these three features map onto the case being made against the ten-year storage limit. Campaigners have argued that the limit is not only arbitrary, but arbitrary in a way that violates the right to a family life. And while the individual bringing the challenge accepts the need for a limit of some kind, she argues that this should be in line with the age at which most fertility clinics would not accept patients in any case: 55. Such a limit might itself be arbitrary, if some of those over 55 could have children. But importantly, it might be less arbitrary than the status quo.
* Of course, these claims may be false. It may be safer to legalise ecstasy. But the truth of this claim has nothing to do with a comparison with alcohol.
Take back control? Doctors as appointed fiduciaries
This is a cross-post from my recent update at the Practical Ethics blog
There’s a story that’s often told about the evolution of the doctor-patient relationship. Here’s how it goes: back in the bad old days, doctors were paternalists. They knew what was best, and the job of the patient was simply to do as they were told and hopefully get better. Then, in part because of abuses of power, and in part because of cultural changes, a new model emerged. This model cast patients not as passive recipients of instruction, but as active, autonomous agents, put in charge of their own medical decisions. The doctor-patient relationship was remodelled, from a paternalistic relationship (doctor looks after patient’s health) to a service relationship (doctor does what patient wants, within limits).
That story is almost certainly too simple to be true. But even histories that aren’t wholly accurate can come to influence our culture and expectations. And the dominant assumption among both patients and medical professionals seems to be that our relationship will be cast on what is sometimes called the “informative model” (Emanuel and Emanuel, 1992), where the medical role is simply to provide the patient with empirical information, such as information about likely risks and outcomes.
That model has itself been subject to challenge. For instance, we might think that doctors should be willing to offer advice not only about facts, but also about values. Emanuel and Emanuel (1992) suggest two mid-points between paternalism and the informative model. According to the ‘interpretive’ model, the doctor helps the patient to work out the patient’s own values; on the deliberative model, of which liberal rationalism is an instance, the doctor aims to persuade the patient to adopt her favoured course of action, though the decision is ultimately left up to the patient.
All of these models, however, leave the ultimate decision up to the patient. One reason for this might be the assumption that any model where decisions are not made by the patient must be a paternalist one. After all, either the patient is making decisions, or someone else is. And if someone else is making decisions for me, isn’t that worryingly paternalistic?
But this line of thought is subject to an understandable confusion between what we might call sovereignty on the one hand and, on the other, decision-making authority. Consider an issue that has been an undercurrent in British politics lately, the relationship between representatives in a democracy like the UK’s, and the electorate whom they represent. One model (by no means uncontroversial) sees the right to make decisions as essentially loaned to MPs by their voters. Nonetheless, ultimate sovereignty lies with the people.
Having someone else make decisions for you, even about very important issues, is consistent with your ultimately having the authority, and the power, to take back control (sorry) over decision-making. On this model, the decision-maker (doctor or parliamentarian) becomes the appointed fiduciary of the sovereign individual or group (patient or public).
There is one important difference between parliamentary democracy and my proposal for an appointed fiduciary model in medicine, though. In the UK, even if the people theoretically have control, it isn’t practically possible for us to take it back in any substantive way without undermining our system of government. The people can choose who governs them, but they can’t choose not to be governed.
In contrast, my suggestion is that an appointed fiduciary model could be the exception rather than the norm in medicine. Placing a doctor in charge of your medical decision-making would be presented as an option, not imposed on anyone who didn’t want it. This appointment could be rescinded at any time. And having appointed a doctor as a fiduciary for one medical decision would not mean that they automatically had decision-making authority over other medical decisions, let alone decisions outside the medical sphere.
Having control over decisions that affect your life is sometimes presented, not least in the Just So story we began with, as an unqualified good. It surely is often a good thing to be in control of your life. But control, and the responsibility it entails, can also be a burden. Being responsible opens you up to blame, to making mistakes of judgement that have serious costs, and perhaps even to substantive penalties if your poor choices affect others. That can be stressful and emotionally draining for anyone. Add to that the stresses that come with serious illness, and decision-making authority might turn out to be too much. To take away someone’s right to choose when they want to do so is wrong. But it is also wrong to insist that someone choose when that is too much for them.
To be clear, the kind of patient I’m thinking of is not the individual who lacks mental capacity in a legal sense. Mental capacity is a threshold concept (you either have it with respect to a particular decision, or you don’t), and the principle of assumed capacity is an important one. Rather, I am thinking of patients who undeniably have capacity, but who are at a point where decision-making is extremely burdensome for them.
No doubt, there are worries that might attach to this proposal, as with any proposal that involves transferring authority away from the individual who is affected by the relevant decisions. Although patients would retain ultimate authority, vulnerable patients may be liable to manipulation and abuse. The decision to transfer authority would need to be subject to oversight beyond the doctor involved.
Still, I think this model is worth considering as one option among several. We must not fall into the trap of assuming that one model will suit everyone equally well. Nor should a fear of misapplication, which could be minimised, distract us from the model’s potential to benefit patients.
Vernon Bogdanor (2016) ‘After the referendum, the people, not parliament, are sovereign’ Financial Times, December 9th 2016
Ezekiel J. Emanuel and Linda L. Emanuel (1992) ‘Four Models of the Physician-Patient Relationship’ The Journal of the American Medical Association 267(16): 2221-2226
Laurence B. McCullough (2011) ‘Was Bioethics Founded on Historical and Conceptual Mistakes About Medical Paternalism?’ Bioethics 25(2): 66-74
Julian Savulescu and Richard W Momeyer (1997) ‘Should Informed Consent Be Based on Rational Beliefs?’ Journal of Medical Ethics 23: 282-288
Outcome risk and status risk - Uncertainty for vegans
Here are two cases:
1. A trolley is hurtling towards a person on a track. You can divert the trolley so that it plummets over a cliff. You know that there is a small (but not vanishingly small) chance that there is a person in the trolley.
2. Eating plant food involves, indirectly, being complicit in the suffering of animals that are killed during harvest. You could reduce the amount of plant food you eat by eating insects like crickets. You know that there is a small (but not vanishingly small) chance that crickets have whatever feature(s), F, make things into persons. (For instance, crickets might feel pain and pleasure).
Bob Fischer (in his 2016 paper 'Bugging the strict vegan') wants us to accept that these two situations are equivalent. In both cases, we might say, you 'might kill someone who matters'. Since you clearly ought to divert the trolley, you ought to eat the insect.
I'm not a huge fan of trolley cases. They're good at exposing our bare intuitions about, well, trolley cases. But they're often not so good at getting us to think about the underlying reasons or values behind those intuitions.
Nonetheless, I think Bob's challenge to vegans (and vegetarians) who don't want to eat insects is a reasonable one. OK, in eating an insect you might violate a right (at least, those are the terms in which I think of it). But if you don't eat an insect, additional rights (of field animals) definitely get violated.
Here is a third case:
3. A person is dying of organ failure. In the bed next to them is a human being who is being kept artificially alive while apparently in a permanent vegetative state (PVS). You could take the PVS patient's organs, and give them to the fully conscious patient. You know that there is a small (but not vanishingly small) chance that the PVS patient has whatever feature(s), F, make things into persons.
People will differ about this, I'm sure. But, as it stands, we don't typically think it's OK to use PVS patients as involuntary organ donors, just because we are unsure about whether they are still persons. And I think that many people (including myself) who would send the trolley over the cliff in case (1) will not be happy with cutting open the PVS patient in case (3).
It's true that all three cases could be described, as I did above, as cases where you 'might kill someone who matters'. But it's also true that there is a distinction within that description. In particular, while case (1) is a case where:
You might kill someone who definitely matters,
Cases (2) and (3) are cases where:
You will definitely kill someone who might matter.
The first case is thus an instance of what I call outcome risk: risk that derives from uncertainty about what will occur.
The second and third cases are cases of what I call status risk: risk that derives from uncertainty about the moral status of those who will definitely be affected.
I'm not too sure what, exactly, differentiates the two. After all, someone might object that status risk still is a form of outcome risk: the outcome you're not sure about is whether you'll kill anything that matters. But I think this distinction may go some way towards explaining what's wrong with Fischer's analogy. We can't move directly from our thinking about outcome risk to claims about status risk.
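The worry about moving directly from outcome risk to status risk can be made vivid with a toy expected-value sketch (the function and the probabilities below are my own invention, purely for illustration): a straightforward probabilistic accounting treats the two kinds of risk as interchangeable, which is precisely what gives Fischer's analogy its apparent force, and precisely what the distinction above calls into question.

```python
# A toy expected-value calculation (all numbers invented for illustration).
# On a straightforward probabilistic accounting, outcome risk and status
# risk collapse into the same number.

def expected_wrongs(p_matters: float, p_killed: float) -> float:
    """Expected number of morally significant deaths:
    (probability the victim has moral status) x (probability a victim is killed)."""
    return p_matters * p_killed

# Case 1 (outcome risk): a 5% chance someone is in the trolley;
# anyone in the trolley definitely matters.
trolley = expected_wrongs(p_matters=1.0, p_killed=0.05)

# Case 2 (status risk): one cricket is definitely killed; a 5% chance
# that crickets have the status-conferring feature F.
cricket = expected_wrongs(p_matters=0.05, p_killed=1.0)

# The two cases are indistinguishable on this accounting.
assert trolley == cricket
```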
One final thought. Perhaps part of the worry is about the epistemic limits involved in each case. In a case where I'm looking for people in trolleys, I know roughly what I'm looking for, what sorts of measures I can take to improve my accuracy, and so on.
In cases where I'm looking for (to speak metaphorically) people inside a body in front of me - whether that body be human or not - I'm more at risk of ignoring salient but alien factors due to a kind of chauvinism (this bug doesn't express pain the way I express pain, so it doesn't feel pain). And I may also be aware that this has historically led to ignoring the genuine claims of certain classes of individual. Indeed, vegans and vegetarians should be especially attuned to this worry, since (if my experience is anything to go by) many people still believe this about almost all other animals, including some where the evidence seems incontrovertible.
Is that awareness sufficient to warrant refusing to eat insects? As Fischer, and others, might press, we surely can't follow this logic all the way down, so that anything where there's even the remotest possibility that it has F gets treated as if it certainly does. But all I want to gesture at here is the idea that the kind of risk we confront might warrant a different approach than straightforward probabilistic accounting. Where we draw the line is a different, equally difficult question.
Suffragettes, Suffragists, and political violence
There's been a lot of celebration in the UK news recently of the Suffragette movement, since it's 100 years since the 1918 Representation of the People Act, which granted the vote to (some) women.
Someone I know recently complained about the lack of attention, in the associated discussion, to the Suffragists. While I don't have sufficient historical knowledge to offer a comprehensive distinction between the two groups, a rough division seems to be that while the 'ettes were prepared to engage in both violent and non-violent criminality to further the cause, the 'ists wanted to do things within the letter of the law as it was.
I'm inclined to think that an absolute prohibition on law-breaking isn't something we should take very seriously. There can be deeply unjust laws; laws that violate justice so comprehensively that they are, for want of a better word, evil. But there are interesting questions that can be - and have been - asked on the legitimacy of law-breaking in the name of broader justice.
For instance, we might wonder whether, in a case where there is serious but non-comprehensive discrimination, it is only the unjust laws themselves that may be broken, or whether even a single unjust law makes the law itself an ass, in which case it would be acceptable to break apparently unrelated laws in the name of justice. Part of the issue here - a particular problem for those who would restrict law-breaking to the unjust law itself - is the inter-connected nature of injustice. Women who broke minor laws as suffragettes were treated excessively harshly. While injustice can be more serious in some areas of a person's life than others, to be treated unjustly by a social and legal system in any area is to be treated with disregard, disrespect, or worse. At the very least, being the victim of injustice anywhere may well leave people feeling vulnerable to similar treatment elsewhere.
A slightly different question, which is to some extent more empirical but which also has some philosophical implications, is the extent to which the two traditions - of violent and non-violent protest - interact with one another. Those who oppose political violence in any form often point to exemplars such as Gandhi, or Martin Luther King, as people who got the job done in a peaceful way. Similarly, my acquaintance's complaint amounted to the claim that the "real" work had been done by the peaceful, law-abiding suffragists.
The empirical question is how far the willingness of the Suffragettes to engage in disruptive, illegal and violent behaviour led to a corresponding willingness on the part of the British state to engage with the non-violent Suffragists. Or how far the violence that existed in India in Gandhi's time led to his acceptability to the representatives of empire.
The philosophical issue this raises - albeit a fairly basic one - is that we cannot just point to the success of non-violent protest as evidence that violence is unnecessary, or even that it is counter-productive. If violence provides the context for non-violence to work, then violence may be a necessary (even if not sufficient) political tool for serious social change in the face of intransigent injustice.
A further question is how far those who are willing to engage in violent protest should be happy with this role. If it turns out that the primary function of violence is to make the 'less extreme' campaigners look like a more attractive option, does that present a problem?
This may depend, of course, on why violence is being adopted. If it is adopted merely to speed things up, then there's no particular reason for worry. But if, as is sometimes the case, the violent have more radical goals than the non-violent, this might well be a cause for concern. That, in turn, will depend on how far we accept a narrative of inexorable progress, where moderate gains are simply steps on the road to the final goal; or a narrative of struggle, where capitulating too early can mean losing hold of the ideal, perhaps for good.
The art of politicising a tragedy
Caveat: Since this is a blog, and not an article, I play loose with language by treating the left and right as two homogeneous groups. That's not accurate, but is hopefully not so inaccurate as to defeat my purposes.
America seems to be characterised increasingly by acts of gun violence. The response depends on features of the gunman. Typically, those on the left will use the opportunity to call for gun control. If the gunman is white, those on the right will offer thoughts and prayers, and perhaps say something about mental health. If he isn't, they'll say something about immigration, perhaps endorsing Trump's Muslim ban.
What follows, in any case, is that each side accuses the other of 'politicising a tragedy'. There have been screenshots on Twitter of people on both the right and the left reacting very differently to an episode of gun violence, depending on whether it supports their narrative. If it doesn't, opponents are reminded not to politicise a tragedy. If it does, a particular policy proposal is made.
Is this simply hypocrisy on both sides? If the suggestion (as it sometimes seems) is that any policy proposal in response to tragedy is 'politicising', then it certainly is hypocrisy, since both sides make such proposals. In any case, there is nothing wrong with using a tragedy as impetus to take political action.
We need to do one of two things with the concept of politicising:
1. Keep 'politicising' as a general term, but recognise that many instances of it are not wrong; or,
2. Make 'politicising' a term of art, and recognise that not all policy suggestions are forms of politicisation.
I want to suggest that option 2 is the way forward. One politicises a tragedy not just when one suggests a political response or solution, but when one does so insincerely. That might take several forms. It might be that one is primarily raising problems with opponents' proposals as a form of political point-scoring. It might be that one supports a proposal for other reasons and is simply using the tragedy as a pretext (see, for instance, accusations from the right that gun control measures are a pretext for tyranny).
As someone who broadly supports the left's typical response (gun control) and strongly opposes the extreme right's response (a Muslim ban), where does this leave accusations of politicisation? It might be tempting to conclude simply that the left are correct in their suggestions, and that the right are incorrect. After all, I think that gun control is a good response to gun violence, and that banning Muslims from entering America is a bad response.
This would be too quick, though. For while I think that calls for immigration bans are deeply flawed, it does not follow that they are flawed in a way that means that those who make such calls are 'politicising'. At least some such people may be sincere in their suggestion; they may really believe that immigration bans are the way forward.
My suggestion to those on the left is therefore that we need to be a little more careful in our condemnation. A knee-jerk response to proposals that we abhor of calling them 'politicising' leaves us open to the same accusation. If we show no willingness to think that people we disagree with are sincere in their policy proposals, we should not expect them to think any differently of us.
Of course, there may be exceptions. Perhaps some people have repeatedly shown themselves to be insincere, and so can be more reasonably accused of 'politicising'. But in general, we should condemn calls for an immigration ban for what they are: racist, Islamophobic, and idiotic. Assuming, as a baseline, that others are sincere doesn't preclude other forms of criticism.