This is a cross-post from the Practical Ethics Blog:
We recently saw a legal challenge to the current UK law that compels fertility clinics to destroy frozen eggs after a decade. According to campaigners, the ten-year limit may have had a rationale when it was instituted, but advances in freezing technology have rendered the limit “arbitrary”. Appeals to arbitrariness often form the basis of moral and political criticisms of policy. Still, we need to be careful in relying on appeals to arbitrariness; it is not clear that arbitrariness is always a moral ‘deal-breaker’.
On the face of it, it seems clear why arbitrary policies are ethically unacceptable. To be arbitrary is to lack basis in good reasons. An appeal against arbitrariness is an appeal to consistency, to the principle that like cases should be treated alike. Arbitrariness may therefore seem to cut against the very root of fairness.
However, there are at least two ways in which a policy’s arbitrariness is not a knock-down argument for scrapping or changing it. The first is when a policy is in fact based on arbitrary grounds, but where there are good alternative grounds for it. Consider, for instance, the furore around a decade ago over the firing of government drugs advisor David Nutt. Nutt was sacked for contradicting government guidelines on drug safety. He publicly appealed to the idea of arbitrariness in his criticism of government policy. In particular, he said, it was arbitrary to support the continued legality of alcohol and tobacco while opposing the legalisation of less harmful drugs such as cannabis, LSD and ecstasy.
It was certainly worrying for the government to sack an advisor simply for disagreeing with them. And Nutt was right that government policy should be based on evidence, and right that it probably wasn’t. In that sense, government policy was arbitrary. But he was wrong to imply that, because alcohol is more harmful than LSD, it is essentially arbitrary and thus wrong to keep the former legal while criminalising the latter.
The reason is that this arbitrariness is ingrained in society. What we want to know when facing the question of whether to legalise or criminalise a particular drug is not only how it compares to other legal and illegal drugs, but how much good or harm will come from changing its legal status. It is possible, then, that although alcohol is worse than ecstasy, we should keep alcohol legal and ecstasy illegal, because criminalising alcohol now would do more harm than good, and so would legalising ecstasy.*
Here’s the other way a policy can be arbitrary and yet justified. A particular limit (e.g. a speed limit of 20mph outside schools; or an age limit of 18 to vote) can be arbitrary in the sense that any particular limit would be arbitrary, and yet a limit is still required. In this case, the question of whether something is arbitrary is comparison-sensitive. If we ask why the speed limit should be 20mph rather than 100, we can give good reason. But if we ask why it should be 20 rather than 21 or 19, the answer looks less clear. Similarly, we can explain why we should allow people to vote at 18 rather than 5, but may struggle when presented with articulate, intelligent and politically motivated 16-year-olds to say why the limit should not be dropped. Nonetheless, it seems plausible that in both cases we do need some limit. And any limit that attempts to cover a general population will either face exceptions (dropping the voting age to 16 will face some intelligent, articulate 15-year-olds), or be open to symmetrical challenges (just as there is no reason to prefer 20mph to 19mph, so too is there no reason to prefer 19 to 20).
These cases come in two kinds. One is where an easy-to-measure feature tracks, to some degree, a feature that is far more difficult to get a handle on. Age tracks emotional maturity, politically-relevant knowledge and independence, but it does so very imperfectly (some teenagers outdo some adults on all of these). The other is where we have good reason for preferring something in a particular range, but not for preferring any individual within that range. It really is best to have a speed limit of around 20 near schools, but it really does seem impossible to justify 20 precisely.
Faced with such cases, we have three choices. We can scrap the limit altogether as unavoidably arbitrary. In the first kind of case, we can attempt to impose limits based on non-arbitrary features (e.g. replacing a voting age with epistocracy, where voting rights are based on knowledge and intelligence). Or we can accept that while an arbitrary limit is imperfect, it’s the best we’ll get. Like the drugs debate, this latter view requires attention to the pragmatic consequences of adopting any particular policy. For instance, if the idea of epistocracy looks less attractive than moderate arbitrariness (and I think it looks much, much less attractive), we will need to accept that our voting limit is always going to exclude some people unfairly. Nonetheless, we might think that a particular age limit (not necessarily 18) will do the best available job of excluding as few people as possible, without opening us up to a mass of uninformed, clueless voters (as might be the case if we simply dropped an age limit altogether).
These cases point to two further features that are required for arbitrariness to be justified (the first being that basic rights are not violated or denied): that the area in which an arbitrary limit is in place is one that genuinely requires a limit, and that no better criteria for a limit are available.
Strikingly, these three features map onto the case being made against the ten-year storage limit. Campaigners have argued that the limit is not only arbitrary, but arbitrary in a way that violates the right to a family life. And while the individual bringing the challenge accepts the need for a limit of some kind, she argues that this should be in line with the age at which most fertility clinics would not accept patients in any case: 55. Such a limit might itself be arbitrary, if some of those over 55 could have children. But importantly, it might be less arbitrary than the status quo.
* Of course, these claims may be false. It may be safer to legalise ecstasy. But the truth of this claim has nothing to do with a comparison with alcohol.
This is a cross-post from my recent update at the Practical Ethics blog.
There’s a story that’s often told about the evolution of the doctor-patient relationship. Here’s how it goes: back in the bad old days, doctors were paternalists. They knew what was best, and the job of the patient was simply to do as they were told and hopefully get better. Then, in part because of abuses of power, and in part because of cultural changes, a new model emerged. This model cast patients not as passive recipients of instruction, but as active, autonomous agents, put in charge of their own medical decisions. The doctor-patient relationship was remodelled, from a paternalistic relationship (doctor looks after patient’s health) to a service relationship (doctor does what patient wants, within limits).
That story is almost certainly too simple to be true. But even histories that aren’t wholly accurate can come to influence our culture and expectations. And the dominant assumption among both patients and medical professionals seems to be that our relationship will be cast on what is sometimes called the “informative model” (Emanuel and Emanuel, 1992), where the medical role is simply to provide the patient with empirical information, such as information about likely risks and outcomes.
That model has itself been subject to challenge. For instance, we might think that doctors should be willing to offer advice not only about facts, but also about values. Emanuel and Emanuel (1992) suggest two mid-points between paternalism and the informative model. According to the ‘interpretive’ model, the doctor helps the patient to work out the patient’s own values; on the ‘deliberative’ model, of which liberal rationalism is an instance, the doctor aims to persuade the patient to adopt the doctor’s favoured course of action, though the decision is ultimately left up to the patient.
All of these models, however, leave the ultimate decision up to the patient. One reason for this might be the assumption that any model where decisions are not made by the patient must be a paternalist one. After all, either the patient is making decisions, or someone else is. And if someone else is making decisions for me, isn’t that worryingly paternalistic?
But this line of thought trades on an understandable confusion between what we might call sovereignty on the one hand and, on the other, decision-making authority. Consider an issue that has been an undercurrent in British politics lately: the relationship between representatives in a democracy like the UK’s and the electorate whom they represent. One model (by no means uncontroversial) sees the right to make decisions as essentially loaned to MPs by their voters. Nonetheless, ultimate sovereignty lies with the people (Bogdanor, 2016).
Having someone else make decisions for you, even about very important issues, is consistent with your ultimately having the authority, and the power, to take back control (sorry) over decision-making. On this model, the decision-maker (doctor or parliamentarian) becomes the appointed fiduciary of the sovereign individual or group (patient or public).
There is one important difference between parliamentary democracy and my proposal for an appointed fiduciary model in medicine, though. In the UK, even if the people theoretically have control, it isn’t practically possible for us to take it back in any substantive way without undermining our system of government. The people can choose who governs them, but they can’t choose not to be governed.
In contrast, my suggestion is that an appointed fiduciary model could be the exception rather than the norm in medicine. Placing a doctor in charge of your medical decision-making would be presented as an option, not imposed on anyone who didn’t want it. This appointment could be rescinded at any time. And having appointed a doctor as a fiduciary for one medical decision would not mean that they automatically had decision-making authority over other medical decisions, let alone decisions outside the medical sphere.
Having control over decisions that affect your life is sometimes presented, not least in the Just So story we began with, as an unqualified good. It surely is often a good thing to be in control of your life. But control, and the responsibility it entails, can also be a burden. Being responsible opens you up to blame, to making mistakes of judgement that have serious costs, and perhaps even to substantive penalties if your poor choices affect others. That can be stressful and emotionally draining for anyone. Add to that the stresses that come with serious illness, and decision-making authority might turn out to be too much. To take away someone’s right to choose when they want to do so is wrong. But it is also wrong to insist that someone choose when that is too much for them.
To be clear, the kind of patient I’m thinking of is not the individual who lacks mental capacity in a legal sense. Mental capacity is a threshold concept (you either have it with respect to a particular decision, or you don’t), and the principle of assumed capacity is an important one. Rather, I am thinking of patients who undeniably have capacity, but who are at a point where decision-making is extremely burdensome for them.
No doubt, there are worries that might attach to this proposal, as with any proposal that involves transferring authority away from the individual who is affected by the relevant decisions. Although patients would retain ultimate authority, vulnerable patients may be liable to manipulation and abuse. The decision to transfer authority would need to be subject to oversight beyond the doctor involved.
Still, I think this model is worth considering as one option among several. We must not fall into the trap of assuming that one model will suit everyone equally well. Nor should a fear of misapplication, which could be minimised, distract us from the potential to benefit patients.
Vernon Bogdanor (2016) ‘After the referendum, the people, not parliament, are sovereign’, Financial Times, December 9th 2016
Ezekiel J. Emanuel and Linda L. Emanuel (1992) ‘Four Models of the Physician-Patient Relationship’, The Journal of the American Medical Association 267(16): 2221-2226
Laurence B. McCullough (2011) ‘Was Bioethics Founded on Historical and Conceptual Mistakes About Medical Paternalism?’, Bioethics 25(2): 66-74
Julian Savulescu and Richard W. Momeyer (1997) ‘Should Informed Consent Be Based on Rational Beliefs?’, Journal of Medical Ethics 23: 282-288
Here are two cases:
1. A trolley is hurtling towards a person on a track. You can divert the trolley so that it plummets over a cliff. You know that there is a small (but not vanishingly small) chance that there is a person in the trolley.
2. Eating plant food involves, indirectly, being complicit in the suffering of animals that are killed during harvest. You could reduce the amount of plant food you eat by eating insects like crickets. You know that there is a small (but not vanishingly small) chance that crickets have whatever feature(s), F, make things into persons. (For instance, crickets might feel pain and pleasure).
Bob Fischer (in his 2016 paper 'Bugging the strict vegan') wants us to accept that these two situations are equivalent. In both cases, we might say, you 'might kill someone who matters'. Since you clearly ought to divert the trolley, you ought to eat the insect.
I'm not a huge fan of trolley cases. They're good at exposing our bare intuitions about, well, trolley cases. But they're often not so good at getting us to think about the underlying reasons or values behind those intuitions.
Nonetheless, I think Bob's challenge to vegans (and vegetarians) who don't want to eat insects is a reasonable one. OK, in eating an insect you might violate a right (at least, those are the terms in which I think of it). But if you don't eat an insect, additional rights (of field animals) definitely get violated.
Here is a third case:
3. A person is dying of organ failure. In the bed next to them is a human being who is being kept artificially alive while apparently in a permanent vegetative state (PVS). You could take the PVS patient's organs, and give them to the fully conscious patient. You know that there is a small (but not vanishingly small) chance that the PVS patient has whatever feature(s), F, make things into persons.
People will differ about this, I'm sure. But, as it stands, we don't typically think it's OK to use PVS patients as involuntary organ donors, just because we are unsure about whether they are still persons. And I think that many people (including myself) who would send the trolley over the cliff in case (1) will not be happy with cutting open the PVS patient in case (3).
It's true that all three cases could be described, as I did above, as cases where you 'might kill someone who matters'. But it's also true that there is a distinction within that description. In particular, while case (1) is a case where:
You might kill someone who definitely matters,
Cases (2) and (3) are cases where:
You will definitely kill someone who might matter.
The first case is thus an instance of what I call outcome risk: risk that derives from uncertainty about what will occur.
The second and third cases are cases of what I call status risk: risk that derives from uncertainty about the moral status of those who will definitely be affected.
I'm not too sure what, exactly, differentiates the two. After all, someone might object that status risk still is a form of outcome risk: the outcome you're not sure about is whether you'll kill anything that matters. But I think this distinction may go some way towards explaining what's wrong with Fischer's analogy. We can't move directly from our thinking about outcome risk to claims about status risk.
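To see why the objection is tempting, consider a toy expected-value calculation (a sketch of my own with illustrative numbers, not anything from Fischer's paper): if the disvalue of killing someone who matters is fixed, then bare probabilistic accounting cannot tell a small chance of killing a definite person apart from definitely killing a possible person.

```python
# Toy expected-disvalue calculation (illustrative numbers only).
HARM = 1.0  # stipulated disvalue of killing someone who matters


def outcome_risk(p_event):
    # Case 1: you might (with probability p_event) kill
    # someone who definitely matters.
    return p_event * HARM


def status_risk(p_status):
    # Cases 2/3: you definitely kill someone who might
    # (with probability p_status) matter.
    return p_status * HARM


# On bare expected-value accounting, the two are indistinguishable:
assert outcome_risk(0.01) == status_risk(0.01)
```

If expected disvalue were all that mattered, the distinction would collapse into Fischer's equivalence; the suggestion here is precisely that this accounting may leave something out.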
One final thought. Perhaps part of the worry is about the epistemic limits involved in each case. In a case where I'm looking for people in trolleys, I know roughly what I'm looking for, what sorts of measures I can take to improve my accuracy, and so on.
In cases where I'm looking for (to speak metaphorically) people inside a body in front of me - whether that body be human or not - I'm more at risk of ignoring salient but alien factors due to a kind of chauvinism (this bug doesn't express pain the way I express pain, so it doesn't feel pain). And I may also be aware that this kind of reasoning has historically led to ignoring the genuine claims of certain classes of individual. Indeed, vegans and vegetarians should be especially attuned to this worry, since (if my experience is anything to go by) many people still deny pain to almost all other animals, including some where the evidence seems incontrovertible.
Is that awareness sufficient to warrant refusing to eat insects? As Fischer, and others, might press, we surely can't follow this logic all the way down, so that anything where there's even the remotest possibility that it has F gets treated as if it certainly does. But all I want to gesture at here is the idea that the kind of risk we confront might warrant a different approach than straightforward probabilistic accounting. Where we draw the line is a different, equally difficult question.
There's been a lot of celebration in the UK news recently of the Suffragette movement, since it's 100 years since the 1918 Representation of the People Act, which granted the vote to (some) women.
Someone I know recently complained that the associated discussion had paid too little attention to the Suffragists. While I don't have sufficient historical knowledge to offer a comprehensive distinction between the two groups, a rough division seems to be that while the 'ettes were prepared to engage in both violent and non-violent criminality to further the cause, the 'ists wanted to work within the letter of the law as it was.
I'm inclined to think that an absolute prohibition on law-breaking isn't something we should take very seriously. There can be deeply unjust laws; laws that violate justice so comprehensively that they are, for want of a better word, evil. But there are interesting questions that can be - and have been - asked about the legitimacy of law-breaking in the name of broader justice.
For instance, we might wonder whether, in a case where there is serious but non-comprehensive discrimination, it is only the unjust laws themselves that may be broken, or whether even a single unjust law makes the law itself an ass, in which case it would be acceptable to break apparently unrelated laws in the name of justice. Part of the issue here - one of particular difficulty for those who would restrict law-breaking to the unjust laws themselves - is the inter-connected nature of injustice. Women who broke minor laws as suffragettes were treated excessively harshly. While injustice can be more serious in some areas of a person's life than others, to be treated unjustly by a social and legal system in any area is to be treated with disregard, disrespect, or worse. At the very least, being the victim of injustice anywhere may well leave people feeling vulnerable to similar treatment elsewhere.
A slightly different question, which is to some extent more empirical but which also has some philosophical implications, is the extent to which the two traditions - of violent and non-violent protest - interact with one another. Those who oppose political violence in any form often point to exemplars such as Gandhi, or Martin Luther King, as people who got the job done in a peaceful way. Similarly, my acquaintance's complaint amounted to the claim that the "real" work had been done by the peaceful, law-abiding suffragists.
The empirical question is how far the willingness of the Suffragettes to engage in disruptive, illegal and violent behaviour led to a corresponding willingness on the part of the British state to engage with the non-violent Suffragists. Or how far the violence that existed in India in Gandhi's time led to his acceptability to the representatives of empire.
The philosophical issue this raises - albeit a fairly basic one - is that we cannot just point to the success of non-violent protest as evidence that violence is unnecessary, or even that it is counter-productive. If violence provides the context for non-violence to work, then violence may be a necessary (even if not sufficient) political tool for serious social change in the face of intransigent injustice.
A further question is how far those who are willing to engage in violent protest should be happy with this role. If it turns out that the primary function of violence is to make the 'less extreme' campaigners look like a more attractive option, does that present a problem?
This may depend, of course, on why violence is being adopted. If it is adopted merely to speed things up, then there's no particular reason for worry. But if, as is sometimes the case, the violent have more radical goals than the non-violent, this might well be a cause for concern. That, in turn, will depend on how far we accept a narrative of inexorable progress, where moderate gains are simply steps on the road to the final goal; or a narrative of struggle, where capitulating too early can mean losing hold of the ideal, perhaps for good.
Caveat: Since this is a blog, and not an article, I play loose with language by treating the left and right as two homogeneous groups. That's not accurate, but is hopefully not so inaccurate as to defeat my purposes.
America seems to be characterised increasingly by acts of gun violence. The response depends on features of the gunman. Typically, those on the left will use the opportunity to call for gun control. If the gunman is white, those on the right will offer thoughts and prayers, and perhaps say something about mental health. If he isn't, they'll say something about immigration, perhaps endorsing Trump's Muslim ban.
What follows, in any case, is that each side accuses the other of 'politicising a tragedy'. There have been screenshots on Twitter of people on both the right and the left reacting very differently to an episode of gun violence, depending on whether it supports their narrative. If it doesn't, opponents are reminded not to politicise a tragedy. If it does, a particular policy proposal is made.
Is this simply hypocrisy on both sides? If the suggestion (as it sometimes seems to be) is that any policy proposal in response to tragedy counts as 'politicising', then it certainly is hypocrisy. There is nothing wrong with using a tragedy as impetus to take political action.
We need to do one of two things with the concept of politicising:
1. Keep 'politicising' as a general term, but recognise that many instances of it are not wrong; or,
2. Make 'politicising' a term of art, and recognise that not all policy suggestions are forms of politicisation.
I want to suggest that option 2 is the way forward. One politicises a tragedy not just when one suggests a political response or solution, but when one does so insincerely. That might take several forms. It might be that one is primarily raising problems with opponents' proposals as a form of political point-scoring. It might be that one supports a proposal for other reasons and is simply using the tragedy as a pretext (see, for instance, accusations from the right that gun control measures are a pretext for tyranny).
As someone who broadly supports the left's typical response (gun control) and strongly opposes the extreme right's response (a Muslim ban), where does this leave accusations of politicisation? It might be tempting to conclude simply that the left are correct in their suggestions, and that the right are incorrect. After all, I think that gun control is a good response to gun violence, and that banning Muslims from entering America is a bad response.
This would be too quick, though. For while I think that calls for immigration bans are deeply flawed, it does not follow that they are flawed in a way that means that those who make such calls are 'politicising'. At least some such people may be sincere in their suggestion; they may really believe that immigration bans are the way forward.
My suggestion to those on the left is therefore that we need to be a little more careful in our condemnation. A knee-jerk response to proposals that we abhor of calling them 'politicising' leaves us open to the same accusation. If we show no willingness to think that people we disagree with are sincere in their policy proposals, we should not expect them to think any differently of us.
Of course, there may be exceptions. Perhaps some people have repeatedly shown themselves to be insincere, and so can be more reasonably accused of 'politicising'. But in general, we should condemn calls for an immigration ban for what they are: racist, Islamophobic, and idiotic. Assuming, as a baseline, that others are sincere doesn't preclude other forms of criticism.