These are dangerous times. I don’t just mean that one can get infected with SARS-CoV-2 and end up in hospital or worse, although that is of course a risk. I primarily mean that we collectively face the danger that things may get much, much worse. Don’t get me wrong, I do not think this is the end of civilisation as we know it, not just yet anyway, but it does not take much imagination to see how the current crisis could spiral into Something Very Bad. For example, a continued state of lockdown may cause the global economy to collapse, which may cause widespread political instability. Many millions, especially in low- and middle-income countries, already find themselves without the means to provide for their basic needs. Many more may follow, which, apart from obviously being Something Very Bad in itself, may lead to further chaos and the erosion of democratic institutions. Some countries, perhaps those that have invested heavily in military dominance, may fall back on wild and irrational behaviour in order to distract from internal chaos. Destructive and lengthy military conflicts may ensue. Ethnic tensions may flare up. Authoritarianism may thrive. Carbon reduction ambitions may be scaled back, and the first tentative steps to combat the climate crisis could be set back decades – which in turn may lead to a crisis that would dwarf the current one. In addition, there may be various setbacks in controlling the virus. Herd immunity may turn out to be an illusion. Vaccines and curative medicines may prove difficult to develop, and we may face wave after wave of this pandemic, each wave as deadly as or deadlier than the previous one.
I am not sure exactly how plausible each scenario is – although some are clearly more likely than others. I also have no idea what the full implications would be if they did indeed materialise. I doubt many people do. I believe it needs no argument, however, that no matter what direction we move in response to the pandemic, there are potential pitfalls and risks of Something Very Bad happening. I believe therefore that there are strong reasons to apply precaution in dealing with Covid-19. At the same time, precaution should not lead to impasse – governments, institutions and individuals (from here on: ‘we’) cannot afford to sit on their hands while the pandemic rages on.
Because something needs to be done fast and preferably without making matters worse, it is important that experts on various topics join forces to tackle the most pressing and complicated questions in addressing this crisis. As I hope to illustrate, this effort may benefit from the input of philosophers. I will not pretend that this blog post offers any definitive answers to the important questions, but, as a philosopher, I would at least like to share my perspective on some of the current developments. I will even commit to a negative recommendation: I will say something about what should not be done. In order to prevent Something Very Bad from happening, we should not revert to a generic technological fix. Yes, I will be discussing apps.
There is now (mid-April 2020) increasing support for applying mobile health technology in handling the coronavirus outbreak. Specifically, it is thought that apps could be of use in track-and-trace policies, where the interactions of individual citizens are logged so that when one person at some point tests positive for Covid-19, this sets off a signal warning everybody who was recently in the vicinity of this person that they are at risk. The latter group may then be encouraged or forced to isolate themselves, thus protecting society from further spread. This is roughly the idea, but the devil is in the details, and there are real risks involved, depending on how this technology is implemented. Centralised storage of location data, for example, may be hacked or lead to function creep (where the data is appropriated for something other than initially intended, for example, for investigating tax fraud). For this reason, decentralised approaches seem to be favoured by most digital rights experts, but these too are open to abuse if left in the wrong hands, and their effectiveness is still under dispute. In addition, sorting citizens in this way may disproportionately affect vulnerable populations, for example, people who cannot afford to shield themselves from the risk of exposure because their job is not compatible with working from home. Furthermore, depending on the app's perceived reliability, not enough people may be willing to sign up. This could perhaps be mitigated by making the app mandatory, but that would also entail a bigger infringement of privacy rights, and might lead to various ‘backlash effects’, where people come to distrust the motives for implementation and resist the measures even more fiercely. Such dynamics may threaten the legitimacy of the entire programme. We may end up with an ineffective app that infringes upon human rights on a massive scale, which is clearly Something Very Bad.
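For readers wondering what the decentralised variant amounts to in practice, here is a toy sketch. The names and details are my own illustrative inventions, not any actual protocol: phones broadcast short-lived random tokens, remember the tokens they hear nearby, and, when a user tests positive, only that user's own broadcast tokens are published; matching against one's contact log happens locally on each device.

```python
import secrets

class Phone:
    """Toy model of a handset in a decentralised proximity-tracing scheme."""

    def __init__(self, owner):
        self.owner = owner
        self.broadcast_ids = []   # ephemeral IDs this phone has sent out
        self.heard_ids = set()    # ephemeral IDs received from nearby phones

    def new_ephemeral_id(self):
        # A fresh random token per broadcast interval; on its own it
        # reveals nothing about the owner's identity or location.
        eph = secrets.token_hex(8)
        self.broadcast_ids.append(eph)
        return eph

    def encounter(self, other):
        # Two phones within Bluetooth range exchange their current tokens.
        self.heard_ids.add(other.new_ephemeral_id())
        other.heard_ids.add(self.new_ephemeral_id())

    def exposure_check(self, published_ids):
        # Matching happens locally: only the tokens of confirmed cases are
        # ever published, never anyone's contact log.
        return bool(self.heard_ids & set(published_ids))

alice, bob, carol = Phone("alice"), Phone("bob"), Phone("carol")
alice.encounter(bob)  # Alice and Bob meet; Carol meets no one

# Bob tests positive and consents to publishing the tokens he broadcast.
published = bob.broadcast_ids

print(alice.exposure_check(published))  # True: Alice was near Bob
print(carol.exposure_check(published))  # False: Carol was not
```

Even in this idealised form, note how much is left open: who runs the publication server, how long tokens are retained, and what follows from a match are exactly the context-dependent questions that a generic technological fix glosses over.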
On the one hand, mobile technology appears to offer a way out. It may provide a way to prevent at least some fatalities and some overcrowding of ICU wards, while at the same time allowing economies to recover from the heavy blows they have taken. But, on the other hand, the implementation of such technologies creates risks of unknown likelihood and magnitude.
This lack of evidence is troubling because, as we have already seen, the current pandemic is a mess, and I mean ‘mess’ here as a technical term. Messy problems are systemically complex problems where it is not immediately clear which scientific discipline is qualified to provide the answers. It is not easy to avoid risks when you are caught in a messy problem, since there are risks all around. This clearly applies to the Covid-19 pandemic: there is no direction in which the choices we make do not expose people to risk. Since this situation is rife with uncertainty, it seems that we cannot simply opt to minimise risks, nor, as mentioned, do we have the luxury of eliminating, or at least reducing, uncertainty by gathering evidence for a prolonged period of time: we need to act urgently. So how do we determine what is an acceptable price to pay?
Complex, messy problems are usually not very ‘sexy’. They require patience, epistemic humility and the ability to reconcile oneself with the imperfection of life. Not everybody has those qualities in abundance. It is therefore tempting to present messy problems as sexier than they actually are. One way to do that is to strive for a ‘technofix’: a technological way out that redefines the messy problem as something “neat, quick and techy”. My claim is that we should be wary of any technological fix to messy problems, precisely because these tend to disregard the complexity of the situation.
Of course, one might argue (people have) that, given the threat (massive mortality and global economic collapse), we do not need the same certainty about the effectiveness of interventions that we would ordinarily require. We should just give it a go and see where it leads us. I don’t disagree. But such a strategy can only be adopted in a responsible way if we attempt to steer away from Something Very Bad That Is Also Irreversible. My claim here is that the only way to deal with messy problems is by moving carefully – which prohibits deploying massive new surveillance applications without first carefully examining the possible repercussions and trade-offs involved. Such examination should be an interdisciplinary effort, and, as I said above, it should be guided by the ambition to act “decisively and cautiously”. That does not mean that we should take our time; in fact, I have presumed that we do not have this luxury. Rather, it means that we should not leap into something we don’t know how to control. This, I think, excludes any generic technical solution. In particular, this conclusion directly opposes the generous offer of Apple and Google to build Bluetooth-based proximity tracking functionality into their respective underlying platforms. In a later blog post, I hope to say more about the reasons why this is a bad idea (if others do not beat me to it entirely – edit (27-7-2020): they did). For now, I want to point out that it is a picture-perfect example of thinking in terms of a technological fix: a global track-and-trace tool that is assumed to be effective regardless of context. It takes no heed of actual risks, needs and possibilities in various countries and communities, and pre-emptively shuts down any meaningful conversation between experts on the proper balancing of risks in a specific context. A precautionary approach would advise against such an irresponsible disregard of the messiness of the situation we are in.
Let us not stumble into Something Very Bad.