mHealth: Creepiness and Consent

One of the problems with current mobile health technologies is that a lot of them are distinctly creepy. Providers may know what your breathing pattern is when you sleep, when you menstruate, that you have psoriasis or genital warts, how much you drink and whether you are feeling down. Not only do providers know all of these things, but the logic of the market dictates that they monetise this knowledge, which often involves combining it with, say, information on what you do on social media or dating websites. So now they know that you have been on three dates this month, care about the Amazon rainforest and have genital warts. In addition, providers might strive to combine the information gathered by your mobile phone with your patient record, your comments on patient network sites or even your genetic profile – enabling them to predict your future health, or at least to try. In turn, we – users – know very little about the providers who make money from the mass gathering of potentially sensitive information. Many people agree that this is a problem.

This sensitive information may be used in ways that are potentially detrimental to the user involved, for example when it is sold to insurers who may try to charge higher premiums for people with a higher risk of disease. And of course, information (and money) is power, and so we may worry about how these technologies exacerbate inequalities and make big companies even bigger and more powerful. But it seems that not all of the unease about the creepiness of mobile health can be explained by pointing to harms and risks. There is something uncomfortable about mHealth companies gathering sensitive, intimate details without us being able to control, or even know, what is being gathered. Actually, ‘uncomfortable’ seems something of an understatement here: a lot of people have the intuition that this mass surveillance is very problematic. In one way, this seems not too difficult to understand: it may be – and indeed, often is – argued that creepy technology is morally problematic because it interferes with the private sphere. While you were watching Das Leben der Anderen or 1984 and shuddering at the idea of government agents listening in on every conversation you had, your phone was listening to what you were watching, so that it could recommend that you maybe also give The Handmaid’s Tale a try.

It doesn’t seem that hard to explain that something has gone wrong here.

Many share the intuition that unsolicited surveillance is creepy and, therefore, wrong. But is it really? Is there something inherently morally wrong with the mass gathering of ‘private’ data? As it turns out, the answer to that question takes us deep into the heart of moral theory and philosophy of law. We need to talk about what rights are.

The argument from privacy is roughly that freedom from surveillance is necessary to guarantee that individuals are able to live their lives as they please. To put it a bit more technically: persons have a right to develop and maintain a personal sphere that is sufficiently robust to allow them to pursue their own conception of the good. If you believe that there is such a thing as ‘human rights’, you will probably also agree that some sort of personal sphere is a basic requirement of a proper human rights framework. And freedom from surveillance then seems to follow from the observation that some intimate data has a certain kind of meaning or significance to the individual, a meaning that is intertwined with one’s identity – the formation of which, we could say, should be protected by freedom rights.

This, I believe, is not really controversial. To deny it would imply endorsing a totalitarian approach to information, and nobody wants that, except totalitarians, of course, but we do not talk to them. I don’t think it will take a whole lot of convincing to get people to agree that a Das Leben der Anderen-style situation (“You don’t know me, but I know you”), where everything you do is scrutinised and monitored, constitutes an infringement of rights. So the argument seems rather straightforward.

I would suggest, however, that the reason this sounds straightforward is that it avoids the crucial question: is there anything wrong with digging into people’s private lives without their permission if there is no harm, or risk of harm? This is a different question from whether a certain totalitarian scenario is desirable.

What is at stake here seems to depend on one’s position on the idea of harmless wrongdoing, on which Joel Feinberg, for example, has written. The idea of harmless wrongdoing in this context is that there are some acts that infringe upon people’s rights without harming them. Arthur Ripstein gives the example of someone sleeping in your bed. Here you are asked to imagine that, while you are at work, someone breaks into your house, gets into your bed and takes a nap, leaving no traces. The question is whether this action constitutes a morally blameworthy infringement. The answer depends on how we conceive of rights.

I would say that rights delineate a sphere of certain self-regarding matters that no other person may infringe upon without justification. Every serious conception of rights allows for some sort of conception of freedom, in the sense that rights should minimally secure, in the words of Peter Jones, a “moral space in which individuals are free to pursue their own plans and projects” (Jones, p. 122). I admit that this talk of ‘spheres’ and ‘spaces’ is not very precise. It is difficult to talk about what rights are in general without getting a little metaphorical. This, in itself, is an indication that we usually do not know very well what we are talking about when we talk about rights. And that is unfortunate, because rights are really all we have when it comes to treating people fairly and justly. So, in order to be a bit clearer about what we are talking about here, we also need to talk about why rights are important.

There are two answers to this last question, represented by the two main schools of thought in the philosophy of rights: Will Theory and Interest Theory. Will theorists and interest theorists have been going at it for over two centuries and it does not seem that either of the two will emerge victorious any time soon (although the will theorists are, in fact, right…)

Interest theorists believe that the function of rights is to protect certain basic interests: the right to physical integrity, for example, protects one against harmful attacks. Will theorists, on the other hand, think that the importance of rights derives from people’s autonomous agency and that people’s control over these rights is fundamental to what rights are. According to will theorists, being able to decide whether someone may, for example, touch you in a certain way is just what it means to have a right to physical integrity, whether or not you are harmed by that touching.

In the example of the unsolicited bed-sleeping, a will theorist of rights will typically hold that, yes, there can be such instances of harmless wrongdoing and that – apparently harmless – privacy infringements are a key example. An interest theorist, on the other hand, will typically hold that as long as there is no harm, or risk of harm, rights have nothing to do with it. So it seems that whether or not we think creepy technology is inherently problematic depends on whether we think rights are based on interests or not. But maybe it’s not quite so simple.

There is at least one way in which the gathering of data by Big Tech is not *exactly* like listening in on people’s private conversations (or sleeping in a person’s bed without permission). We don’t know, for sure, whether there is anyone ‘listening in’ on our private information. There is not an actual person in San Francisco who is currently looking at what websites I have visited and joking to their colleagues about how much I’ve been procrastinating. At any rate, I presume not. I presume that there are servers that continuously compute complex data sets, by means of algorithms that automatically register preferences, and that these – just as automatically – send out recommendations, or make predictions, or whatever it is they are set to do. Humans do not really come into it. And this seems to make a difference. Even a will theorist might say that in this case, it depends on what happens with the information and whether it is used in a way that then constitutes an invasion of privacy – for example, if these algorithms decide to send updates about my health to my boss.

But maybe this is beside the point. Maybe it doesn’t matter for the invasion of privacy whether it is a computer listening in or an actual human. This raises the question: can technology really be creepy, or is that a privilege of human beings?
