Tobi

Data selection, algorithmic bias and user vulnerability

Tobi is a 37-year-old self-trained musician and tech enthusiast. As a child, he was diagnosed with Type 1 diabetes after tests showed that his body was unable to produce the insulin needed to regulate his blood glucose levels. He works as a sound engineer and leads a busy life involving a lot of travel and late-night shifts. He loves his job and the excitement that comes with it, but he sometimes finds it difficult to remember when and how much insulin he needs to take. He is thrilled when he finds an app for his smartwatch that sends him reminders to take his insulin. The app even tracks his glucose levels with a new sensor-based technology that estimates glucose from sweat collected under the watch. Based on these measurements, the app provides him with personalised advice on the proper dosage. Tobi is relieved: he believes that with the app he will stay on top of his health while still enjoying his musical adventures.

What Tobi does not know is that measuring glucose levels via sweat has not been properly tested and is not safe for medical use. The smartwatch misreads his blood glucose levels, and the app gives Tobi incorrect advice on insulin dosage. One day, Tobi takes more insulin than he needs and goes into diabetic shock.

When Tobi recovers, he and his doctor file a complaint about the app with a health regulation agency. The agency starts an investigation into both the smartwatch and the app, because Tobi’s complaint is one of a handful it has received. The investigation uncovers that, due to the lack of systematic testing with a proper research design, the smartwatch cannot provide correct and reliable blood glucose measurements. Furthermore, the inquiry finds that the app’s development team does not involve any medical professionals. The findings also point to numerous problems with bias affecting the health guidance generated by the app. The investigation finds that, to design the algorithms the app relies on, the developers used training data from clinical trials that were skewed towards patients with Type 2 diabetes, a condition in which the body is unable to use insulin effectively rather than unable to produce it.

Through the investigation, it also becomes clear that when designing the app, the developers were convinced the technology would produce better results if it combined individual data with data from other users to identify an average good dose of insulin. As a consequence, the algorithm-generated guidance was not tailored to Tobi’s individual needs.
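The case description does not specify how the app actually computed its recommendations, but the two findings above (training data skewed towards Type 2 patients and a design that averages across users) can be illustrated with a minimal sketch. Every function name, weight and dose figure below is invented for illustration; the point is only to show how blending a skewed population average into an individual recommendation drags the output away from that individual’s actual needs.

```python
# Hypothetical sketch only: all names, weights and dose values are invented
# for illustration; they do not come from the case or from any real device.

def personalised_dose(individual_history: list[float]) -> float:
    """A dose based solely on this user's own validated measurements."""
    return sum(individual_history) / len(individual_history)

def blended_dose(individual_history: list[float],
                 population_doses: list[float],
                 weight: float = 0.5) -> float:
    """The flawed design: mix the user's average with a population average.

    If population_doses is dominated by Type 2 patients, whose insulin
    needs differ from Type 1 patients, the 'average good dose' will
    systematically miss a Type 1 user like Tobi.
    """
    individual_avg = sum(individual_history) / len(individual_history)
    population_avg = sum(population_doses) / len(population_doses)
    return weight * individual_avg + (1 - weight) * population_avg

# Toy numbers: Tobi's own history suggests about 4 units, while the
# skewed population sample (mostly Type 2 trial records) averages 9.
tobi_history = [4.0, 3.8, 4.2]
skewed_population = [10.0, 9.5, 11.0, 10.5, 4.0]

print(personalised_dose(tobi_history))                 # 4.0
print(blended_dose(tobi_history, skewed_population))   # 6.5 -> overdose risk
```

Even in this toy version, no amount of extra user data fixes the problem: as long as the recommendation is anchored to an average drawn from the wrong patient population, the output remains biased against users whose condition differs from that population.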

Data selection, algorithmic bias and user vulnerability

Tobi’s case raises many ethical issues regarding the efficacy, reliability and safety of mHealth technologies. As such, it illustrates the importance of the proper design and testing of devices that promise to offer personalised health care and medical guidance.

You might be asking which criteria of good design are important. Tobi’s case highlights that a well-designed technology needs to be based on appropriate and condition-specific medical data; otherwise it can exacerbate user vulnerability and even cause harm to users. A good mHealth technology also needs to provide health benefits to its users, and it can only do so on the grounds of medically justified knowledge. mHealth users rightly expect correct and personalised health guidance, and for a technology to fulfil such expectations, it needs to be able to process training and user data in a way that makes them relevant to particular users. Here, criteria such as the type of diabetes one has are important, along with other factors such as one’s ethnicity or sex. For example, it is known that historically, many clinical trials were conducted predominantly with white men. As a consequence, medical professionals and technologies working with training data generated by these selective trials tend to overdiagnose (and unnecessarily treat) diabetes in black men with sickle cell anaemia.

Sex and gender are also relevant factors, particularly in terms of their impact on the progression of the disease and its complications. With regard to Type 1 diabetes, it is known that women of childbearing age are less likely to develop the illness. Type 2 diabetes raises the risk of coronary heart disease more sharply in women, while men appear more susceptible to the consequences of physical inactivity and obesity, possibly owing to sex-specific differences in insulin sensitivity and regional fat deposition.

Thus, it is important that clinical guidelines and mHealth technologies implement a sex- and gender-sensitive approach to diabetes, as this can help improve therapy and reduce the progression of the disease and the development of complications.

To ensure correct diagnosis and good personalised healthcare for all, mHealth technologies ought to be informed by data from diverse population groups. When it comes to the algorithms utilised by these technologies, it is crucial to eliminate any bias that could lead to wrong or misleading health guidance and potentially harm users. As we see in Tobi’s case, when incorrect blood glucose measurement is combined with training data relevant to a different type of diabetes, the health guidance offered by mHealth technologies can lead to adverse health outcomes for users.

To prevent harm to users, mHealth technologies need to be properly tested for safety and efficacy, as well as for the appropriateness, fairness and reliability of their algorithms.

Literature

Campolo, A., Sanfilippo, M., Whittaker, M., Crawford, K. (2017). AI Now 2017 Report. AI Now Institute.

Gale, E. A. M., Gillespie, K. M. (2001). Diabetes and gender. Diabetologia, 44(1), 3-15.

Sharp, M., O’Sullivan, D. (2017). Mobile Medical Apps and mHealth Devices: A Framework to Build Medical Apps and mHealth Devices in an Ethical Manner to Promote Safer Use – A Literature Review. Stud Health Technol Inform, 135, 363-367.

Schneider, A. L. C., Lazo, M., Ndumele, C. E., Pankow, J. S., Coresh, J., Clark, J. M., Selvin, E. (2013). Liver enzymes, race, gender and diabetes risk: the Atherosclerosis Risk in Communities (ARIC) Study. Diabetic Medicine, 30(8), 926-933.

Siddiqui, M. A., Khan, M. F., Carline, T. E. (2013). Gender Differences in Living with Diabetes Mellitus. Mater Sociomed, 25(2), 140-142.

Legato, M. J., Gelzer, A., Goland, R., Ebner, S. A., Rajan, S., Villagra, V., Kosowski, V., The Writing Group for The Partnership for Gender-Specific Medicine (2016). Gender-specific care of the patient with diabetes: Review and recommendations. Gender Medicine, 3(2), 131-158.

Juutilainen, A., Kortelainen, S., Lehto, S., Rönnemaa, T., Pyörälä, K., Laakso, M. (2004). Gender Difference in the Impact of Type 2 Diabetes on Coronary Heart Disease Risk. Diabetes Care, 27(12), 2898-2904.

Source

This case is inspired by a New York Times blog post that raised issues with untested and unreliable apps, including apps that fail to measure blood glucose correctly.
Krisch, J. A. (2015). Questioning the Value of Health Apps. The New York Times, 16 March 2015.

