Algorithms are the mathematical calculations or logic behind any system that makes decisions or performs tasks. 1 mHealth technologies use algorithms to process user data and to generate predictions and personalised health guidance. Algorithms become a problem when social bias or prejudice is built into them. This can happen because they are programmed by people, who are not immune to making assumptions about their social environment or to subscribing to dominant social norms that are not universal (for example, one dating app only paired women with men who were taller than them, without factoring in the individual preferences of users). Bias can also result from training an algorithm on a limited data sample (for example, applying data from a clinical study of male patients to both men and women) or from mislabelling certain data sets (for example, images). Bias built into algorithms produces algorithmic bias: the algorithm can generate irrelevant or incorrect predictions or health guidance, which can negatively affect mHealth users (for example, through miscalculations of medicine dosage). This is why it is important to test algorithms for bias, conduct algorithmic audits and improve the quality of algorithms so that mHealth technologies work well for a diverse cohort of users.
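A minimal sketch of the "limited data sample" problem and of one audit step described above, using entirely invented numbers: a dosage model is fitted only on male patients, and an audit then compares its error across subgroups. The data, the dose-by-weight relationship and the audit function are all hypothetical, chosen purely for illustration.

```python
# Hypothetical illustration (all numbers invented): suppose the true safe
# dose scales with body weight, but with a different slope for each group.
# (weight in kg, dose in mg) pairs:
male_data   = [(70, 35.0), (80, 40.0), (90, 45.0)]   # follows dose = 0.5 * weight
female_data = [(60, 24.0), (70, 28.0), (55, 22.0)]   # follows dose = 0.4 * weight

def fit_slope(samples):
    """Least-squares slope through the origin for dose = slope * weight."""
    num = sum(w * d for w, d in samples)
    den = sum(w * w for w, _ in samples)
    return num / den

# Train on the limited sample only -- the bias the text warns about.
model_slope = fit_slope(male_data)

def mean_abs_error(samples, slope):
    """One possible audit metric: average dosing error for a subgroup."""
    return sum(abs(slope * w - d) for w, d in samples) / len(samples)

# An algorithmic audit step: compare error across subgroups. The model is
# near-exact for the group it was trained on, but systematically overdoses
# the group absent from the training data.
print(mean_abs_error(male_data, model_slope))
print(mean_abs_error(female_data, model_slope))
```

Running the audit on both subgroups, rather than reporting a single overall accuracy, is what surfaces the disparity; an aggregate metric over a mostly-male sample would hide it.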

  1. AI Now Institute. Algorithmic Accountability Policy Toolkit. (2018). Retrieved from: