Bias occurs when disproportionate weight is placed on a particular judgment, for example due to prejudice or stereotyping. Bias in algorithmic decision making typically becomes problematic when a judgment that positively or negatively affects a specific group, or an individual belonging to such a group, is based on irrelevant group characteristics, leading to an unfair decision outcome. Automated decision making therefore presents particular challenges with regard to justice when it is based on characteristics such as gender, ethnicity, or age. Because individuals have numerous, overlapping identities, it has been argued that attempts to address bias across different groups are inherently in conflict, and that some form of bias in algorithmic decision making is inevitable under at least some conceptions of fairness. This raises deep philosophical questions about the role of algorithms in society and their effects on equality and social justice.
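The claimed conflict between fairness notions can be made concrete. The following is an illustrative sketch, not drawn from the source: all data and helper functions are hypothetical. It shows that when two groups have different base rates of the outcome being predicted, a classifier cannot in general satisfy both demographic parity (equal positive-prediction rates across groups) and equal error rates at the same time.

```python
def positive_rate(preds):
    """Fraction of individuals receiving a positive prediction."""
    return sum(preds) / len(preds)

def false_positive_rate(labels, preds):
    """Fraction of true negatives incorrectly predicted positive."""
    negatives = [p for label, p in zip(labels, preds) if label == 0]
    return sum(negatives) / len(negatives)

# Hypothetical toy data: ground-truth outcomes for two groups with
# different base rates (group A: 0.6, group B: 0.2).
labels_a = [1, 1, 1, 0, 0]
labels_b = [1, 0, 0, 0, 0]

# A classifier that predicts every outcome correctly equalizes
# error rates across groups...
preds_a, preds_b = labels_a[:], labels_b[:]
fpr_a = false_positive_rate(labels_a, preds_a)   # 0.0
fpr_b = false_positive_rate(labels_b, preds_b)   # 0.0

# ...yet still violates demographic parity, simply because the
# underlying base rates differ:
ppr_a = positive_rate(preds_a)   # 0.6
ppr_b = positive_rate(preds_b)   # 0.2
```

Under these assumptions, forcing the positive-prediction rates to match instead would require misclassifying some individuals in one group, introducing unequal error rates — the tension the paragraph above describes.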