Speaker:
This event has already taken place.
Bias and discrimination in automated decision-making have recently received increased attention from the research community. For instance, Yang et al. (2023) reported that machine learning (ML) models in clinical applications can be susceptible to ethnicity-based biases, and Bartlett et al. (2022) measured disparities in interest rates between minority and non-minority borrowers in the FinTech industry. Several measures exist for quantifying bias in algorithmic decision-making, most of them based on statistical or machine learning evaluation methods. However, these measures have several limitations: they cannot consider all attributes in the data, they depend on machine learning models, and they focus on explicit rather than implicit bias. To address these limitations, we created two novel fairness measures: we use fuzzy-rough sets to quantify explicit bias and Fuzzy Cognitive Maps to quantify implicit bias. Both can be applied to tabular data for classification problems. In this talk, I will show how to apply and interpret these measures through a business-related case study.
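As an illustration of the existing statistical measures the abstract refers to (not the speaker's new fuzzy-rough or Fuzzy Cognitive Map measures), one of the simplest is the demographic parity difference: the gap in positive-outcome rates between two groups. A minimal sketch, with illustrative names and data:

```python
# Sketch of a common statistical fairness measure: the demographic
# parity difference, i.e. the gap between positive-outcome rates
# across two groups. Function name and data are illustrative.

def demographic_parity_difference(outcomes, groups):
    """Absolute gap in positive-outcome rate between two groups.

    outcomes: 0/1 model decisions
    groups:   group label per decision (exactly two distinct labels)
    """
    rates = {}
    for g in set(groups):
        selected = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    a, b = rates.values()
    return abs(a - b)

# Example: group A approved 3 of 4, group B approved 1 of 4
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(outcomes, groups))  # 0.5
```

Note that such a measure only looks at a single protected attribute and at the model's explicit outputs, which is exactly the kind of limitation the talk's proposed measures aim to address.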
Registration is possible until 21st November.