Lisa Koutsoviti - How can we quantify bias in automated decision-making?

Speaker:

  • Gert Janssenswillen
27 November 2024
12:00 - 13:00
Campus Diepenbeek - C101

Abstract of the talk

Bias or discrimination in automated decision-making has recently received increasing attention from the research community. For instance, Yang et al. (2023) reported that machine learning (ML) models in clinical applications can be susceptible to ethnicity-based biases, and Bartlett et al. (2022) measured disparities in interest rates between minority and non-minority borrowers in the FinTech industry. Several measures exist to quantify bias in algorithmic decision-making, most of them based on statistical or machine learning evaluation methods. However, these measures have several limitations, including their inability to consider all attributes in the data, their dependence on machine learning models, and their focus on explicit rather than implicit bias. To address these limitations, we created two novel fairness measures: we use fuzzy-rough sets to quantify explicit bias and Fuzzy Cognitive Maps to quantify implicit bias. Both can be applied to tabular data for classification problems. In this talk, I will show how to apply and interpret these measures through a business-related case study.
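
For readers unfamiliar with the notions mentioned in the abstract, the Python sketch below illustrates two of them on a small tabular dataset: a standard statistical fairness measure (demographic parity difference), representative of the existing measures the abstract contrasts with, and a generic fuzzy-rough lower approximation, the kind of machinery the explicit-bias measure builds on. The column names, toy data, similarity relation, and implicator are assumptions chosen for illustration; this is not the measure presented in the talk.

```python
# Illustrative sketch only: a standard statistical fairness measure and a
# generic fuzzy-rough lower approximation. Data and parameters are made up;
# this is not the speakers' proposed measure.
import numpy as np
import pandas as pd

def demographic_parity_difference(df, group_col, pred_col):
    """Absolute difference in positive-prediction rates across groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return abs(rates.max() - rates.min())

def fuzzy_rough_lower_approximation(X, y, target_class, gamma=1.0):
    """Membership of each instance in the lower approximation of
    `target_class`, using a Gaussian fuzzy similarity relation R and the
    Kleene-Dienes implicator: inf_y max(1 - R(x, y), class(y))."""
    X = np.asarray(X, dtype=float)
    in_class = (np.asarray(y) == target_class).astype(float)
    # Pairwise fuzzy similarity R(x, y) in [0, 1]
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    R = np.exp(-gamma * dists ** 2)
    # Lower-approximation membership per instance
    return np.min(np.maximum(1.0 - R, in_class[None, :]), axis=1)

# Toy tabular classification data (hypothetical)
df = pd.DataFrame({
    "group":      ["A", "A", "B", "B", "B", "A"],
    "feature_1":  [0.2, 0.4, 0.5, 0.9, 0.7, 0.3],
    "feature_2":  [1.0, 0.8, 0.4, 0.1, 0.3, 0.9],
    "prediction": [1, 1, 0, 0, 1, 1],
})
print(demographic_parity_difference(df, "group", "prediction"))
print(fuzzy_rough_lower_approximation(
    df[["feature_1", "feature_2"]], df["prediction"], target_class=1))
```

The first function depends on a trained model's predictions, which is one of the limitations of existing measures noted in the abstract; the second assigns each instance a degree of certain membership in a class, which is the sort of per-instance, model-free quantity fuzzy-rough approaches work with.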

Registration is possible until 21 November 2024.

About the speaker

Lisa Koutsoviti
