December 9, 2022
Today, more and more decisions are handled by artificial intelligence (AI) and machine learning (ML) algorithms, as automated decision-making systems are deployed across a growing range of applications. Unfortunately, ML algorithms are not always as ideal as we would expect. Research has shown that a model can perform differently for distinct groups within the data; those groups may be identified by protected or sensitive characteristics such as race, gender, age, or veteran status. In this paper, we investigate various methods for detecting, understanding, and mitigating unwanted algorithmic bias, consisting of: Analyzing the dataset with Exploratory Data Analysis (EDA) and visualizing the sensitive features to check for possible fairness issues in the dataset, as sketched below.
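As a rough illustration of this EDA step, the sketch below uses pandas and matplotlib to compare group sizes and positive-outcome rates across a sensitive attribute. The column names "gender" and "label" and the file path "data.csv" are placeholders, not part of the original text; adapt them to the dataset at hand.

```python
# Minimal EDA sketch for spotting potential fairness issues.
# Assumes a DataFrame with a hypothetical sensitive column "gender"
# and a binary outcome column "label"; adjust names for your data.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("data.csv")  # placeholder path

# 1. How is each sensitive group represented in the data?
group_counts = df["gender"].value_counts()
print(group_counts)

# 2. Does the positive-outcome rate differ across groups?
positive_rate = df.groupby("gender")["label"].mean()
print(positive_rate)

# Visualize both: group sizes and group-wise positive rates.
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
group_counts.plot.bar(ax=axes[0], title="Group sizes")
positive_rate.plot.bar(ax=axes[1], title="Positive-outcome rate by group")
plt.tight_layout()
plt.show()
```

A large gap between the bars in the second plot does not prove the model will be unfair, but it flags a sensitive feature worth examining more closely with formal fairness metrics later in the analysis.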