Machine learning models are increasingly embedded in decision-making processes, which demands rigorous analysis of their ethical implications, particularly with regard to fairness. This thesis examines the FairScore Transformer (FST), a post-processing method designed to mitigate bias in supervised learning models. A logistic regression model was first trained on the Adult dataset to establish baseline performance and fairness metrics. The FST was then applied to the model's output probabilities to reduce demographic disparities without materially degrading accuracy. The results show a minor loss in accuracy alongside a substantial improvement in fairness: demographic parity disparity fell from 15% to 5%. By providing empirical evidence of the FST's effectiveness in improving fairness, this work contributes to the broader discussion on the development of ethical AI. The results indicate that post-processing methods such as the FST can improve the fairness of supervised learning models without significantly reducing their predictive accuracy.
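The sketch below is an illustrative approximation of the experimental setup described above, not the thesis code: it trains a logistic regression baseline on the Adult dataset and computes the demographic parity disparity metric used to evaluate fairness. The choice of "sex" as the protected attribute, the 0.5 decision threshold, and the preprocessing pipeline are assumptions; the FST score adjustment itself is not reproduced here, since its implementation is not given in the source.

```python
# Illustrative sketch (assumptions noted above): logistic regression baseline on Adult
# plus the demographic parity disparity metric. The FST post-processing step is omitted.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.datasets import fetch_openml
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Load the Adult (census income) dataset from OpenML; label 1 means income > 50K.
X, y = fetch_openml("adult", version=2, as_frame=True, return_X_y=True)
y = (y == ">50K").astype(int)

cat_cols = X.select_dtypes(include="category").columns
num_cols = X.select_dtypes(include="number").columns
preprocess = ColumnTransformer([
    ("cat", Pipeline([
        ("impute", SimpleImputer(strategy="most_frequent")),
        ("encode", OneHotEncoder(handle_unknown="ignore")),
    ]), cat_cols),
    ("num", StandardScaler(), num_cols),
])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Baseline model whose output probabilities a post-processor such as the FST would adjust.
baseline = Pipeline([("pre", preprocess), ("lr", LogisticRegression(max_iter=1000))])
baseline.fit(X_tr, y_tr)

scores = baseline.predict_proba(X_te)[:, 1]
y_hat = (scores >= 0.5).astype(int)  # assumed decision threshold

def demographic_parity_disparity(y_pred, group):
    """Largest gap in positive-prediction rates across groups of the protected attribute."""
    rates = (
        pd.DataFrame({"pred": y_pred, "grp": np.asarray(group)})
        .groupby("grp")["pred"]
        .mean()
    )
    return float(rates.max() - rates.min())

print("accuracy:", accuracy_score(y_te, y_hat))
print("demographic parity disparity:", demographic_parity_disparity(y_hat, X_te["sex"]))
```

Under this setup, the FST would replace the fixed 0.5 threshold by transforming the baseline scores so that the disparity printed above shrinks while accuracy changes only slightly, which is the trade-off the thesis quantifies (15% to 5% disparity at a minor accuracy cost).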