Deep Ensembles can improve performance but may also introduce fairness issues by unevenly benefiting different groups. This study identifies that effect, proposes an explanation based on predictive diversity, and explores mitigation techniques that reduce unfairness while preserving performance gains.
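The predictive-diversity explanation can be illustrated with a toy disagreement measure: if ensemble members disagree more on one demographic group than another, combining their predictions is expected to change accuracy unevenly across groups. The metric and the numbers below are illustrative assumptions, not the paper's exact quantities.

```python
import numpy as np

def pairwise_disagreement(preds):
    """Fraction of examples on which two ensemble members disagree,
    averaged over all member pairs."""
    m = preds.shape[0]
    total, pairs = 0.0, 0
    for i in range(m):
        for j in range(i + 1, m):
            total += (preds[i] != preds[j]).mean()
            pairs += 1
    return total / pairs

# Toy predictions of 3 ensemble members on 8 examples (illustrative values):
# members agree perfectly on group 0 but often disagree on group 1.
preds = np.array([
    [0, 1, 0, 1, 0, 0, 0, 0],
    [0, 1, 0, 1, 1, 1, 1, 1],
    [0, 1, 0, 1, 0, 1, 0, 1],
])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Per-group predictive diversity: the gap between groups is the kind of
# signal the paper's explanation points to.
div = {g: pairwise_disagreement(preds[:, groups == g]) for g in (0, 1)}
```

Here `div[0]` is 0 (no disagreement) while `div[1]` is 2/3, so ensembling would affect the two groups differently.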
AI-driven management conflicts with labor laws: because AI relies on correlations rather than causes, it risks legal gaps and hidden discrimination. The study explores these issues and possible solutions.
This article is a legal and technical study of the intersection between Trustworthy AI and Labour Law. We propose a tripartite taxonomy to understand the implications of the AI Act in the field of Labour Law.
We define three metrics of group information power (social capital) in a network based on effective resistance from spectral graph theory. We also propose three metrics of social capital unfairness (structural group unfairness) and a heuristic to mitigate it.
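A minimal sketch of the effective-resistance machinery such metrics build on, via the pseudoinverse of the graph Laplacian; the group-level aggregation shown (mean resistance from a group to the rest of the network) is a hypothetical example, not necessarily the paper's exact definition.

```python
import numpy as np

def effective_resistance_matrix(adj):
    """Pairwise effective resistances R(u, v) from the Laplacian
    pseudoinverse: R(u, v) = L+[u,u] + L+[v,v] - 2 L+[u,v]."""
    L = np.diag(adj.sum(axis=1)) - adj
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

# Toy graph: a 4-node path 0-1-2-3 with unit-weight edges.
adj = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3)]:
    adj[u, v] = adj[v, u] = 1.0
R = effective_resistance_matrix(adj)

def group_resistance(R, group):
    """Hypothetical group-level metric: mean effective resistance from a
    group's nodes to all other nodes (lower = easier access to information)."""
    others = [i for i in range(R.shape[0]) if i not in group]
    return R[np.ix_(group, others)].mean()
```

On a path graph, effective resistance reduces to hop distance (resistors in series), e.g. `R[0, 3]` is 3, so the endpoint groups have higher resistance (less social capital) than central ones.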
FairShap (Fair Shapley Values) is a family of data valuation functions for Algorithmic Fairness based on Game Theory. It can be used as a novel, interpretable, model-agnostic pre-processing (re-weighting) method for fair algorithmic decision-making.
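A minimal, hypothetical sketch of the general technique (Shapley-value data valuation used for re-weighting): a Monte Carlo permutation estimator values each training point by its marginal contribution to a validation utility, and the values are turned into sample weights. Plain validation accuracy stands in for the utility here; FairShap's actual value functions are fairness metrics and its computation differs.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_predict(X_tr, y_tr, X_va):
    """1-nearest-neighbour prediction on 1-D features (stand-in model)."""
    d = np.abs(X_va[:, None] - X_tr[None, :])
    return y_tr[d.argmin(axis=1)]

def utility(idx, X_tr, y_tr, X_va, y_va):
    """Value of a training subset: validation accuracy. FairShap plugs a
    fairness metric into this slot instead (simplifying assumption)."""
    if not idx:
        return 0.0
    idx = list(idx)
    return float((knn_predict(X_tr[idx], y_tr[idx], X_va) == y_va).mean())

def shapley_values(X_tr, y_tr, X_va, y_va, n_perm=300):
    """Monte Carlo permutation estimate of each training point's Shapley value."""
    n = len(X_tr)
    phi = np.zeros(n)
    for _ in range(n_perm):
        prev, seen = 0.0, []
        for i in rng.permutation(n):
            seen.append(i)
            cur = utility(seen, X_tr, y_tr, X_va, y_va)
            phi[i] += cur - prev  # marginal contribution of point i
            prev = cur
    return phi / n_perm

# Toy data: the last training point is mislabelled and should be valued low.
X_tr = np.array([0.0, 1.0, 0.1, 0.92])
y_tr = np.array([0, 1, 0, 0])
X_va = np.array([0.05, 0.95])
y_va = np.array([0, 1])

phi = shapley_values(X_tr, y_tr, X_va, y_va)
weights = np.clip(phi, 0.0, None)
weights = weights / weights.sum()  # pre-processing: harmful points downweighted
```

The mislabelled point receives the lowest (negative) value and thus near-zero weight, which is the re-weighting effect the abstract describes, applied to a fairness utility instead of accuracy.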