Algorithmic Fairness

The Disparate Benefits of Deep Ensembles

Deep Ensembles can improve performance but may also introduce fairness issues by benefiting different groups unevenly. This study identifies that effect, proposes an explanation based on predictive diversity, and explores mitigation techniques that reduce unfairness while preserving the performance gains.
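
A minimal sketch (not the paper's code) of how the disparate-benefit effect can be measured: compare the per-group accuracy gain of an averaged ensemble over a single member. The data, the group attribute `g`, and the small MLP members standing in for deep networks are all illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
g = rng.integers(0, 2, size=2000)                 # group membership (0 or 1)
y = (X[:, 0] + 0.5 * g * X[:, 1] + rng.normal(size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, g, random_state=0)

# "Ensemble" of independently initialised members (deep nets in the paper).
members = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                         random_state=s).fit(X_tr, y_tr) for s in range(5)]

def group_accuracy(proba, group_id):
    mask = g_te == group_id
    return ((proba[mask] > 0.5).astype(int) == y_te[mask]).mean()

single = members[0].predict_proba(X_te)[:, 1]                      # one member
ensemble = np.mean([m.predict_proba(X_te)[:, 1] for m in members], axis=0)

for group_id in (0, 1):
    gain = group_accuracy(ensemble, group_id) - group_accuracy(single, group_id)
    print(f"group {group_id}: accuracy gain from ensembling = {gain:+.3f}")
```

If the printed gains differ systematically between groups, the ensemble's benefit is disparate in the sense studied by the paper.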

Studying Causality in Algorithmic Decision Making: the Impact of AI in the Business Domain

AI-driven management can conflict with Labour Law because AI relies on correlations rather than causes, creating legal gaps and risking hidden discrimination. The study examines these tensions and explores possible solutions.

The Intersection of Trustworthy AI and Labour Law. A Legal and Technical Study from a Tripartite Taxonomy

This article is a legal and technical study of the intersection between Trustworthy AI and Labour Law. We propose a tripartite taxonomy to understand the implications of the AI Act in the field of Labour Law.

Structural Group Unfairness: Measurement and Mitigation by means of the Effective Resistance

We define three metrics of group information power (social capital) in a network based on effective resistance, a concept from spectral graph theory. We also propose three metrics of social capital unfairness (structural group unfairness) and a heuristic to mitigate it.
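
A minimal sketch (not the paper's code) of the quantity the metrics build on: the effective resistance between two nodes, computed from the pseudoinverse of the graph Laplacian as R(i, j) = (e_i - e_j)^T L^+ (e_i - e_j). The toy graph, the chosen group of nodes, and the simple group-level average are illustrative assumptions; the paper's three metrics and mitigation heuristic are not reproduced here.

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()                        # toy example graph
L = nx.laplacian_matrix(G).toarray().astype(float)
L_pinv = np.linalg.pinv(L)                        # pseudoinverse of the Laplacian

def effective_resistance(i, j):
    e = np.zeros(L.shape[0])
    e[i], e[j] = 1.0, -1.0
    return e @ L_pinv @ e

# Example: average effective resistance from a (hypothetical) group of nodes
# to the rest of the network, a rough proxy for how well the group is wired in.
group = [0, 1, 2, 3]
others = [v for v in G.nodes if v not in group]
avg_R = np.mean([effective_resistance(i, j) for i in group for j in others])
print(f"average effective resistance from the group to the rest: {avg_R:.3f}")
```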

FairShap: A Data Re-weighting Approach for Algorithmic Fairness based on Shapley Values

FairShap (Fair Shapley Values) is a family of data valuation functions for Algorithmic Fairness based on Game Theory. It can be used as a novel, interpretable, model-agnostic pre-processing (re-weighting) method for fair algorithmic decision-making.
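
A minimal sketch (not the FairShap implementation) of how such a re-weighting plugs into training: instance-level valuations are turned into sample weights and passed to any estimator that accepts them, which is what makes the approach pre-processing and model-agnostic. The function `fair_shapley_values` below is a hypothetical placeholder returning uniform values; FairShap would instead estimate each instance's Shapley-value contribution to a fairness metric.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

def fair_shapley_values(X, y):
    # Placeholder for the Shapley-value-based data valuation (not FairShap itself).
    return np.ones(len(y))

phi = fair_shapley_values(X, y)
weights = phi / phi.sum() * len(y)                # normalise into sample weights

# Any model exposing `sample_weight` can consume the re-weighted data.
model = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)
print("training accuracy:", model.score(X, y))
```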