This work evaluates how LLMs handle mental health crises, introducing a unified taxonomy, benchmark dataset, and expert-based evaluation protocol — revealing both support capabilities and significant safety risks.
PhD thesis proposing a sociotechnical framework for Trustworthy AI, with contributions in fairness (FairShap), structural disparity in networks (ERG), human–AI complementarity for matching, and AI governance in labor law.
CoMatch is a collaborative matching system that combines human and algorithmic decisions to outperform either humans or algorithms alone.
Deep ensembles can improve performance but may also introduce fairness issues by benefiting different groups unevenly. This study identifies that effect, proposes an explanation based on predictive diversity, and explores mitigation techniques that reduce unfairness while preserving performance gains.
AI-driven management conflicts with labor law because AI relies on correlations rather than causation, creating legal gaps and risks of hidden discrimination. The study examines these tensions and possible solutions.
This article is a legal and technical study of the intersection between Trustworthy AI and Labour Law. We propose a tripartite taxonomy to understand the implications of the AI Act in the field of Labour Law.
We define three metrics of a group's information power (social capital) in a network, based on effective resistance from spectral graph theory. We also propose three metrics of social capital unfairness (structural group unfairness) and a heuristic to mitigate it.
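The metrics above build on effective resistance, a standard quantity from spectral graph theory. As a minimal illustration (not the paper's group-level metrics themselves), the pairwise effective resistance between two nodes can be computed from the pseudoinverse of the graph Laplacian:

```python
import numpy as np

def effective_resistance(adj, u, v):
    """Effective resistance between nodes u and v of an undirected graph.

    Computed as (e_u - e_v)^T L^+ (e_u - e_v), where L is the graph
    Laplacian and L^+ its Moore-Penrose pseudoinverse.
    """
    deg = np.diag(adj.sum(axis=1))      # degree matrix
    L = deg - adj                        # graph Laplacian
    L_pinv = np.linalg.pinv(L)           # pseudoinverse (L is singular)
    e = np.zeros(adj.shape[0])
    e[u], e[v] = 1.0, -1.0
    return float(e @ L_pinv @ e)

# Path graph 0 - 1 - 2 with unit-weight edges: resistances in series add,
# so the effective resistance between the endpoints is 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
```

Seen as a resistor network, well-connected node pairs have low effective resistance; the paper's group metrics aggregate this notion to quantify how easily information reaches a group.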
FairShap (Fair Shapley Values) is a family of game-theoretic data valuation functions for algorithmic fairness. It can be used as a novel, interpretable, model-agnostic pre-processing (re-weighting) method for fair algorithmic decision-making.
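FairShap derives Shapley-based values of training points with respect to fairness metrics; as a generic illustration of the underlying idea (not the paper's exact closed-form method), here is a Monte Carlo sketch of Shapley data valuation, where `utility` can be any validation metric, including a group-fairness one. The 1-NN utility and the toy data are hypothetical:

```python
import numpy as np

def shapley_data_values(X, y, X_val, y_val, utility, n_perms=200, seed=0):
    """Monte Carlo estimate of each training point's Shapley value.

    utility(idx) -> score of a model fit on X[idx], y[idx], evaluated on
    the validation set. The value of a point is its average marginal
    contribution over random orderings of the training set.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    values = np.zeros(n)
    for _ in range(n_perms):
        perm = rng.permutation(n)
        prev = utility(np.array([], dtype=int))
        for k in range(n):
            cur = utility(perm[:k + 1])      # add one more point
            values[perm[k]] += cur - prev    # its marginal contribution
            prev = cur
    return values / n_perms

# Toy 1-D example with a 1-nearest-neighbour utility (illustrative only).
X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0, 0, 1, 1])
X_val = np.array([0.5, 2.5])
y_val = np.array([0, 1])

def nn_utility(idx):
    """Validation accuracy of 1-NN restricted to the subset idx."""
    if len(idx) == 0:
        return 0.0
    preds = [y[idx[np.argmin(np.abs(X[idx] - xv))]] for xv in X_val]
    return float(np.mean(np.array(preds) == y_val))
```

By the efficiency property of Shapley values, the estimated values sum exactly to the utility of the full dataset minus that of the empty set; re-weighting training points by such values is what turns the valuation into a pre-processing fairness method.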