Fairness & Industrial Bias

Motivation

Machine learning algorithms thrive on data, but data often carries unwanted biases: societal prejudices reflected in the observations, inconsistencies introduced by processing methods, and even hidden sensitive information. Amplified during training, these biases lead to unfair decisions and hinder generalization. This scientific challenge aims to develop robust machine learning models that uphold fairness regardless of distribution shifts.

Research directions

During the DEEL project, three axes have been explored:

Industrial datasets often harbor hidden biases whose harmful effects only surface after deployment. In critical applications, blindly deploying models without awareness of these blind spots can have drastic consequences. This research addresses the concern through two complementary tasks: detecting hidden sources of bias in high-dimensional data such as images and text, and measuring the impact of the identified biases on model performance. To unearth hidden biases, the first axis employs several strategies. One approach clusters the data using carefully chosen representations that help visualize and isolate potential biases; a minimal version of such a per-cluster audit is sketched below. Additionally, influence functions are harnessed to pinpoint the individual data points that have the most significant impact on the model's predictions. By analyzing these influential samples, researchers gain valuable insight into the nature and root causes of the biases lurking within the data.
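The sketch below illustrates this clustering-based audit under simple assumptions: embeddings of the data (for instance from a model's penultimate layer), ground-truth labels and predictions are already available, and all names are hypothetical placeholders rather than DEEL project code.

import numpy as np
from sklearn.cluster import KMeans

def cluster_error_report(embeddings, y_true, y_pred, n_clusters=10, seed=0):
    """Cluster the representation space and compare per-cluster error rates."""
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(embeddings)
    errors = (np.asarray(y_true) != np.asarray(y_pred)).astype(float)
    report = [(c, int((labels == c).sum()), float(errors[labels == c].mean()))
              for c in range(n_clusters)]
    # Clusters are returned worst-first: an error rate far above the global
    # average flags a candidate blind spot hiding an unsuspected sensitive variable.
    return float(errors.mean()), sorted(report, key=lambda r: r[2], reverse=True)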

The second axis delves into the intricate question of measuring fairness. Numerous fairness metrics exist in the literature, each encoding its own interpretation of what constitutes « fairness », and their applicability depends heavily on the context at hand. A key limitation of many of these metrics is their reliance on a binary sensitive variable, an oversimplification for real-world scenarios involving several potentially discriminated groups. To address this, the axis proposes a generalization of existing fairness metrics to non-binary sensitive variables and to diverse, potentially overlapping, discriminated groups, providing a more nuanced and accurate assessment of fairness; a minimal multi-group variant of a classic metric is sketched below.
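As a simplified stand-in for that generalization (not the project's exact formulation), the sketch below extends demographic parity, a classic fairness criterion, from a binary sensitive attribute to an arbitrary number of groups by comparing positive-prediction rates across all of them.

import numpy as np

def demographic_parity_gap(y_pred, groups):
    """Largest gap in positive-prediction rate across all sensitive groups."""
    y_pred, groups = np.asarray(y_pred), np.asarray(groups)
    rates = {g: float(y_pred[groups == g].mean()) for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Example with a non-binary sensitive variable (three age groups).
gap, per_group = demographic_parity_gap(
    y_pred=[1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    groups=["<30", "<30", "30-50", "30-50", "30-50", ">50", ">50", "<30", ">50", "30-50"],
)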

With the sources of bias identified and their impact measured, the final axis tackles the crucial step of mitigating bias and ensuring model fairness. Three promising avenues have been explored:

  • Optimal transport theory is applied to align the representations of sensitive and non-sensitive groups, reducing bias by ensuring that the model treats both groups similarly in the feature space (a simplified sketch of this penalty follows the list).
  • Data augmentation offers another lever: by strategically adjusting the distributions of the sensitive and non-sensitive groups, the goal is to build a more balanced training dataset and, ultimately, fairer model predictions.
  • Distributionally robust optimization and representation learning: this line of work explores the theoretical foundations that can guide learning towards unbiased representations, ideally achieving fairness without explicit group labels, which makes the approach more generalizable and privacy-preserving.
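As mentioned in the first bullet, the sketch below illustrates the optimal-transport idea with a Wasserstein-2 penalty on the classifier's output scores, in the spirit of publication 3 below but not its exact regularizer. It assumes a binary classifier ending in a sigmoid and equal-sized batches for the two groups, in which case the squared W2 distance between the empirical score distributions reduces to the mean squared difference of the sorted scores.

import tensorflow as tf

def wasserstein2_penalty(scores_a, scores_b):
    """Squared W2 distance between two equal-sized batches of 1-D scores."""
    sa = tf.sort(tf.reshape(scores_a, [-1]))
    sb = tf.sort(tf.reshape(scores_b, [-1]))
    return tf.reduce_mean(tf.square(sa - sb))

def fair_loss(model, x_a, y_a, x_b, y_b, lam=1.0):
    # Task loss on both groups plus the distribution-alignment penalty;
    # the model is assumed to output probabilities (sigmoid activation).
    bce = tf.keras.losses.BinaryCrossentropy()
    s_a, s_b = model(x_a, training=True), model(x_b, training=True)
    return bce(y_a, s_a) + bce(y_b, s_b) + lam * wasserstein2_penalty(s_a, s_b)

Penalizing the output distributions rather than the raw features keeps the regularizer cheap to compute while still pushing the classifier to score both groups similarly.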

Tool - Influenciae

To democratize tools for unknown-bias detection, we have developed Influenciae, an open-source TensorFlow library that implements some of the most recent state-of-the-art techniques for computing the influence of data points.
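The snippet below is a library-agnostic illustration of the kind of computation these techniques rely on: a first-order proxy that scores a training example by how well its loss gradient aligns with that of a test example. It is not Influenciae's actual API, and it ignores the Hessian correction used by full influence functions.

import tensorflow as tf

def flat_loss_gradient(model, loss_fn, x, y):
    """Gradient of the loss w.r.t. all trainable weights, flattened to one vector."""
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=False))
    grads = tape.gradient(loss, model.trainable_variables)
    return tf.concat([tf.reshape(g, [-1]) for g in grads], axis=0)

def influence_proxy(model, loss_fn, x_train, y_train, x_test, y_test):
    """Gradient alignment between a training example and a test example:
    large positive values flag training points that helped this prediction,
    large negative values flag points that hurt it."""
    g_test = flat_loss_gradient(model, loss_fn, x_test, y_test)
    g_train = flat_loss_gradient(model, loss_fn, x_train, y_train)
    return tf.tensordot(g_train, g_test, axes=1)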

Main Publications


1. « Detecting and Processing Unsuspected Sensitive Variables for Robust Machine Learning », Laurent Risser, Agustin-Martin Picard, Lucas Hervier, Jean-Michel Loubes, Algorithms, 16(11), 510, 2023


2. « Fairness seen as Global Sensitivity Analysis », Bénesse, C., Gamboa, F., Loubes, J., & Boissin, T., Machine Learning, Springer, 1–28, 2022


3. « Tackling Algorithmic Bias in Neural-Network Classifiers using Wasserstein-2 Regularization », Risser, L., Sanz, A. G., Vincenot, Q., & Loubes, J., Journal of Mathematical Imaging and Vision, 64, 672–689, 2022


4. « Leveraging Influence Functions for Dataset Exploration and Cleaning », Agustin Martin Picard, David Vigouroux, Petr Zamolodtchikov, Quentin Vincenot, Jean-Michel Loubes, et al., ERTS 2022