REMOVING BIAS AND SENSITIVE INFORMATION FROM DATASETS IN MACHINE LEARNING

Faculty members: Mathieu Serrurier, Jean-Michel Loubes

Data scientists: David Vigouroux, Quentin Vincenot, Franck Mamalet

SCOPE

Propose and demonstrate solutions whose decisions are not influenced by biases (known and unknown) that could make Machine Learning systems unfair, and therefore non-dependable.

Understand the effects of the distribution of the learning sample on the machine learning process, in order to guarantee anomaly detection and robust, efficient algorithms that are resilient to modifications of their environment, that can possibly benefit from multiple inputs (transfer learning or consensus learning), and that can detect the missing learning data required to protect themselves from adversarial conditions.

Decision rules in Machine Learning are learnt from labeled data called samples. Decisions are then applied to the whole data, which is assumed to follow the same underlying distribution. However, in some cases biases can influence these decisions. Biases can be due to a real but unwanted bias present in the data observations (societal bias, for instance), or to the way data is processed (multiple sources, parallelized inference, evolution in the distribution of data, etc.). Machine Learning techniques, which search for correlations in the training dataset, will tend to exploit and magnify these biases.


The bias topic is also related to the protection of sensitive information present in the data, which must remain hidden so that it cannot be retrieved by anyone with access to the database or to the outcome of an algorithm trained on such observations. A further challenge will be to propose methods that keep sensitive information hidden during the training phase.

Examples of industrial use cases

  1. Remote sensing classification with multiple sensors, to guarantee that results are independent of the sensors' characteristics
  2. Traffic sign recognition (automotive, railway, taxiway) under different environmental conditions (luminosity, weather, night/day, country…), to guarantee that results are independent of these conditions
  3. Being able to share datasets containing sensitive information

State of the art and limitations

Problems of bias, whether coming from the original observation variables or from biases that exist in their use, are a difficult issue when building data-driven automated decision rules, since bias may hamper their power of generalization. Although this issue is well known, few results provide an effective correction for such effects. Very recently, new methods have been developed to build either unbiased learning samples or unbiased decision rules. Such methods rely on imposing independence constraints within the machinery of machine learning techniques. Certification of such methods, in particular controlling their error rates, is still an important topic. Moreover, understanding how correlation is turned into causality, which is at the core of this problem, remains a challenging field of research.

Scientific approach to solve the challenge

In recent years, new tools have been developed for applications of optimal transport in machine learning and statistics, including new tests for classification using the Wasserstein distance and statistical properties of Fréchet means of distributions seen as Wasserstein barycenters. This work has important developments in machine learning for fairness issues and for the robustness of machine learning algorithms with respect to stress or evolution of the distribution.
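
As an illustration of these tools, the 1D Wasserstein distance between the distributions of a model's scores across two groups defined by a sensitive attribute already gives a simple disparity measure. The following is a minimal sketch in Python on synthetic scores; the group score distributions are illustrative assumptions, not data from the project.

    # Minimal sketch: disparity of model scores across two groups,
    # measured with the 1D Wasserstein (W1) distance. Synthetic data only.
    import numpy as np
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(0)

    # Hypothetical model scores for two groups defined by a sensitive attribute.
    scores_group_0 = rng.normal(loc=0.45, scale=0.10, size=1000)
    scores_group_1 = rng.normal(loc=0.60, scale=0.12, size=1000)

    # W1 = 0 would mean identical score distributions across groups
    # (statistical parity); larger values indicate a stronger disparity.
    disparity = wasserstein_distance(scores_group_0, scores_group_1)
    print(f"W1 disparity between groups: {disparity:.3f}")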

The main direction of this research project deals with the theoretical properties of statistical inference under fairness constraints, modelling how the output of an algorithm can depend on an unwanted variable whose influence is nevertheless present in the learning sample. One proposition is to rewrite the framework of fair learning using tools from mathematical statistics and optimal transport theory, in order to obtain new methods and bounds (connected to differential privacy). Extensions to other statistical methods (tests, PCA, PLS, matrix factorizations, Bayesian networks), to unsupervised learning, and to machine learning methods (regression models, ranking models, online algorithms, deep networks, GANs…) can be considered. The goal is to provide new feasible algorithms that promote fairness by adding constraints. Finally, replacing the notion of independence with a notion of causality can provide new ways of understanding algorithms and of adding prior knowledge, such as acceptability, logic, or physical constraints, to AI.
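
To make the idea of promoting fairness by adding constraints concrete, the sketch below adds a statistical parity penalty (the mean score gap between the two groups of a binary sensitive variable) to an ordinary classification loss. It is a minimal PyTorch sketch on synthetic data; the architecture and the trade-off weight lam are illustrative choices, not the project's method.

    # Minimal sketch: training under a fairness constraint by penalizing
    # the dependence between scores and a sensitive variable. Synthetic data.
    import torch

    torch.manual_seed(0)
    n, d = 2000, 5
    X = torch.randn(n, d)
    s = (torch.rand(n) < 0.5).float()   # binary sensitive variable
    # Labels correlated with both the features and the sensitive variable.
    y = ((X[:, 0] + 0.8 * s + 0.3 * torch.randn(n)) > 0.4).float()

    model = torch.nn.Sequential(torch.nn.Linear(d, 16), torch.nn.ReLU(),
                                torch.nn.Linear(16, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    bce = torch.nn.BCEWithLogitsLoss()
    lam = 1.0                           # fairness/accuracy trade-off weight

    for epoch in range(200):
        opt.zero_grad()
        logits = model(X).squeeze(1)
        p = torch.sigmoid(logits)
        # Statistical parity penalty: absolute mean score gap between groups.
        gap = (p[s == 1].mean() - p[s == 0].mean()).abs()
        loss = bce(logits, y) + lam * gap
        loss.backward()
        opt.step()

    print(f"final mean score gap between groups: {gap.item():.3f}")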

The second direction of the research program deals with extensions of fairness models to obtain robust and explainable models. This research on fairness has other possible impacts, through constraints that guarantee conditional independence with respect to other variables. In feature selection, controlling or understanding the real impact of variables, and mitigating the unwanted bias coming from unbalanced samples, is of high importance for novelty detection and for explaining the outcome of algorithms. Fairness issues then involve understanding the effects of changes in the distribution between the learning sample and the generalization sample. Facing this issue makes it possible to deal with the influence of a change of distribution, for robust machine learning methods, or with the presence of multiple distributions, for collaborative or consensus inference. Moreover, the behavior of an algorithm under changes of the distributions provides a deep insight into the importance and causality of each variable at play in the learning process. This provides a natural framework for understanding the global and local explainability of automated rules.
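
One classical device for handling such a change of distribution between the learning sample and the generalization sample is importance weighting: a domain classifier estimates the density ratio between the target and source distributions, and the training loss is reweighted accordingly. The sketch below, on synthetic data with scikit-learn, is one possible instantiation of this idea under an assumed Gaussian shift, not a prescription from the project.

    # Minimal sketch: estimating importance weights for a train/target
    # distribution shift with a domain classifier. Synthetic data only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_train = rng.normal(0.0, 1.0, size=(2000, 3))   # learning sample
    X_target = rng.normal(0.7, 1.0, size=(2000, 3))  # shifted generalization sample

    # Domain classifier distinguishing target (label 1) from training (label 0).
    X_dom = np.vstack([X_train, X_target])
    d_dom = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_target))])
    dom = LogisticRegression().fit(X_dom, d_dom)

    # Density-ratio estimate w(x) ~ p_target(x) / p_train(x) on the learning
    # sample; these weights can be plugged into any weighted training loss
    # (e.g. the sample_weight argument of scikit-learn estimators).
    proba = dom.predict_proba(X_train)[:, 1]
    weights = proba / (1.0 - proba)
    print("weight range on the learning sample:",
          weights.min().round(3), "-", weights.max().round(3))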

The third direction of the research program is to study biases when the sensitive parameters are unknown. We will focus on the case where biases are due to the violation of the independent and identically distributed (i.i.d.) hypothesis in unsupervised learning. In real applications, it is commonly assumed that the observations in a dataset are i.i.d., which is difficult to verify. Wrongly assuming the i.i.d. hypothesis can have undesirable effects, such as poor performance on rare observations or unfair networks. Instead of the i.i.d. hypothesis, another assumption can be made: the loss error should be identical regardless of how frequently an observation occurs. This new hypothesis is justified in critical systems, where good performance is necessary not only globally but also on rare events. To obtain the distribution of the samples, autoregressive models [7] or flow-based models [8] can be used. The first step will be to modify the loss of these models to take the new hypothesis into account.
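
To make this alternative hypothesis concrete: one simple way to equalize the loss between frequent and rare observations is to weight each sample's loss by the inverse of its estimated density, so that rare events contribute as much as common ones. In the sketch below a kernel density estimate stands in for the autoregressive or flow-based density models cited above; the data and the bandwidth are illustrative assumptions.

    # Minimal sketch: inverse-density loss weighting so that rare
    # observations weigh as much as frequent ones. Synthetic data only.
    import numpy as np
    from sklearn.neighbors import KernelDensity

    rng = np.random.default_rng(0)
    # Imbalanced sample: one dense cluster plus a small rare cluster.
    X = np.vstack([rng.normal(0.0, 0.5, size=(950, 2)),
                   rng.normal(4.0, 0.5, size=(50, 2))])

    kde = KernelDensity(bandwidth=0.5).fit(X)
    density = np.exp(kde.score_samples(X))        # estimated p(x) per sample

    # Inverse-density weights, normalized to mean 1: a per-sample loss
    # multiplied by these weights is no longer dominated by frequent points.
    weights = 1.0 / np.clip(density, 1e-6, None)
    weights /= weights.mean()

    print("mean weight, dense cluster:", weights[:950].mean().round(2))
    print("mean weight, rare cluster:", weights[950:].mean().round(2))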

The fourth direction of the research program will extend Differential Privacy (DP) to testing problems (and then to classification issues) in order to evaluate the theoretical cost of privacy. Differential Privacy also provides a theoretical framework that can be used to define a notion of differential fairness, where an algorithm behaves similarly for individuals with similar characteristics but different, close values of the sensitive variable. Differential privacy is the most common formalization of the privacy problem. It can be summed up by the following condition: altering a single data point does not affect the probability of an outcome much. In particular, it can be parametrized by some α, where low values of α correspond to a stronger privacy condition.
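
For reference, this condition admits a standard formalization (written here with the document's α in place of the more usual ε): a randomized algorithm A is α-differentially private if, for all datasets D and D' differing in a single data point and every measurable set of outcomes S,

    \[
      \mathbb{P}\big(A(D) \in S\big) \;\le\; e^{\alpha}\,\mathbb{P}\big(A(D') \in S\big).
    \]

Low values of α force the two output distributions to be nearly indistinguishable, which matches the stronger-privacy reading above. Differential fairness can be written analogously, replacing D and D' by two individuals that differ only in the value of the sensitive variable.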

Expected scientific outcomes

  1. Lower bounds to assess the theoretical cost of fairness, with links to domain adaptation rates
  2. Removing sensitive information from image analysis problems using GANs, such as image classification, change detection, or multiple-source image analysis in general
  3. Fairness for ranking problems, such as evaluations, hiring procedures, etc.
  4. Differential fairness, to distinguish close cases (similar characteristics but different sensitive variables) and analyze them fairly without a priori, using the same kind of definition as differential privacy
  5. Searching for parts of the domain that exhibit unfairness, and fixing this by replacing global fairness with local fairness; this will have important applications in reinforcement learning and active learning
  6. Auto-balanced datasets for unsupervised tasks
  7. Methods to keep sensitive information hidden during the training phase

Dataset required for the challenge

  1. Remote Sensing

Image classification and land-cover classification tasks consist in classifying observed scenes as a defined type of image (depending on ground aspects, object presence, etc.) [1]. The CORINE land-cover ground truth is available to check the results of model predictions.
Change detection consists in identifying precisely, in a pair of images, the areas that changed from one image to the other. This is a very common use case in Earth observation, used to watch buildings and the development of urban areas, or to monitor deforestation and floods in particular areas [2].
These three use cases are general to Earth observation, and current industrial applications could be enhanced by merging and working with multiple datasets, coming from multiple satellites, in order to get the most out of all available data. However, fairness constraints have to be taken into account in order to avoid biased classification or change detection, so as to be as efficient as possible by benefitting from the mass of data without being influenced by its source.
Images coming from multiple sources can have different characteristics related to satellite sensors, camera resolution, antenna alignment, lines of sight, etc. Therefore, images produced under these different "conditions" will have different statistics and properties. Decisions of Machine Learning techniques shall not be influenced by these characteristics, but only by the inherent information that resides inside the images [3].
Sentinel-2 and Landsat-8 open datasets (accessible through the CNES PEPS platform and the USGS online portal) are potential candidates to quickly prototype image classification challenges. Change detection use cases can take advantage of the Onera change detection dataset, online computer vision datasets, or appropriately selected images from the Sentinel and Landsat databases.

  2. Traffic signs

Airbus datasets from the traffic sign recognition challenge
Renault datasets from turn indicator recognition

Success criteria for the challenge

The challenge will be successful if we have developed:

  1. A methodology, tools, and metrics to quantify the biases in datasets
  2. Algorithms that correct these biases
  3. Benchmarks demonstrating the performance of these algorithms on various datasets, including at least one from an industrial application, on supervised and unsupervised tasks
  4. A methodology to enable collaborative learning with datasets that include sensitive information

References

[1] M. Kampffmeyer, A.-B. Salberg, and R. Jenssen, "Urban land cover classification with missing data using deep convolutional neural networks", in 2017 International Geoscience and Remote Sensing Symposium, 2017.

[2] K. L. de Jong and A. S. Bosman, "Unsupervised Change Detection in Satellite Images Using Convolutional Neural Networks", IJCNN, 2019.

[3] D. Madras, E. Creager, T. Pitassi, and R. Zemel, "Learning Adversarially Fair and Transferable Representations", 2018.

[4] E. del Barrio, P. Gordaliza, and J.-M. Loubes, "A Central Limit Theorem for transportation cost with applications to Fairness Assessment in Machine Learning", Information and Inference, 2019.

[5] E. del Barrio, F. Gamboa, P. Gordaliza, and J.-M. Loubes, "Obtaining fairness using optimal transport theory", ICML, 2019.

[6] E. del Barrio and J.-M. Loubes, "Central limit theorems for empirical transportation cost in general dimension", The Annals of Probability, 47(2), 926-951, 2019.

[7] K. Gregor, I. Danihelka, A. Mnih, C. Blundell, and D. Wierstra, "Deep AutoRegressive Networks", 2014.

[8] E. Dupont, A. Doucet, and Y. W. Teh, "Augmented Neural ODEs", arXiv:1904.01681, 2019.