Justicia: A Stochastic SAT Approach to Formally Verify Fairness
Paper in proceedings, 2021

As a technology, ML is oblivious to societal good or bad, and thus the field of fair machine learning has stepped up to propose multiple mathematical definitions, algorithms, and systems to ensure different notions of fairness in ML applications. Given the multitude of propositions, it has become imperative to formally verify the fairness metrics satisfied by different algorithms on different datasets. In this paper, we propose a stochastic satisfiability (SSAT) framework, Justicia, that formally verifies different fairness measures of supervised learning algorithms with respect to the underlying data distribution. We instantiate Justicia on multiple classification and bias-mitigation algorithms and datasets to verify different fairness metrics, such as disparate impact, statistical parity, and equalized odds. Justicia is scalable, accurate, and operates on non-Boolean and compound sensitive attributes, unlike existing distribution-based verifiers such as FairSquare and VeriFair. Being distribution-based by design, Justicia is more robust than verifiers, such as AIF360, that operate on specific test samples. We also theoretically bound the finite-sample error of the verified fairness measure.
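To make the metrics named in the abstract concrete, the sketch below computes statistical parity difference and disparate impact from group-wise positive-prediction rates. This is only the standard textbook definition of these metrics on illustrative numbers, not Justicia's SSAT-based verification procedure; the rates used are hypothetical.

```python
def statistical_parity_difference(p_protected: float, p_majority: float) -> float:
    """|P(Yhat=1 | protected group) - P(Yhat=1 | majority group)|."""
    return abs(p_protected - p_majority)


def disparate_impact(p_protected: float, p_majority: float) -> float:
    """P(Yhat=1 | protected group) / P(Yhat=1 | majority group)."""
    return p_protected / p_majority


# Hypothetical positive-prediction rates for two demographic groups
p_prot, p_maj = 0.42, 0.60
print(statistical_parity_difference(p_prot, p_maj))
print(disparate_impact(p_prot, p_maj))
```

A common rule of thumb (the "80% rule") flags disparate impact below 0.8; Justicia's contribution is verifying such measures against the data distribution rather than a single test set.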


Bishwamittra Ghosh

National University of Singapore (NUS)

Debabrota Basu

Chalmers, Computer Science and Engineering (Chalmers), Data Science

Inria Lille Nord Europe

Kuldeep S. Meel

National University of Singapore (NUS)

35th AAAI Conference on Artificial Intelligence, AAAI 2021

Vol. 35, pp. 7554-7563
ISBN: 978-1-57735-866-4

35th AAAI Conference on Artificial Intelligence / 33rd Conference on Innovative Applications of Artificial Intelligence / 11th Symposium on Educational Advances in Artificial Intelligence

Subject Categories

Probability Theory and Statistics

Signal Processing

Computer Vision and Robotics (Autonomous Systems)

