Fairness Shields: Safeguarding against Biased Decision Makers

Published in the 9th International Conference on Formal Structures for Computation and Deduction, 2025

As AI-based decision-makers increasingly influence human lives, there is growing concern that their decisions are often unfair or biased with respect to people's sensitive attributes, such as gender and race. Most existing bias-prevention measures provide probabilistic fairness guarantees only in the long run, so decisions may still be biased on specific short decision sequences. We introduce fairness shielding, in which a symbolic decision-maker, the fairness shield, continuously monitors the sequence of decisions of another deployed black-box decision-maker and makes interventions so that a given fairness criterion is met while the total intervention cost is minimized. We present four algorithms for computing fairness shields: one guarantees fairness over a fixed horizon, and three guarantee fairness periodically after fixed intervals. Given a distribution over future decisions and their intervention costs, our algorithms solve different instances of bounded-horizon optimal control problems with different computational costs and optimality guarantees. Our empirical evaluation demonstrates the effectiveness of these shields in ensuring fairness while maintaining cost efficiency across various scenarios.
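The paper's four algorithms are not reproduced here, but the following Python sketch illustrates the general idea of a fixed-horizon fairness shield under strong simplifying assumptions: binary decisions, two groups, demographic parity as the fairness criterion, i.i.d. arrivals from a known distribution, and a uniform intervention (flip) cost. All names and constants (`T`, `EPS`, `FLIP_COST`, `value`, `shield`) are hypothetical illustration choices, not identifiers from the paper.

```python
"""Minimal fixed-horizon fairness-shield sketch (illustrative, not the
paper's exact algorithms). Assumptions: binary accept/reject decisions,
two groups A and B, fairness = acceptance-rate gap at most EPS after T
decisions, i.i.d. arrivals, and a uniform cost for flipping a decision."""
from functools import lru_cache

T = 6            # horizon length (number of decisions)
EPS = 0.34       # allowed acceptance-rate gap at the end of the horizon
P_GROUP_A = 0.5  # probability the next individual is from group A
P_ACCEPT = 0.7   # probability the black-box decision-maker proposes "accept"
FLIP_COST = 1.0  # cost of overriding one proposed decision
INF = float("inf")

def fair(nA, accA, nB, accB):
    """Demographic-parity check on final counters (vacuous if a group is empty)."""
    if nA == 0 or nB == 0:
        return True
    return abs(accA / nA - accB / nB) <= EPS

@lru_cache(maxsize=None)
def value(t, nA, accA, nB, accB):
    """Minimal expected future intervention cost from this counter state;
    INF if fairness can no longer be guaranteed on every arrival sequence."""
    if t == T:
        return 0.0 if fair(nA, accA, nB, accB) else INF
    exp = 0.0
    for is_a, pg in ((True, P_GROUP_A), (False, 1 - P_GROUP_A)):
        for proposed, pd in ((1, P_ACCEPT), (0, 1 - P_ACCEPT)):
            best = INF
            for final in (proposed, 1 - proposed):  # keep or flip the decision
                cost = 0.0 if final == proposed else FLIP_COST
                na, aa = (nA + 1, accA + final) if is_a else (nA, accA)
                nb, ab = (nB, accB) if is_a else (nB + 1, accB + final)
                best = min(best, cost + value(t + 1, na, aa, nb, ab))
            exp += pg * pd * best
    return exp

def shield(t, nA, accA, nB, accB, is_a, proposed):
    """At runtime, return the cost-minimal fairness-preserving final decision."""
    options = []
    for final in (proposed, 1 - proposed):
        cost = 0.0 if final == proposed else FLIP_COST
        na, aa = (nA + 1, accA + final) if is_a else (nA, accA)
        nb, ab = (nB, accB) if is_a else (nB + 1, accB + final)
        options.append((cost + value(t + 1, na, aa, nb, ab), final))
    return min(options)[1]

# Expected total intervention cost of the optimal shield from the start,
# and the shield's decision for a first individual (group A, proposed accept).
print("expected intervention cost:", value(0, 0, 0, 0, 0))
print("shielded first decision:", shield(0, 0, 0, 0, 0, True, 1))
```

The exhaustive dynamic program above corresponds in spirit to exact bounded-horizon optimal control; the periodic variants described in the abstract would, roughly, re-solve such a problem after every interval, trading computational cost against optimality guarantees.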