Sens. Mater., Vol. 38, No. 5, 2026, pp. 2507-2523
S&M4450 Research Paper
https://doi.org/10.18494/SAM6352
Published: May 12, 2026

Debasing Tabular Data via Actionable Explanations for Fair Machine Learning

Jinchao Ge, Tao Du, Haidong Li, Nan Ju, and Shuwen Zhao
(Received March 26, 2026; Accepted April 16, 2026)

Keywords: machine learning fairness, bias detection, feature interactions, model interpretability
As machine learning (ML) models are increasingly used in high-stakes domains, improving both predictive performance and fairness has become an important research problem. Existing explanation-based fairness analysis methods often focus on individual feature effects while paying limited attention to feature redundancy and interaction, which can lead to incomplete bias diagnosis. In this paper, we propose Prioritizing Redundancy and Interaction through Monte Carlo Tree (PRIMCT), a fairness-oriented feature subset analysis framework based on Monte Carlo Tree Search (MCTS). The method jointly evaluates feature importance, redundancy, and interaction to identify feature subsets that are informative about model behavior and useful for bias diagnosis. We validate PRIMCT on three tabular datasets: German Credit Data, Adult Income Data (AID), and Stop, Question, and Frisk Data. Experimental results show that PRIMCT consistently achieves a better balance between predictive performance and fairness-related metrics than several baseline methods; on AID, in particular, it improves accuracy by 13.34% and fairness metrics by 11.31%.
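To make the abstract's core idea concrete, the sketch below illustrates Monte Carlo search over feature subsets that trades off feature importance against redundancy. This is only a toy, flat Monte Carlo illustration under invented scores; the feature names, importance values, redundancy table, and function names are all hypothetical and are not taken from the paper, and the actual PRIMCT algorithm (a full MCTS with interaction terms) is not reproduced here.

```python
import random
from itertools import combinations

# Hypothetical toy scores (NOT from the paper): per-feature importance and
# pairwise redundancy among five invented tabular features.
IMPORTANCE = {"age": 0.9, "income": 0.8, "zip": 0.3, "debt": 0.7, "sex": 0.2}
REDUNDANCY = {("age", "income"): 0.5, ("debt", "income"): 0.6, ("sex", "zip"): 0.1}

def subset_score(subset):
    """Reward total importance, penalize redundancy between chosen features."""
    imp = sum(IMPORTANCE[f] for f in subset)
    red = sum(REDUNDANCY.get(tuple(sorted(pair)), 0.0)
              for pair in combinations(subset, 2))
    return imp - red

def monte_carlo_subset_search(features, k, rollouts=200, seed=0):
    """Flat Monte Carlo search: grow the subset one feature at a time,
    estimating each candidate's value by random completions to size k."""
    rng = random.Random(seed)
    chosen = []
    while len(chosen) < k:
        best_f, best_val = None, float("-inf")
        for f in sorted(set(features) - set(chosen)):
            remaining = sorted(set(features) - set(chosen) - {f})
            # Average score of random size-k subsets that include `chosen + [f]`.
            total = sum(
                subset_score(chosen + [f] + rng.sample(remaining,
                                                       k - len(chosen) - 1))
                for _ in range(rollouts)
            )
            if total / rollouts > best_val:
                best_f, best_val = f, total / rollouts
        chosen.append(best_f)
    return sorted(chosen)
```

With these toy scores, the search selects the size-3 subset whose total importance minus pairwise redundancy is highest; a full MCTS would additionally reuse statistics across the search tree and, in PRIMCT's case, account for feature interactions.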
Corresponding author: Haidong Li

This work is licensed under a Creative Commons Attribution 4.0 International License.

Cite this article: Jinchao Ge, Tao Du, Haidong Li, Nan Ju, and Shuwen Zhao, Debasing Tabular Data via Actionable Explanations for Fair Machine Learning, Sens. Mater., Vol. 38, No. 5, 2026, pp. 2507-2523.