[Submitted on 23 Sep 2022]
Abstract: Robustness studies of black-box models are recognized as a necessary task for numerical models based on structural equations and for predictive models learned from data. These studies must assess the model's robustness to possible misspecifications of its inputs (such as covariate shift). The study of black-box models through the prism of uncertainty quantification (UQ) is often based on sensitivity analysis, which involves a probabilistic structure imposed on the inputs, whereas ML models are built solely from observed data. Our work aims to unify the interpretability approaches of UQ and ML by providing relevant, easy-to-use tools for both paradigms. To provide a generic and understandable framework for robustness studies, we define perturbations of the input information that rely on quantile constraints and on projections with respect to the Wasserstein distance between probability measures, while preserving their dependence structure. We show that this perturbation problem can be solved analytically. Ensuring regularity constraints by means of isotonic polynomial approximations leads to smoother perturbations, which can be more suitable in practice. Numerical experiments on real case studies, from the UQ and ML fields, highlight the computational feasibility of such studies and provide local and global insights into the robustness of black-box models to input perturbations.
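The two ingredients the abstract combines can be illustrated on a one-dimensional sample. This is a minimal sketch, not the paper's analytic projection: it assumes the standard fact that the 2-Wasserstein distance between two equal-size empirical 1-D samples is the root-mean-square difference of their sorted values, and it uses a hypothetical `perturb_quantile` helper that enforces a quantile constraint by a rank-preserving shift (so any copula-based dependence with other inputs is untouched).

```python
import math

def wasserstein2_1d(xs, ys):
    """Empirical 2-Wasserstein distance between two equal-size 1-D samples:
    the RMS difference of the sorted values (the empirical quantile functions)."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(xs, ys)) / len(xs))

def perturb_quantile(sample, alpha, target):
    """Illustrative rank-preserving perturbation (an assumption, not the
    paper's projection): shift the whole sample so that its empirical
    alpha-quantile equals `target`. Ranks, and hence the dependence
    structure with other inputs, are unchanged."""
    s = sorted(sample)
    idx = min(int(alpha * len(s)), len(s) - 1)
    delta = target - s[idx]
    return [x + delta for x in sample]

# Move the empirical median of a toy sample to 5.0 and measure the cost.
x = [1.0, 2.0, 3.0, 4.0]
y = perturb_quantile(x, 0.5, 5.0)      # -> [3.0, 4.0, 5.0, 6.0]
cost = wasserstein2_1d(x, y)           # -> 2.0
```

A uniform shift is the crudest perturbation meeting the constraint; the paper's contribution is the analytically solvable Wasserstein projection under such quantile constraints, optionally smoothed by isotonic polynomial approximation.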
From: Marouane Il Idrissi [view email] [via CCSD proxy]
[v1] Fri, 23 Sep 2022 11:58:03 UTC (942 KB)