You only derive once (YODO): Automatic differentiation for efficient sensitivity analysis in Bayesian networks

Sensitivity analysis measures the influence of a Bayesian network’s parameters on a quantity of interest defined by the network, such as the probability of a variable taking a specific value. In particular, the so-called sensitivity value measures the quantity of interest’s partial derivative with respect to the network’s conditional probabilities. However, finding such values in large networks with thousands of parameters can become computationally very expensive. We propose to use automatic differentiation combined with exact inference to obtain all sensitivity values in a single pass. Our method first marginalizes the whole network once, using e.g. variable elimination, and then backpropagates this operation to obtain the gradient with respect to all input parameters. We demonstrate our routines by ranking all parameters by importance on a Bayesian network modeling humanitarian crises and disasters, and then show the method’s efficiency by scaling it to huge networks with up to 100,000 parameters. An implementation of the methods using the popular machine learning library PyTorch is freely available.
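The idea can be illustrated on a toy network. The sketch below (an assumption for illustration, not the authors' released implementation) builds a two-node network A → B in PyTorch, marginalizes it once by tensor contraction to obtain the quantity of interest P(B = 0), and then calls `backward()` once to obtain the sensitivity values for every CPT entry simultaneously:

```python
import torch

# CPTs as leaf tensors with gradients enabled (values are made up).
p_a = torch.tensor([0.3, 0.7], requires_grad=True)            # P(A)
p_b_given_a = torch.tensor([[0.9, 0.1],                        # P(B | A=0)
                            [0.2, 0.8]], requires_grad=True)   # P(B | A=1)

# "Marginalize the whole network once": variable elimination here is a
# single tensor contraction summing out A, yielding P(B).
p_b = torch.einsum('a,ab->b', p_a, p_b_given_a)
quantity = p_b[0]  # quantity of interest: P(B = 0)

# One backward pass yields ALL sensitivity values at once.
quantity.backward()
print(p_a.grad)          # ∂P(B=0)/∂P(A=a) for every a
print(p_b_given_a.grad)  # ∂P(B=0)/∂P(B=b|A=a) for every entry
```

For this example, P(B = 0) = 0.3·0.9 + 0.7·0.2 = 0.41, and the single backward pass recovers the partial derivatives 0.9 and 0.2 with respect to the two entries of P(A), hence the name: you only derive once.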

Citation

Rafael Ballester-Ripoll and Manuele Leonelli (2022). You only derive once (YODO): Automatic differentiation for efficient sensitivity analysis in Bayesian networks. In Proceedings of the 11th International Conference on Probabilistic Graphical Models (PGM), pp. 169-180. PMLR. 5-7 September 2022, Almería, Spain.

Authors from IE Research Datalab