Speaker
Description
Model misspecification analysis strategies, such as anomaly detection, model validation, and model comparison, are a key component of scientific model development. Over the last few years, there has been a rapid rise in the use of simulation-based inference (SBI) techniques for Bayesian parameter estimation, applied to increasingly complex forward models. To move towards fully simulation-based analysis pipelines, however, there is an urgent need for a comprehensive simulation-based framework for model misspecification analysis.
In this talk, I will describe a solid and flexible foundation for a wide range of model discrepancy analysis tasks, using distortion-driven model misspecification tests. From a theoretical perspective, I will introduce a statistical framework built around performing many hypothesis tests for distortions of the simulation model. I will also make explicit analytic connections to classical techniques: anomaly detection, model validation, and goodness-of-fit residual analysis. Furthermore, I will introduce an efficient self-calibrating training algorithm that is useful for practitioners. I will demonstrate the performance of the framework in multiple scenarios, making the connection to classical results where they are valid. Finally, I will show how to conduct such a distortion-driven model misspecification test on real gravitational-wave data, specifically the event GW150914.
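To make the calibration logic behind such a distortion test concrete, here is a minimal sketch under stated assumptions: the toy Gaussian simulator, the additive distortion `delta`, and the standardized-mean test statistic are hypothetical stand-ins for the learned distortions and statistics described in the talk, and the Monte Carlo null calibration plays the role of the self-calibration step.

```python
# Minimal sketch (not the authors' implementation): a simulation-calibrated
# hypothesis test for a single distortion of a toy simulator.
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, delta=0.0, n=100):
    """Toy forward model: Gaussian data with mean theta, distorted by delta."""
    return rng.normal(loc=theta + delta, scale=1.0, size=n)

def test_statistic(x, theta):
    """Hypothetical distortion statistic: standardized mean residual."""
    return np.abs(np.mean(x) - theta) * np.sqrt(len(x))

# Null calibration: many undistorted simulations give the reference
# distribution of the statistic (the paper learns this with a network;
# here it is brute-force Monte Carlo).
theta_fid = 1.0
null_stats = np.array([test_statistic(simulate(theta_fid), theta_fid)
                       for _ in range(2000)])

# "Observed" data drawn from a misspecified simulator (delta != 0).
x_obs = simulate(theta_fid, delta=0.5)
t_obs = test_statistic(x_obs, theta_fid)

# Monte Carlo p-value: fraction of null simulations at least as extreme.
p_value = (1 + np.sum(null_stats >= t_obs)) / (1 + len(null_stats))
print(f"test statistic = {t_obs:.2f}, p-value = {p_value:.3f}")
```

In this toy setting the statistic reduces to a classical goodness-of-fit residual, which mirrors the talk's point that the framework recovers classical results where they are valid.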
Related work: https://arxiv.org/abs/2412.15100
AI keywords: simulation-based inference, misspecification tests, out-of-distribution, anomaly detection