Description
The computational cost of individual likelihood evaluations and physics simulations is a key limiting factor for BSM global fits and other large-scale parameter scans. One approach to tackling this is to use fast, pretrained emulators for the most expensive computations. However, as the set of relevant experimental results is frequently updated, pretrained emulators often have limited reusability. Here we propose an approach for training and applying fast emulators on the fly during parameter scans, based on the Dividing Local Gaussian Processes (DLGP) algorithm of Lederer et al. During a scan, the DLGP algorithm iteratively divides the input space using an evolving binary tree, where each leaf contains a local Gaussian process (GP) emulator. Guided by the typical computational requirements of BSM global fits, we extend the DLGP approach to improve its prediction accuracy. Our modifications include the use of new covariance functions, more detailed and frequent retraining of the local GPs, and new approaches to performing the iterative division of the input space. We demonstrate our approach on data from a recent global fit by the GAMBIT Collaboration.
Joint work with Riccardo De Bin and Anders Kvellestad (both from the University of Oslo)
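
As a rough illustration of the tree-of-local-GPs idea described above, the sketch below grows a binary tree whose leaves each hold a local GP trained on the points routed to them; when a leaf exceeds a capacity threshold, it is divided and its points are redistributed to two child leaves. This is a minimal sketch under stated assumptions, not the DLGP implementation: the capacity threshold, median split rule, and Matern kernel are illustrative placeholders, and the actual algorithm's division and retraining strategies differ.

```python
# Minimal sketch: a binary tree of local GP emulators.
# Splitting rule, leaf capacity, and kernel are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

MAX_LEAF_POINTS = 50  # hypothetical capacity before a leaf is divided


class Node:
    def __init__(self):
        self.X, self.y = [], []   # training points stored at a leaf
        self.gp = None            # local GP emulator for this leaf
        self.split_dim = None     # set once the node becomes internal
        self.split_val = None
        self.left = self.right = None

    def is_leaf(self):
        return self.split_dim is None

    def _descend(self, x):
        """Route a point to the leaf whose region contains it."""
        node = self
        while not node.is_leaf():
            node = node.left if x[node.split_dim] <= node.split_val else node.right
        return node

    def add(self, x, y):
        """Add a new (input, target) pair, retrain the leaf GP, split if full."""
        leaf = self._descend(x)
        leaf.X.append(list(x))
        leaf.y.append(y)
        leaf._fit()
        if len(leaf.X) > MAX_LEAF_POINTS:
            leaf._split()

    def _fit(self):
        # Retrain the local GP on all points currently in this leaf.
        # The Matern kernel is one plausible covariance function choice.
        self.gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        self.gp.fit(np.asarray(self.X), np.asarray(self.y))

    def _split(self):
        # Divide along the widest dimension at its median (one simple rule).
        X = np.asarray(self.X)
        self.split_dim = int(np.argmax(X.max(axis=0) - X.min(axis=0)))
        self.split_val = float(np.median(X[:, self.split_dim]))
        self.left, self.right = Node(), Node()
        for x, y in zip(self.X, self.y):
            child = self.left if x[self.split_dim] <= self.split_val else self.right
            child.X.append(x)
            child.y.append(y)
        if not self.left.X or not self.right.X:
            # Degenerate split (e.g. duplicate coordinates): stay a leaf.
            self.split_dim = self.split_val = None
            self.left = self.right = None
            return
        self.left._fit()
        self.right._fit()
        self.X, self.y, self.gp = [], [], None  # now an internal node

    def predict(self, x):
        """Emulate the target at x with the local GP of the containing leaf."""
        leaf = self._descend(x)
        mean, std = leaf.gp.predict(np.array([x]), return_std=True)
        return mean[0], std[0]
```

During a scan, each newly evaluated (parameter point, likelihood) pair would be fed to `add`, and `predict` would return the emulated value together with an uncertainty estimate that can be used to decide whether the true, expensive evaluation is still needed.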