Description
Monte Carlo simulations provide a systematically improvable framework for computing Euclidean correlation functions with high precision. However, their effectiveness is fundamentally constrained by the exponential decay of the signal-to-noise ratio at large Euclidean time separations. By expressing correlators as derivatives of one-point functions with respect to sources in the action, the signal-to-noise problem can be reformulated as an overlap problem within a reweighting procedure. Recent work has demonstrated that combining automatic differentiation with Hamiltonian Monte Carlo can exactly resolve this overlap problem in simple scalar theories, offering a constructive blueprint for more complex scenarios where an exact solution is unlikely to be accessible.
In this talk, I will review this approach and discuss how neural networks naturally extend this program. In particular, I will show that linearised (normalising) flows emerge as natural candidates for addressing this overlap problem in practice, and may even offer a deeper geometrical understanding of the underlying minimisation problem.
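The source-derivative identity underlying the abstract can be sketched in the simplest possible setting: a free scalar field on a periodic 1D lattice, where everything is Gaussian and exact. This is purely illustrative (lattice size, mass, and variable names are my own choices, not the speaker's), but it shows how a two-point correlator arises as the derivative of a one-point function with respect to a source coupled linearly in the action.

```python
import numpy as np

# Free scalar field on a periodic 1D lattice with action S = 1/2 phi^T M phi.
# Adding a source term S -> S - sum_t J_t phi_t gives the exact one-point
# function <phi_0>_J = (M^{-1} J)_0, so the two-point correlator is
#   <phi_0 phi_t> = d<phi_0>_J / dJ_t |_{J=0} = (M^{-1})_{0t}.
# (Illustrative parameters; not from the talk.)

T, m2 = 16, 0.25               # lattice extent and mass squared
M = (2.0 + m2) * np.eye(T)
for t in range(T):             # lattice Laplacian, periodic boundaries
    M[t, (t + 1) % T] -= 1.0
    M[t, (t - 1) % T] -= 1.0

def one_point(J):
    """Exact one-point function <phi_0>_J of the Gaussian theory."""
    return np.linalg.solve(M, J)[0]

# Differentiate the one-point function with respect to each source component
# (central differences here; in the approach reviewed in the talk this role
# is played by automatic differentiation) and compare with the propagator.
eps = 1e-6
corr_from_source = np.array([
    (one_point(eps * np.eye(T)[t]) - one_point(-eps * np.eye(T)[t])) / (2 * eps)
    for t in range(T)
])
exact = np.linalg.inv(M)[0]    # (M^{-1})_{0t}, the exact correlator
assert np.allclose(corr_from_source, exact, atol=1e-7)
```

In this Gaussian toy model the identity holds exactly; the interacting case is where the reweighting and overlap problem discussed in the abstract enter.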