Description
In High Energy Physics, when testing theoretical models of new physics against experimental results, the customary approach is to sample random points from the model's parameter space, compute their predicted values for the desired observables and compare them to experimental data. However, due to the typically large number of parameters in these models, this process is highly time-consuming and inefficient. We propose a solution: adopting optimization algorithms that use Artificial Intelligence methods to improve the efficiency of this validation task.
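As a rough illustration of this baseline (not the analysis code itself), the sketch below performs a random scan of a toy two-parameter model. The surrogate `predict_observables`, the targets and the tolerances are all hypothetical stand-ins for a real spectrum calculation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative targets and tolerances (assumed, not from the study):
# e.g. a Higgs-mass-like observable [GeV] and a relic-density-like one.
TARGETS = np.array([125.25, 0.120])
ERRORS = np.array([3.0, 0.012])

def predict_observables(point):
    """Toy surrogate for the expensive spectrum/observable calculation."""
    return np.array([120.0 + 10.0 * np.tanh(point[0]),
                     0.3 * abs(np.sin(point[1]))])

def chi2(point):
    """Chi-square distance of the predictions from the targets."""
    pred = predict_observables(point)
    return float(np.sum(((pred - TARGETS) / ERRORS) ** 2))

# Customary random scan: draw uniform points, keep those passing a cut.
n_points = 10_000
points = rng.uniform(-3.0, 3.0, size=(n_points, 2))
valid = [p for p in points if chi2(p) < 4.0]
print(f"sampling efficiency: {len(valid) / n_points:.4%}")
```

In a realistic model with tens of parameters, the fraction of randomly drawn points surviving the cut becomes vanishingly small, which is the inefficiency the optimization approach targets.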
A first study compared the performance of three optimization algorithms (Bayesian, evolutionary and genetic) at constraining conventional Supersymmetry realizations when confronted with the Higgs mass and Dark Matter relic density constraints; the results show an increase of up to three orders of magnitude in sampling efficiency compared to random sampling.
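For contrast with the random scan above, here is a minimal sketch of an optimization-driven scan, using SciPy's `differential_evolution` (one common evolutionary algorithm, not necessarily the one used in the study) on the same toy chi-square; the bounds and hyperparameters are illustrative:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Same toy chi-square as above (illustrative surrogate, not a physics code).
TARGETS = np.array([125.25, 0.120])
ERRORS = np.array([3.0, 0.012])

def chi2(p):
    pred = np.array([120.0 + 10.0 * np.tanh(p[0]), 0.3 * abs(np.sin(p[1]))])
    return float(np.sum(((pred - TARGETS) / ERRORS) ** 2))

# Evolutionary search: each generation mutates and recombines the population,
# steering it toward the experimentally allowed region instead of sampling blindly.
result = differential_evolution(chi2, bounds=[(-3.0, 3.0), (-3.0, 3.0)],
                                popsize=20, mutation=(0.5, 1.0),
                                recombination=0.7, tol=1e-8, seed=1)
print("best point:", result.x, " chi2:", result.fun)
```

Because each generation is concentrated near previously successful points, far fewer expensive model evaluations are spent on excluded regions of the parameter space.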
In a much more challenging scenario, a follow-up analysis was carried out for the scotogenic model, this time using an evolutionary multi-objective optimization algorithm assisted by a novelty detection (ND) algorithm, confronted with experimental constraints from the Higgs and neutrino masses, lepton flavor violating decays, neutrino mixing and the anomalous magnetic moment of the muon. The results show an increase of at least six orders of magnitude in sampling efficiency, as well as better coverage of the parameter space due to the multi-objective cost function. Lastly, the use of ND improved the exploratory capacity of the algorithm, leading to new phenomenology.
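A minimal sketch of how such a novelty-assisted multi-objective search could be structured, assuming a toy objective vector and a distance-based novelty score; none of the functions, population sizes or mutation settings below come from the actual analysis:

```python
import numpy as np

rng = np.random.default_rng(2)
DIM, POP, GENS, K = 4, 60, 40, 10   # illustrative sizes, not the paper's settings

def objectives(p):
    """Toy vector of per-constraint violations (one entry per observable);
    stand-ins for e.g. Higgs/neutrino masses, LFV rates, mixing, (g-2)_mu."""
    return np.abs(np.array([np.tanh(p[0]) - 0.5,
                            np.sin(p[1]) * np.cos(p[2]),
                            p[3] ** 2 - 0.3,
                            np.mean(p)]))

def dominates(a, b):
    """Pareto dominance: a is no worse everywhere and strictly better somewhere."""
    return bool(np.all(a <= b) and np.any(a < b))

def novelty(p, archive):
    """Novelty score: mean distance to the K nearest previously visited points."""
    if len(archive) < K:
        return 1.0
    d = np.sort(np.linalg.norm(np.asarray(archive) - p, axis=1))
    return float(d[:K].mean())

pop = rng.uniform(-2.0, 2.0, size=(POP, DIM))
archive = []
for gen in range(GENS):
    # Gaussian mutation produces children; parents and children compete.
    union = np.vstack([pop, pop + rng.normal(0.0, 0.2, size=pop.shape)])
    objs = [objectives(p) for p in union]
    nondom = {i for i in range(len(union))
              if not any(dominates(objs[j], objs[i]) for j in range(len(union)))}
    nov = [novelty(p, archive) for p in union]
    # Survival: non-dominated points first, ties broken by novelty, so the
    # search keeps exploring instead of collapsing onto a single region.
    order = sorted(range(len(union)), key=lambda i: (i not in nondom, -nov[i]))
    pop = union[order[:POP]]
    archive.extend(pop.tolist())

print("sample objective vectors after evolution:")
print(np.round([objectives(p) for p in pop[:3]], 3))
```

Treating each constraint as a separate objective rewards points that improve any observable, which is what drives the broader coverage of the parameter space, while the novelty bonus pushes the population into regions it has not visited before.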
AI keywords: simulation-based inference, anomaly detection, evolutionary algorithms