Speaker
Description
Projects such as the imminent Vera C. Rubin Observatory are critical tools for addressing cosmological questions like the nature of dark energy. By observing huge numbers of galaxies, they enable us to map the large-scale structure of the Universe. However, this is only possible if we can accurately model our photometric observations of the galaxies, and thus infer their redshifts and other properties. I will present a new approach to this problem, which uses a neural emulator to speed up a complex physical model for galaxy spectra (stellar population synthesis; SPS), and a GPU-enabled batched ensemble sampler for posterior sampling. We perform this inference under a flexible diffusion model prior on the 16 physical parameters. This prior is a population distribution that was trained to reproduce the multi-band photometry of a deep 26-band dataset taken from the Cosmic Evolution Survey (COSMOS). I will present the different stages of our pipeline, including the emulation of SPS, the initial training of our population model, and our use of this model as a prior in subsequent inference for individual galaxies. The use of neural emulation for the SPS calculations has enabled us to perform full Bayesian inference for ~300,000 individual galaxies from COSMOS with a sophisticated SPS model; ongoing work is scaling this to tens of millions of galaxies from the Kilo-Degree Survey (KiDS). I will also demonstrate that our population model prior enables more precise and less biased redshift inference than competing methods, with a significantly reduced rate of catastrophic failures.
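To give a flavour of the "batched ensemble sampler" idea mentioned above: the key point is that all walkers in the ensemble are updated with vectorised array operations, so the same code maps directly onto a GPU (e.g. via JAX's `jit`/`vmap`). The sketch below is a minimal NumPy implementation of one affine-invariant stretch-move step (Goodman & Weare 2010), using the standard red-black split of the ensemble; it is an illustration of the technique, not the authors' actual sampler, and the function and parameter names are my own.

```python
import numpy as np

def stretch_move_step(walkers, log_prob_fn, a=2.0, rng=None):
    """One vectorised stretch-move update for an ensemble of walkers.

    walkers:     (n_walkers, n_dim) array of current positions.
    log_prob_fn: vectorised log-posterior, (m, n_dim) -> (m,).
    a:           stretch scale (a=2 is the usual default).

    Illustrative sketch only: in a GPU setting this whole update would be
    jit-compiled and run for many galaxies' posteriors in parallel.
    """
    rng = rng or np.random.default_rng()
    n, d = walkers.shape
    half = n // 2
    new = walkers.copy()
    logp = log_prob_fn(new)
    # Red-black scheme: update each half using partners from the other half.
    for s_grp, s_oth in [(slice(0, half), slice(half, n)),
                         (slice(half, n), slice(0, half))]:
        group, other = new[s_grp], new[s_oth]   # views into `new`
        m = group.shape[0]
        # Stretch factors z ~ g(z) ∝ 1/sqrt(z) on [1/a, a].
        z = ((a - 1.0) * rng.random(m) + 1.0) ** 2 / a
        partners = other[rng.integers(other.shape[0], size=m)]
        proposal = partners + z[:, None] * (group - partners)
        logp_prop = log_prob_fn(proposal)
        # Acceptance includes the (d-1) log z Jacobian factor.
        log_accept = (d - 1) * np.log(z) + logp_prop - logp[s_grp]
        accept = np.log(rng.random(m)) < log_accept
        group[accept] = proposal[accept]
        logp[s_grp][accept] = logp_prop[accept]
    return new
```

Because every walker's proposal and acceptance is computed in one array operation, the per-step cost is dominated by the (emulated, hence cheap) likelihood evaluation, which is exactly what makes per-galaxy Bayesian inference feasible at catalogue scale.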
| AI keywords | diffusion models; Bayesian inference; generative models; emulators |
|---|---|