Particle detectors are complex instruments whose design involves the specification of hundreds of geometric and material parameters. In the past those choices have been informed by robust paradigms (redundancy, symmetry, "track first, destroy later"), which allowed us to build highly performant particle physics experiments. However, being unable to optimize our design choices directly for the final, true goals of our instruments (the highest sensitivity to a flagship parameter, or the discovery reach for some relevant phenomenon), we have until now consistently relied on manageable proxies as figures of merit: e.g., the highest resolution, or the lowest backgrounds. While sensible, in a high-dimensional space of possible choices this modus operandi can entail large losses of performance on those true goals.
A realignment of goals and design choices may be pursued by relying on modern deep learning techniques. These allow us to produce fully differentiable pipelines in which a model of the instrument, the pattern recognition procedures, the cost constraints, and the detector-related systematic uncertainties can all be considered together, and in which the true experimental goals may be encoded in a carefully defined objective function. The latter can then be maximized by stochastic gradient descent, achieving a full end-to-end optimization of the design.
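As a rough illustration of the idea, the sketch below optimizes a single design parameter by stochastic gradient descent on an objective combining a resolution term and a cost term. It is a minimal sketch assuming PyTorch; the parameter name and the toy surrogate models are illustrative assumptions, not the MODE collaboration's actual pipeline.

```python
# Minimal sketch of end-to-end design optimization (assumes PyTorch).
# The surrogates below are toy assumptions for illustration only; a real
# pipeline would replace them with differentiable models of simulation,
# reconstruction, cost, and systematic uncertainties.
import torch

# One design parameter: a (hypothetical) detector layer thickness in cm,
# made differentiable so gradients can flow back to it.
thickness = torch.tensor(2.0, requires_grad=True)
optimizer = torch.optim.SGD([thickness], lr=5.0)

def resolution(t):
    # Toy surrogate: measurement error improves with thickness, then saturates.
    return 0.1 + 1.0 / (1.0 + t)

def material_cost(t):
    # Toy cost model: material cost grows linearly with thickness.
    return 0.02 * t

for step in range(200):
    optimizer.zero_grad()
    # Noisy evaluation mimics a stochastic simulation of the instrument.
    noise = 1.0 + 0.01 * torch.randn(())
    # The objective encodes the goal directly (here: error plus cost,
    # to be minimized) rather than a proxy figure of merit.
    loss = resolution(thickness) * noise + material_cost(thickness)
    loss.backward()   # gradients flow through the whole pipeline
    optimizer.step()

print(f"optimized thickness ~ {thickness.item():.2f} cm")
```

In a realistic setting, the single scalar would be replaced by hundreds of geometric and material parameters, and the toy objective by, e.g., the expected sensitivity to the flagship measurement under cost and systematics constraints.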
In this presentation the above concepts will be illustrated with a few examples, and a summary of the research program of the MODE collaboration (mode-collaboration.github.io) will be offered.
Giuseppina Salente, Andrea Longhin, Tommaso Dorigo