Speaker
Description
In physics, Lagrangians provide a systematic way to describe the laws governing physical systems. In the context of particle physics, they encode the interactions and behavior of the fundamental building blocks of our universe. By treating Lagrangians as complex, rule-based constructs similar to linguistic expressions, we trained a transformer model, an architecture proven effective in natural language tasks, to predict the Lagrangian corresponding to a given list of particles. We report on the transformer's performance in constructing Lagrangians that respect the SU(3)×SU(2)×U(1) gauge symmetries. The resulting model achieves high accuracy on Lagrangians containing up to six matter fields and can generalize beyond the training distribution, albeit within architectural constraints. Through an embedding analysis, we show that the model has internalized concepts such as group representations and conjugation operations, even though it is trained only to generate Lagrangians. It is also capable of reproducing parts of known Lagrangians, such as those of the Standard Model and of several BSM models. We make the model and training datasets available to the community. An interactive demonstration can be found at: https://huggingface.co/spaces/JoseEliel/generate-lagrangians
AI keywords: Transformers; Symbolic AI; LLM; Out-of-Distribution Generalization; Embedding Analysis
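
As a rough illustration of the workflow described in the abstract, the sketch below shows how such a sequence-to-sequence transformer could be queried with a list of fields to generate candidate Lagrangian terms. The checkpoint name, the field-list input format, and the generation settings are assumptions for illustration only, not the authors' released interface; see the interactive demonstration linked above for the actual one.

```python
# Minimal sketch (not the authors' released code): querying a hypothetical
# seq2seq transformer that maps a particle/field list to Lagrangian terms.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical checkpoint identifier; the real model is linked from the demo above.
model_id = "JoseEliel/lagrangian-generator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed input format: each field with its SU(3) x SU(2) x U(1) representation.
# Here, a scalar in the (3, 2, 1/6) representation plus a singlet fermion.
fields = "SCALAR [3, 2, 1/6] ; FERMION [1, 1, 0]"

# Encode the field list and generate the corresponding Lagrangian terms.
inputs = tokenizer(fields, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```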