Description
In recent years there has been an explosion in the use of artificial neural network transformer models to solve a variety of problems across many fields. In this paper we take the transformer model archetype and apply it to a well-understood problem in Gravitational-Wave data analysis: the detection of Compact Binary Coalescences (CBCs), the mergers of black holes and neutron stars. In contrast to Convolutional Neural Networks (CNNs), which have been the prevalent avenue of investigation in Gravitational-Wave machine learning, transformer models use self-attention, which enables the aggregation of global information rather than focusing on local feature detection. Transformers also have the advantage of treating time-series data sequentially, whereas CNNs view a time series much as they would an image. By examining the attention maps of the transformer model, in contrast to the learned filters of the CNN, we demonstrate a fundamentally different method of analysis which we propose is more suitable for detection.
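To make the contrast concrete, here is a minimal PyTorch sketch (not the authors' code) of the two operations on a strain-like time series: a 1-D convolution extracts features from local windows only, while self-attention lets every segment of the series attend to every other segment, producing the kind of attention map the abstract describes. The sampling rate, segment length, model width, and head count below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

batch, n_samples = 1, 4096  # e.g. 1 s of strain data at 4096 Hz (assumed)
patch = 64                  # samples per segment/token (assumed)
d_model = 128               # embedding width (assumed)

x = torch.randn(batch, 1, n_samples)  # (batch, channels, time)

# CNN view: filters slide over the series, detecting local features only.
conv = nn.Conv1d(in_channels=1, out_channels=d_model,
                 kernel_size=patch, stride=patch)
local_features = conv(x)              # (batch, d_model, n_samples // patch)

# Transformer view: embed fixed-length segments as a token sequence, then
# let self-attention aggregate information across the whole series at once.
tokens = local_features.transpose(1, 2)  # (batch, seq_len, d_model)
attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4,
                             batch_first=True)
out, attn_weights = attn(tokens, tokens, tokens)

# attn_weights is the attention map: row i shows how strongly segment i
# draws on every other segment, a global view no single filter provides.
print(attn_weights.shape)  # torch.Size([1, 64, 64])
```

Inspecting attn_weights, rather than the convolutional kernels, is what allows the global, sequence-aware analysis the abstract contrasts with the CNN's learned filters.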