Description
Scientific computing is widely recognised as the third pillar of physics, alongside theory and experiment. In this context, High Performance Computing (HPC) plays a key role, allowing increasingly realistic and detailed processes to be modelled thanks to the availability of ever larger computational power. Moreover, HPC has been instrumental in empowering machine learning and artificial intelligence, enabling the training of progressively more complex models with unprecedented speed and efficiency.
Challenged by Moore’s law, HPC is continuously evolving through the introduction of novel, more performant architectures that promise innovative and paradigmatic changes. Given such variety, it is crucial to assess how legacy scientific codes perform on the latest available computing technologies. Indeed, understanding the strengths and weaknesses of different architectures with respect to each specific application will lead, on the one hand, to better utilisation of the available resources and, on the other, to the possibility of improving the application in a co-design fashion.
In this presentation, we are going to analyse the behaviour on different architectures of some of the most widespread benchmarks and codes in high-energy physics, fluid dynamics, materials science, and plasma physics. We will look at the execution time of these selected applications and their scalability. We will also investigate the impact of architectural features such as cache hierarchy, vectorisation support, and memory bandwidth. Finally, we will consider power consumption. Our study will thus provide guidance for developers in selecting the most suitable architecture for their workloads and in optimising applications through architecture-aware design strategies.
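As a purely illustrative sketch of the kind of architecture-aware measurement mentioned above (not part of the talk's actual benchmark suite), the following C/OpenMP code times a STREAM-triad-style loop to estimate sustained memory bandwidth; the array size, number of trials, and parallelisation strategy are assumptions chosen for clarity.

/* Minimal sketch: STREAM-triad-like probe of sustained memory bandwidth.
 * Sizes and trial count are illustrative assumptions.
 * Compile, e.g.: gcc -O3 -fopenmp triad.c -o triad */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (1L << 26)   /* ~64M doubles per array, well beyond typical cache sizes */
#define NTRIALS 10

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    /* First-touch initialisation in parallel, so pages are placed near the threads */
    #pragma omp parallel for
    for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    double best = 1e30;
    for (int t = 0; t < NTRIALS; t++) {
        double t0 = omp_get_wtime();
        #pragma omp parallel for
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];   /* triad: two loads and one store per element */
        double dt = omp_get_wtime() - t0;
        if (dt < best) best = dt;       /* keep the fastest trial */
    }

    /* Three arrays of N doubles traverse the memory subsystem per triad sweep */
    double gbytes = 3.0 * N * sizeof(double) / 1e9;
    printf("best triad time: %.4f s  (~%.1f GB/s)\n", best, gbytes / best);

    free(a); free(b); free(c);
    return 0;
}

Running such a microbenchmark with different thread counts and array sizes is one simple way to expose the cache-hierarchy and memory-bandwidth effects that, alongside full application runs, inform the comparison across architectures.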