Speaker
Description
High-performance computing (HPC) has become indispensable for addressing the complex challenges of modern scientific research. From processing massive datasets to running simulations with millions of variables, HPC supports advancements across a range of disciplines. This presentation will provide an overview of the design and implementation of a new HPC data center, focusing on its ability to meet the computational and storage demands of diverse scientific workflows.
The data center is tailored to accommodate diverse user needs. For instance, astrophysicists analyze large-scale images from ground-based telescopes (e.g., the ESO observatories in Chile, the Gran Telescopio Canarias in the Canary Islands), space telescopes (Hubble, Webb), and radio telescopes. These images, often comprising millions of pixels, require advanced AI tools for object recognition and physical-property extraction after calibration. Materials physicists, on the other hand, use state-of-the-art mathematical models to simulate and optimize alternative jet fuels, generating thousands of test runs that require efficient storage and rapid comparison. High-energy physicists process dozens of terabytes of data from experiments such as ATLAS (CERN) or Belle II (KEK), which demand sequential access, high-speed networking, and parallel file systems for efficient analysis.
This HPC infrastructure integrates GPUs, OpenMP-based parallelism, and optimized caching mechanisms to deliver high performance within budgetary constraints. As a key application example, we will showcase its role in astrophysics, specifically for the Extremely Large Telescope (ELT) and the Square Kilometre Array (SKA), demonstrating how it enables cutting-edge research and fosters interdisciplinary collaboration.
Preferred day: 11 December, afternoon