Description
Within the framework of the ICSC project (Italian National Centre on HPC, Big Data and Quantum Computing), a flexible, experiment-agnostic cloud infrastructure has been developed to address the growing computational demands of the HL-LHC and future collider experiments. The platform provides transparent access to computing resources through containerization and orchestration with Kubernetes, supporting both interactive and quasi-interactive analysis workflows through tools such as Jupyter and Dask while abstracting operational complexity away from end users.
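As a minimal sketch of the interactive scale-out pattern such a platform supports: Dask's distributed client implements the same futures interface as Python's standard `concurrent.futures`, so the submit-and-gather workflow can be illustrated with the standard library alone (the per-chunk task here is a hypothetical placeholder, not an actual analysis payload of the project):

```python
from concurrent.futures import ThreadPoolExecutor


def analyze_chunk(chunk):
    """Illustrative per-chunk task: sum of squares (stands in for a real analysis step)."""
    return sum(x * x for x in chunk)


def run_analysis(chunks):
    # Submitting work chunk by chunk and gathering the results follows the
    # same pattern whether the executor is a local pool, as here, or a
    # dask.distributed.Client backed by workers scheduled on Kubernetes.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(analyze_chunk, c) for c in chunks]
        return [f.result() for f in futures]


if __name__ == "__main__":
    data = [range(1000), range(1000, 2000)]
    print(run_analysis(data))
```

In a deployment like the one described, only the executor changes: the notebook code stays the same while the Dask client transparently distributes the tasks across containerized workers.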
Benchmarking activities in the cloud environment are at an early stage and focus on identifying appropriate metrics for evaluating performance and scalability with representative HEP use cases. These exploratory studies aim to define a coherent methodology for assessing heterogeneous workloads across different configurations. Building on them, the next phase will design a more comprehensive benchmarking framework, with particular emphasis on quantifying the impact of virtualization on computational performance. To this end, we are developing a bare-metal testbed that replicates the same software stack and evaluation procedures, allowing direct, controlled comparisons between cloud and on-premises systems. This work is ongoing and represents a critical milestone toward establishing robust and sustainable benchmarking practices for the HL-LHC computing ecosystem.
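To make the comparison concrete, the kind of measurement procedure under discussion can be sketched as follows; the workload and metric names are illustrative assumptions, not the project's actual benchmark suite:

```python
import time


def measure(workload, repeats=5):
    """Run a workload several times and report simple wall-clock metrics.

    Applying the identical procedure on cloud and bare-metal nodes is what
    would allow the virtualization overhead to be isolated.
    """
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        times.append(time.perf_counter() - start)
    return {"min_s": min(times), "mean_s": sum(times) / len(times)}


def sample_workload():
    # Stand-in compute kernel; a real benchmark would use a representative HEP task.
    sum(i * i for i in range(100_000))


if __name__ == "__main__":
    print(measure(sample_workload))
```

Reporting the minimum alongside the mean is a common way to separate intrinsic task cost from scheduling and contention noise, which is precisely the distinction a cloud-versus-bare-metal comparison needs.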