Description
Astroparticle experiments in space have usually paid little attention to the distributed side of their computing workflows, either developing in-house job distribution systems (especially for demanding tasks such as data reconstruction or MC simulation) or ignoring the issue entirely (especially in the lower stages of the data analysis workflow, where users rely on the batch systems provided locally by their institutions).
This introduces inefficiencies in the usage of the available computing resources. It prompted our efforts over the last years to exploit the technical solutions provided by the "Dynamic On Demand Analysis Service" (DODAS), developed in the context of projects such as INDIGO-DataCloud, EOSC-hub, and XDC, in order to seamlessly access both commercial (Deutsche Telekom, Google Cloud) and on-premises (Cloud@ReCas and Cloud@CNAF) cloud resources.
Past experience also shows that access to the services and tools needed in the day-to-day life of a mission benefits greatly from a unified, centrally organized infrastructure with a single sign-on entry point (e.g. users of CERN-recognized experiments have a CERN account and, with it, access to all the services it enables).
In the following years, we took advantage of the need to design the HERD computing model from scratch to leverage the INFN-Cloud infrastructure as much as possible: we deployed common services for the whole collaboration (IAM, calendar and document server, experiment website and internal documentation, a dedicated GitLab instance, ...), tested the integration of distributed computing resources with those already available at the CNAF Tier-1, and explored new solutions for data storage and handling.
The status of these activities will be presented, with a particular focus on the progress made over the last year.