
Kennedy, J. A.; Kluth, S.; Mazzaferro, L. and Walker, Rodney (2015): Bringing ATLAS production to HPC resources - A use case with the Hydra supercomputer of the Max Planck Society. In: 21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015), Parts 1-9, Vol. 664, 092019

Full text is not available on 'Open Access LMU'.

Abstract

The use of HPC resources by ATLAS is now becoming viable due to the changing nature of these systems, and it is also very attractive given the need for increasing amounts of simulated data. In recent years the architecture of HPC systems has evolved, moving away from specialized monolithic systems towards a more generic Linux-type platform. This change means that the deployment of non-HPC-specific codes has become much easier. The timing of this evolution perfectly suits the needs of ATLAS and opens a new window of opportunity. The ATLAS experiment at CERN will begin a period of high-luminosity data taking in 2015. This high-luminosity phase will be accompanied by a need for increasing amounts of simulated data, which is expected to exceed the capabilities of the current Grid infrastructure. ATLAS aims to address this need by opportunistically accessing resources such as cloud and HPC systems. This paper presents the results of a pilot project undertaken by ATLAS and the MPP/RZG to provide access to the Hydra supercomputer facility. Hydra, the supercomputer of the Max Planck Society, is a Linux-based system with over 80,000 cores and 4,000 physical nodes located at the RZG near Munich. This paper describes the work undertaken to integrate Hydra into the ATLAS production system using the NorduGrid ARC-CE and other standard Grid components. The customization of these components and the strategies for HPC usage are discussed, as well as possibilities for future directions.
