Abstract
The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. Analysing such a volume of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of ATLAS distributed computing operations. About half a million user jobs run daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the subsequent shutdown period has been high, thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model underwent several changes to improve the analysis workflows, including a re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact of these changes on the DA infrastructure, describe the new DA components, and include recent performance measurements.
| Field | Value |
|---|---|
| Document type: | Journal article |
| Faculty: | Physics |
| Subject areas: | 500 Natural Sciences and Mathematics > 530 Physics |
| ISSN: | 1742-6588 |
| Language: | English |
| Document ID: | 34495 |
| Date published on Open Access LMU: | 15 Feb 2017, 16:04 |
| Last modified: | 08 May 2024, 08:56 |