Abstract
With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goal of enabling virtual organisations (VOs) and site administrators to run validation tests of the site and software infrastructure in an automated or on-demand manner. The HammerCloud infrastructure has been continuously improved to support the addition of new test workflows. These new workflows include, for example, tests of the ATLAS nightly build system, the ATLAS Monte Carlo production system, the XRootD federation (FAX), and new site stress test workflows. We report on the development, optimization and results of the various components in the HammerCloud framework.
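To illustrate the general idea of automated, periodic site validation described in the abstract, the following is a minimal, hypothetical Python sketch. It does not use HammerCloud's actual API; the names SITES, run_validation_job and TestResult are invented here purely for illustration, and a real framework would submit jobs through the grid workload management system rather than returning a canned result.

```python
# Hypothetical illustration only: a minimal sketch of automated site
# validation cycles, NOT HammerCloud's actual code or API.
import time
from dataclasses import dataclass

SITES = ["SITE_A", "SITE_B", "SITE_C"]  # placeholder site names


@dataclass
class TestResult:
    site: str
    passed: bool
    detail: str


def run_validation_job(site: str) -> TestResult:
    """Stand-in for submitting a short test job to a grid site and
    checking that it completes successfully."""
    # A real framework would submit via the production/analysis system
    # and inspect the job outcome; here we simply report success.
    return TestResult(site=site, passed=True, detail="ok")


def validation_cycle() -> None:
    """One automated cycle: test every configured site and report status."""
    for site in SITES:
        result = run_validation_job(site)
        status = "OK" if result.passed else "FAIL"
        print(f"{result.site}: {status} ({result.detail})")


if __name__ == "__main__":
    # Run a couple of cycles; a real scheduler would repeat indefinitely
    # at a configured interval.
    for _ in range(2):
        validation_cycle()
        time.sleep(1)
```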
| Item Type: | Journal article |
|---|---|
| Faculties: | Physics |
| Subjects: | 500 Science > 530 Physics |
| URN: | urn:nbn:de:bvb:19-epub-33831-5 |
| ISSN: | 1742-6588 |
| Language: | English |
| Item ID: | 33831 |
| Date Deposited: | 15. Feb 2017, 14:45 |
| Last Modified: | 08. May 2024, 08:39 |