Abstract
In recent years, breakthroughs in methods and data have enabled gravitational time delays to emerge as a very powerful tool to measure the Hubble constant H0. However, published state-of-the-art analyses require of order 1 yr of expert investigator time and up to a million hours of computing time per system. Furthermore, as precision improves, it is crucial to identify and mitigate systematic uncertainties. With this time delay lens modelling challenge, we aim to assess the level of precision and accuracy of the modelling techniques that are currently fast enough to handle of order 50 lenses, via the blind analysis of simulated data sets. The results in Rungs 1 and 2 show that methods that use only the point source positions tend to have lower precision (10-20 per cent) while remaining accurate. In Rung 2, the methods that exploit the full information of the imaging and kinematic data sets can recover H0 within the target accuracy (|A| < 2 per cent) and precision (<6 per cent per system), even in the presence of a poorly known point spread function and complex source morphology. A post-unblinding analysis of Rung 3 showed the numerical precision of the ray-traced cosmological simulations to be insufficient to test lens modelling methodology at the per cent level, making the results difficult to interpret. A new challenge with improved simulations is needed to make further progress in the investigation of systematic uncertainties. For completeness, we present the Rung 3 results in an appendix and use them to discuss various approaches to mitigating similar subtle data generation effects in future blind challenges.
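For context, the standard time-delay cosmography relations underlying the H0 inference described above (not stated explicitly in the abstract) can be sketched as follows, where z_d is the deflector redshift, D_d, D_s, and D_ds are angular diameter distances to the deflector, to the source, and between them, and Δφ is the Fermat potential difference between image pairs:

```latex
% Time delay between two lensed images:
\Delta t = \frac{D_{\Delta t}}{c}\,\Delta\phi ,
\qquad
D_{\Delta t} \equiv (1 + z_d)\,\frac{D_d\, D_s}{D_{ds}} .

% Since every angular diameter distance scales as c/H_0,
% the time-delay distance is inversely proportional to H_0:
D_{\Delta t} \propto \frac{c}{H_0}
\;\;\Longrightarrow\;\;
H_0 \propto \frac{\Delta\phi}{\Delta t} .
```

A measured delay Δt combined with a lens model for Δφ therefore yields D_Δt and hence H0, which is why the accuracy of the lens modelling techniques tested in the challenge translates directly into accuracy on H0.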
Document type: | Journal article |
---|---|
Faculty: | Physics |
Subject areas: | 500 Natural Sciences and Mathematics > 530 Physics |
ISSN: | 0035-8711 |
Language: | English |
Document ID: | 97746 |
Date published on Open Access LMU: | 05 Jun 2023, 15:26 |
Last modified: | 05 Jun 2023, 15:26 |