Necker, Tobias; Weissmann, Martin; Sommer, Matthias (2018): The importance of appropriate verification metrics for the assessment of observation impact in a convection-permitting modelling system. In: Quarterly Journal of the Royal Meteorological Society, Vol. 144, No. 714: pp. 1667-1680
Full text not available from 'Open Access LMU'.

Abstract

Over the past 15 years, adjoint-based, ensemble-based and hybrid methods have been developed for estimating observation impact based on the forecast sensitivity to observation impact (FSOI). These methods are now commonly used in global modelling systems, but little attention has been given to assessing observation impact in regional convection-permitting modelling systems. This study presents the first evaluation of ensemble-based estimates of observation impact over an extended period of six weeks in such a convection-permitting modelling system, namely the regional ensemble system of Deutscher Wetterdienst. Another aspect that has received little attention is the choice of the forecast-error verification metric. Nearly all previous studies used the difference between the forecast and a subsequent analysis of the same modelling system, expressed in terms of energy (the total energy norm). While such self-verification generally needs to be treated with caution, it appears particularly unsuitable for convection-permitting regional forecasts. Firstly, total energy does not directly reflect the parameters that forecast users are interested in, and important forecast quantities such as surface wind gusts and precipitation are not even part of the analysis. Secondly, systematic analysis and forecast errors are non-negligible in the presence of convection, especially for important variables related to convection. To overcome these issues, we introduce independent radar observations for the verification of observation impact and compare results for a variety of observation-based metrics over a six-week high-impact weather period in summer 2016. This comparison revealed a particular sensitivity of the estimated impact to both model and observation biases, and sensitivity studies indicated that even small biases can influence the estimated impact.
Additionally, we demonstrate that FSOI can be used to identify biases through comparison of results for different metrics.
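For readers unfamiliar with the total energy norm discussed in the abstract, the sketch below illustrates one common dry formulation of it (in the spirit of widely used FSOI verification metrics). The constants, reference values, and the exact set of variables are illustrative assumptions, not the specific configuration used in this paper:

```python
import numpy as np

# Illustrative constants (assumed values, not taken from the paper)
C_P = 1005.7   # specific heat of dry air at constant pressure [J kg^-1 K^-1]
R_D = 287.04   # gas constant for dry air [J kg^-1 K^-1]
T_R = 280.0    # reference temperature [K]
P_R = 1.0e5    # reference surface pressure [Pa]

def total_energy_norm(du, dv, dT, dps):
    """Mean dry total energy of forecast errors [J kg^-1].

    A common textbook-style form: kinetic energy of the wind errors plus
    weighted temperature and surface-pressure error terms.

    du, dv : wind component errors [m s^-1]
    dT     : temperature error [K]
    dps    : surface-pressure error [Pa]
    """
    du, dv, dT, dps = (np.asarray(a, dtype=float) for a in (du, dv, dT, dps))
    e = 0.5 * (du**2 + dv**2
               + (C_P / T_R) * dT**2
               + R_D * T_R * (dps / P_R)**2)
    return float(e.mean())
```

The abstract's point can be seen directly in this sketch: the norm is built only from analysed state variables (wind, temperature, pressure), so quantities such as precipitation or wind gusts never enter the metric, and any systematic bias in the verifying analysis is squared into the "error" alongside the genuine forecast error.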