Ruddat, Inga; Scholz, B.; Bergmann, S.; Buehring, A.-L.; Fischer, S.; Manton, A.; Prengel, D.; Rauch, E.; Steiner, S.; Wiedmann, S.; Kreienbrock, L.; Campe, A. (2014): Statistical tools to improve assessing agreement between several observers. In: Animal, Vol. 8, No. 4: pp. 643-649

Abstract

In the context of assessing the impact of management and environmental factors on animal health, behaviour or performance, it has become increasingly important to conduct (epidemiological) studies in the field. The number of investigated farms per study is therefore considerably high, so that numerous observers are needed for the investigation. To maintain the quality and validity of study results and to minimise the observer effect, calibration meetings have to be conducted in which observers are trained and the current level of agreement is assessed. When the same study animals are rated independently by several observers on a categorical variable, the exclusion test can be performed to identify disagreeing observers. This statistical test compares, for each variable and each observer, the observer-specific agreement with the overall agreement among all observers, based on kappa coefficients. It accounts for two major challenges, namely the absence of a gold-standard observer and the presence of different data types comprising ordinal, nominal and binary data. The presented methods are applied to a reliability study assessing the agreement among eight observers rating welfare parameters of laying hens. The degree to which the observers agreed depended on the investigated item (global weighted kappa coefficients: 0.37 to 0.94). The proposed method and graphical description served to assess the direction and degree to which an observer deviates from the others. It is suggested that studies with numerous observers be further improved by conducting calibration meetings and accounting for observer bias.
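The core idea of comparing observer-specific agreement with the overall agreement can be illustrated with pairwise Cohen's kappa coefficients. The sketch below is a simplified illustration, not the exclusion test as specified in the paper: the function names (`cohen_kappa`, `observer_specific_agreement`) and the use of unweighted kappa on nominal data are assumptions for demonstration purposes.

```python
from itertools import combinations

def cohen_kappa(r1, r2):
    """Unweighted Cohen's kappa for two raters scoring the same items
    on a nominal categorical variable (illustrative, not the paper's code)."""
    assert len(r1) == len(r2)
    n = len(r1)
    categories = sorted(set(r1) | set(r2))
    # Observed agreement: fraction of items on which the raters coincide.
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement: product of marginal category frequencies.
    p_e = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

def observer_specific_agreement(ratings):
    """ratings: dict mapping observer name -> list of category labels
    for the same set of animals. Returns the overall mean pairwise kappa
    and, per observer, the mean kappa of all pairs that observer is in,
    so a deviating observer can be spotted without a gold standard."""
    pair_kappa = {(a, b): cohen_kappa(ratings[a], ratings[b])
                  for a, b in combinations(sorted(ratings), 2)}
    overall = sum(pair_kappa.values()) / len(pair_kappa)
    per_observer = {
        obs: sum(k for pair, k in pair_kappa.items() if obs in pair)
             / sum(1 for pair in pair_kappa if obs in pair)
        for obs in ratings
    }
    return overall, per_observer
```

An observer whose mean pairwise kappa falls well below the overall mean is a candidate for being a "disagreeing observer"; the exclusion test in the paper formalises this comparison and additionally handles ordinal data via weighted kappa.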