
Maron, Roman C.; Schlager, Justin G.; Haggenmüller, Sarah; von Kalle, Christof; Utikal, Jochen S.; Meier, Friedegund; Gellrich, Frank F.; Hobelsberger, Sarah; Hauschild, Axel; French, Lars; Heinzerling, Lucie; Schlaak, Max; Ghoreschi, Kamran; Hilke, Franz J.; Poch, Gabriela; Heppt, Markus V.; Berking, Carola; Haferkamp, Sebastian; Sondermann, Wiebke; Schadendorf, Dirk; Schilling, Bastian; Goebeler, Matthias; Krieghoff-Henning, Eva; Hekler, Achim; Fröhling, Stefan; Lipka, Daniel B.; Kather, Jakob N. and Brinker, Titus J. (2021): A benchmark for neural network robustness in skin cancer classification. In: European Journal of Cancer, Vol. 155: pp. 191-199

Full text not available from 'Open Access LMU'.


Background: One prominent application of deep learning-based classifiers is skin cancer classification on dermoscopic images. However, classifier evaluation is often limited to holdout data, which can mask common shortcomings such as susceptibility to confounding factors. To increase clinical applicability, it is necessary to thoroughly evaluate such classifiers on out-of-distribution (OOD) data.

Objective: The objective of the study was to establish a dermoscopic skin cancer benchmark on which classifier robustness to OOD data can be measured.

Methods: Using a proprietary dermoscopic image database and a set of image transformations, we create an OOD robustness benchmark and evaluate the robustness of four different convolutional neural network (CNN) architectures on it.

Results: The benchmark contains three data sets: Skin Archive Munich (SAM), SAM-corrupted (SAM-C) and SAM-perturbed (SAM-P). It is publicly available for download. To maintain the benchmark's OOD status, ground truth labels are not provided, and test results should be sent to us for assessment. The SAM data set contains 319 unmodified, biopsy-verified dermoscopic images of melanoma (n = 194) and nevus (n = 125). SAM-C and SAM-P contain images from SAM that were artificially modified to test a classifier against low-quality inputs and to measure its prediction stability under small image changes, respectively. All four CNNs showed susceptibility to corruptions and perturbations.

Conclusions: This benchmark provides three data sets that allow OOD testing of binary skin cancer classifiers. Our classifier performance confirms the shortcomings of CNNs and provides a frame of reference. Altogether, this benchmark should facilitate a more thorough evaluation process and thereby enable the development of more robust skin cancer classifiers.

© 2021 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
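The corruption/perturbation protocol described above can be sketched as follows. This is a minimal illustration only: the noise severities, the flip-rate stability metric, and the `classify` interface are assumptions for the sketch, not the benchmark's actual transformations or evaluation code.

```python
import numpy as np

def corrupt_gaussian_noise(image, severity):
    """Add Gaussian noise to an image in [0, 1].

    The severity-to-noise mapping (1..5) is an illustrative assumption,
    not the benchmark's actual corruption parameters.
    """
    std = [0.04, 0.08, 0.12, 0.18, 0.26][severity - 1]
    rng = np.random.default_rng(0)  # fixed seed for reproducibility
    noisy = image + rng.normal(0.0, std, image.shape)
    return np.clip(noisy, 0.0, 1.0)

def flip_probability(classify, image, n_steps=5):
    """Fraction of consecutive corruption steps where the predicted label
    changes -- one simple way to quantify prediction stability under small
    image changes (a hypothetical metric; the paper's may differ).

    `classify` maps an image array to an integer class label.
    """
    preds = [classify(image)]
    for step in range(1, n_steps + 1):
        corrupted = corrupt_gaussian_noise(image, min(step, 5))
        preds.append(classify(corrupted))
    flips = sum(a != b for a, b in zip(preds, preds[1:]))
    return flips / n_steps
```

A robust classifier would keep `flip_probability` near zero across SAM-P-style perturbations; the paper's finding that all four CNNs were susceptible corresponds to nonzero instability under such modifications.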
