Abstract
Various strategies for active learning have been proposed in the machine learning literature. In uncertainty sampling, which is among the most popular approaches, the active learner sequentially queries the label of those instances for which its current prediction is maximally uncertain. The predictions as well as the measures used to quantify the degree of uncertainty, such as entropy, are traditionally of a probabilistic nature. Yet, alternative approaches to capturing uncertainty in machine learning, along with corresponding uncertainty measures, have been proposed in recent years. In particular, some of these measures seek to distinguish different sources and to separate different types of uncertainty, such as the reducible (epistemic) and the irreducible (aleatoric) part of the total uncertainty in a prediction. The goal of this paper is to elaborate on the usefulness of such measures for uncertainty sampling, and to compare their performance in active learning. To this end, we instantiate uncertainty sampling with different measures, analyze the properties of the sampling strategies thus obtained, and compare them in an experimental study.
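As a rough illustration of the setting described in the abstract, the sketch below shows one common way to instantiate uncertainty sampling with such measures: total predictive entropy computed from an ensemble, split into an aleatoric part (average entropy of the ensemble members) and an epistemic part (the remaining mutual information). This is a minimal sketch assuming a scikit-learn random forest as the ensemble; the function names and this particular entropy-based decomposition are illustrative and are not necessarily the measures compared in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def entropy(p: np.ndarray) -> np.ndarray:
    """Shannon entropy per row of class probabilities."""
    eps = 1e-12  # avoid log(0)
    return -np.sum(p * np.log(p + eps), axis=1)

def uncertainty_scores(forest: RandomForestClassifier, X_pool: np.ndarray):
    """Total, aleatoric, and epistemic scores from an ensemble's predictions.

    total     = entropy of the averaged class distribution
    aleatoric = average entropy of the individual member distributions
    epistemic = total - aleatoric (the mutual information)
    """
    member_probs = np.stack([tree.predict_proba(X_pool) for tree in forest.estimators_])
    mean_probs = member_probs.mean(axis=0)
    total = entropy(mean_probs)
    aleatoric = np.mean([entropy(p) for p in member_probs], axis=0)
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# One active-learning step on toy data: fit on the labelled set, then query the
# pool instance that maximises the chosen uncertainty score.
rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(30, 2)), rng.integers(0, 2, size=30)
X_pool = rng.normal(size=(500, 2))
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_lab, y_lab)
total, aleatoric, epistemic = uncertainty_scores(forest, X_pool)
query_idx = int(np.argmax(epistemic))  # e.g. sample where epistemic uncertainty is largest
```

Swapping `epistemic` for `total` in the last line recovers standard entropy-based uncertainty sampling; the point of comparison in the paper is precisely how such choices of measure affect the resulting sampling behaviour.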
| Item Type | Journal article |
|---|---|
| Form of publication | Publisher's Version |
| Faculties | Mathematics, Computer Science and Statistics > Computer Science > Artificial Intelligence and Machine Learning |
| Subjects | 000 Computer science, information and general works > 004 Data processing; computer science |
| URN | urn:nbn:de:bvb:19-epub-91888-0 |
| ISSN | 0885-6125 |
| Language | English |
| Item ID | 91888 |
| Date Deposited | 13 Apr 2022 15:15 |
| Last Modified | 11 Oct 2024 14:23 |