
Abstract
Research on multi-class text classification of short texts mainly focuses on supervised (transfer) learning approaches, which require a finite set of pre-defined classes that remains constant over time. This work explores deep constrained clustering (CC) as an alternative to supervised learning approaches in a setting with a dynamically changing number of classes, a task we introduce as dynamic topic discovery (DTD). We do so by using pairwise similarity constraints instead of instance-level class labels, which allow for a flexible number of classes while remaining competitive with supervised approaches. First, we substantiate this through a series of experiments and show that CC algorithms achieve predictive performance similar to state-of-the-art supervised learning algorithms while requiring less annotation effort. Second, we demonstrate the overclustering capabilities of deep CC for detecting topics in short-text data sets in the absence of the ground-truth class cardinality during model training. Third, we showcase how these capabilities can be leveraged in the DTD setting as a step towards dynamic learning over time. Finally, we release our codebase to foster further research in this area.
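To illustrate why pairwise constraints decouple supervision from the number of classes, the following is a minimal sketch of a pairwise-constraint loss as it is commonly used in deep constrained clustering. It is not the authors' implementation; the dot-product formulation, function name, and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def pairwise_constraint_loss(logits_a, logits_b, must_link, eps=1e-8):
    """Illustrative pairwise-constraint loss (not the paper's exact method).

    logits_a, logits_b: (batch, n_clusters) cluster logits for the two short
        texts in each annotated pair, e.g. produced by a text encoder plus a
        clustering head.
    must_link: (batch,) float tensor with 1.0 for "same topic" pairs and
        0.0 for "different topic" pairs.

    Supervision acts only on pair similarity, so n_clusters can be chosen
    larger than the (unknown) number of ground-truth classes, which is what
    enables overclustering and a flexible class cardinality.
    """
    p_a = F.softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1)
    # Probability that both texts are assigned to the same cluster.
    same_cluster = (p_a * p_b).sum(dim=1).clamp(eps, 1 - eps)
    # Binary cross-entropy between the pair label and that probability.
    loss = -(must_link * torch.log(same_cluster)
             + (1 - must_link) * torch.log(1 - same_cluster))
    return loss.mean()


# Toy usage: 4 constrained pairs, 10 clusters (deliberately overclustered).
logits_a = torch.randn(4, 10)
logits_b = torch.randn(4, 10)
must_link = torch.tensor([1.0, 0.0, 1.0, 0.0])
print(pairwise_constraint_loss(logits_a, logits_b, must_link))
```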
Item Type: | Conference or Workshop Item (Speech) |
---|---|
Faculties: | Mathematics, Computer Science and Statistics > Statistics; Mathematics, Computer Science and Statistics > Statistics > Chairs/Working Groups > Computational Statistics; Mathematics, Computer Science and Statistics > Statistics > Chairs/Working Groups > Methods for missing Data, Model selection and Model averaging |
Subjects: | 500 Science > 510 Mathematics |
URN: | urn:nbn:de:bvb:19-epub-95618-2 |
Item ID: | 95618 |
Date Deposited: | 05. Apr 2023, 07:35 |
Last Modified: | 05. Apr 2023, 07:35 |