Abstract
The search for specific objects or motifs is essential to art history, as both assist in decoding the meaning of artworks. Digitization has produced large art collections, but manual methods prove insufficient for analyzing them. In the following, we introduce an algorithm that allows users to search for image regions containing specific motifs or objects and to find similar regions in an extensive dataset, helping art historians to analyze large digitized art collections. Computer vision has provided efficient methods for visual instance retrieval across photographs; however, applied to art collections, these methods reveal severe deficiencies because of diverse motifs and massive domain shifts induced by differences in techniques, materials, and styles. In this paper, we present a multi-style feature fusion approach that successfully reduces the domain gap and improves retrieval results without labelled data or curated image collections. Our region-based voting with GPU-accelerated approximate nearest-neighbour search allows us to find and localize even small motifs within an extensive dataset in a few seconds. We obtain state-of-the-art results on the Brueghel dataset and demonstrate that our approach generalizes to inhomogeneous collections with a large number of distractors.
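The abstract names two technical ingredients: fusing region descriptors across styles to bridge the domain gap, and approximate nearest-neighbour search over region descriptors followed by a voting step. The snippet below is a minimal sketch of how such a pipeline could be assembled; it is not the authors' released code. `extract_descriptor` and `stylize` are hypothetical stand-ins for a CNN feature extractor and a style-transfer model, and the Faiss index here is exact rather than approximate for brevity.

```python
# Sketch only: multi-style feature fusion + nearest-neighbour region search.
# Assumes numpy and faiss (faiss-cpu or faiss-gpu) are installed.
import numpy as np
import faiss


def fused_descriptor(region_img, styles, extract_descriptor, stylize):
    """Average L2-normalised descriptors of a region rendered in several styles."""
    descs = [extract_descriptor(region_img)]
    for s in styles:
        descs.append(extract_descriptor(stylize(region_img, s)))
    descs = [d / (np.linalg.norm(d) + 1e-8) for d in descs]
    fused = np.mean(descs, axis=0)
    return fused / (np.linalg.norm(fused) + 1e-8)


def build_index(region_descriptors):
    """Index database region descriptors for inner-product (cosine) search."""
    d = region_descriptors.shape[1]
    index = faiss.IndexFlatIP(d)  # exact search; an IVF/PQ index would be approximate
    index.add(region_descriptors.astype(np.float32))
    return index


def query_regions(index, query_desc, k=100):
    """Return the k most similar database regions for one query descriptor.

    The returned ids can be mapped back to images and accumulated in a
    per-image voting scheme to localize the queried motif.
    """
    scores, ids = index.search(query_desc.astype(np.float32)[None, :], k)
    return scores[0], ids[0]
```

For very large collections, the exact `IndexFlatIP` could be replaced by an approximate index such as `faiss.IndexIVFPQ`, and the index can be moved to the GPU (e.g. via `faiss.index_cpu_to_gpu`) when a GPU build of Faiss is available; both are assumptions about one possible implementation, not a description of the paper's exact setup.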
Document type: | Conference paper |
---|---|
Faculty: | Geschichts- und Kunstwissenschaften > Department Kunstwissenschaften > Kunstgeschichte |
Subject areas: | 000 Computer science, information & general works > 000 Computer science, knowledge, systems; 700 Arts & recreation > 700 Arts |
ISSN: | 0302-9743 |
Place: | Cham |
Note: | Series: Lecture Notes in Computer Science; 12536 |
Language: | English |
Document ID: | 109977 |
Date of publication on Open Access LMU: | 26 Apr 2024, 06:34 |
Last modified: | 28 May 2024, 12:37 |