Abstract
Biologically inspired event-driven silicon retinas, so-called dynamic vision sensors (DVS), enable efficient solutions for various visual perception tasks, e.g., surveillance, tracking, or motion detection. Similar to retinal photoreceptors, any perceived change in light intensity triggers an event at the corresponding DVS pixel. The DVS thereby emits a stream of spatiotemporal events that encodes visually perceived objects and, in contrast to conventional frame-based cameras, is largely free of redundant background information. The DVS offers multiple additional advantages, but requires the development of radically new asynchronous, event-based information processing algorithms. In this paper we present a fully event-based disparity matching algorithm for reliable 3D depth perception using a dynamic cooperative neural network. The interaction between cooperative cells applies cross-disparity uniqueness constraints and within-disparity continuity constraints to extract a disparity asynchronously for each new event, without any need to buffer individual events. We have investigated the algorithm's performance in several experiments; our results demonstrate smooth disparity maps computed in a purely event-based manner, even in scenes with temporally overlapping stimuli.
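The abstract describes the cooperative dynamics only in words. As a rough illustration, the following Python sketch shows one way an event-driven cooperative network in the spirit of Marr-Poggio cooperative stereo could apply within-disparity excitation (continuity) and cross-disparity inhibition (uniqueness) per event. All names, parameter values, the neighbourhood size, and the decay scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Illustrative parameters; the abstract does not specify the paper's values.
WIDTH, HEIGHT = 128, 128      # typical DVS resolution (assumption)
MAX_DISPARITY = 16            # candidate disparity range (assumption)
EXCITATION = 0.4              # within-disparity continuity weight (assumption)
INHIBITION = 0.8              # cross-disparity uniqueness weight (assumption)
DECAY_TAU = 10e-3             # activity decay time constant, seconds (assumption)

# Cooperative cell activities C(x, y, d) and their last-update timestamps.
activity = np.zeros((HEIGHT, WIDTH, MAX_DISPARITY))
last_t = np.zeros((HEIGHT, WIDTH, MAX_DISPARITY))

def on_event(x, y, t, matches):
    """Update the cooperative network for one new left-retina event.

    `matches` lists candidate disparities d for which a recent,
    polarity-consistent right-retina event exists at (x - d, y).
    Returns the winning disparity, or None if no cell is active enough.
    """
    # Event-driven exponential decay of only the touched cells, so no
    # individual events need to be buffered.
    dt = t - last_t[y, x, :]
    activity[y, x, :] *= np.exp(-dt / DECAY_TAU)
    last_t[y, x, :] = t

    for d in matches:
        # Within-disparity continuity: nearby cells at the same disparity
        # (including this cell's own decayed trace) provide excitatory support.
        y0, y1 = max(0, y - 1), min(HEIGHT, y + 2)
        x0, x1 = max(0, x - 1), min(WIDTH, x + 2)
        support = activity[y0:y1, x0:x1, d].sum()
        activity[y, x, d] += 1.0 + EXCITATION * support
        # Cross-disparity uniqueness: competing disparities along the same
        # line of sight are suppressed.
        rivals = [k for k in range(MAX_DISPARITY) if k != d]
        activity[y, x, rivals] -= INHIBITION * activity[y, x, d]
        np.clip(activity[y, x, :], 0.0, None, out=activity[y, x, :])

    d_star = int(np.argmax(activity[y, x, :]))
    return d_star if activity[y, x, d_star] > 0.0 else None
```

In such a sketch, the candidate list `matches` would come from a temporal-coincidence check against recent right-retina events along the epipolar line; that lookup is omitted here for brevity.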
Document type: | Journal article
---|---
Cross-faculty institutions: | Munich Center for Neurosciences – Brain & Mind
Subject areas: | 500 Natural sciences and mathematics > 500 Natural sciences
ISSN: | 1370-4621
Language: | English
Document ID: | 49074
Deposited on Open Access LMU: | 27 Apr 2018, 08:16
Last modified: | 04 Nov 2020, 13:26