Abstract
Outlier detection is one of the most important tasks for keeping processes under control. Overlooking critical anomalies can lead to substantial expenses, so it is highly beneficial to treat process failures as soon as possible. Anomalies, however, are difficult to detect because of their rarity, yet they occur too often to neglect their detection. Even when the detection problem is solved, treating singular anomalies and adjusting the process based on each abnormal trace individually is tedious and costly in terms of time and money. To increase the efficiency of subsequent anomaly treatment, we propose a novel strategy for detecting collective anomalies, which is not equivalent to clustering anomalies as a post-processing step. TOAD orders process instances by similarity and detects abnormal accumulations of deviating cases; these collections are abnormal due to their aggregated behavior. Assuming that similar deviations share the same cause, treating such a collective anomaly is more cost-efficient than handling deviating singletons. Applying TOAD to an event log yields a ranking of significant, temporally abnormal trace collections that provides a baseline for further analysis.
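The abstract only describes the approach at a high level; the following Python sketch illustrates what "ordering process instances by similarity and detecting abnormal accumulations of deviating cases" could look like. It is not TOAD's actual method: the trace representation, the Jaccard similarity, the greedy ordering, the most-frequent-variant reference, and the window/threshold parameters are all assumptions made for this illustration.

```python
# Hypothetical sketch only: TOAD's internals are not specified in the abstract.
# All modeling choices below (similarity measure, ordering, scoring, thresholds)
# are assumptions for illustration purposes.
from collections import Counter


def jaccard_distance(a, b):
    """Set-based distance between two traces (sequences of activity labels)."""
    sa, sb = set(a), set(b)
    if not sa and not sb:
        return 0.0
    return 1.0 - len(sa & sb) / len(sa | sb)


def order_by_similarity(traces):
    """Greedy nearest-neighbour ordering so that similar traces end up adjacent."""
    remaining = list(range(len(traces)))
    order = [remaining.pop(0)]
    while remaining:
        last = traces[order[-1]]
        nxt = min(remaining, key=lambda i: jaccard_distance(last, traces[i]))
        remaining.remove(nxt)
        order.append(nxt)
    return order


def deviation_scores(traces):
    """Score each trace by its distance to the most frequent variant ('normal' behaviour)."""
    variant_counts = Counter(tuple(t) for t in traces)
    reference = list(variant_counts.most_common(1)[0][0])
    return [jaccard_distance(t, reference) for t in traces]


def collective_anomalies(traces, window=3, threshold=1.5):
    """Rank windows in the similarity ordering whose aggregated deviation is high."""
    order = order_by_similarity(traces)
    scores = deviation_scores(traces)
    flagged = []
    for start in range(len(order) - window + 1):
        idx = order[start:start + window]
        agg = sum(scores[i] for i in idx)
        if agg >= threshold:
            flagged.append((agg, idx))
    return sorted(flagged, reverse=True)  # highest aggregated deviation first


if __name__ == "__main__":
    # Toy event log: three normal cases, one rejection, and two escalated cases
    # that together form a collective anomaly in this illustration.
    log = [
        ["register", "check", "approve", "pay"],
        ["register", "check", "approve", "pay"],
        ["register", "check", "reject"],
        ["register", "escalate", "manual_review", "reject"],
        ["register", "escalate", "manual_review", "reject"],
        ["register", "check", "approve", "pay"],
    ]
    for score, indices in collective_anomalies(log, window=2, threshold=1.0):
        print(f"aggregated deviation {score:.2f} for trace indices {indices}")
```

On this toy log, the two escalated traces end up adjacent in the ordering and their aggregated deviation exceeds the threshold, so they are reported together rather than as two separate singleton anomalies, mirroring the cost argument made in the abstract.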
Document type: | Journal article
---|---
Faculty: | Mathematik, Informatik und Statistik > Informatik
Subject areas: | 000 Computer science, information and general works > 004 Computer science
Language: | English
Document ID: | 89051
Date deposited on Open Access LMU: | 25 Jan 2022, 09:28
Last modified: | 25 Jan 2022, 09:28