
Gu, Jindong and Tresp, Volker (2020): Search for Better Students to Learn Distilled Knowledge. In: ECAI 2020: 24th European Conference on Artificial Intelligence, Vol. 325, pp. 1159-1165.

Full text not available on 'Open Access LMU'.

Abstract

Knowledge distillation, as a model compression technique, has received great attention. The knowledge of a well-performing teacher is distilled to a student with a small architecture. The architecture of the small student is often chosen to be similar to the teacher's, with fewer layers, fewer channels, or both. However, even with the same number of FLOPs or parameters, students with different architectures can achieve different generalization abilities. Configuring a student architecture requires intensive network architecture engineering. In this work, instead of designing a good student architecture manually, we propose to search for the optimal student automatically. Based on L1-norm optimization, a subgraph of the teacher network's topology graph is selected as the student, with the goal of minimizing the KL-divergence between the student's and the teacher's outputs. We verify the proposal on the CIFAR10 and CIFAR100 datasets. The empirical experiments show that the learned student architecture achieves better performance than manually specified ones. We also visualize and interpret the architecture of the discovered student.
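To illustrate the kind of objective the abstract describes, the following is a minimal PyTorch-style sketch, not the authors' implementation: per-channel gate parameters (an assumption here) softly select a subgraph of a toy teacher-like network, an L1 penalty on the gates encourages sparsity, and a temperature-scaled KL-divergence aligns the student's outputs with the teacher's. The network, the gate mechanism, and the hyperparameters `T` and `lambda_l1` are illustrative choices, not taken from the paper.

```python
# Minimal sketch (not the authors' code): knowledge distillation with a
# KL-divergence loss between teacher and student outputs, plus an L1 penalty
# on per-channel gates that softly select a subgraph of the network.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedConvNet(nn.Module):
    """Toy teacher-like network whose channels are scaled by learnable gates."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        # One gate per channel; the L1 penalty drives gates toward zero,
        # effectively pruning channels and selecting a subgraph.
        self.gate1 = nn.Parameter(torch.ones(16))
        self.gate2 = nn.Parameter(torch.ones(32))
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):
        x = F.relu(self.conv1(x)) * self.gate1.view(1, -1, 1, 1)
        x = F.max_pool2d(x, 2)
        x = F.relu(self.conv2(x)) * self.gate2.view(1, -1, 1, 1)
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.fc(x)


def search_loss(student_logits, teacher_logits, gates, T=4.0, lambda_l1=1e-4):
    """Temperature-scaled KL divergence to the teacher plus L1 sparsity on gates."""
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    l1 = sum(g.abs().sum() for g in gates)
    return kd + lambda_l1 * l1


if __name__ == "__main__":
    student = GatedConvNet()
    teacher = GatedConvNet()  # stands in for a pretrained teacher
    x = torch.randn(8, 3, 32, 32)  # CIFAR-sized inputs
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    loss = search_loss(s_logits, t_logits, [student.gate1, student.gate2])
    loss.backward()
    print(float(loss))
```

In this sketch, training the gates under the combined loss plays the role of the search: channels whose gates shrink toward zero would be removed, and the remaining subgraph would be retrained as the student.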
