Efficient Learning for Distillation of DNN by Self Distillation
Category: Journal paper
Group: [C] Electronics, Information and Systems Society
Publication date: 2019/12/01
Title (English): Efficient Learning for Distillation of DNN by Self Distillation
Authors: Jumpei Takagi (Systems Engineering Consultants Co., LTD.), Motonobu Hattori (Faculty of Interdisciplinary Research, University of Yamanashi)
Keywords: knowledge distillation, deep learning, image classification
Abstract (English): Knowledge distillation is a method for creating a superior student network by using knowledge obtained from a trained teacher neural network. Recent studies have shown that far superior students can be obtained by further distilling the trained student as a new teacher. Distilling knowledge through multiple generations, however, takes a long time to learn. In this paper, we propose a Self Distillation (SD) method which reduces both the number of generations and the learning time required for knowledge distillation. In SD, the most accurate network obtained during learning within a generation is used as the teacher for intra-generation distillation. Our experiments on an image classification task demonstrate that the proposed SD achieves high accuracy with fewer generations and less learning time than the conventional method.
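
The abstract's core idea is that, instead of finishing a whole generation and only then distilling into the next one, the most accurate network found so far within a generation immediately serves as the teacher for distillation in that same generation. The following PyTorch sketch illustrates one plausible reading of this scheme, assuming standard Hinton-style soft-target distillation; the function names, hyper-parameters (temperature T, weight alpha, SGD settings), and the validation-accuracy snapshot rule are illustrative assumptions, not details taken from the paper.

    import copy
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        # Soft-target KL term (Hinton-style, scaled by T^2) plus
        # hard-label cross entropy; T and alpha are illustrative values.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

    @torch.no_grad()
    def evaluate(model, loader, device="cpu"):
        # Plain top-1 accuracy on a validation loader.
        model.eval()
        correct = total = 0
        for x, y in loader:
            pred = model(x.to(device)).argmax(dim=1)
            correct += (pred == y.to(device)).sum().item()
            total += y.numel()
        return correct / total

    def train_one_generation(model, train_loader, val_loader, epochs, device="cpu"):
        # One generation of self-distillation: whenever validation accuracy
        # improves, a frozen snapshot of the model becomes the teacher for
        # the remaining epochs of the same generation. This selection rule
        # is a hypothetical reading of the abstract; the paper's exact
        # criterion may differ.
        opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
        best_acc, teacher = 0.0, None
        for _ in range(epochs):
            model.train()
            for x, y in train_loader:
                x, y = x.to(device), y.to(device)
                logits = model(x)
                if teacher is None:
                    loss = F.cross_entropy(logits, y)  # no teacher snapshot yet
                else:
                    with torch.no_grad():
                        teacher_logits = teacher(x)
                    loss = distillation_loss(logits, teacher_logits, y)
                opt.zero_grad()
                loss.backward()
                opt.step()
            acc = evaluate(model, val_loader, device)
            if acc > best_acc:
                best_acc = acc
                teacher = copy.deepcopy(model).eval()  # snapshot becomes the teacher
                for p in teacher.parameters():
                    p.requires_grad_(False)
        return model, best_acc

In conventional born-again distillation, the model returned by one generation would initialize the teacher for the next; the paper's claim is that using the intra-generation snapshot as the teacher lets comparable accuracy be reached with fewer generations and less total training time.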
Journal: IEEJ Transactions on Electronics, Information and Systems, Vol.139, No.12 (2019). Special issue: Tokai-Section Joint Conference on Electrical, Electronics, Information, and Related Engineering
Pages: 1509-1516
Article type: Paper (Japanese)
Online version: https://www.jstage.jst.go.jp/article/ieejeiss/139/12/139_1509/_article/-char/ja/
