Cascaded Subband Energy-Based Emotion Classification

Price: ¥770 JPY (tax included). Sold out.

Category: Journal paper (individual article)

Group: [C] Electronics, Information and Systems Division

Publication date: 2013/01/01

Title (English): Cascaded Subband Energy-Based Emotion Classification

Authors: Senaka Amarakeerthi (Spatial Media Group, University of Aizu), Chamin Morikawa (Interfaculty Initiative in Information Studies, The University of Tokyo), Tin Lay Nwe (Institute for Infocomm Research), Liyanage C. De Silva (Faculty of Science, University of Brunei Darussalam), Michael Cohen (Spatial Media Group, University of Aizu)

Keywords: emotion classification, hidden Markov model, sentiment analysis, subband filters, subband energy

Abstract (English): Since the earliest studies of human behavior, emotions have attracted the attention of researchers in many disciplines, including psychology, neuroscience, and, more recently, computer science. Speech is considered a salient conveyor of emotional cues and can serve as an important source for emotion studies. Speech is modulated for different emotions by varying frequency- and energy-related acoustic parameters such as pitch, energy, and formants. In this paper, we analyze inter- and intra-subband energy variations to differentiate six emotions: anger, disgust, fear, happiness, neutral, and sadness. We introduce Two-Layered Cascaded Subband Cepstral Coefficient (TLCS-CC) analysis, which studies energy variations within low- and high-arousal emotions, as a novel approach to emotion classification. The new approach was compared with Mel-frequency cepstral coefficients (MFCC) and log frequency power coefficients (LFPC). Experiments were conducted on the Berlin Emotional Data Corpus (BECD). With energy-related features, we achieved average accuracies of 73.9% and 80.1% for speaker-independent and speaker-dependent emotion classification, respectively.
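The abstract outlines a pipeline of band-pass filtering the speech signal into subbands, tracking frame-wise energy, and deriving cepstral-style features for classification (the keywords suggest a hidden Markov model classifier). The exact TLCS-CC procedure is defined in the paper itself; the sketch below is only a minimal, hypothetical illustration of the first step, frame-wise subband log-energy extraction. The filterbank layout, frame sizes, and band edges here are chosen for the example and are not taken from the paper.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def subband_log_energies(signal, sr, bands, frame_len=0.025, hop=0.010):
    """Frame-wise log energies of band-passed copies of `signal`.

    `bands` is a list of (low_hz, high_hz) tuples. This is a generic
    single-stage illustration, not the paper's exact filterbank.
    """
    n = int(frame_len * sr)   # samples per analysis frame
    h = int(hop * sr)         # hop between frame starts
    feats = []
    for lo, hi in bands:
        # 4th-order Butterworth band-pass for this subband
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        band = sosfilt(sos, signal)
        # frame the band-limited signal and take log energy per frame
        frames = [band[i:i + n] for i in range(0, len(band) - n + 1, h)]
        feats.append([np.log(np.sum(f ** 2) + 1e-10) for f in frames])
    return np.array(feats).T  # shape: (num_frames, num_bands)

# Example: four illustrative subbands for 16 kHz audio
sr = 16000
bands = [(100, 500), (500, 1000), (1000, 2000), (2000, 4000)]
x = np.random.randn(sr)  # stand-in for one second of speech
features = subband_log_energies(x, sr, bands)
print(features.shape)
```

The "two-layered cascaded" name suggests a second stage of subband decomposition applied within each band before cepstral analysis; the sketch above stops at a single stage, after which such trajectories would typically feed an HMM-based classifier.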

Journal: IEEJ Transactions on Electronics, Information and Systems (Section C), Vol.133, No.1 (2013). Special issue: 2012 Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV2012)

Pages: 200-210

Manuscript type: Paper (English)

Link to electronic version: https://www.jstage.jst.go.jp/article/ieejeiss/133/1/133_200/_article/-char/ja/
