Combine motor imagery and speech imagery to create a personal asynchronous EEG system.
Category: Society Conference
Paper No.: PS2-10
Group: [C] 2022 Annual Conference of the IEEJ Electronics, Information and Systems Society
Publication date: 2022/08/24
Title (English): Combine motor imagery and speech imagery to create a personal asynchronous EEG system.
Authors: Zhang Zhuohao (Tokyo Institute of Technology), Li Pengcheng (Tokyo Institute of Technology), Connelly Akima (Tokyo Institute of Technology), Rangpong Phurin (Tokyo Institute of Technology), Yagi Tohru (Tokyo Institute of Technology)
Authors (English): Zhuohao Zhang (Tokyo Institute of Technology), Pengcheng Li (Tokyo Institute of Technology), Akima Connelly (Tokyo Institute of Technology), Phurin Rangpong (Tokyo Institute of Technology), Tohru Yagi (Tokyo Institute of Technology)
Keywords: Brain computer interface (BCI) | speech imagery | motor imagery | asynchronous
Abstract: The purpose of this study is to construct an asynchronous BCI system that classifies which words a user is thinking of. It is widely known that asynchronous BCI systems offer a more natural mode of interaction than synchronous ones. However, such a system is more complex because it must determine whether the evoked neural activity stems from the user's intentional or unintentional mental activity. This study therefore focuses on combining two types of imagery to identify intentional neural activity. In the experiment, subjects are asked not only to read a particular word in a toneless breath (without vocalizing), but also to imagine the associated motor activity shown to them. By analyzing the EEG characteristics of these two types of imagery, we attempt to construct a personal asynchronous BCI system.
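A minimal sketch (not the authors' implementation) of the kind of classification pipeline the abstract implies: EEG epochs are labeled as idle, motor imagery, or speech imagery, with the idle class standing in for unintentional activity so the classifier can support asynchronous operation. The synthetic data, the band_power() helper, and the 8-30 Hz feature band are illustrative assumptions.

import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
FS = 250                      # sampling rate in Hz (assumed)
N_CH, N_SAMP = 8, FS * 2      # 8 channels, 2-second epochs (assumed)

def band_power(epochs, lo=8.0, hi=30.0):
    """Mean 8-30 Hz power per channel (mu/beta band, an assumed feature)."""
    freqs, psd = welch(epochs, fs=FS, nperseg=FS)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[..., band].mean(axis=-1)          # shape: (n_epochs, n_channels)

# Synthetic epochs standing in for recorded EEG:
# 0 = idle (unintentional), 1 = motor imagery, 2 = speech imagery.
# Class-dependent scaling mimics class-specific power differences.
labels = rng.integers(0, 3, size=150)
epochs = rng.standard_normal((150, N_CH, N_SAMP)) * (1.0 + 0.3 * labels[:, None, None])

# Band-power features fed to an LDA classifier, evaluated per subject
# (personal model) with 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, band_power(epochs), labels, cv=5)
print(f"3-class (idle / motor / speech) CV accuracy: {scores.mean():.2f}")

In a real asynchronous system the same classifier would run on a sliding window of the ongoing EEG, and windows assigned to the idle class would produce no output; the two imagery classes together serve as the intentional-control signal described in the abstract.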
