
Forming Share Attention between User and Robot Based on Face Posture Estimation

Regular price: ¥440 JPY (tax included)

Category: Society Conference

Paper No.: OS5-9

Group: [C] Proceedings of the 2002 Annual Conference of the IEEJ Electronics, Information and Systems Society

Publication date: 2002/09/02

Title (English): Forming Share Attention between User and Robot Based on Face Posture Estimation

Authors: Bin Chen (University of Electro-Communications), Mitsuhiko Meguro (University of Electro-Communications), Masahide Kaneko (University of Electro-Communications)

Keywords: Interaction | Face Posture Estimation | Image Segmentation | Share Attention | Visual Acuity Map | Saliency Map

Abstract: In human-robot interaction, the ability to detect and share the user's attention is a minimal requirement for an intelligent robot, since it is important for the robot to know the human's internal state. We present an algorithm, based on face posture estimation and spatiotemporal image processing, that calculates a saliency map in order to form shared attention. After face posture estimation, we introduce an elliptic cone to approximate the user's visual field, whose axis is fitted to the user's gaze line; the gaze line itself does not need to be detected beforehand. A visual acuity map on the user's retina is then generated according to a formulation of human visual acuity. The saliency map is computed as a recency-weighted average of visual acuity maps along the time axis, so that dynamic scenes (for example, when the user's gaze shifts to a new object or the gazed object moves) affect the saliency map calculation; moving image areas are also tracked to propagate visual acuity values from the current frame to the next. Finally, we use the saliency map to form shared attention in human-robot interaction, and show that it is possible to detect the user's attention from face orientation alone, even when the eyes cannot be observed clearly.
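The recency-weighted averaging of visual acuity maps described in the abstract can be sketched as an exponential moving average over per-frame maps. This is a minimal illustration only: the recency weight `alpha`, the grid size, and the toy acuity maps are hypothetical, since the abstract gives no concrete update rule or parameter values.

```python
import numpy as np

def update_saliency(saliency, acuity_map, alpha=0.3):
    """Blend the previous saliency map with the current visual acuity map.

    alpha is a hypothetical recency weight: larger alpha makes recent
    frames dominate, which lets a shifting gaze reshape the saliency map.
    """
    return (1.0 - alpha) * saliency + alpha * acuity_map

# Toy sequence of per-frame acuity maps on a 4x4 grid: each frame marks
# one row as "inside the visual field" (a stand-in for the elliptic-cone
# projection of the user's visual field).
H, W = 4, 4
saliency = np.zeros((H, W))
for t in range(5):
    acuity = np.zeros((H, W))
    acuity[t % H, :] = 1.0  # hypothetical acuity map for frame t
    saliency = update_saliency(saliency, acuity)

# The most salient cell is where acuity has accumulated most recently.
attended = np.unravel_index(np.argmax(saliency), saliency.shape)
print(attended)  # row 0 was seen at t=0 and again at t=4, so it wins
```

Because old contributions decay geometrically, a region the user stops looking at gradually loses saliency rather than vanishing at once, which matches the dynamic-scene behavior the abstract describes.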

PDF file size: 1,343 KB
