Research Topics

Analysis of the Relation between Gaze Direction and Gait

We analyzed changes in walking style (gait) that accompany changes in gaze direction, focusing on the amplitude of arm and leg swing. We found that the arm-swing amplitude on the gaze side tends to increase, while that on the opposite side tends to decrease.
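
The comparison can be sketched in a few lines. This is a minimal illustration with synthetic wrist trajectories; the peak-to-peak definition of amplitude and all values here are assumptions, not the study's actual measurements.

```python
import math

def swing_amplitude(positions):
    """Peak-to-peak amplitude of a 1-D anterior-posterior wrist trajectory."""
    return max(positions) - min(positions)

# Synthetic walking data: the arm on the gaze side swings more widely.
t = [i * 0.05 for i in range(200)]                           # 10 s at 20 Hz
gaze_side = [0.25 * math.sin(2 * math.pi * ti) for ti in t]  # +-0.25 m swing
opposite  = [0.15 * math.sin(2 * math.pi * ti) for ti in t]  # +-0.15 m swing

print(swing_amplitude(gaze_side) > swing_amplitude(opposite))  # True
```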

  • Analysis of head and chest movements that correspond to gaze directions during walking
    H. Yamazoe, I. Mitsugami, T. Okada, Y. Yagi
    Experimental Brain Research, to appear, 2019
  • Immersive Walking Environment for Analyzing Gaze-gait Relations
    H. Yamazoe, I. Mitsugami, T. Okada, T. Echigo, Y. Yagi
    Transactions of the Virtual Reality Society of Japan, Vol.22, No.3, 2017

Tactile Presentation by a Wearable Robot

We propose a tactile expression mechanism that can make physical contact and indicate directions. We previously proposed a wearable robot that provides physical contact to support elderly people in outdoor situations: wearable message robots, mounted on the user's upper arm, deliver messages to users such as navigational information. Physical contact can improve the relationship between users and robots, but our previous prototypes could produce only a limited range of tactile expressions. We therefore propose a tactile expression mechanism for wearable robots that uses a pneumatic actuator array. The proposed system consists of four pneumatic actuators and creates haptic stimuli such as direction indications as well as stroking of the user's arm. Our wearable robots were originally designed around two types of physical contact, notification and affection, and the proposed mechanism naturally extends both the notification and the affection abilities of the robot. The robots and the proposed mechanism are expected to support the mobility of senior citizens by reducing their anxiety on outings.
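
A direction cue on such an array can be sketched as a temporal activation pattern. The firing scheme, the function name, and the shoulder-to-hand layout below are illustrative assumptions, not the actual control design.

```python
def direction_sequence(direction, n_actuators=4):
    """Actuator firing order for a stroking-style direction cue along the arm.

    Actuator 0 sits nearest the shoulder; actuator n_actuators-1 nearest the hand.
    """
    if direction == "towards_hand":
        return list(range(n_actuators))               # shoulder -> hand
    if direction == "towards_shoulder":
        return list(range(n_actuators - 1, -1, -1))   # hand -> shoulder
    raise ValueError("unknown direction: " + direction)

print(direction_sequence("towards_hand"))      # [0, 1, 2, 3]
print(direction_sequence("towards_shoulder"))  # [3, 2, 1, 0]
```

Inflating the actuators one after another produces an apparent stroking motion, which spans both the notification and affection styles of contact described above.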

  • A tactile expression mechanism using pneumatic actuator array for notification from wearable robots
    H. Yamazoe, T. Yonezawa, HCI International 2017, 2017
  • 装着型ぬいぐるみロボットのための空気圧アクチュエータアレイによる触覚表現手法
    山添大丈, 米澤朋子, インタラクション2017 (プレミアム発表), 1-6F-01, 2017
  • 装着型寄り添いぬいぐるみロボットのための被服牽引による方向提示手法の検討
    山添大丈, 米澤朋子, ヒューマンインタフェースシンポジウム2014, 2526D, pp. 615--618, 2014

Calibration of Depth Sensors (Kinect)

This research proposed a depth-measurement error model for consumer depth cameras such as the Microsoft Kinect, together with a method for calibrating it. These devices were originally designed as video-game interfaces, so the obtained depth maps are not accurate enough for 3D measurement. Several models have been proposed to reduce these depth errors, but they consider only camera-related parameters. Since these depth sensors are projector-camera systems, projector-related parameters should also be considered. We therefore propose an error model for consumer depth cameras, in particular the Kinect, that considers the intrinsic parameters of both the camera and the projector. To calibrate the error model, we also propose a parameter estimation method that requires only showing a planar board to the sensor. The error model and its calibration are a necessary step toward using the Kinect as a 3D measuring device. Experimental results show the validity and effectiveness of the error model and its calibration.
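
The plane-based idea can be illustrated with a deliberately simplified correction model. The paper's model involves camera and projector intrinsics; here we only fit a linear depth correction d_true ≈ a·d + b from plane observations at known distances, with all values synthetic.

```python
def fit_linear(measured, true):
    """Ordinary least squares fit of true = a * measured + b."""
    n = len(measured)
    sx, sy = sum(measured), sum(true)
    sxx = sum(x * x for x in measured)
    sxy = sum(x * y for x, y in zip(measured, true))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Simulated plane observations: the sensor has a 2% scale error and 15 mm offset.
true_depths = [800.0, 1200.0, 1600.0, 2000.0]        # plane distances [mm]
measured = [d * 1.02 + 15.0 for d in true_depths]

a, b = fit_linear(measured, true_depths)
corrected = [a * m + b for m in measured]
```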

  • Depth Error Correction for Projector-Camera Based Consumer Depth Camera
    H. Yamazoe, H. Habe, I. Mitsugami, Y. Yagi, Computational Visual Media, to appear, 2018
  • Easy Depth Sensor Calibration
    H. Yamazoe, H. Habe, I. Mitsugami, Y. Yagi, ICPR2012, 2012

Geometric and Temporal Calibration of Multiple Cameras

We proposed a method that estimates the positions and poses of multiple cameras, and the temporal synchronization among them, using blinking calibration patterns. The calibration patterns are shown on tablet PCs or monitors and observed by the cameras; from several observed frames, we obtain the camera positions and poses and the frame correspondences among the cameras. The patterns are based on pseudo random volumes (PRV), a 3D extension of pseudo-random sequences. We believe the method is useful not only for multiple-camera systems but also for AR applications with multiple users.
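
Why pseudo-random patterns yield frame correspondences can be shown with an ordinary one-dimensional m-sequence (the PRV used in the paper is its 3-D extension): every short window within a period is unique, so observing just a few frames identifies the temporal offset.

```python
def lfsr_sequence(n=4):
    """Maximal-length sequence from a 4-bit LFSR (polynomial x^4 + x^3 + 1)."""
    state = [1, 0, 0, 0]          # any non-zero seed works
    out = []
    for _ in range(2 ** n - 1):
        out.append(state[-1])
        feedback = state[3] ^ state[2]
        state = [feedback] + state[:-1]
    return out

seq = lfsr_sequence()
period = len(seq)                 # 15
windows = [tuple(seq[(i + j) % period] for j in range(4)) for i in range(period)]
print(len(set(windows)) == period)  # True: every 4-bit window is unique
```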

  • Synchronized AR Environment for Multiple Users Using Animation Markers
    H. Yamazoe, T. Yonezawa, The ACM Symposium on Virtual Reality Software and Technology (VRST2014), pp.237-238, 2014
  • Geometrical and temporal calibration of multiple cameras using blinking calibration patterns
    H. Yamazoe, IPSJ Transactions on Computer Vision and Applications, Vol.6, pp.78-82, 2014

Wearable Robot

We proposed a wearable partner agent that makes physical contact corresponding to the user's clothing, posture, and detected context. Physical contact is generated by combining haptic stimuli with anthropomorphic motions of the agent. The agent performs two types of behavior: a) it notifies the user of a message by patting the user's arm, and b) it generates emotional expression by strongly enfolding the user's arm. Our experimental results demonstrated that haptic communication from the agent increases the intelligibility of the agent's messages and the familiarity of the impression it makes.

  • Wearable partner agent with anthropomorphic physical contact with awareness of user's clothing and posture
    T. Yonezawa, H. Yamazoe, ISWC2013, 2013
  • Physical Contact using Haptic and Gestural Expressions for Ubiquitous Partner Robot
    T. Yonezawa, H. Yamazoe, IROS2013, 2013

Analysis of Gait Changes Caused by Knee-Joint Fixation

We analyzed gait changes by simulating left-knee disorders in subjects. Our goal is a method that estimates, from image sequences of a subject's walking, the presence or absence of leg disorders and the affected parts. However, gait can change not only because of physical disorders but also because of factors such as neural disorders or aging. We therefore simulated physical disorders by having healthy subjects wear a knee brace, and compared normal walking with the simulated disordered walking (while wearing the brace) to analyze what changes physical disorders cause in gait and whether those changes are common to all subjects. The analysis showed that the simulated left-knee disorder caused changes common to all subjects.
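
One simple way to quantify such changes is a left/right symmetry index; the index and the stance-time numbers below are illustrative assumptions, not the paper's analysis.

```python
def asymmetry_index(left, right):
    """Relative left/right difference; 0 means perfectly symmetric."""
    return abs(left - right) / (0.5 * (left + right))

normal = asymmetry_index(0.62, 0.63)   # stance times [s], nearly symmetric
braced = asymmetry_index(0.58, 0.71)   # hypothetical change with a knee brace

print(braced > normal)  # True
```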

  • Analysis of Gait Changes Caused by Simulated Left Knee Disorder
    T. Ogawa, H. Yamazoe, I. Mitsugami, Y. Yagi
    9th EAI International Conference on Bio-inspired Information and Communications Technologies (formerly BIONETICS), pp. 57-60, 2015
  • The Effect of the Knee Braces on Gait -Toward Leg Disorder Estimation from Images
    T. Ogawa, H. Yamazoe, I. Mitsugami, Y. Yagi
    The 2nd Joint World Congress of ISPGR and Gait and Mental Function, 2013

Voisticky/Mixticky: Virtual Multimedia Sticky Notes Placed around the User

We proposed "Voisticky" and "Mixticky," schemes for recording and browsing memorandum-like sticky notes in a three-dimensional virtual space around a user with a smartphone. Intuitive browsing and recall of such memos requires both browsability and contemporaneousness of the memos/information. Mixticky lets the user place and peel off memos on a virtual balloon around her/him, with each memo tied to a relative direction (e.g., front, left, or 45 degrees from front to right). By pointing the phone, the user can record and browse voice, movie, image, handwriting, and text memos as virtual sticky notes in each direction. The system uses snapping gestures with the smartphone as metaphors for the "put" and "peel off" motions of a physical sticky note. The virtual balloon can be carried into various scenarios so that the user can easily resume her or his thought processes anywhere.
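
The directional placement can be sketched as quantizing the phone's heading, relative to the user's front, into discrete slots on the balloon; the slot count and the function below are assumptions for illustration.

```python
def heading_to_slot(heading_deg, n_slots=8):
    """Map a heading in degrees (0 = user's front) to one of n_slots directions."""
    width = 360.0 / n_slots
    return int(((heading_deg % 360.0) + width / 2.0) % 360.0 // width)

print(heading_to_slot(0))    # 0: front
print(heading_to_slot(45))   # 1: front-right
print(heading_to_slot(350))  # 0: still front (within +/- 22.5 deg)
```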

  • Mixticky: A Smartphone-Based Virtual Environment for Recordable and Browsable Multimedia Stickies
    H. Yamazoe, T. Yonezawa, The 2nd Asian Conference on Pattern Recognition (ACPR2013), 2013
  • Voisticky: Sharable and Portable Auditory Balloon with Voice Sticky Posted and Browsed by User's Head Direction
    T. Yonezawa, H. Yamazoe, H. Terasawa, IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC 2011), pp.118--123, 2011

Gaze Estimation with a Monocular Camera

We proposed a monocular gaze estimation method that relaxes the constraints of conventional gaze estimation. Although many methods have been proposed, they require calibration or have a narrow measurement range, so their applications have been limited to gaze measurement in HCI and gaze-based interfaces. The proposed method uses a 3D eyeball model with iris and sclera appearance, and estimates the eyeball model parameters by minimizing, as in bundle adjustment, the reprojection error between the observed images and the model projections over multiple frames. Unlike conventional methods, it requires no special calibration action, such as having the user fixate predefined reference points, so calibration completes automatically without the user being aware of it. Gaze direction is likewise estimated by minimizing the projection error. Experiments confirmed an estimation accuracy of about 6 degrees with QVGA (320 × 240) images.
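
The geometric core can be illustrated in one dimension: with a spherical eyeball model, the image offset of the iris center from the projected eyeball center determines the gaze angle. The full method estimates the model parameters (center, radius) by minimizing reprojection error over many frames; the function and pixel values here are a hypothetical sketch.

```python
import math

def gaze_angle(iris_px, eyeball_px, eyeball_radius_px):
    """Horizontal gaze angle (radians) from the iris offset in the image."""
    offset = (iris_px - eyeball_px) / eyeball_radius_px
    return math.asin(max(-1.0, min(1.0, offset)))

# A 12 px eyeball radius with the iris 6 px off-centre -> 30 deg gaze rotation.
angle = gaze_angle(106.0, 100.0, 12.0)
print(round(math.degrees(angle), 1))  # 30.0
```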

  • 単眼カメラを用いた視線推定のための3次元眼球モデルの自動キャリブレーション
    山添大丈, 米澤朋子, 内海章, 安部伸治, 電子情報通信学会論文誌D, Vol.J94-D, No.6, pp.998-1006, 2011
  • Remote Gaze Estimation with a Single Camera Based on Facial-Feature Tracking
    H. Yamazoe, A. Utsumi, T. Yonezawa, S. Abe
    Eye Tracking Research & Applications Symposium (ETRA2008), pp.245--250, 2008

Remote Communication Support System for Elderly People

We proposed a video communication assist system that uses a companion robot in coordination with the user's conversational attitude toward the communication. To maintain a conversation and achieve comfortable communication, the assistance must be aware of the user's attitude. The system first estimates the user's conversational state with a machine learning method. The robot then, corresponding to the current context of the user's attitude, expresses active-listening behaviors such as nodding and gaze turns to compensate when the listener is not really listening to the other user's speech; shows communication-evoking behaviors (topic provision) to compensate for the lack of a topic; and switches the camera images to create an illusion of eye contact. Empirical studies and a demonstration experiment showed that i) both the robot's active-listening behaviors and the camera-image switching compensate for the other person's attitude, ii) elderly people prefer long intervals between the robot's behaviors, and iii) the topic-provision function is effective during awkward silences.

  • Assisting video communication by an intermediating robot system corresponding to each user's attitude
    T. Yonezawa, H. Yamazoe, Y. Koyama, S. Abe, K. Mase
    ヒューマンインタフェース学会論文誌, Vol.13, No.3, pp.181-193, 2011
  • Estimation of user conversational states based on combination of user actions
    H. Yamazoe, Y. Koyama, T. Yonezawa, S. Abe, K. Mase, CASEMANS 2011, pp.33--37, 2011

Interactive Guideboard with Gaze-communicative Stuffed-toy Robot

We introduced an interactive guide plate system that adopts a gaze-communicative stuffed-toy robot and a gaze-interactive display board. The stuffed-toy robot attached to the system naturally shows anthropomorphic guidance corresponding to the user's gaze orientation. The guidance is presented through a) gaze-communicative behaviors of the robot, which uses joint attention and eye-contact reactions to virtually express its own mind, in conjunction with b) vocal guidance and c) projection on the guide plate. We adopted our image-based remote gaze-tracking method to detect the user's gaze orientation. Results from empirical studies with subjective/objective evaluations and from demonstration experiments in a semi-public space show i) that the total system operates as designed, ii) that the robot's gaze behaviors elicit the user's interest, and iii) the effectiveness of the gaze-communicative guidance adopting the anthropomorphic robot.

  • Attractive, Informative, and Communicative Robot System on Guide Plate as an Attendant with Awareness of User's Gaze
    T. Yonezawa, H. Yamazoe, A. Utsumi, S. Abe, Paladyn. Journal of Behavioral Robotics, Vol.4, issue.2, pp.113-122, 2013
  • 広視野・高解像度カメラによる単眼・多人数視線推定を利用したインタラクティブロボット看板
    山添大丈, 内海章, 米澤朋子, 安部伸治, 画像の認識・理解シンポジウム(MIRU2008), pp.1638-1643, 2008

Head-Pose Estimation with a Chest-Mounted Camera

We proposed a body-mounted system that captures user experience as audio/visual information. The system consists of two cameras (a head-detection camera and a wide-angle camera) and a microphone. The head-detection camera captures the user's head motions, while the wide-angle color camera captures the user's frontal view. An image region approximately corresponding to the user's view is then synthesized from the wide-angle image based on the estimated head motions, and the synthesized image and head-motion data are stored with the audio data. The system overcomes the disadvantages of head-mounted cameras in ease of putting on and taking off, and it has a less obtrusive visual impact on third parties. It simultaneously records audio, images of the user's field of view, and head gestures (nodding, shaking, etc.). These data carry significant information for recording and analyzing human activities and can serve wider application domains such as digital diaries and interaction analysis. Experimental results demonstrate the effectiveness of the proposed system.
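
The view-synthesis step can be sketched as choosing a crop window in the wide-angle frame from the estimated head orientation. The linear angle-to-pixel mapping and all field-of-view values below are simplifying assumptions (a real wide-angle lens needs distortion-aware warping).

```python
def view_window(yaw_deg, pitch_deg, img_w, img_h,
                cam_fov_h=120.0, cam_fov_v=90.0,
                view_fov_h=40.0, view_fov_v=30.0):
    """Return (x0, y0, x1, y1) of the sub-image approximating the user's view."""
    cx = img_w * (0.5 + yaw_deg / cam_fov_h)    # head yaw shifts the crop centre
    cy = img_h * (0.5 - pitch_deg / cam_fov_v)  # pitch up moves the crop up
    w = img_w * view_fov_h / cam_fov_h
    h = img_h * view_fov_v / cam_fov_v
    x0 = max(0.0, min(img_w - w, cx - w / 2))
    y0 = max(0.0, min(img_h - h, cy - h / 2))
    return int(x0), int(y0), int(x0 + w), int(y0 + h)

print(view_window(0.0, 0.0, 1200, 900))   # (400, 300, 800, 600): centred crop
print(view_window(30.0, 0.0, 1200, 900))  # crop shifted right with head yaw
```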

  • A Body-mounted Camera System for Head-pose Estimation and User-view Image Synthesis
    H. Yamazoe, A. Utsumi, K. Hosaka, M. Yachida, Image and Vision Computing, Vol.25, No.12, pp.1848-1855, 2007
  • A Body-mounted Camera System for Capturing User-view Images without Head-mounted Camera
    H. Yamazoe, A. Utsumi, K. Hosaka, ISWC2005, pp 114-121, 2005

Head Position and Pose Estimation by Combining a Head-Mounted Camera and Fixed Cameras

We proposed a method that estimates human head motion using a multi-view system consisting of fixed cameras and a moving camera. To record a person's position and gaze direction (gazed objects), which are important for recording human activities and analyzing interactions, systems have been proposed that continuously capture video with fixed cameras and a moving camera mounted on the person's head (a head-mounted camera). Here we estimate the head position and pose of each person in the scene from the video obtained by such a system. The proposed method first tracks each person's position using the observations of the moving and fixed cameras. From the estimated position changes, it predicts how the person and background regions move in the moving camera's image, and it estimates the camera's position and pose by comparing the prediction with the actual image. This yields the 3D positions of all people in the scene and the head motion of the person wearing the camera. Experiments evaluated the position and pose estimation accuracy of the proposed method and showed its effectiveness.
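
For distant background points under an approximately pure rotation, the head-mounted camera's yaw change is roughly recoverable from the horizontal image shift alone. This one-line relation (the focal length and shift values are hypothetical) illustrates the image-comparison step; the actual method also uses the tracked person regions and estimates the full pose.

```python
import math

def yaw_from_shift(pixel_shift, focal_px):
    """Approximate camera yaw change (radians) from background image shift."""
    return math.atan2(pixel_shift, focal_px)

# A 100 px shift with a 500 px focal length corresponds to about 11.3 degrees.
print(round(math.degrees(yaw_from_shift(100.0, 500.0)), 2))  # 11.31
```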

  • ヘッドマウントカメラ画像と固定カメラ画像を用いた頭部位置・姿勢推定
    山添大丈, 内海章, 鉄谷信二, 谷内田正彦, 電子情報通信学会論文誌D, Vol.J89-D, No.1, pp.14-26, 2006
  • Vision-based Human Tracking System by using Head-mounted Cameras and Fixed Cameras
    H. Yamazoe, A. Utsumi, N. Tetsutani, M. Yachida, ACCV2004, pp.682-687, 2004

Multi-Camera Calibration

We proposed a distributed automatic calibration method for multiple-camera vision systems, since manual calibration is a difficult and time-consuming task. The data size and computational cost of automatic calibration grow as the number of cameras increases; we address this with a distributed algorithm in which each camera estimates its own position and orientation through local computation, using only observations shared with neighboring cameras. We formulated the method with two kinds of geometric constraints, the essential matrix and homography, and applied the homography-based formulation to a human tracking system to demonstrate its effectiveness.
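
The distributed flavour can be shown with a one-dimensional toy problem (this consensus-style update is an illustration, not the paper's estimator): each camera refines its own position from relative measurements shared with its neighbours, and no node ever needs the full observation set.

```python
def refine(positions, neighbours, rel_meas, iters=300, step=0.5):
    """positions: {cam: x}; rel_meas[(i, j)] is the measured offset x_j - x_i."""
    pos = dict(positions)
    for _ in range(iters):
        new = {}
        for i, nbrs in neighbours.items():
            if i == 0:                   # camera 0 anchors the reference frame
                new[i] = pos[i]
                continue
            # average of the positions implied by each neighbour's current estimate
            implied = [pos[j] - rel_meas[(i, j)] for j in nbrs]
            target = sum(implied) / len(implied)
            new[i] = pos[i] + step * (target - pos[i])
        pos = new
    return pos

true = {0: 0.0, 1: 1.0, 2: 2.0}             # ground-truth camera positions
neighbours = {0: [1], 1: [0, 2], 2: [1]}    # who shares observations with whom
rel = {(i, j): true[j] - true[i] for i in true for j in neighbours[i]}
est = refine({0: 0.0, 1: 5.0, 2: -3.0}, neighbours, rel)
print(round(est[1], 3), round(est[2], 3))  # 1.0 2.0
```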

  • 多視点視覚システムにおけるカメラ位置・姿勢推定の分散化とその人物追跡システムへの適用
    山添大丈, 内海章, 鉄谷信二, 谷内田正彦, 映像情報メディア学会誌, Vol.58, No.11, pp.1639-1648, 2004
  • Automatic camera calibration method for distributed multiple camera based human tracking system
    H. Yamazoe, A. Utsumi, N. Tetsutani, M. Yachida, In Proc. of the 5th Asian Conference on Computer Vision (ACCV2002), pp. 424-429, 2002