Gesture Recognition Using 3D Appearance and Motion Features
Computational Interaction and Robotics Laboratory
Abstract
We present a novel 3D gesture recognition scheme that combines the 3D appearance of the hand and the motion dynamics of the gesture to classify manipulative and controlling gestures. Our method does not directly track the hand. Instead, we take an object-centered approach that efficiently computes the 3D appearance using a region-based coarse stereo matching algorithm in a volume around the hand. The motion cue is captured by differentiating the appearance feature. An unsupervised learning scheme is carried out to capture the cluster structure of these feature volumes. Then, the image sequence of a gesture is converted to a series of symbols that indicate the cluster identities of each image pair. Two schemes (forward HMMs and neural networks) are used to model the dynamics of the gestures. We implemented a real-time system and performed numerous gesture recognition experiments to analyze the performance with different combinations of the appearance and motion features. The system achieves recognition accuracy of over 96% using both the proposed appearance and motion cues.
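The abstract's pipeline (quantize feature volumes into cluster symbols, then score the symbol sequence with a forward HMM) can be sketched as follows. This is an illustrative toy, not the paper's implementation: the cluster centers, feature dimensions, and HMM parameters below are invented for demonstration.

```python
import numpy as np

# Hypothetical sketch of the recognition pipeline: feature volumes are
# vector-quantized into cluster symbols, and a forward (left-to-right)
# discrete-emission HMM scores the resulting symbol sequence.

def to_symbols(features, centers):
    """Assign each feature vector the index of its nearest cluster center."""
    d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
    return np.argmin(d, axis=1)

def log_likelihood(symbols, pi, A, B):
    """Scaled forward algorithm for a discrete-emission HMM."""
    alpha = pi * B[:, symbols[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for s in symbols[1:]:
        alpha = (alpha @ A) * B[:, s]
        scale = alpha.sum()          # rescaling avoids numerical underflow
        log_lik += np.log(scale)
        alpha /= scale
    return log_lik

# Toy demo: 3 clusters of 2-D "feature volumes" and a 2-state forward HMM.
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
frames = np.array([[0.1, -0.2], [4.8, 0.3], [0.2, 5.1]])
symbols = to_symbols(frames, centers)

pi = np.array([1.0, 0.0])                      # always start in state 0
A = np.array([[0.6, 0.4],                      # upper-triangular transition
              [0.0, 1.0]])                     # matrix: a forward HMM
B = np.array([[0.8, 0.1, 0.1],                 # per-state emission
              [0.1, 0.1, 0.8]])                # probabilities over symbols
print(log_likelihood(symbols, pi, A, B))
```

In practice one such HMM would be trained per gesture class, and a sequence is classified by the model with the highest log-likelihood; the upper-triangular transition matrix encodes the "forward" (left-to-right) structure mentioned in the abstract.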
1 Introduction
Gestures have been one of the important interaction media in current human-computer interaction (HCI) environments [3, 4, 11, 12, 14, 16, 18, 21, 24, 25, 26]. Furthermore, for 3D virtual environments (VE) in which the user manipulates 3D objects, gestures are more appropriate and powerful than traditional interaction media, such as a mouse or a joystick. Vision-based gesture processing also provides more convenience and immersiveness than approaches based on mechanical devices.
Most reported gesture recognition work in the literature (see Section 1.1) relies heavily on visual tracking and template recognition algorithms. However, general human motion tracking is well known to be a complex and difficult problem [8, 17]. Additionally, while template matching may be suitable for static gestures, its ability to capture the spatio-temporal nature of dynamic gestures is in doubt. Alternatively, methods that attempt to capture the 3D information of the hand [11] have been proposed. However, it is well known that, in general circumstances, the stereo problem is difficult to solve reliably and efficiently.
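One way to sidestep the full stereo problem, as the abstract suggests, is to trade precision for robustness: rather than dense per-pixel disparity, a coarse region-based match is computed only inside a volume of interest. The following sketch illustrates the general block-matching idea with a sum-of-absolute-differences (SAD) cost; the block size, disparity range, and toy images are hypothetical choices, not the paper's algorithm.

```python
import numpy as np

# Hedged sketch of coarse region-based stereo: match whole blocks between
# rectified left/right images rather than individual pixels, yielding one
# disparity value per block. Block size and disparity range are illustrative.

def coarse_disparity(left, right, block=4, max_disp=8):
    """Return a coarse disparity map (one value per block) via SAD matching."""
    h, w = left.shape
    out = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            patch = left[y:y + block, x:x + block].astype(float)
            best, best_cost = 0, np.inf
            for d in range(min(max_disp, x) + 1):   # candidate disparities
                cand = right[y:y + block, x - d:x - d + block].astype(float)
                cost = np.abs(patch - cand).sum()   # sum of absolute differences
                if cost < best_cost:
                    best, best_cost = d, cost
            out[by, bx] = best
    return out

# Toy demo: the right image is the left image shifted by 2 pixels,
# so every interior block should recover a disparity of 2.
left = np.tile(np.arange(16.0), (8, 1))
right = np.roll(left, -2, axis=1)
dmap = coarse_disparity(left, right)
print(dmap)
```

Because the cost is aggregated over whole regions, the match is far cheaper and more robust to local ambiguity than dense stereo, at the price of spatial resolution, which is acceptable when the goal is an appearance feature rather than a precise depth map.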
Human hands and arms are highly articulate and deformable objects, and hand gestures normally consist of 3D global and local motion of the hands and the arms. Manipulative and interaction gestures [14] have a temporal nature that involves complex changes of hand configurations. The complex spatial properties and dynamics of such gestures render the problem too difficult for pure 2D (e.g., template matching) methods. Ideally, we would capture the full 3D information of the hands to model the gestures [11]. However, the difficulty and computational complexity of visual 3D localization and robust tracking prompt us to question the necessity of doing so for gesture recognition.

To that end, we present a novel scheme to model and recognize 3D temporal gestures using 3D appearance and motion cues without tracking and explicit localization of the hands. Instead, we follow the site-centered computation fashion of the Visual Interface Cues (VICs) paradigm [3, 24]. We propose that interaction gestures can be captured in a local neighborhood around the manipulated object, based on the fact that the user only initiates manipulative gestures when his or her hands are close enough to the objects. The advantage of this scheme is that it is efficient and highly flexible. The dimensions of the local neighborhood volume around the manipulated object can be adjusted conveniently according to the nature of the particular interaction environment and the applicable gestures. For example, in a desktop interaction environment where the interaction elements are represented as small icons on a flat panel and manipulative gestures are only initiated when