A Prototype System for Computer Vision Based
Human Computer Interaction
Computational Vision and Active Perception Laboratory (CVAP)
Center for User-Oriented IT-Design (CID)
Department of Numerical Analysis and Computing Science
KTH (Royal Institute of Technology)
S-100 44 Stockholm, Sweden.
Technical report ISRN KTH/NA/P-01/09-SE
With the development of information technology in our society,
we can expect that computer systems to a larger extent will
be embedded into our environment.
These environments will impose needs for new types of
human-computer interaction, with interfaces that are natural and easy to use.
In particular, the ability to interact with computerized equipment
without need for special external equipment is attractive.
Today, the keyboard, the mouse and the remote control are
used as the main interfaces for transferring information
and commands to computerized equipment.
In some applications involving three-dimensional information,
such as visualization, computer games and control of robots,
other interfaces based on trackballs, joysticks and datagloves
are being used.
In our daily life, however, we humans use our vision and
hearing as main sources of information about our environment.
Therefore, one may ask to what extent it would be possible
to develop computerized equipment able to communicate
with humans in a similar way, by understanding visual and auditory input.
Perceptual interfaces based on speech have already started
to find a number of commercial and technical applications.
For example, systems are now available where speech commands
can be used for dialling numbers in cellular phones
or for making ticket reservations.
Concerning visual input, the processing power of computers
has reached a point where real-time processing of visual
information is possible with common workstations.
The purpose of this article is to describe ongoing work in
developing new perceptual interfaces with emphasis on
commands expressed as hand gestures.
Examples of applications of hand gesture analysis include:
- Control of consumer electronics
- Interaction with visualization systems
- Control of mechanical systems
- Computer games
Potential advantages of using visual input in this context
are that visual information makes it possible to communicate with computerized
equipment at a distance, without need for physical contact
with the equipment that is to be controlled.
Moreover, the user should be able to control the equipment without need for specialized external devices, such as a remote control.
Figure 1: Example of a simple situation where the user controls
actions on a screen using hand gestures.
In this application, the position of the cursor is
controlled by the motion of the hand,
and the user can induce a click by changing the hand posture.
Figure 1 shows an illustration of a type of
scenario we are interested in.
The user is in front of a camera connected to a computer.
The system follows the movements of the hand,
and performs actions depending on the state and the motion
of the hand.
Three basic types of hand gestures can be identified in
such a situation:
- A static hand posture implies that the hand is held in
a fixed state during a certain period of time,
during which the system recognizes the state from a
predefined set of states.
Examples of interpretations that are possible include
on or off for a TV, start or stop for a video recorder,
or a choice between different modes for a command involving motion.
- A quantitative hand motion means that the two-dimensional
or the three-dimensional motion of the hand is measured,
and the estimated motion parameters (translations and rotations)
are being used for controlling the motion of other computerized
equipment, such as visualization parameters for displaying
a three-dimensional object, the volume of a TV or the motion of a robot.
- A qualitative hand motion means that the hand moves
according to a pre-defined motion pattern (a trajectory
in space-time) and that the motion pattern is recognized from a
predefined set of motion patterns.
Examples of interpretations include letters (the Palm Pilot sign
language) or control of consumer electronics in a similar manner
as for static hand postures.
Figure 2: Hand postures controlling a prototype scenario:
(a) a hand with three open fingers toggles the TV on or off,
(b) a hand with two open fingers and the index finger pointing
to one side selects the next TV channel,
(c) a hand with two open fingers and the index finger pointing
upwards selects the previous TV channel,
(d) a hand with five open fingers toggles the lamp on or off.
Figure 3: A few snapshots from a scenario where a user enters a room and
turns on the lamp (a)-(b), turns on the TV set (c)-(d) and
switches to a new TV channel (e)-(f).
To be able to test computer-vision-based human-computer interaction
in practice, we developed a prototype test bed system,
where the user can control a TV set and a lamp using the following
types of hand postures:
- Three open fingers (figure 2(a)) toggle the TV on or off.
- Two open fingers change the channel of the TV.
With the index finger pointing to one side (figure 2(b)), the next TV channel is selected,
while the previous channel is selected if the index finger points upwards (figure 2(c)).
- Five open fingers (figure 2(d)) toggle the lamp on or off.

Figure 3 shows a few snapshots from
a demonstration, where a user controls equipment in the environment
in this way.
In figures 3(a)-(b) a user turns on the lamp,
in figures 3(c)-(d) he turns on the TV set,
and in figures 3(e)-(f) he switches the
TV set to a new channel.
All steps in this demonstration have been performed in real-time
and during continuous operation of the prototype system described
in the next section.
A prototype system
To track and recognize hands in multiple states, we have developed
a system based on a combination of shape and colour information.
At an overview level, the system consists of the following
functionalities (see figure 4).
Figure 4: Overview of the main components of the prototype system
for detecting and recognizing hand gestures, and using
this information for controlling consumer electronics.
The image information from the camera is grabbed at frame rate,
and the colour images are converted from RGB format to a colour
space that separates the intensity and chromaticity components
of the colour data.
In the colour images, colour feature detection is performed,
which results in a set of image features that can be matched
to a model.
Moreover, a complementary comparison between actual colour
and skin colour is performed to identify regions that are
more likely to contain hands.
Based on the detected image features and the computed skin
colour similarity, comparison with a set of object hypotheses
is performed using a statistical approach referred to as
particle filtering or condensation.
The most likely hand posture is estimated, as well as
the position, size and orientation of the hand.
This recognized gesture information is bound to different actions
relative to the environment, and these actions are carried out
under the control of the gesture recognition system.
In this way, the gesture recognition system provides a medium
by which the user can control different types of equipment
in his environment.
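As a minimal illustration of such a binding, the sketch below maps recognized posture labels to device actions; the posture labels and the device-control stubs are hypothetical placeholders, not the actual interface of the prototype system.

```python
# Hypothetical binding of recognized hand postures to actions;
# the posture labels and the device stubs are illustrative only.

def toggle_tv():    print("TV toggled on/off")
def next_channel(): print("next TV channel")
def prev_channel(): print("previous TV channel")
def toggle_lamp():  print("lamp toggled on/off")

POSTURE_ACTIONS = {
    "three_open_fingers": toggle_tv,     # cf. figure 2(a)
    "two_fingers_side":   next_channel,  # cf. figure 2(b)
    "two_fingers_up":     prev_channel,  # cf. figure 2(c)
    "five_open_fingers":  toggle_lamp,   # cf. figure 2(d)
}

def on_posture_recognized(posture):
    """Dispatch the action bound to a recognized, stable posture."""
    action = POSTURE_ACTIONS.get(posture)
    if action is not None:
        action()
```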
Appendix A gives a more detailed description
of the algorithms and computational modules in the system.
The problem of hand gesture analysis has received increased attention in recent years.
Early work on using hand gestures for television control was presented
by (Freeman & Weissman, 1995) using normalized correlation;
see also (Cipolla & Pentland, 1998; Kuch & Huang, 1995; Maggioni & Kämmerer, 1998; Pavlovic et al., 1997)
for related work.
Some approaches consider elaborated 3-D hand models (Rehg & Kanade, 1995),
while others use colour markers to simplify feature detection
(Cipolla et al., 1993).
Appearance-based models for hand tracking and sign recognition
were used by (Cui & Weng, 1996),
while (Heap & Hogg, 1998; MacCormick & Isard, 2000) tracked silhouettes of hands.
Graph-like and feature-based hand models have been proposed
by (Triesch & von der Malsburg, 1996) for sign recognition and in (Bretzner & Lindeberg, 1998)
for tracking and estimating 3-D rotations of a hand.
The use of a hierarchical hand model builds on a line of works by
(Crowley & Sanderson, 1987) who extracted peaks from
a Laplacian pyramid of an image and linked them into a tree structure
with respect to resolution,
(Lindeberg, 1993) who constructed a scale-space primal sketch
with an explicit encoding of blob-like structures in scale space as well as
the relations between these,
(Triesch & von der Malsburg, 1996)
who used elastic graphs to represent
hands in different postures with local jets of Gabor filters computed
at each vertex,
(Lindeberg, 1998) who performed feature detection with
automatic scale selection by detecting local extrema of
normalized differential entities with respect to scale,
(Shokoufandeh et al., 1999) who detected maxima in a
multi-scale wavelet transform,
as well as (Bretzner & Lindeberg, 1999),
who computed multi-scale blob and ridge features and defined explicit
qualitative relations between these features.
The use of chromaticity as a primary cue for detecting skin coloured
regions was first proposed by (Fleck et al., 1996).
Our implementation of particle filtering largely follows
the traditional approaches for condensation as presented by
(Black & Jepson, 1998; Deutscher et al., 2000; Isard & Blake, 1996; Sidenbladh et al., 2000) and others.
Using the hierarchical multi-scale structure of the hand models,
however, we adapted the layered sampling approach (Sullivan et al., 1999)
and used a coarse-to-fine search strategy to improve the
computational efficiency, here by a factor of two.
The proposed approach builds on several
of these works and is novel in that it combines a
hierarchical object model with image features at multiple scales
and particle filtering for robust tracking and recognition.
For more details about the algorithmic aspects underlying
the tracking and recognition components in the current system,
see (Laptev & Lindeberg, 2000).
The work is carried out as a collaboration project
between the Computational Vision and Active Perception
Laboratory (CVAP) and the Center for User-Oriented IT-Design at KTH,
where CVAP provides expertise on computer vision,
while CID provides expertise on human-computer-interaction.
In the development of new forms of human-computer interfaces,
it is of central importance that user studies are carried out
and that the interaction is tested in prototype systems as early as possible.
Computer vision algorithms for gesture recognition will be developed
by CVAP, and will be used in prototype systems in scenarios defined
in collaboration with CID.
User studies for these scenarios will then be designed and
performed by CID, to guide further developments.
Black, M. & Jepson, A. (1998),
A probabilistic framework for matching temporal trajectories:
Condensation-based recognition of gestures and expressions, in `Fifth
European Conference on Computer Vision', Freiburg, Germany, pp. 909-924.
Bretzner, L. & Lindeberg, T. (1998),
Use your hand as a 3-D mouse or relative orientation from extended
sequences of sparse point and line correspondences using the affine trifocal
tensor, in H. Burkhardt & B. Neumann, eds, `Fifth European
Conference on Computer Vision', Vol. 1406 of Lecture Notes in Computer
Science, Springer Verlag, Berlin, Freiburg, Germany, pp. 141-157.
Bretzner, L. & Lindeberg, T. (1999),
Qualitative multi-scale feature hierarchies for object tracking, in
M. Nielsen, P. Johansen, O. F. Olsen & J. Weickert, eds, `Proc. 2nd
International Conference on Scale-Space Theories in Computer Vision', Vol.
1682, Springer Verlag, Corfu, Greece, pp. 117-128.
Cipolla, R., Okamoto, Y. & Kuno, Y. (1993),
Robust structure from motion using motion parallax,
in `Fourth International Conference on Computer Vision', Berlin,
Germany, pp. 374-382.
Cipolla, R. & Pentland, A., eds (1998),
Computer vision for human-computer interaction,
Cambridge University Press, Cambridge, U.K.
Crowley, J. & Sanderson, A. (1987),
`Multiple resolution representation and probabilistic matching of 2-d
gray-scale shape', IEEE Transactions on Pattern Analysis and Machine
Intelligence 9(1), 113-121.
Cui, Y. & Weng, J. (1996),
View-based hand segmentation and hand-sequence recognition with complex
backgrounds, in `13th International Conference on Pattern Recognition',
Vienna, Austria, pp. 617-621.
Deutscher, J., Blake, A. & Reid, I. (2000),
Articulated body motion capture by annealed particle
filtering, in `CVPR'2000', Hilton Head, SC, pp. II:126-133.
Fleck, M., Forsyth, D. & Bregler, C. (1996),
Finding naked people, in `Fourth European
Conference on Computer Vision', Cambridge, UK, pp. II:593-602.
Freeman, W. T. & Weissman, C. D. (1995),
Television control by hand gestures, in `Proc.
Int. Conf. on Face and Gesture Recognition', Zurich, Switzerland.
Heap, T. & Hogg, D. (1998),
Wormholes in shape space: Tracking through discontinuous changes in shape,
in `Sixth International Conference on Computer Vision', Bombay, India.
Isard, M. & Blake, A. (1996),
Contour tracking by stochastic propagation of conditional density, in
`Fourth European Conference on Computer Vision', Cambridge, UK.
Kuch, J. J. & Huang, T. S. (1995),
Vision based hand modelling and tracking for virtual teleconferencing and
telecollaboration, in `Proc. 5th International Conference on Computer
Vision', Cambridge, MA, pp. 666-671.
Laptev, I. & Lindeberg, T. (2000),
Tracking of multi-state hand models using particle filtering and a hierarchy
of multi-scale image features, Technical Report ISRN KTH/NA/P-00/12-SE,
Dept. of Numerical Analysis and Computing Science, KTH, Stockholm, Sweden.
Lindeberg, T. (1993),
`Detecting salient
blob-like image structures and their scales with a scale-space primal sketch:
A method for focus-of-attention', International Journal of Computer
Vision 11(3), 283-318.
Lindeberg, T. (1998),
`Feature detection with
automatic scale selection', International Journal of Computer Vision
30(2), 79-116.
MacCormick, J. & Isard, M. (2000),
Partitioned sampling, articulated objects, and interface-quality hand
tracking, in `Sixth European Conference on Computer Vision', Dublin,
Ireland, pp. II:3-19.
Maggioni, C. & Kämmerer, B. (1998),
Gesturecomputer - history, design and applications,
in R. Cipolla & A. Pentland, eds, `Computer vision for
human-computer interaction', Cambridge University Press, Cambridge, U.K.,
Pavlovic, V. I., Sharma, R. & Huang, T. S. (1997),
`Visual interpretation of hand gestures for
human-computer interaction: A review', IEEE Trans. Pattern Analysis and
Machine Intell. 19(7), 677-694.
Rehg, J. M. & Kanade, T. (1995),
Model-based tracking of self-occluding articulated objects, in `Fifth
International Conference on Computer Vision', Cambridge, MA, pp. 612-617.
Shokoufandeh, A., Marsic, I. & Dickinson, S. (1999),
`View-based object recognition using saliency maps',
Image and Vision Computing 17(5/6), 445-460.
Sidenbladh, H., Black, M. & Fleet, D. (2000),
Stochastic tracking of 3d human figures using 2d
image motion, in `Sixth European Conference on Computer Vision',
Dublin, Ireland, pp. II:702-718.
Sullivan, J., Blake, A., Isard, M. & MacCormick, J. (1999),
Object localization by Bayesian
correlation, in `Seventh International Conference on Computer Vision',
Corfu, Greece, pp. 1068-1075.
Triesch, J. & von der Malsburg, C. (1996),
Robust classification of hand postures against
complex background, in `Proc. Int. Conf. on Face and Gesture
Recognition', Killington, Vermont, pp. 170-175.
Appendix A: Computational modules in the prototype system
This appendix gives a more detailed description of the algorithms
underlying the different computational modules in the prototype
system for hand gesture recognition outlined in section 4.
In contrast to the main text, this presentation assumes
knowledge about computer vision.
For each image, a set of blob and ridge features is detected.
The idea is that the palm of the hand gives rise to a blob at
a coarse scale, each one of the fingers gives rise to a ridge
at a finer scale, and each finger tip gives rise to a fine scale blob.
Figure 5 shows an example of such image
features computed from an image.
Figure 5: The result of computing blob features and ridge features
from an image of a hand. (a) circles and ellipses corresponding
to the significant blob and ridge features extracted from an image of a hand;
(b) selected image features corresponding to the palm, the fingers and the
finger tips of a hand; (c) a mixture of Gaussian kernels associated with
blob and ridge features illustrating how the selected image features capture
the essential structure of a hand.
Technically, this feature detection step is based on the following
computational steps. The input colour image is transformed from
the RGB colour space to an Iuv colour space, with one intensity
channel and two chromatic channels.
A scale-space representation is computed of each colour channel
by convolution with Gaussian kernels,
and normalized differential expressions for blob and ridge detection
are computed and summed up over the channels at each scale.
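As a concrete illustration of the colour conversion step, the sketch below uses a standard opponent-colour transform with one intensity and two chromatic channels; the exact coefficients of the Iuv space used in the prototype system are an assumption here.

```python
import numpy as np

def rgb_to_iuv(rgb):
    """Convert an RGB image (H, W, 3) with float values in [0, 1] into
    an intensity/chromaticity representation.  This opponent-colour
    transform is one common choice with the structure described in the
    text; the system's actual coefficients may differ."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    I = (R + G + B) / 3.0      # intensity
    u = R - G                  # red-green chromaticity
    v = B - (R + G) / 2.0      # blue-yellow chromaticity
    return np.stack([I, u, v], axis=-1)
```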
Then, scale-space maxima of these normalized differential entities
are detected, i.e., points at which they assume local maxima
with respect to space and scale.
At each scale-space maximum, a second-moment matrix is computed
at an integration scale proportional to the scale
of the detected image feature.
To allow for the computational efficiency needed to reach real-time
performance, all the computations in the feature detection step
have been implemented within a pyramid framework.
Figure 5 shows such features,
illustrated by ellipses centred at the detected image positions,
with covariance determined by the detection scale and the
second-moment matrix normalized by its smallest eigenvalue.
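The sketch below illustrates the detection of blob features as scale-space maxima of the normalized Laplacian, in the spirit of (Lindeberg, 1998); it is simplified in that ridge features, the second-moment matrix computation and the pyramid implementation are omitted, and the threshold value is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def scale_space_blobs(channel, scales, threshold=0.01):
    """Detect blob features as maxima of the scale-normalized Laplacian
    over space and scale.  `scales` is a list of scale (variance)
    values t for the Gaussian scale-space representation."""
    responses = []
    for t in scales:
        L = gaussian_filter(channel, sigma=np.sqrt(t))
        Lxx = np.gradient(np.gradient(L, axis=1), axis=1)
        Lyy = np.gradient(np.gradient(L, axis=0), axis=0)
        responses.append(t * np.abs(Lxx + Lyy))   # gamma = 1 normalization
    responses = np.stack(responses)               # (num_scales, H, W)
    # points that are local maxima over both space and scale
    is_max = (responses == maximum_filter(responses, size=3))
    blobs = []
    for k, y, x in zip(*np.where(is_max & (responses > threshold))):
        blobs.append((x, y, scales[k]))           # position and scale
    return blobs
```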
As mentioned above, an image of a hand can be expected to give
rise to blob and ridge features corresponding to the fingers
of the hand.
These image structures, together with information about their
relative orientation, position and scale, can be used for
defining a simple but discriminative view-based model of a hand.
Thus, we represent a hand by a set of blob and ridge features
as illustrated in figure 6,
and define different states,
depending on the number of open fingers.
To model translations, rotations and scaling transformations of
the hand, we define a parameter vector X = (x, y, s, α, l),
which describes the global position (x, y), the size s
and the orientation α of the hand in the image,
together with its discrete posture state l.
This parameter vector uniquely identifies the hand configuration
in the image, and estimation of X from image sequences corresponds
to simultaneous hand tracking and recognition.
Figure 6: Feature-based hand models in different states.
The circles and ellipses correspond to blob and ridge features.
When aligning models to images, the features are translated,
rotated and scaled according to the parameter vector X.
When tracking human faces and hands in images,
the use of skin colour has been demonstrated to
be a powerful cue. In this work, we explore similarity
to skin colour in two ways:
- For defining candidate regions (masks) for searching for hands.
- For computing a probabilistic measure of any pixel being skin coloured.
To delimit regions in the image for searching for hands,
an adaptive histogram analysis of colour information is performed.
For every image, a histogram is computed over the chromatic
(u, v)-components of the colour space.
In this (u, v)-space a coarse search region has been
defined, where skin coloured regions are likely to be.
Within this region, blob detection is performed,
and the blob most likely to correspond to skin colour
is selected. The support region of this blob in colour
space is backprojected into the image domain, which
results in a number of skin coloured regions.
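A minimal sketch of this histogram analysis is given below; the coarse search region and the fixed neighbourhood used as the blob support are illustrative assumptions, whereas the actual system performs blob detection in the histogram.

```python
import numpy as np

def skin_mask_by_backprojection(u, v, bins=32,
                                value_range=((-1.0, 1.0), (-1.5, 1.5))):
    """Build a 2-D histogram over the chromatic (u, v) components, pick
    the strongest peak inside a coarse skin-colour search region, and
    backproject its support into the image domain."""
    hist, u_edges, v_edges = np.histogram2d(u.ravel(), v.ravel(),
                                            bins=bins, range=value_range)
    # coarse search region in (u, v)-space (illustrative: reddish hues)
    search = np.zeros_like(hist, dtype=bool)
    search[bins // 2:, :] = True
    peak = np.unravel_index(np.argmax(np.where(search, hist, 0.0)),
                            hist.shape)
    # per-pixel histogram bin indices
    iu = np.clip(np.digitize(u, u_edges) - 1, 0, bins - 1)
    iv = np.clip(np.digitize(v, v_edges) - 1, 0, bins - 1)
    # support of the selected histogram blob, here a fixed neighbourhood
    mask = (np.abs(iu - peak[0]) <= 2) & (np.abs(iv - peak[1]) <= 2)
    return mask        # candidate skin-coloured regions in the image
```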
Figure 7 shows an example of
a region of interest computed in this way,
which is used as a guide for subsequent processing.
Figure 7: To delimit the regions in space in which to perform
recognition of hand gestures, an initial computation
of regions of interest is carried out, based on
adaptive histogram analysis.
This illustration shows the behaviour of the histogram
based colour analysis for a detail of a hand.
In the system, however, the algorithm operates on overview images.
(a) original image,
(b) histogram over chromatic information,
(c) backprojected histogram blob giving a hand mask,
(d) results of blob detection in the histogram.
For exploring colour information
in this context, we compute a probabilistic colour
prior in the following way:
- Hands were segmented manually from the background for
approximately 30 images, and two-dimensional histograms
over the chromatic information were accumulated
for skin regions and for the background.
- These histograms were summed up and normalized to unit mass.
- Given these training data, the probability of any measured
image point with colour values (u, v)
being skin colour
was estimated as the ratio

  P(skin | u, v) = p(u, v | skin) / (p(u, v | skin) + p(u, v | background)),

assuming equal prior probabilities for skin and background.

For each hand model, this prior is evaluated at a number of image
positions, given by the positions of the image features.
Figure 8 shows the result of computing
a map of this prior for an image with a hand.
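In code, such a prior can be tabulated from the two normalized histograms as sketched below, under the equal-prior assumption stated above; the bin indices are assumed to be computed as in the earlier histogram sketch.

```python
import numpy as np

def make_skin_prior(skin_hist, bg_hist, eps=1e-8):
    """Turn (u, v)-histograms accumulated over manually segmented skin
    and background regions into a lookup table with the probability of
    each colour bin being skin, assuming equal priors."""
    p_skin = skin_hist / (skin_hist.sum() + eps)
    p_bg = bg_hist / (bg_hist.sum() + eps)
    return p_skin / (p_skin + p_bg + eps)

def skin_probability_map(iu, iv, prior_table):
    """Evaluate the prior at the per-pixel bin indices (iu, iv)."""
    return prior_table[iu, iv]
```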
Figure 8: Illustration of the effect of the colour prior.
(a) original image,
(b) map of the probability of skin colour
at every image point.
Tracking and recognition of a set of object models in time-dependent
images can be formulated as the maximization of a posterior probability
distribution over model parameters, given a sequence of input images.
To estimate the states of object models in this respect, we follow the
approach of particle filtering to propagate hypotheses over time.
Particle filters aim at estimating and propagating the posterior probability
p(X_t, V_t | Ĩ_t) over time, where X_t and V_t
are the static and dynamic model parameters and Ĩ_t = (I_1, ..., I_t) denotes the
observations up to time t. Using Bayes' rule, the posterior at time t can
be evaluated according to

  p(X_t, V_t | Ĩ_t) = k p(I_t | X_t, V_t) p(X_t, V_t | Ĩ_{t-1}),

where k is a normalization constant that does not depend on X_t, V_t,
and p(I_t | X_t, V_t) denotes the likelihood that a model configuration
gives rise to the image I_t.
Using a first-order Markov assumption, the dependence on observations before time t
can be removed, and the model prior p(X_t, V_t | Ĩ_{t-1})
can be evaluated using the posterior from the previous time step and the distribution
for the model dynamics according to

  p(X_t, V_t | Ĩ_{t-1}) = ∫ p(X_t, V_t | X_{t-1}, V_{t-1}) p(X_{t-1}, V_{t-1} | Ĩ_{t-1}) dX_{t-1} dV_{t-1}.
Since the likelihood function is usually multi-modal and cannot be expressed in
closed form, the approach of particle filtering is to approximate the posterior
by a set of particles (hypotheses),
weighted according to their likelihoods.
The posterior for a new time moment is then computed by re-populating
the particles with high weights and predicting them according to
their dynamic model.
To use particle filtering for tracking and recognition of hierarchical
hand models, we let the state variable X = (x, y, s, α, l)
denote the position (x, y),
the size s, the orientation α
and the posture l
of the hand model, while
V denotes the time derivatives of the first four variables.
Then, we assume that the likelihood p(I_t | X_t, V_t)
does not explicitly depend on V_t, and approximate it
for each particle by the feature-based likelihood measure described below.
Concerning the dynamics
of the hand model, a constant velocity model is adopted,
where deviations from the constant velocity assumption
are modelled by additive Brownian motion,
from which the distribution p(X_t, V_t | X_{t-1}, V_{t-1}) is obtained.
To capture changes in hand postures, the discrete state parameter l
is allowed to vary randomly for a certain fraction
of the particles at each time step.
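A minimal sketch of one such condensation step is given below; the noise magnitudes and the fraction of posture-switching particles are illustrative assumptions, and likelihood_fn stands for the feature-based likelihood described at the end of this appendix.

```python
import numpy as np

def particle_filter_step(X, V, l, w, likelihood_fn, n_postures,
                         noise=(2.0, 2.0, 0.5, 0.05), switch_frac=0.2,
                         rng=None):
    """One condensation step: resample particles according to their
    weights, predict with a constant-velocity model plus Brownian
    noise, let a fraction of the particles switch posture at random,
    and re-weight by the image likelihood.

    X: (n, 4) continuous parameters (x, y, s, alpha) per particle
    V: (n, 4) their time derivatives; l: (n,) postures; w: (n,) weights
    """
    rng = rng or np.random.default_rng()
    n = len(w)
    idx = rng.choice(n, size=n, p=w / w.sum())     # resampling
    X, V, l = X[idx].copy(), V[idx].copy(), l[idx].copy()
    X += V                                         # constant-velocity prediction
    X += rng.normal(0.0, noise, size=X.shape)      # Brownian deviations
    V += rng.normal(0.0, noise, size=V.shape)
    flip = rng.random(n) < switch_frac             # random posture changes
    l[flip] = rng.integers(0, n_postures, size=flip.sum())
    w = np.array([likelihood_fn(x, li) for x, li in zip(X, l)])
    return X, V, l, w / w.sum()
```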
At every time moment, the hand tracker based on particle
filtering evaluates and compares a set of object hypotheses.
From these hypotheses, which represent the probability
distribution of the object, the most likely object state is estimated.
When the tracking is started, all particles are first distributed
uniformly over the parameter space.
After each time step of particle filtering,
the best hypothesis of a hand is estimated,
by first choosing the most likely hand posture
and then computing the mean of the continuous parameters
for that posture.
Hand posture number l* is chosen such that π_{l*} is maximal,
where π_l is the sum of the weights of all particles with state l.
Then, the continuous parameters (x, y, s, α) are estimated by computing a weighted
mean of all the particles in state l*.
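In code, this estimation step could look as follows, with the particle arrays laid out as in the preceding sketch.

```python
import numpy as np

def best_hypothesis(X, l, w, n_postures):
    """Choose the posture with the largest total particle weight, then
    estimate (x, y, s, alpha) as the weighted mean of the particles in
    that posture."""
    totals = np.array([w[l == k].sum() for k in range(n_postures)])
    l_star = int(np.argmax(totals))
    in_state = (l == l_star)
    params = np.average(X[in_state], axis=0, weights=w[in_state])
    return l_star, params
```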
Figure 9 shows an example of model selection
performed in this way.
To compute the likelihood of an object model given a set of image
features, we represent each feature in the model and in the data by
a Gaussian kernel
having the same mean and
covariance as the image feature.
Thus, the model and the data are each represented by a Gaussian
mixture model, given by the sum of the Gaussian kernels of their
respective features, where the normalization factor of each kernel
is chosen so as to give scale invariance.
To compare the model with the data, we integrate the square difference
between their associated Gaussian mixture models,
which after a few approximations can be simplified to a sum of
terms φ_i, where each φ_i denotes the square difference between
the Gaussian associated with a model feature
and the Gaussian of its nearest
data feature.
An important property of this penalty term is that it allows for
simultaneous localization and recognition of the object.
Figure: Two model features (solid ellipses) and two data features
(dashed ellipses) in (a) are compared by evaluating the square
difference of the associated Gaussian functions. While the overlapping
model (A) and data (B) features cancel each other, the mismatched
features (C and D) increase the square difference in (b).
After a few calculations, it can be shown that each such square
difference can be expressed in closed form.
Given these entities, the likelihood of a model feature is then
computed as a decreasing function of the square difference,
where a parameter controls the sharpness of the
likelihood function, and this entity is multiplied
by the prior on skin colour.
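The sketch below puts these pieces together, using the closed-form integral of the product of two Gaussians; the exponential form of the likelihood and the value of the sharpness parameter are assumptions consistent with the description above, and the multiplication by the skin-colour prior is left out.

```python
import numpy as np

def gauss_overlap(m1, S1, m2, S2):
    """Closed-form integral of the product of two 2-D Gaussians:
    integral G(x; m1, S1) G(x; m2, S2) dx = G(m1 - m2; 0, S1 + S2)."""
    S = S1 + S2
    d = m1 - m2
    return np.exp(-0.5 * d @ np.linalg.solve(S, d)) / \
        (2.0 * np.pi * np.sqrt(np.linalg.det(S)))

def square_diff(model_feat, data_feats):
    """phi = integral (G_model - G_data)^2 dx against the nearest data
    feature, expanded into three overlap terms.  Features are pairs
    (mean, covariance) of the associated Gaussian kernels."""
    m, S = model_feat
    best = np.inf
    for md, Sd in data_feats:
        phi = (gauss_overlap(m, S, m, S)
               - 2.0 * gauss_overlap(m, S, md, Sd)
               + gauss_overlap(md, Sd, md, Sd))
        best = min(best, phi)
    return best

def model_likelihood(model_feats, data_feats, sharpness=10.0):
    """Likelihood of a model hypothesis as an exponentially decreasing
    function of the summed square differences (sharpness assumed)."""
    total = sum(square_diff(f, data_feats) for f in model_feats)
    return np.exp(-sharpness * total)
```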