Human-Computer Interaction: Communication, Cooperation, and Application Design

By Hans-Jörg Bullinger; Jürgen Ziegler

components in the face image, and DCT features are calculated as the DCT energies for the horizontal, diagonal and vertical directions.
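As a concrete illustration, the following sketch computes directional DCT energies for an image patch covering one facial component. It assumes SciPy's type-II DCT and NumPy; the grouping of coefficients into horizontal, vertical, and diagonal bands is an illustrative assumption rather than the authors' exact definition.

    import numpy as np
    from scipy.fftpack import dct

    def dct_energies(patch):
        """Directional DCT energies of a grayscale patch around one facial component.

        The 2-D DCT is taken by applying the 1-D type-II DCT along both axes.
        Coefficients that vary only along one image axis are summed into the
        horizontal and vertical energies; the remaining (mixed) coefficients
        give the diagonal energy.
        """
        c = dct(dct(patch, axis=0, norm='ortho'), axis=1, norm='ortho')
        c[0, 0] = 0.0                          # discard the DC term (mean intensity)
        horizontal = np.sum(c[0, 1:] ** 2)     # frequency only along the horizontal axis
        vertical = np.sum(c[1:, 0] ** 2)       # frequency only along the vertical axis
        diagonal = np.sum(c[1:, 1:] ** 2)      # mixed frequencies in both axes
        return horizontal, vertical, diagonal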

Body posture at each time instant is estimated from the trinocular images acquired by the three CCD cameras observing the person from three directions [5]. First, significant points such as the top of the head and the fingertips are located by analyzing the contour of the silhouette, which is extracted from the background by thresholding each image. Main joints such as the elbows and knees are located with a learning procedure, because these points are difficult to estimate by simple contour analysis. By evaluating the appropriateness of the three views, two views are selected for each significant point so that its 3D coordinates can be calculated by triangulation.
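For the triangulation step, a minimal sketch is given below. It assumes each camera has been calibrated to a 3x4 projection matrix; the linear (DLT) formulation is one standard way to intersect the two viewing rays and is used here only as an illustration, not necessarily the authors' implementation.

    import numpy as np

    def triangulate(P1, P2, pt1, pt2):
        """Recover the 3D position of a significant point from two selected views.

        P1, P2   : 3x4 camera projection matrices of the two selected views.
        pt1, pt2 : (u, v) image coordinates of the same significant point.
        Returns the least-squares 3D point in Euclidean coordinates.
        """
        A = np.vstack([
            pt1[0] * P1[2] - P1[0],
            pt1[1] * P1[2] - P1[1],
            pt2[0] * P2[2] - P2[0],
            pt2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)    # homogeneous solution = last right singular vector
        X = Vt[-1]
        return X[:3] / X[3]            # convert from homogeneous coordinates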


2.3 Real-time reproduction of facial expressions and body postures

To reproduce facial expressions in the 3D model, the DCT features obtained by the method described in Section 2.2 are used to deform the face model; the deformation follows the authors' reproduction method based on plastic (artistic) anatomy [6]. To reproduce body postures, the 3D coordinates of the significant points are applied to the corresponding points of the 3D model, and the remaining vertices of the model are displaced by an interpolation method.
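The interpolation of the remaining vertices is not specified in detail here; the sketch below uses inverse-distance weighting of the control-point displacements as one plausible scheme, purely as an assumption for illustration.

    import numpy as np

    def interpolate_displacements(vertices, control_points, control_disp, power=2.0):
        """Displace all mesh vertices from the displacements of the significant points.

        vertices       : (N, 3) rest positions of the model vertices.
        control_points : (K, 3) rest positions of the significant points.
        control_disp   : (K, 3) measured displacements of the significant points.
        Each vertex receives a distance-weighted blend of the control displacements.
        """
        displaced = vertices.astype(float).copy()
        for i, v in enumerate(vertices):
            d = np.linalg.norm(control_points - v, axis=1)
            if np.any(d < 1e-9):                   # vertex coincides with a control point
                displaced[i] += control_disp[np.argmin(d)]
                continue
            w = 1.0 / d ** power                   # nearer control points dominate
            displaced[i] += (w[:, None] * control_disp).sum(axis=0) / w.sum()
        return displaced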


3 Facial expression recognition from time-sequential images

In general, humans display different facial expressions sequentially. Computers therefore need to spot each facial expression in a video sequence and recognize the spotted expression. The authors have developed an HMM (Hidden Markov Model) based method for this purpose [2].

First, the motion around the right eye and the mouth is estimated with a gradient-based optical flow algorithm. A 2-D Fourier transform is then applied, and the lower-frequency coefficients are extracted as a 15-dimensional feature vector. The temporal sequence of feature vectors is matched against the HMMs representing the facial expressions to be recognized, so that each facial expression is spotted and recognized. The method basically uses left-to-right HMMs, and each HMM state is assigned to a condition of the facial muscles: neutral, contracting, apex, or relaxing. The rightmost (final) state has transitions back to the leftmost (initial) state of not only its own category but also the other categories. By thresholding the forward probability of the apex state, the duration corresponding to a facial expression can be spotted in the video sequence, and at the same time the expression category for the spotted duration can be recognized. As shown in Fig. 2, the duration corresponding to each facial expression in the sequence can be spotted and recognized accurately.
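To make the spotting step concrete, the sketch below thresholds the forward probability of the apex state of a left-to-right HMM. It assumes that the per-frame emission log-likelihoods have already been computed from the 15-dimensional Fourier feature vectors; the state layout, threshold, and parameter names are illustrative assumptions.

    import numpy as np

    def forward_log_prob(log_emission, log_trans, log_start):
        """Forward algorithm in log space.

        log_emission : (T, S) log P(observation at t | state s).
        log_trans    : (S, S) log transition matrix of the left-to-right HMM.
        log_start    : (S,) log initial state distribution.
        Returns the (T, S) array of log forward probabilities alpha[t, s].
        """
        T, S = log_emission.shape
        alpha = np.empty((T, S))
        alpha[0] = log_start + log_emission[0]
        for t in range(1, T):
            prev = alpha[t - 1][:, None] + log_trans   # alpha[t-1, i] + log A[i, j]
            alpha[t] = np.logaddexp.reduce(prev, axis=0) + log_emission[t]
        return alpha

    def spot_expression(alpha, apex_state, threshold):
        """Frames whose apex-state forward probability exceeds the threshold
        are taken as the duration of the spotted facial expression."""
        return np.where(alpha[:, apex_state] > threshold)[0]

In a continuous sequence, the transitions from the final state back to the initial states of all categories allow the same forward pass to keep running across consecutive expressions, so spotting and recognition proceed simultaneously.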
