RES: computing the interactions between real and virtual objects in video sequences
Abstract
The possibilities for dynamic interaction between people and machines, created by
the combination of virtual reality and communication networking, raise new and
interesting problems at the intersection of two domains (among others):
computer vision and computer graphics.
In this paper, a technical solution to one of these problems is presented:
automating the mixing of real and synthetic objects in the same animated video
sequence. Current approaches usually involve mainly 2D-based effects and rely
heavily on human expertise and interaction. We aim to achieve close
interaction between 3D-based analysis and synthesis techniques in order to
compute the interactions between a real scene, captured in a sequence of
calibrated images, and a computer-generated environment.
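The paper does not detail its pipeline in this abstract, but one direct consequence of working with calibrated images is worth illustrating: once each frame's camera projection matrix is known, any point of a virtual 3D object can be projected into the real image, which is the basic operation needed to composite synthetic objects consistently over the sequence. The sketch below is an illustrative assumption, not the authors' RES method; the matrix values and the project helper are hypothetical.

    # Minimal sketch (assumed, not from the paper): projecting a virtual 3D point
    # into a calibrated real frame using a 3x4 camera projection matrix.
    import numpy as np

    def project(P, X):
        """Project 3D points X of shape (N, 3) to pixel coordinates (N, 2)."""
        Xh = np.hstack([X, np.ones((X.shape[0], 1))])  # homogeneous coordinates
        x = (P @ Xh.T).T                               # (N, 3) homogeneous image points
        return x[:, :2] / x[:, 2:3]                    # perspective division

    # Toy calibration: intrinsics K and a camera at the origin looking down +Z.
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    Rt = np.hstack([np.eye(3), np.zeros((3, 1))])      # [R | t]
    P = K @ Rt

    # Pixel position of a virtual vertex placed 5 units in front of the camera.
    print(project(P, np.array([[0.5, 0.2, 5.0]])))

In a real image sequence, P would change per frame according to the recovered camera motion, so the virtual object stays registered with the real scene as the camera moves.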
Reference
@INPROCEEDINGS{jptnr95,
author = {Pierre Janc\`ene and Fabrice Neyret and Xavier Provot
and Jean-Philippe Tarel and Jean-Marc V\'ezien
and Christophe Meilhac and Anne V\'erroust},
title = {RES: computing the interactions between real and virtual
objects in video sequences},
booktitle = {Second IEEE Workshop on Networked Realities},
address = {Boston, Massachusetts (USA)},
month = oct,
year = {1995},
pages = {27--40},
note = {http://perso.lcpc.fr/tarel.jean-philippe/publis/nr95.html}
}
PDF file (1234 KB)
(c) IEEE