Virtual vision advocates developing visually and behaviorally realistic 3D synthetic environments to serve the needs of computer vision research. It is especially well suited to studying large-scale camera networks. A virtual vision simulator capable of generating realistic synthetic imagery of real-life scenes, involving pedestrians and other objects, is the sine qua non of virtual vision research. Here we develop a distributed, customizable virtual vision simulator capable of simulating pedestrian traffic in a variety of 3D environments. Virtual cameras deployed in these synthetic environments generate imagery using state-of-the-art computer graphics techniques, including realistic lighting effects and shadows. The synthetic imagery is fed into a visual analysis pipeline that currently supports pedestrian detection and tracking; the results of this analysis can then be used for subsequent processing, such as camera control, coordination, and handoff. Notably, our visual analysis pipeline is designed to handle real-world imagery without any modification, so it closely mimics the performance of the visual analysis routines one might deploy on physical cameras. Our virtual vision simulator is realized as a collection of modules that communicate with each other over the network; consequently, we can deploy the simulator across a network of computers, allowing us to simulate much larger camera networks and much more complex scenes than is otherwise possible.