Society of Motion Picture and Television Engineers, New England Section

Virtual Reality Movies
Our October 27 Meeting at the MIT Media Lab

By Robert Lamm

Our joint meeting with SIGGRAPH was a real eye-opener! Three MIT professors are working on separate technologies to make virtual-reality movies possible in the broadest sense of the term...

Prof. Mike Bove
Object-Based Image Transmission

Professor Bove thinks it's illogical to render TrueType fonts, bitmaps, 3-D models, and other source objects into video frames just to recompress them as MPEG for transmission. He’d rather send the raw objects to the receiver and synthesize the image there. This broadens the display options considerably: pictures can be rendered at different resolutions, frame rates, aspect ratios, even viewpoints, as the viewer prefers or the display requires. Low-resolution or small displays can zoom in on the action, while movie-quality displays can show more of the surrounding landscape. All from the same 'script'.
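
To give a flavor of the idea, here's a minimal sketch in Python; the object names, fields, and numbers are my own inventions for illustration, not Prof. Bove's system. The sender ships scene objects once, and each receiver composes its own picture at whatever resolution, frame rate, or viewpoint suits it.

    # Purely illustrative sketch of object-based transmission: the sender
    # ships scene objects once; each receiver renders them at whatever
    # resolution and viewpoint suits its display.
    from dataclasses import dataclass

    @dataclass
    class SceneObject:
        name: str            # e.g. "gallery background", "actor"
        depth: float         # distance from the default viewpoint
        footprint: float     # rough on-screen size at depth 1.0

    def render_frame(objects, width, height, zoom=1.0):
        """Compose a (textual) frame description locally; a real receiver
        would rasterize bitmaps and 3-D models instead."""
        visible = sorted(objects, key=lambda o: -o.depth)   # back to front
        return [f"{o.name}: drawn at {width}x{height}, "
                f"apparent size {o.footprint * zoom / o.depth:.2f}"
                for o in visible]

    scene = [SceneObject("gallery background", 10.0, 8.0),
             SceneObject("statue", 4.0, 2.0),
             SceneObject("visitor", 3.0, 1.8)]

    # The same 'script' rendered for a small zoomed-in display and a large one:
    print(render_frame(scene, 320, 240, zoom=2.0))
    print(render_frame(scene, 1920, 1080, zoom=1.0))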

Of course, this requires a fair amount of computational power at the receiver. But Prof. Bove pointed out that the kinds of hardware one needs, such as DVEs, aren’t much more technologically advanced than the PCs on most people’s desks, so this power could realistically find its way into receivers eventually.

But since no such receiver is available today, he built his own: a computer with several 1 GB memory boards, a general-purpose Intel i960 processor, and eight special-purpose processors for specific classes of mathematical operations, such as pixel arranging, convolutions, etc.

And he showed a video of what they’ve done so far: a man wanders into a gallery and sees a statue. The statue comes alive and walks out, leaving the original viewer frozen. The image was rendered at what appeared to be about 10 fps at 640x480 from several computer-generated bitmaps of the gallery that were texture-mapped onto a 3-D model of the space. The rendering engine is fairly intelligent: it pre-renders the static background and foreground objects and composites the moving bitmaps (the person, who was shot in front of a blue screen) in between. The person was originally shot with cameras at three different angles, so we could have watched the action from two other viewpoints if we had wished.
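
Here's a rough illustration, in Python, of that compositing scheme as I understood it; the chroma-key threshold and pixel format are my own simplifications, not the Media Lab's code.

    # Static background and foreground layers are rendered once; only the
    # moving, blue-screen-keyed object is composited per frame in between.
    def key_out_blue(pixel):
        r, g, b = pixel
        # crude chroma key: treat strongly blue pixels as transparent
        return None if (b > 200 and r < 80 and g < 80) else pixel

    def composite(background, actor_frame, foreground):
        out = list(background)                  # pre-rendered once
        for i, px in enumerate(actor_frame):    # moving object, per frame
            keyed = key_out_blue(px)
            if keyed is not None:
                out[i] = keyed
        for i, px in enumerate(foreground):     # pre-rendered once
            if px is not None:                  # None = transparent
                out[i] = px
        return out

    # Tiny three-pixel "frames" just to show the layering order:
    bg  = [(10, 10, 10)] * 3
    fg  = [None, None, (200, 200, 200)]
    act = [(0, 0, 255), (180, 60, 60), (0, 0, 255)]   # blue screen except center
    print(composite(bg, act, fg))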

Prof. Bove's group is also working on making the input process more object-oriented: They didn’t have to create the 3-D model of the gallery. Instead, they photographed a real gallery from several points of view and the computer extrapolated the dimensionality of the space by comparing the pictures. The group is also working on algorithms to separate moving objects from background environments, make qualitative assessments of picture quality (shaky camerawork, fast zooms, etc.) and interpolate a continuity of viewpoints from a fixed number of actual camera locations.
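
As a toy example of the moving-object problem, the simplest approach is per-pixel differencing against a reference frame; the group's actual algorithms are surely more sophisticated, but the sketch below (grey levels and threshold are invented) shows the basic idea.

    # Separate moving objects from a static background by differencing
    # the current frame against a reference frame of the empty scene.
    def moving_mask(reference, frame, threshold=30):
        """Return True where a pixel differs enough from the background."""
        return [abs(a - b) > threshold for a, b in zip(reference, frame)]

    background = [12, 15, 14, 13, 200, 11]   # grey levels of an empty scene
    current    = [13, 16, 90, 95, 198, 12]   # same scene with something moving
    print(moving_mask(background, current))  # -> [False, False, True, True, False, False]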

Prof. Glorianna Davenport
Multithreaded Movies

But let's make the movie more interesting. Instead of watching a linear script from a single point of view, let’s see the movie from any of several characters’ viewpoints, taking their personalities and interests into account. For example, you can watch a traditional story about a boy who falls in love with a girl. But if the girl interests you more, you can see things from her point of view instead. Or from the girl’s mother’s angle. Each ‘character’ will tell a different story from the same video database depending on their motivation as characters: The boy might focus on the girl, the girl might discuss her hesitation about dating a new guy, and the mother might concentrate on how this relationship is affecting her own relationship with her parents.

Prof. Davenport's authoring interface for such multi-plot, multi-point-of-view worlds is surprisingly straightforward. The author defines the characters according to interests (keywords for the time being) and motivation (narrowly interested, argumentative, etc.); describes the video clips according to subject, order of precedence in a sequence, whether a clip supports or counters story themes, resolves a particular situation, etc.; and then lets the viewer loose. The viewer chooses a character’s viewpoint and gets a tour of the various plotlines that character would find important.
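
Here's a toy Python sketch of how such a selection might work; the character profiles, clip tags, and scoring are invented for illustration and are not Prof. Davenport's actual interface.

    # Characters carry interest keywords, clips carry subject tags, and the
    # playback engine picks a sequence for whichever character is chosen.
    characters = {
        "boy":    {"interests": {"girl", "first date"},         "motivation": "narrow"},
        "girl":   {"interests": {"hesitation", "new guy"},      "motivation": "reflective"},
        "mother": {"interests": {"family", "own relationship"}, "motivation": "argumentative"},
    }

    clips = [
        {"id": 1, "subjects": {"girl", "first date"},         "precedence": 1},
        {"id": 2, "subjects": {"hesitation", "new guy"},      "precedence": 2},
        {"id": 3, "subjects": {"family", "own relationship"}, "precedence": 3},
        {"id": 4, "subjects": {"first date", "hesitation"},   "precedence": 2},
    ]

    def playlist(viewpoint):
        profile = characters[viewpoint]
        relevant = [c for c in clips if c["subjects"] & profile["interests"]]
        return [c["id"] for c in sorted(relevant, key=lambda c: c["precedence"])]

    print(playlist("boy"))     # the boy's version of the story
    print(playlist("mother"))  # the mother's, from the same clip database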

Another database management tool being developed by Prof. Davenport’s group allows documentary filmmakers to automatically compile their movies without tedious logging and editing! They’re working on the design of a camcorder that will record production notes, positioning information, and lens settings with the shot footage. The filmmaker then goes to his computer, creates a rough outline of the subjects he wants to cover, and the computer fills in appropriate shots, down to cutaway closeups. The filmmaker still has to do some weeding and trimming, but the tedious logging and rough assembly stages are eliminated...
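
A minimal sketch of the rough-assembly idea, assuming the camcorder has already logged each shot's subject and framing; the field names and matching rule are mine, not the group's design.

    # Build a first cut by matching the filmmaker's outline against the
    # shot log, pulling a main shot plus any closeup cutaway per topic.
    shot_log = [
        {"tape": "A1", "tc": "01:02:10", "subject": "harbor",              "framing": "wide"},
        {"tape": "A1", "tc": "01:05:33", "subject": "harbor",              "framing": "closeup"},
        {"tape": "B2", "tc": "00:14:02", "subject": "fisherman interview", "framing": "medium"},
        {"tape": "B2", "tc": "00:20:45", "subject": "nets",                "framing": "closeup"},
    ]

    outline = ["harbor", "fisherman interview", "nets"]

    def rough_assembly(outline, log):
        cut = []
        for topic in outline:
            matches = [s for s in log if s["subject"] == topic]
            if matches:
                cut.append(matches[0])
                cut += [s for s in matches[1:] if s["framing"] == "closeup"]
        return cut

    for shot in rough_assembly(outline, shot_log):
        print(shot["tape"], shot["tc"], shot["subject"], shot["framing"])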

But the greatest freedom to wander through a video database, virtual-reality style, comes from a museum exhibit being developed on an SGI Onyx workstation. It’s an interactive video playback system that automatically displays icons of related subjects next to the footage being played. As the user selects material to view, the computer notes areas of interest and makes intelligent guesses about what related subjects to offer in the sidebars. It's still keyword-dependent at the moment, but by incorporating camcorder-recorded production notes, voice recognition, and image interpretation, the computer will eventually be able to figure out what the footage is about and how to present it to the viewer with minimal prompting from the filmmaker.
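
In keyword terms, the sidebar logic might look something like this sketch; the titles, keywords, and scoring are invented for illustration only.

    # Count the keywords in what the viewer has already chosen, then offer
    # the unwatched clips that overlap most with that interest profile.
    from collections import Counter

    library = {
        "glassblowing demo": {"craft", "glass", "furnace"},
        "kiln construction": {"craft", "furnace", "building"},
        "artist interview":  {"glass", "artist", "history"},
        "museum tour":       {"history", "building"},
    }

    def suggest(watched, k=2):
        profile = Counter(kw for title in watched for kw in library[title])
        unseen = [t for t in library if t not in watched]
        scored = sorted(unseen, key=lambda t: -sum(profile[kw] for kw in library[t]))
        return scored[:k]

    print(suggest(["glassblowing demo"]))   # kiln and artist clips come first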

How user-friendly is all this? Prof. Davenport thinks the general public will eventually use it to sort through their home movies...

Prof. Steven Benton
Holographic Displays

But a virtual-reality movie isn’t really virtual unless the scene is convincingly real. So throw out that CRT, folks: Prof. Benton wants to show us true 3-D holographic movies...

The technical challenges are considerable: Holograms are created from interference patterns generated by lasers, and the pattern grain is on the order of microns. To accurately reproduce these, one would need a screen with 100 billion pixels!
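
A back-of-envelope check, with assumed numbers since the talk didn't give exact dimensions, shows how quickly micron-scale fringes add up.

    # Rough arithmetic only: a micron fringe pitch over a modest screen
    # already demands on the order of 10^11 samples.
    grain = 1e-6                   # assumed fringe pitch: about one micron
    screen_side = 0.3              # assumed screen size: 0.3 m x 0.3 m
    samples_per_side = screen_side / grain     # 300,000 samples per side
    total = samples_per_side ** 2              # about 9e10, roughly 100 billion
    print(f"{total:.1e} samples")              # -> 9.0e+10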

Now, there are a couple of ways to bring this down a bit: First, the holographic effect is limited to the horizontal dimension. (The only time people appreciate vertical parallax is when they jump up and down with delight at their first hologram!) He also cut the resolution to video levels and the image size to three by five inches. This brings the required pixel count down to only about thirty-six megapixels...

That's still quite an image to render! Prof. Benton uses a Connection Machine and Mike Bove's computer to generate 30-frame-per-second imagery from standard 3-D models. They start with a wireframe model and render points on the surface. Experimentation has shown that points a fifth of a millimeter apart are optimal: any closer and you get moire patterns; any farther apart and the picture gets blurry. These surface points are used to calculate the interference pattern, and this data is pushed out to the display.
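
For the curious, the core computation looks roughly like this: each rendered surface point contributes a fringe whose phase depends on its distance to every sample along a hololine. The wavelength, sample pitch, and geometry below are my own illustrative assumptions, not the group's actual parameters.

    # Sum point-source contributions into one line of the interference pattern.
    import math

    WAVELENGTH = 633e-9        # assumed red reference beam, in metres
    SAMPLE_PITCH = 0.5e-6      # assumed spacing of samples along the hololine

    def hololine(points, n_samples):
        line = []
        for i in range(n_samples):
            x = i * SAMPLE_PITCH
            value = 0.0
            for (px, pz, brightness) in points:       # point at (px, depth pz)
                r = math.sqrt((x - px) ** 2 + pz ** 2)
                value += brightness * math.cos(2 * math.pi * r / WAVELENGTH)
            line.append(value)
        return line

    # Two surface points 0.2 mm apart (the spacing mentioned above):
    pts = [(0.0, 0.05, 1.0), (0.2e-3, 0.05, 1.0)]
    pattern = hololine(pts, n_samples=1000)
    print(len(pattern), max(pattern))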

No, there aren't any thirty-six-megapixel LCD displays. They use a 2000-pixel acousto-optical modulator, a tellurium dioxide crystal that’s only half an inch long. It displays a small portion of the interference pattern, which is then positioned in image space by a set of revolving mirrors. This happens so fast that the eye thinks it’s seeing a whole 2456-Kpixel x 192-line image when in fact it’s only seeing a 2000-pixel x 18-line section being scanned across the image space.

I should mention that Prof. Benton has built a version with separate RGB imagers that can display color images.

The system isn't perfect: the mirrors aren't as well aligned as he'd like, and they're having problems getting the illumination even, eliminating small tiling overlaps, etc. But the basic concept is proven.

In fact, all of the systems shown to us were still clearly in the prototype stage. But the concepts have been validated, and as computational power increases and hardware is perfected, these systems will make it possible for viewers to realistically lose themselves in complex virtual worlds previously undreamed of.

Robert Lamm is Manager at CYNC Corp., a dealership specializing in professional video and multimedia equipment. He can be reached at (617) 277-4317, cync@world.std.com

Posted: January 1996
Bob Lamm, SMPTE/New England Newsletter/Web Page Editor
blamm@cync.com