Society of Motion Picture and Television Engineers, New England Section

Cruising at Hyperspeed
Recent Progress in Computer Graphics

By Robert Lamm
(From the SIGGRAPH/NE Newsletter)

The speed with which our field is advancing is truly stunning...

First and foremost is the consumerization of the field. Check out the latest SIGGRAPH reel (playing Sunday, February 26, 1-3pm at the Computer Museum). In the past, most submissions came from programmers and others with access to specialized hardware. Now, most entries come from artsy types using off-the-shelf tools. Even the high-end pieces created on Cray computers could have been emulated, at least in end-effect, by home users on their own systems. The only reason the expensive hardware was used was to speed execution or to crunch lots of research data prior to visualization.

Although PC/Mac/Amigas don't have the rapid screen refresh rates and instantaneous touch-and-feel that high-end systems do, application developers for these platforms have found ways to improvise just about every other high-end feature into their products. Some examples:

-Autodesk 3D-Studio, probably the most popular DOS-based 3D animation package, has added inverse kinematics. This intuitive and powerful hierarchical control scheme makes it much easier to pose and animate anthropomorphic objects. It was pioneered a few years ago by SoftImage on SGI workstations. (Meanwhile, SoftImage has been bought by Microsoft, and a Windows version is supposedly under development.) 3D-Studio also emulates faster workstation rendering times by searching the host computer's network for free machines and recruiting them to work on the task in parallel.

-The introduction of video cards like the Truevision Targa 2000, which can play back 640x480, 24-bit, 60-field animations in real time. (This is better than professional Betacam tape.)

-The large selection of specialized add-on software packages for things like automatic vegetation generation, Michael-Jackson-style morphing, collision detection, Newtonian mechanics in a gravitational environment, etc. These modules, which the user can pick and choose among to customize a setup, give the consumer more features, in an environment better tailored to the task at hand, than most professionals have.
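For readers curious what inverse kinematics actually computes, here is a minimal sketch: a toy two-link chain solved by cyclic coordinate descent, written in Python. This illustrates the general technique only; it is not 3D-Studio's or SoftImage's actual implementation. Instead of animating each joint angle by hand, you specify where the end of the chain should go and let the solver work out the angles.

```python
import math

def ccd_ik(lengths, target, iters=100):
    """Cyclic coordinate descent: adjust each joint in turn so the
    end of the chain swings toward the target point."""
    angles = [0.0] * len(lengths)

    def forward(angles):
        # Forward kinematics: positions of every joint in the chain.
        x = y = a = 0.0
        pts = [(0.0, 0.0)]
        for seg, th in zip(lengths, angles):
            a += th
            x += seg * math.cos(a)
            y += seg * math.sin(a)
            pts.append((x, y))
        return pts

    for _ in range(iters):
        for j in reversed(range(len(lengths))):
            pts = forward(angles)
            jx, jy = pts[j]          # the joint being adjusted
            ex, ey = pts[-1]         # the end effector
            # Rotate joint j so the effector lines up with the target.
            cur = math.atan2(ey - jy, ex - jx)
            want = math.atan2(target[1] - jy, target[0] - jx)
            angles[j] += want - cur
    return angles, forward(angles)[-1]

# Drag the tip of a two-unit-link arm to a reachable point.
angles, end = ccd_ik([1.0, 1.0], (1.2, 0.8))
```

In a real package the same loop runs over a whole skeleton, with joint limits, which is what makes posing a figure feel like grabbing its hand and pulling.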

And the consumer's been improving too! The average high-school kid is more computer literate and proficient than most of us over-30 professionals. Five years ago, I spent most of my time explaining to professional animators how icons and pull-down menus worked. Now, I spend most of my time comparing notes on the latest introductions with relative laypeople.

Not that the professional side hasn't been advancing too...

The big boys seem to be concentrating on two areas: Making the animation process more intelligent and rendering the imagery out in real time.

Real-time rendering makes the user interface much simpler and changes the graphics experience from the passive viewing of an animation to an active interaction with the animated environment. At its furthest state of development: Virtual Reality.

But just as important is the parallel effort to replace keyframing and other clumsy command structures with a more efficient, intuitive user-interface. Only then can the viewer interact with his world as quickly as he can experience it: Eventually, he'll be issuing commands like 'Fetch a soda' and 'Take out the Trash' to objects that have a certain amount of intelligence.

Several people have created models with some of these interpretational capabilities: The SIGGRAPH reel at the Computer Museum includes an animation from NYU of a dancer who runs, jumps, pirouettes, etc. in real-time to about a dozen simple commands like ‘come forward’, ‘turn’, etc. In the same show (I forgot who made it) was a similar animation of a man who unpacks a robot arm and then plays chess with it. The narrator explained that only ten commands were issued to choreograph this scene. (The robot arm wins.)

The effort to implement these higher-order command structures is progressing on many fronts. A company in Canada is working on software that will automatically animate speech: A generic model of a face was created, and all the basic phonemes were digitized (from a real actor), as well as emotional expressions like angry, jealous, jolly, etc. The animator feeds the script and emotional prompts into the computer, and the computer makes the animated character mouth the words in the proper manner. The system seemed to work pretty well with faces that were more-or-less humanoid; the company was working on applying the formula to faces with more extreme topologies, like Donald Duck's.
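The core of such a speech-animation system can be pictured as a lookup from phonemes to mouth shapes, sampled onto a keyframe track. The Python sketch below is purely illustrative — the shape names, phoneme codes, and fixed timing are invented for the example and are not the Canadian company's actual data:

```python
# Invented phoneme-to-mouth-shape table (illustration only).
MOUTH_SHAPES = {
    "AA": "open-wide", "IY": "smile-narrow", "UW": "round-pucker",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "teeth-on-lip", "V": "teeth-on-lip",
}

def keyframes(phonemes, frames_per_phoneme=4):
    """Emit (frame, mouth_shape) keyframes for a phoneme sequence;
    unknown phonemes fall back to a neutral mouth."""
    track = []
    for i, ph in enumerate(phonemes):
        shape = MOUTH_SHAPES.get(ph, "neutral")
        track.append((i * frames_per_phoneme, shape))
    return track

track = keyframes(["M", "AA", "M", "AA"])   # "mama"
```

A production system would interpolate between shapes and layer the emotional expression on top, but the script-in, keyframes-out pipeline is the same idea.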

And at MIT, there's a project with an animated puppet that looks at you and interprets what you want it to do: If you point left, it'll move left, if you jump up and down, it'll do that too. If you pat it on the head (you're composited onto the screen with it), it'll smile and purr, and if you move quickly, it'll get scared and run away...

Did I say Intelligent?

Nevertheless, all these objects have essentially been preprogrammed, with some sort of keyframing, on how to go about their tasks. Can an object figure out how to do a new action totally by itself? Professor Joe Marks at Harvard has been working on this very problem: He's creating simple objects (hinges, etc.) and letting them learn how to do tasks, such as walking, on their own. The objects are programmed with the degrees of freedom they possess as well as the desired goal, and then let loose. Some of them learn very quickly, and a few have even come up with more efficient ways of getting from point A to point B than the programmers had foreseen.
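This kind of learning loop can be illustrated in miniature: propose a random change to the gait parameters, keep it if the object travels further, discard it otherwise. The Python sketch below substitutes a toy fitness function for a real physics simulation; the peak location and all numbers are invented for illustration, and this is not Marks' actual method:

```python
import random

def stride(params):
    # Toy fitness standing in for a physics simulation: how far one
    # gait cycle carries the walker. Peaks at swing angles (0.6, 0.3).
    a, b = params
    return 1.0 - (a - 0.6) ** 2 - (b - 0.3) ** 2

def learn(iters=2000, seed=1):
    """Random hill climbing over the walker's two degrees of freedom:
    mutate the gait, keep mutations that walk further."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    best_d = stride(best)
    for _ in range(iters):
        trial = [p + rng.gauss(0, 0.1) for p in best]
        d = stride(trial)
        if d > best_d:
            best, best_d = trial, d
    return best, best_d

gait, dist = learn()
```

The interesting part, as in the Harvard work, is that nothing tells the object *how* to move — only what counts as success.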

How soon will all this reach the consumer? One can already buy programs that will make human models walk along specific paths, etc. for a couple of hundred dollars...

The Application Explosion

One aspect of the computer graphics revolution that’s been overlooked is the tremendous growth in scientific and industrial visualization. Most industrial products are now designed with CAD/CAE/CAM programs, and many of these can do stress analysis, dynamic modeling, etc. This segment alone is probably larger than the rest of the computer graphics industry combined. And the advent of electronic sensors capable of feeding multipoint measurements every fraction of a second has forced the creation of many sophisticated scientific/medical/industrial visualizations just to present all this data in some comprehensible way.

One of the leaders in this field is right here in Waltham, MA: Advanced Visual Systems. Their software can plot and analyze everything from finite-element structural analysis to cloud cover over Argentina.

There has also been an explosion in the number of people doing sophisticated image analysis on regular PCs. The biggest factor in this advance has been the introduction of cards capable of real-time, 640x480, 24-bit digitizing. These cards, such as the Truevision Targa 2000, can digitize live video and export it as a standard bitmap file sequence that can be analyzed with off-the-shelf software like Adobe Photoshop! The complicated videodisk and single-frame digitizing setups formerly necessary to do this have been rendered totally obsolete.

And then there's Virtual Reality

The entertainment side of virtual reality gets all the attention, but it’s the scientific aspect that seems to pose some of the most interesting challenges.

MIT is developing a very interesting medical VR application: Brain surgeons will be able to ‘look’ inside the head of a patient with virtual-reality glasses when aiming the radiation beams that kill brain tumors. This way, they'll be able to see the exact position of the tumor and where the beams are intersecting. (Only at the intersection is the radiation sufficient to kill: This prevents non-cancerous tissue from being killed.) The transparent imagery of the brain is generated in real-time from CAT-scan and MRI data.
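The intersecting-beam principle is simple arithmetic: each beam deposits a dose that is harmless on its own, and only tissue crossed by more than one beam accumulates a lethal total. Here's a sketch on a toy voxel grid, written in Python; the dose numbers and geometry are invented for illustration, and this is not the MIT system's code:

```python
DOSE_PER_BEAM = 0.6     # each beam alone is sub-lethal
LETHAL = 1.0            # threshold at which tissue is killed

def dose_map(size, beams):
    """Accumulate dose on a size x size grid; each beam is the set of
    voxels its path crosses."""
    grid = [[0.0] * size for _ in range(size)]
    for beam in beams:
        for (x, y) in beam:
            grid[y][x] += DOSE_PER_BEAM
    return grid

size = 5
row_beam = {(x, 2) for x in range(size)}   # horizontal beam
col_beam = {(2, y) for y in range(size)}   # vertical beam
grid = dose_map(size, [row_beam, col_beam])

# Only voxels crossed by both beams exceed the lethal threshold.
lethal = [(x, y) for y in range(size) for x in range(size)
          if grid[y][x] >= LETHAL]
```

Seeing exactly where that intersection sits relative to the tumor is what the VR glasses are for.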

Another MIT project attempts to deal with the time delay in telemedicine applications: It’s hard to manipulate grips and other remote-controls when the feedback is delayed by a second or more. The application of fuzzy logic makes these mechanisms feel much more natural and easy to use.
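One way fuzzy logic can soften a delayed control loop — sketched here as a guess at the general idea, not MIT's implementation — is to blend a gentle gain and an aggressive gain according to a fuzzy "how large is the error" membership, so stale feedback never produces a sudden yank on the grip. All thresholds and gains below are invented for illustration:

```python
def membership_large(err, lo=0.2, hi=1.0):
    """Fuzzy membership (0..1): how 'large' is the position error?
    Ramps linearly from 0 at lo to 1 at hi."""
    if err <= lo:
        return 0.0
    if err >= hi:
        return 1.0
    return (err - lo) / (hi - lo)

def correction(err, gain_small=0.1, gain_large=0.6):
    """Blend a gentle and an aggressive gain by the fuzzy membership,
    so a second-old error produces a smooth, proportionate pull."""
    m = membership_large(abs(err))
    gain = (1 - m) * gain_small + m * gain_large
    return gain * err

# Small error -> gentle nudge; large error -> firmer pull.
```

A crisp controller would switch gains at a hard threshold and chatter when the delayed error hovers near it; the fuzzy blend glides between the two regimes instead.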

Of course, you may never have to hold a grip at all: A project at the MIT Media Lab is developing a sensor that can detect your hand’s position from its effect on the sensor’s electromagnetic field...

And then there’s the desire to render VR graphics with greater complexity. How far have we gotten? TASC, in Reading, MA, has a demo of Sarajevo that not only plots detailed satellite terrain imagery in 3-D, it also calculates the cloud cover in real time with convection physics! (The March 16 SMPTE meeting will be at TASC; contact Phil Ozek at 942-2000, x2518 for details.)

Where to from here?

The personal computer industry seems to have pegged its most ambitious hopes on the desktop video market. Consequently, a lot of the significant new features promised in Windows 95 (aka Windows 2000) involve higher-quality motion video. In particular, better accommodation for hardware compression cards should make it possible to get better digitizing/playback rates out of hard drives. This will cut down on compression artifacts, etc.

Inexpensive MPEG playback cards also promise to revolutionize the industry: Computers will soon be able to play VHS-quality video off a CD-ROM.

As user interfaces continue to simplify and the basic video environment becomes more capable, most computer graphics users won't need any computer graphics knowledge at all. The CG field will break up into narrow specialties like medical imagery, telepresence, network transmission/compression, character animation, etc. The general-purpose graphics/animation capabilities we've been struggling to port onto the computer will be completely taken for granted. Let's hope the people who made this possible won't be!

Bob Lamm is Manager at CYNC Corp., a video dealership specializing in multimedia applications. He can be reached at (617) 277-4317, cync@world.std.com.

Posted: March 1995
Bob Lamm, SMPTE/New England Newsletter/Web Page Editor
blamm@cync.com