
Video: How Pixar's Renderman changed the movie industry (14 minutes)

Underseer

[YOUTUBE]dz7_QqqFt9w[/YOUTUBE]

video description said:
You may not have heard of RenderMan, but you've definitely seen the incredible effects it has made possible. From Star Trek to Star Wars, from the Abyss to Terminator 2, and across all of Pixar's animated features -- RenderMan has changed the world of special effects. Read much more about Pixar on WIRED.com

Double hyphen? Wouldn't you expect Wired—one of the pre-eminent tech periodicals—to figure out how to create a proper em dash in HTML? Knowing how and when to do it is really, really basic typesetting.

Before Pixar, creating CGI required writing code. A Pixar programmer created software with an interface far more intuitive for artists, one that required much less technical know-how to create and manipulate the data that makes up a 3D computer-generated image. Not only did this make CGI animated movies possible and computer-generated special effects economically viable, it was also used in 27 of the last 30 winners of the Oscar for visual effects.
 
Pixar was founded in 1986 as a spinoff of Lucasfilm's Graphics Group, which was itself founded in 1979 out of the New York Institute of Technology Computer Graphics Lab, established in 1974.

Computer-aided design goes back to the 1960s, though 3D rendering may be later than that. See History of computer animation, Timeline of computer animation in film and television, and SuperPaint, a 1970s animation workstation that likely did 2D compositing only.

A famous 3D demo model, the Utah teapot, dates back to 1975, and it has been a common test shape ever since.
The teapot shape contained a number of elements that made it ideal for the graphics experiments of the time: it was round, contained saddle points, had a genus greater than zero because of the hole in the handle, could project a shadow on itself, and could be displayed accurately without a surface texture.
 
As for following the evolution of computer graphics, a good place to do so is videogame graphics. They have always been far behind what one could make with contemporary high-end computers, but that is because games have to run on much more affordable hardware.

One of the first videogames ever was Spacewar!, written in 1962 for the PDP-1 computer, and one of the first mass-market ones was Pong, released in 1972 by Atari. These games used very simple shapes, and Pong and its mass-market contemporaries were built not from computer chips but from discrete components.

The first computer-chip game consoles appeared in the mid-1970s, and computer-chip arcade machines appeared around the same time. They were soon joined by computer games on early desktop computers.

Their graphics were nearly all two-dimensional raster graphics, made by compositing 2D images. The images for characters and the like are often called sprites. This paradigm has continued to be nearly universal in the lower end of game graphics, though almost always with higher display resolution and greater color depth.

Animation is done by going through a sequence of sprites. For instance, one does walking by showing a character's legs in different parts of the walk cycle.
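
Here is a minimal sketch of that bookkeeping in Python (the frame names and timing are made up): the walk cycle is a list of sprite frames, and the one to draw is picked from the elapsed time.

[CODE=python]
# Minimal sketch of sprite-sheet animation: the walk cycle is a list of
# frames, and the current frame is picked from elapsed time.

WALK_FRAMES = ["walk_0", "walk_1", "walk_2", "walk_3"]  # one full cycle
FRAME_TIME = 0.1  # seconds each frame is shown (illustrative value)

def current_frame(elapsed_seconds: float) -> str:
    """Return which sprite frame to draw after walking this long."""
    index = int(elapsed_seconds / FRAME_TIME) % len(WALK_FRAMES)
    return WALK_FRAMES[index]

# At t = 0.25 s we are on the third frame of the cycle:
assert current_frame(0.25) == "walk_2"
[/CODE]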

A few games used vector graphics, however. In most, like Tempest and Star Castle, the graphics were still 2D, but Battlezone was an exception, having simple wireframe 3D models of tanks and missiles and obstacles.


There are two basic kinds of view direction in 2D-graphics games: vertical and horizontal. Oblique is essentially a variation of vertical that gives side views as well as top-down views. Horizontal-viewpoint games are often called sidescrollers, because the view scrolls sideways as one's character moves. Some sidescrollers have multiple backgrounds, with the farther ones scrolling more slowly than the nearer ones, producing a parallax effect.
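
A minimal sketch of that parallax bookkeeping (layer names and scroll factors made up): each layer's draw offset is the camera position times that layer's factor, so far layers crawl while near ones race.

[CODE=python]
# Minimal sketch of parallax scrolling: each background layer scrolls at
# a fraction of the camera's speed, so distant layers appear slower.

LAYERS = [
    ("far mountains", 0.2),  # scrolls at 20% of camera speed
    ("near hills",    0.5),
    ("foreground",    1.0),  # locked to the camera
]

def layer_offsets(camera_x: float) -> dict[str, float]:
    """Horizontal draw offset of each layer for a given camera position."""
    return {name: camera_x * factor for name, factor in LAYERS}

print(layer_offsets(100.0))
# {'far mountains': 20.0, 'near hills': 50.0, 'foreground': 100.0}
[/CODE]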
 
In the early 1990s there appeared the first computer games with 3D gameworlds, games like Ultima Underworld and Wolfenstein 3D (both 1992).

I recall John Carmack once discussing the reasoning behind Wolf3D's restricted geometry, but I can't find it. So I will reconstruct what seems like the likely reasoning. A full-scale 3D gameworld would have everything in it be a 3D model, but that is very computationally expensive.

The first step is to avoid 3D-modeling all the fine detail, and instead to fake it with texture mapping: projecting an image file onto the model surface. Each bit of surface then gets the color of the bit of image that was projected onto it. That lets the geometry be much simpler.
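
A minimal sketch of a texture lookup, assuming nearest-neighbor sampling and normalized (u, v) coordinates in [0, 1):

[CODE=python]
# Minimal sketch of texture mapping: a surface point carries (u, v)
# coordinates that index into an image, and the point takes the color
# of the texel it lands on. Nearest-neighbor sampling only.

def sample_texture(texture, u: float, v: float):
    """Return the texel color at normalized coordinates (u, v)."""
    height, width = len(texture), len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A 2x2 checkerboard "image"; the surface point at (0.75, 0.25) lands
# in the upper-right texel.
checker = [[(0, 0, 0), (255, 255, 255)],
           [(255, 255, 255), (0, 0, 0)]]
print(sample_texture(checker, 0.75, 0.25))  # (255, 255, 255)
[/CODE]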

The next step is turning the curved surfaces into sets of polygons, a process called tessellation. This makes rendering simpler, and it makes hit testing much easier as well.
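
A minimal sketch of tessellation, taking a unit sphere as the curved surface: sample it on a latitude/longitude grid and split each grid quad into two triangles.

[CODE=python]
import math

def tessellate_sphere(n_lat: int, n_lon: int):
    """Return (vertices, triangles) approximating a unit sphere."""
    verts = []
    for i in range(n_lat + 1):
        theta = math.pi * i / n_lat            # latitude: pole to pole
        for j in range(n_lon + 1):
            phi = 2 * math.pi * j / n_lon      # longitude: full circle
            verts.append((math.sin(theta) * math.cos(phi),
                          math.sin(theta) * math.sin(phi),
                          math.cos(theta)))
    tris = []
    for i in range(n_lat):
        for j in range(n_lon):
            a = i * (n_lon + 1) + j            # corners of one grid quad
            b, c, d = a + 1, a + n_lon + 1, a + n_lon + 2
            tris.append((a, b, d))             # split the quad into
            tris.append((a, d, c))             # two triangles
    return verts, tris

verts, tris = tessellate_sphere(8, 16)
print(len(verts), "vertices,", len(tris), "triangles")  # 153 vertices, 256 triangles
[/CODE]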

The next step is choosing a projection from 3D world space to 2D view space. The best sorts are orthographic (linear) and perspective (fractional linear: linear divided by linear), because they turn lines into lines. Both are easy to invert for projected polygons, giving polygon-internal coordinates as a linear or fractional-linear function of view coordinates.

The projection most often used in 3D videogames is perspective, which reduces 3D to 2D by drawing lines from a point (the eye) and finding where they cross the view plane.
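
A minimal sketch of that projection, with the eye at the origin looking down the z axis and the view plane at distance d; dividing by z is what makes distant things smaller.

[CODE=python]
def project(point, d=1.0):
    """Project a 3D point (x, y, z), z > 0, onto the view plane z = d."""
    x, y, z = point
    return (d * x / z, d * y / z)

# Two points with the same sideways offset; the farther one projects
# closer to the center of the view:
print(project((1.0, 0.0, 2.0)))   # (0.5, 0.0)
print(project((1.0, 0.0, 10.0)))  # (0.1, 0.0)
[/CODE]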

This fractional-linear inversion requires one reciprocal operation per pixel, and a reciprocal is much more computationally expensive than addition, which is all one needs for sprite rendering.
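
To make that concrete, here is a sketch of the standard perspective-correct interpolation across one horizontal span of pixels: u/z and 1/z are linear in screen space, so they can be stepped by addition, but recovering u at each pixel takes a divide.

[CODE=python]
def span_texture_coords(u0, z0, u1, z1, n_pixels):
    """Texture coordinate u at each pixel of a span whose endpoints
    have texture coordinate/depth (u0, z0) and (u1, z1)."""
    coords = []
    for i in range(n_pixels):
        t = i / (n_pixels - 1)
        u_over_z = (1 - t) * (u0 / z0) + t * (u1 / z1)  # linear on screen
        inv_z    = (1 - t) * (1 / z0)  + t * (1 / z1)   # linear on screen
        coords.append(u_over_z / inv_z)                 # one divide per pixel
    return coords

# A span receding from depth 1 to depth 4: u advances quickly over the
# near pixels and slowly over the far ones, not at a constant rate.
print([round(u, 2) for u in span_texture_coords(0.0, 1.0, 1.0, 4.0, 5)])
# [0.0, 0.08, 0.2, 0.43, 1.0]
[/CODE]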


John Carmack and others then discovered how to go further. For a horizontal view direction, one can greatly simplify the rendering of a vertical polygon. When one works out the math, one finds that one has to take a reciprocal only once for each vertical line in the polygon, and that rendering each such line requires only addition, just like sprite rendering. Likewise, for a horizontal polygon, one needs to take a reciprocal only once for each view-space horizontal line, and that rendering that line also requires only addition.
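
A minimal sketch of the vertical-wall case (screen height and focal length made up): the depth is constant down a screen column, so the reciprocal is hoisted out of the per-pixel loop.

[CODE=python]
SCREEN_H = 200   # pixels (illustrative)
FOCAL = 1.0      # view-plane distance (illustrative)
WALL_H = 1.0     # wall height in world units

def wall_column_coords(z: float):
    """Texture v coordinate for each pixel of one wall column at depth z."""
    inv_z = 1.0 / z                                    # the only reciprocal
    column_h = int(SCREEN_H * FOCAL * WALL_H * inv_z)  # projected height
    v_step = WALL_H / column_h                         # constant per-pixel step
    v, coords = 0.0, []
    for _ in range(column_h):
        coords.append(v)                               # sample the texture at v
        v += v_step                                    # addition only, per pixel
    return coords

print(len(wall_column_coords(2.0)))  # a wall at depth 2 spans 100 pixels
[/CODE]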

Thus, one can get very fast rendering by restricting one's game world to horizontal and vertical surfaces. That is fine for the world geometry, but it does not work well for characters, scenery, and the like. The solution there is to make those items sprites, or billboards as they are sometimes called, rendered with the size scaling appropriate for their distance. An interesting consequence is that one has to choose which sprite to draw by view direction as well as by animation state, and that requires more artwork.
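
A minimal sketch of both billboard tricks: on-screen size scales with the reciprocal of the depth, and the artwork is picked from the angle between the sprite's facing and the direction to the viewer (eight directions here, as many games of the era used).

[CODE=python]
def billboard_size(world_size: float, depth: float, focal: float = 1.0) -> float:
    """On-screen size of a sprite of world_size at a given depth."""
    return world_size * focal / depth

def sprite_direction(facing_deg: float, to_viewer_deg: float) -> int:
    """Index (0-7) into an 8-direction sprite sheet."""
    relative = (to_viewer_deg - facing_deg) % 360.0
    return int((relative + 22.5) // 45.0) % 8

print(billboard_size(1.0, 4.0))      # 0.25: four times as far, a quarter the size
print(sprite_direction(90.0, 100.0)) # 0: viewer nearly dead ahead, show the front view
[/CODE]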

Wolf3D had a further simplification: its game world was laid out on a grid, with walls along the grid lines, and with floors and ceilings at constant heights. id Software's next game, Doom (1993), relaxed these restrictions. It still had only horizontal and vertical surfaces, and also sprite characters, but its floors and ceilings could have arbitrary heights, and its walls arbitrary orientations.

Wolf3D and especially Doom were very successful in creating 3D-model worlds that one could wander around in virtual-reality fashion, and other game makers made similar sorts of games: Bungie with Pathways into Darkness (1993, Wolf3D-ish) and its Marathon series (1994, Doom-ish), LucasArts with Dark Forces (1995, Doom-ish), and 3D Realms with Duke Nukem 3D (1996, Doom-ish).
 
John Carmack and id Software did not rest with Wolf3D and Doom. They went on to make Quake in 1996. It eliminated the orthogonality restriction: all surfaces could have arbitrary orientations. Its game characters were 3D models instead of sprites, though with visibly polygonal surfaces; the texture mapping was fairly successful in disguising the models' blockiness. Animations were done by going through sets of vertex positions.
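
A minimal sketch of that kind of vertex animation: each keyframe stores a full list of vertex positions, and in-between poses come from linear interpolation between two keyframes.

[CODE=python]
def lerp_frame(frame_a, frame_b, t: float):
    """Blend two keyframes, each a list of (x, y, z) vertices, 0 <= t <= 1."""
    return [tuple((1 - t) * a + t * b for a, b in zip(va, vb))
            for va, vb in zip(frame_a, frame_b)]

stand  = [(0.0, 0.0, 0.0), (0.0, 0.0, 1.0)]   # two-vertex "model", standing
crouch = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.5)]   # same model, crouched
print(lerp_frame(stand, crouch, 0.5))  # [(0.0, 0.0, 0.0), (0.0, 0.0, 0.75)]
[/CODE]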

Also in 1996, Core Design came out with the first Tomb Raider game. Its characters were all 3D models, including the titular character, Lara Croft herself. These models were animated with skeletal or parametric animation, with each part having its own "bone". Each bone has a position and an orientation relative to its parent bone, forming a tree structure. Animation would be done by going through sets of bone angles.
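
A minimal sketch of such a bone tree in 2D (bone names made up): each bone's angle is relative to its parent, so rotating the upper arm swings the forearm along with it.

[CODE=python]
import math

def bone_endpoints(bones, parent_angle=0.0, parent_pos=(0.0, 0.0)):
    """bones: nested dicts of {angle, length, children}; yields (name, endpoint)."""
    for name, bone in bones.items():
        angle = parent_angle + bone["angle"]       # orientation is relative
        end = (parent_pos[0] + bone["length"] * math.cos(angle),
               parent_pos[1] + bone["length"] * math.sin(angle))
        yield name, end
        yield from bone_endpoints(bone.get("children", {}), angle, end)

# A tiny arm: the forearm's angle is stored relative to the upper arm.
arm = {"upper_arm": {"angle": math.pi / 2, "length": 1.0,
                     "children": {"forearm": {"angle": -math.pi / 2, "length": 1.0}}}}
for name, pos in bone_endpoints(arm):
    print(name, tuple(round(c, 2) for c in pos))
# upper_arm (0.0, 1.0)
# forearm (1.0, 1.0)
[/CODE]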

The world geometry, however, used a rather noticeable horizontal grid. This was done to make it easier to judge distances. That geometry's floors and ceilings did have arbitrary heights and could be tilted. The grid stayed in some successor Tomb Raider games, though it has disappeared from more recent ones.

Also, in the earlier games each bone was associated with a rigid model part, making the models something like marionettes, but that separateness was disguised by making the parts overlap. Lara Croft did not have a ponytail in the first of the TR games, but instead a bun. She did have one in all the later ones, a ponytail that would blow in the wind and that would point upward if Lara fell: some nice game physics.
 
The Wolf-Doom-Quake sort of engine is good for indoor scenes and small outdoor ones, but not for large outdoor ones. For that, one needs a terrain engine, and one of the first 3D-model ones was in Bungie's Myth: The Fallen Lords (1997). It used a heightmap for its landscape, an image file whose pixel values translated into heights. It had some 3D-model objects, like buildings, but all its characters were sprites. A successor, Myth III: The Wolf Age (2001), used 3D-model characters instead.
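
A minimal sketch of a heightmap lookup, with bilinear interpolation between grid points so that the terrain is continuous rather than stair-stepped:

[CODE=python]
def terrain_height(heightmap, x: float, y: float, scale: float = 1.0) -> float:
    """Interpolated height at (x, y); heightmap is a 2D grid of pixel values."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    h00 = heightmap[y0][x0]        # the four surrounding grid points
    h10 = heightmap[y0][x0 + 1]
    h01 = heightmap[y0 + 1][x0]
    h11 = heightmap[y0 + 1][x0 + 1]
    top    = h00 * (1 - fx) + h10 * fx
    bottom = h01 * (1 - fx) + h11 * fx
    return scale * (top * (1 - fy) + bottom * fy)

# A 2x2 patch rising toward one corner; halfway across, a quarter up.
patch = [[0, 0],
         [0, 100]]
print(terrain_height(patch, 0.5, 0.5))  # 25.0
[/CODE]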

The next step was to hybridize these two types of engine, with indoor engines for the insides of buildings and terrain engines for the outsides, as in Starsiege: Tribes (1998), Drakan: Order of the Flame (1999), and Halo: Combat Evolved (2001).


So the evolution of video-game graphics has had this trajectory:
  • 1970 - 1980: simple geometric shapes to 2D shapes
  • 1980 - 1990: improvement of 2D shapes
  • 1990 - 2000: 2D shapes to 3D shapes
  • 2000 onward: improvement of 3D shapes
 