Rendering in computer generated movies
Seminar: How to Make a PIXAR Movie
Technische Universität München
[Figure not included in this excerpt]
Figure 1: Improvements in rendering capabilities over the span of 20 years [CFS*18, p. 2, 15], [CJ16, p. 118]
Rendering is a challenging step in the making of modern computer-generated movies. There are many different approaches to rendering high-quality images, such as rasterization, ray tracing and path tracing. This state-of-the-art report explains the general rendering pipeline and compares real-time rasterization to the REYES algorithm used in movie production. Ray tracing, its extensions, problems and current solutions are discussed in detail. Finally, hybrid rendering systems and the performance of rendering systems over the years are reviewed.
1. Introduction

In modern computer generated movies, rendering remains one of the most important and difficult steps. With computing power constantly increasing, it becomes possible to achieve almost photorealistic computer generated images. Actors who have passed away can be digitally recreated, e.g. Peter Cushing in "Rogue One - A Star Wars Story", as seen in the rightmost picture in figure 1. Rendering, that is, the generation of a two-dimensional image based on three-dimensional objects, virtual cameras, lights, materials and so on [AMHH08, p. 11], can be approached in various different ways. In this paper the two most important techniques, rasterization and ray tracing, as well as hybrids, will be discussed. The general rasterization process will be explained and compared to the rasterization-based REYES algorithm that was used in Pixar movies. Ray tracing and path tracing, a more modern and sophisticated approach, will be discussed in detail. Hybrid variants of the two techniques are going to be explained using examples found in the movies Cars and A Bug's Life. Finally, the advantages and disadvantages of those approaches are compared regarding performance and production quality.
2. The Rendering Pipeline
In general, pipelines are used to achieve a speed-up by splitting a process into well-defined stages. The rendering pipeline of modern computer graphics applications differs only slightly, no matter which rendering technique is used. It is generally split into three conceptual stages: application, geometry and rasterizer [AMHH08, p. 12]. The first two stages are very similar for the rendering techniques described here; the last one, however, is quite different and the main focus of this paper.
2.1. Application Stage
In the application stage, the geometry that should be displayed, as well as various other necessary data, is transferred to the graphics processing unit (GPU). This stage is the connection between the main processor and main memory on one side and the GPU on the other. It also takes care of calculations that are not performed in any other stage, such as animations with transformation matrices [AMHH08, p. 15] or generating MIP maps of textures to reduce the amount of memory needed to render the scene by a large amount [CFS*18, p. 2].
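The storage cost of a MIP chain follows from a simple geometric series: each level holds a quarter of the texels of the previous one, so the full chain adds only about a third to the base image, while distant objects can be sampled from far smaller levels. A minimal sketch of the chain generation (the function names are illustrative, not taken from any cited renderer):

```python
def mip_chain(width, height):
    """Return the (width, height) of every MIP level down to 1x1."""
    levels = [(width, height)]
    while width > 1 or height > 1:
        # each level halves both dimensions (clamped at 1)
        width, height = max(1, width // 2), max(1, height // 2)
        levels.append((width, height))
    return levels

def mip_overhead(width, height):
    """Extra texels stored by the whole MIP chain, relative to the base level."""
    base = width * height
    total = sum(w * h for w, h in mip_chain(width, height))
    return (total - base) / base

levels = mip_chain(1024, 1024)      # 11 levels: 1024x1024 down to 1x1
overhead = mip_overhead(1024, 1024)  # close to the analytic limit of 1/3
```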
2.2. Geometry Stage

The geometry stage itself is split into several parts. The first step transforms the geometric objects into camera or eye space. This is done by transforming the points - also called vertices - that each primitive consists of, adjusting their size, rotation and position. In an intermediate step, they are placed in the scene; the scene itself is then rotated and positioned so that the camera sits at the scene's origin, looking in the direction of the negative Z-axis [AMHH08, p. 16, 2.3.1]. Next, the influence of the scene's lights on each vertex is calculated - also referred to as shading - depending on the vertex's material [AMHH08, p. 17, 2.3.2]. The following steps are usually skipped by ray tracing renderers. For rasterization-based renderers, the scene is then projected into a unit cube (the canonical view volume). Without a projection transformation, objects look the same no matter their distance to the camera; this corresponds to an orthographic camera, and even though this type is sometimes used, perspective cameras are more common. A perspective camera is closer to how humans perceive the world: objects appear smaller the further away they are [AMHH08, p. 18, 2.3.3]. After the projection transformation, triangles that are not inside the canonical view volume (CVV) are clipped against it. Clipping replaces vertices outside of the CVV with new ones, placed at the intersections of the CVV and the edges of the triangle [AMHH08, p. 19, 2.3.4]. Finally, the remaining vertices are transformed once more to screen space, scaling the unit cube to the final image size. Coordinates after this transformation represent pixel positions; they are now in so-called "screen coordinates" [AMHH08, p. 20, 2.3.5].
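The projection transformation described above can be sketched as a 4x4 matrix followed by the perspective divide. The sketch below assumes an OpenGL-style convention (camera at the origin, looking down the negative Z-axis, clip space divided by w); the function names are illustrative:

```python
import math

def perspective_matrix(fov_y_deg, aspect, near, far):
    """Projection matrix mapping the view frustum into the canonical
    view volume; the camera looks down the negative Z-axis."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def project(m, p):
    """Apply the projection and the perspective divide to a view-space point."""
    x, y, z = p
    v = [m[i][0] * x + m[i][1] * y + m[i][2] * z + m[i][3] for i in range(4)]
    w = v[3]  # w equals -z, the distance along the viewing direction
    return (v[0] / w, v[1] / w, v[2] / w)  # normalized device coordinates
```

Because of the divide by w, a point twice as far from the camera ends up at half the horizontal offset in the image, which is exactly the perspective foreshortening described in the text.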
2.3. Rasterizer Stage
The last stage before the final image output is the rasterizer stage. The transformed vertices are now tested for visibility and pixels are drawn according to the scene setup. This stage varies heavily based on the technique used and will be discussed in greater detail in sections 3, 4 and 5.
3. Rasterization

The process of splitting the geometry of the scene into pixels and giving them an appropriate color is called rasterization. This process occurs after all transformations are completed, when the final image can be computed. There are several ways to resolve the scene into pixels using rasterization, two of which will now be explained.
3.1. General Approach
The traditional version used in most real-time rendering applications is done in four stages: triangle setup, triangle traversal, pixel shading and merging [AMHH08, p. 22]. In the first step, the triangle setup, the data necessary for shading is computed [AMHH08, 2.4.1]. In the second step, triangle traversal (or scan conversion), each primitive (triangles, lines, etc.) is tested against each pixel for coverage. If the pixel is covered by the primitive, a fragment is generated. The data from the previous step is interpolated for each fragment based on the primitive type [AMHH08, 2.4.2]. This process is shown in figure 2. Each fragment is then processed by a so-called shader that computes the corresponding color. The common way is to use an image containing surface information to determine the color; this procedure is called texturing [AMHH08, 2.4.3]. In the final stage the processed fragment is tested for visibility. This is done using several buffers, most importantly one that contains the depth information of previously processed fragments. If the buffer already contains a fragment at that position, the fragment that is closer to the camera is kept. It must be noted that this algorithm needs to process opaque fragments before transparent ones to work correctly. If this is not the case, transparent fragment colors cannot be combined (blended) with opaque fragment colors properly, which results in a wrong final color [AMHH08, 2.4.4]. After all fragments have been processed, the image can be stored or displayed.
[Figure not included in this excerpt]
Figure 2: Rasterization visually explained (adapted from [scr])
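The triangle traversal and merging stages above can be sketched with edge functions: a pixel center is covered when it lies on the same side of all three triangle edges, and the resulting fragment survives only if it wins the depth test. A simplified sketch with flat shading and no perspective-correct interpolation (all names are illustrative):

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area of the parallelogram spanned by A->B and A->P;
    the sign tells which side of the edge the point P lies on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height, depth, color, tri_color):
    """Test every pixel center against the triangle's three edge functions;
    covered pixels become fragments, merged through a depth test."""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = tri
    area = edge(x0, y0, x1, y1, x2, y2)
    if area == 0:
        return  # degenerate triangle covers no pixels
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(x1, y1, x2, y2, px, py)
            w1 = edge(x2, y2, x0, y0, px, py)
            w2 = edge(x0, y0, x1, y1, px, py)
            if (w0 >= 0) == (w1 >= 0) == (w2 >= 0):  # same side of all edges
                # barycentric interpolation of depth across the fragment
                z = (w0 * z0 + w1 * z1 + w2 * z2) / area
                if z < depth[y][x]:  # keep the fragment closest to the camera
                    depth[y][x] = z
                    color[y][x] = tri_color
```

A real GPU performs the same coverage and depth tests in parallel over small pixel tiles rather than in a serial double loop, but the logic per pixel is the one shown here.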
3.2. REYES (Renders Everything You Ever Saw)
While the previous approach is fast and its quality good enough for real-time rendering, it has drawbacks. Traditionally, this version of rasterization has major problems with edges, causing aliasing artifacts. This collided with Pixar's image-quality requirements [CCC87, p. 96, 2.2]. The developers also tried to avoid traditional solutions and environments in order to achieve the best possible result [CCC87, p. 95, 1.]. Since the goal was to render images that were more complex and of better quality than possible at the time [CCC87, p. 95, 1.], a new rendering technique was invented: the REYES algorithm. It was used to build Pixar's in-house renderer RenderMan, which produced many animated movies as well as special effects.
3.2.1. The REYES Algorithm
The algorithm has three distinct methods: bound, dice and split. Bound computes, for each primitive, a bounding volume (with displacement) that fully contains it; the bounds do not have to be tight. Dice tessellates the primitives into a grid of quadrilaterals ("micro-polygons"), whose use will be explained later on. Split cuts a primitive into several smaller primitives of either the same or a different type [CCC87, p. 99, 3.]. The implementation used for RenderMan is optimized by sorting objects into image tiles ("buckets"). The screen space is divided into a matrix of buckets, which are processed one at a time in a given order [CCC87, p. 100, 5.]. The algorithm determines for all primitives in a bucket whether they can be diced or need to be split first. A primitive may only be considered for dicing when it would produce neither a large grid of micro-polygons nor a wide range of micro-polygon sizes. Once a primitive can be diced, it is converted into a grid of micro-polygons, which are roughly half the size of an image pixel [CCC87, p. 97, 2.3]. Figure 3 shows an example of this process. Afterwards, each vertex of the micro-polygon grid is shaded. Finally, pixel colors are computed by stochastically sampling the grid at various points and testing against a depth buffer [CFS*18, p. 2, 2.1].
[Figure not included in this excerpt]
Figure 3: A sphere made of primitives being split and diced into micro-polygons using the REYES algorithm [CCC87, p. 99]
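The split-or-dice decision can be illustrated with a toy patch type: an axis-aligned screen-space bound that is recursively split along its longer axis until dicing it would produce a small grid of roughly half-pixel micro-polygons. The patch representation and both thresholds below are assumptions for illustration, not Pixar's implementation:

```python
MAX_GRID_SIDE = 16     # dice only if the grid stays this small (assumed limit)
MICRO_POLY_SIZE = 0.5  # target micro-polygon edge length in pixels (per [CCC87])

def grid_side(bound):
    """Estimated micro-polygons per side if the patch were diced now."""
    (x0, y0), (x1, y1) = bound
    return max(x1 - x0, y1 - y0) / MICRO_POLY_SIZE

def split(bound):
    """Cut the patch in half along its longer screen-space axis."""
    (x0, y0), (x1, y1) = bound
    if x1 - x0 >= y1 - y0:
        xm = (x0 + x1) / 2
        return [((x0, y0), (xm, y1)), ((xm, y0), (x1, y1))]
    ym = (y0 + y1) / 2
    return [((x0, y0), (x1, ym)), ((x0, ym), (x1, y1))]

def reyes(bound, diced):
    """Split until the patch would dice into a small, uniform grid."""
    if grid_side(bound) <= MAX_GRID_SIDE:
        diced.append(bound)  # dice: tessellate into micro-polygons here
    else:
        for half in split(bound):
            reyes(half, diced)

grids = []
reyes(((0.0, 0.0), (64.0, 32.0)), grids)  # a patch covering 64x32 pixels
```

The recursion bounds both the grid size and the variation in micro-polygon size, which is the stated criterion for when a primitive may be diced.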
3.2.2. REYES versus Standard Rasterization
The REYES algorithm has several advantages over the technique used in real-time rendering with regard to Pixar's requirements. Firstly, it allows for very complex scenes. Since one image tile typically consists of either 16x16 or 32x32 pixels, the computation usually only has to access a small amount of the geometry and textures. The micro-polygons are smaller than pixels, which makes it easy to determine and load only the ones needed for rendering an image tile. Since those micro-polygons usually belong to a small number of objects, the accessed textures are often the same and already in the cache. This is ideal to keep memory consumption low and avoid texture thrashing (loading and discarding the same texture from the cache many times). All data can also be removed from memory after the tile has been processed [CFLB06, p. 2, 4.]. Another advantage lies in REYES avoiding clipping calculations, since micro-polygons that are not in the viewing frustum of the camera are culled immediately [CCC87, p. 100, 3.]. Finally, it removes the need for expensive texture filtering, because the micro-polygons are diced in UV-space (texture coordinates). This leads to one polygon in the grid covering almost exactly one texture pixel (texel). Due to this, colors can be read directly from the texture, using the polygon coordinates, without filtering them first [CCC87, p. 98, 2.4.].
4. Ray tracing
The principle of using rays to generate images has been known for a long time; in fact, Albrecht Dürer already used the concept centuries ago to draw pictures with correct perspective projection [JC07, p. 14, 2.1]. This section describes different ray tracing methods and their advantages.
4.1. General Approach
The simplest ray tracing algorithm can be split into two calculations per pixel: step one finds the closest surface, step two calculates the color of the hit point. Finding the closest surface is conceptually simple, since every object in the scene can be intersection-tested against the current ray [JC07, p. 14, 2.2]. Many methods exist for intersecting rays with primitives; they will, however, not be discussed here. To extend the capabilities, shadow rays can be used to calculate light contribution and, as the name implies, shadows. This is done by tracing rays from each hit point to each light source in the scene, or parallel rays for directional lights. If a ray does not hit an opaque object on the way to a light source, that light contributes to the final color; otherwise the hit point is in shadow [JC07, p. 24, 2.5]. This results in more realistic lighting and shadows compared to rasterization. It is further possible to render realistic reflections and refractions, which are important for specular and transparent materials. Reflection is the concept of specular materials like metals reflecting rays in the mirror direction. Refraction is important for transparent materials such as glass and water: it deflects a ray, giving it a slightly altered direction through the material. Those new rays influence the color of the original hit point and may lead to further reflection or refraction rays [JC07, p. 25-28, 2.6].
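The two per-pixel steps and the shadow-ray test can be sketched for a scene of spheres. The quadratic ray-sphere intersection used here is one standard choice among the intersection methods the text leaves undiscussed; the function names and scene representation are illustrative:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def hit_sphere(origin, direction, center, radius):
    """Smallest t > 0 with origin + t*direction on the sphere, or None."""
    oc = sub(origin, center)
    a = dot(direction, direction)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    sq = math.sqrt(disc)
    for t in ((-b - sq) / (2 * a), (-b + sq) / (2 * a)):
        if t > 1e-4:  # small epsilon avoids re-hitting the starting surface
            return t
    return None

def closest_hit(origin, direction, spheres):
    """Step one: intersection-test the ray against every object in the
    scene and keep the nearest hit, returned as (t, sphere) or None."""
    best = None
    for sphere in spheres:
        t = hit_sphere(origin, direction, *sphere)
        if t is not None and (best is None or t < best[0]):
            best = (t, sphere)
    return best

def in_shadow(point, light, spheres):
    """Trace a shadow ray from the hit point toward the light: the point
    is in shadow if any object lies between them (a hit with t < 1)."""
    hit = closest_hit(point, sub(light, point), spheres)
    return hit is not None and hit[0] < 1.0
```

Step two, the shading of the hit point, would then sum the contribution of every light whose shadow ray returns unblocked.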
4.2. Recursive Ray Tracing and Path Tracing
Those effects can be computed using recursive ray tracing or path tracing. The recursive model is capable of specular reflections, refractions and shadows; diffuse reflections and indirect illumination are not considered. For each hit point, a reflection and/or refraction ray is spawned and the illumination contribution of each light source is calculated using shadow rays. This continues until a diffuse and opaque material, or no material, is hit. In the path tracing model, every ray can cause either one reflection or one refraction ray, and the reflection can be either diffuse or specular. This enables indirect and global illumination. The direction of diffuse reflections, as well as which type of ray is spawned, is chosen stochastically. The recursion continues until random termination (also referred to as "Russian roulette"), until a given recursion depth is reached, or until the ray does not hit a surface [CJ16, p. 111, 3.2.]. The recursive concept is illustrated in figure 4, showing a ray scattering into several reflection and refraction rays. Figure 5 uses the path tracing model.
[Figure not included in this excerpt]
Figure 4: Recursive ray tracing example (adapted from [JC07, p. 25])
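The Russian-roulette termination can be illustrated in isolation with a one-dimensional toy path: every bounce emits one unit of light and attenuates the path by a fixed albedo, and the path survives each bounce only with a fixed probability. Dividing the throughput by that probability keeps the estimator unbiased, so the mean over many samples converges to the analytic series even though every individual path is finite. All constants below are illustrative:

```python
import random

def bounce_estimate(albedo, survive_prob, rng):
    """One path sample of the total radiance along a chain of bounces
    where every bounce emits 1 unit of light and attenuates the path
    by `albedo`; Russian roulette ends the path at a random depth."""
    radiance, throughput = 0.0, 1.0
    while True:
        radiance += throughput           # light picked up at this bounce
        if rng.random() > survive_prob:  # Russian roulette termination
            return radiance
        # dividing by the survival probability keeps the estimator unbiased
        throughput *= albedo / survive_prob

rng = random.Random(7)
samples = 200_000
estimate = sum(bounce_estimate(0.5, 0.6, rng) for _ in range(samples)) / samples
# the analytic value of the series 1 + 0.5 + 0.25 + ... is 2.0
```

A production path tracer applies the same trick per path vertex, usually combined with the hard recursion-depth cap mentioned in the text.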
- Alexander Epple (Author), 2018, Rendering in computer generated movies, Munich, GRIN Verlag, https://www.grin.com/document/471236