

DocumentDX



INTRODUCTION:

Graphics performance has been increasing ever since 1995, when the first 3dfx Voodoo card was released. It allowed graphics to run faster, but with notable restrictions: developers were limited to hard-coded, fixed-pipeline algorithms.

To remove these restrictions and give developers the freedom to implement a programmable pipeline for impressive real-time rendering, the principal 3D APIs (DirectX and OpenGL) have evolved.

In the DirectX API, after the Direct3D and Direct3DDevice interfaces have been successfully initialized, an object is rendered either through a fixed, semi-static flow, which restricts scenes to flat, single-colored polygons, or through a robust dynamic flow that adds depth to the scene to achieve realism.

THE RENDERING PIPELINE:

The rendering process is a series of transformations that take the *object from the coordinate system in which it is encoded to the coordinate system of the display surface.

*An object is represented as a triangle mesh: a list of triangles that approximates the shape and contours of the object.

Stages of the Rendering Pipeline (plumbing):

Fixed Semi-Static Flow vs. Robust Dynamic Flow


The model used in Microsoft's Direct3D is based on a transformation sequence that starts with polygon vertices being fed into a transformation pipeline. The pipeline performs world, view, projection, and clipping transformations before the data is sent to the rasterizer for display (a minimal sketch of setting these transforms follows at the end of this subsection).

Some terms used in rendering:

Geometry: Geometry is a set of points (vertices) describing a layout, which is sent to the graphics card for drawing. A sprite, such as a player or a platform, is usually a "quad", and is often sent as two triangles arranged in a rectangle shape.

Shader: A shader is a small function that is run over each vertex, and over every fragment (a fragment is roughly a pixel), when rendering.

Vertex: A vertex is a special kind of point of a polygon, polyhedron, or polytope, formed by the intersection of edges, faces, or facets of the object.

There are two different ways of processing vertices:

1. The "fixed-function" pipeline: this is the standard Transform & Lighting (T&L) pipeline, where the functionality is essentially fixed. The T&L pipeline is controlled by setting render states, matrices, and lighting and material parameters.

2. Vertex shaders: this is the newer mechanism. Instead of setting parameters to control the pipeline, you write a vertex shader program that executes on the graphics hardware.
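Returning to the transformation sequence above, here is a minimal sketch of setting the world, view, and projection matrices on a Direct3D 9 device through the fixed pipeline; the device pointer d3ddev, the camera placement, and the function name are illustrative assumptions rather than part of the attached source code.

#include <d3d9.h>
#include <d3dx9.h>

// Sets the three transforms the fixed pipeline applies before clipping and rasterization.
void SetupTransforms(IDirect3DDevice9* d3ddev, float aspectRatio)
{
    // World transform: places the model in the scene (here, a simple rotation about Y).
    D3DXMATRIX matWorld;
    D3DXMatrixRotationY(&matWorld, D3DX_PI / 4.0f);
    d3ddev->SetTransform(D3DTS_WORLD, &matWorld);

    // View transform: camera at (0, 5, -10) looking at the origin, Y is up.
    D3DXMATRIX matView;
    D3DXVECTOR3 eye(0.0f, 5.0f, -10.0f), at(0.0f, 0.0f, 0.0f), up(0.0f, 1.0f, 0.0f);
    D3DXMatrixLookAtLH(&matView, &eye, &at, &up);
    d3ddev->SetTransform(D3DTS_VIEW, &matView);

    // Projection transform: 45-degree field of view, near plane at 1, far plane at 100.
    D3DXMATRIX matProj;
    D3DXMatrixPerspectiveFovLH(&matProj, D3DX_PI / 4.0f, aspectRatio, 1.0f, 100.0f);
    d3ddev->SetTransform(D3DTS_PROJECTION, &matProj);
}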

Vertex shaders are small programs written mainly to transform vertices from the *vertex buffer into 3D space. The vertex shader program will be called by the GPU for each vertex it needs to process. For example, a 5,000-polygon model will run your vertex shader program 15,000 times each frame (three vertices per triangle) just to draw that single model. So if you lock your graphics program to 60 fps, it will call your vertex shader 900,000 times a second to draw just those 5,000 triangles.

*Vertex Buffer: Each of the triangles in the sphere model has three points; we call each point a vertex. So for us to render the sphere model, we need to put all the vertices that form the sphere into a special data array that we call a vertex buffer.
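As a rough illustration of such a vertex buffer, the sketch below creates one holding a single colored triangle in Direct3D 9; the vertex layout, the colors, and the function name are assumptions made for this example only, not the attached source code.

#include <d3d9.h>
#include <cstring>

// Each element of the vertex buffer: a position plus a diffuse color.
struct CustomVertex
{
    float x, y, z;   // position
    DWORD color;     // diffuse vertex color
};
#define CUSTOM_FVF (D3DFVF_XYZ | D3DFVF_DIFFUSE)

IDirect3DVertexBuffer9* CreateTriangleBuffer(IDirect3DDevice9* d3ddev)
{
    // Three vertices describe one triangle of the mesh.
    CustomVertex vertices[] =
    {
        { -1.0f, -1.0f, 0.0f, D3DCOLOR_XRGB(255, 0, 0) },
        {  0.0f,  1.0f, 0.0f, D3DCOLOR_XRGB(0, 255, 0) },
        {  1.0f, -1.0f, 0.0f, D3DCOLOR_XRGB(0, 0, 255) },
    };

    // Create the buffer and copy the vertex data into it so the GPU can draw from it.
    IDirect3DVertexBuffer9* vertexBuffer = NULL;
    d3ddev->CreateVertexBuffer(sizeof(vertices), 0, CUSTOM_FVF,
                               D3DPOOL_MANAGED, &vertexBuffer, NULL);

    void* data = NULL;
    vertexBuffer->Lock(0, 0, &data, 0);
    std::memcpy(data, vertices, sizeof(vertices));
    vertexBuffer->Unlock();
    return vertexBuffer;
}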


WHY VERTEX SHADERS:

To bypass T&L pipeline

To avoid the redundancy that occurs when shared vertex attribute calculations are split between the geometry engine and the CPU

To achieve unique & revolutionary graphical effects

More general syntax for common operations

Vertex shaders even give developers the ability to decompress compressed position, normal, color, matrix, and texture coordinate data, saving a lot of valuable bandwidth at no additional cost

More Scalable

APPLICATIONS OF VERTEX SHADERS:

Procedural geometry (cloth simulation, soap bubble)

Advanced vertex blending for skinning and vertex morphing (tweening)

Texture generation

Advanced key frame interpolation (complex facial expression and speech)

Particle system rendering

Real-time modifications of the perspective view (lens effects, underwater effects)

Advanced lighting models (often in cooperation with the pixel shader)

First steps to displacement mapping

and many more, including effects previously realized only on SGI hardware

The first and foremost effect used with a vertex shader is vertex lighting. Earlier, in the T&L pipeline, scenes were rendered using default lighting, which meant everything was lit equally and brightly, leaving no room for realism.

Lighting helps set the mood of a scene, be it dim lighting that instills fear and a sense of unease in the player, or bright lighting that conveys happiness and comfort.

Direct3D provides the developer with multiple types of lights to use within the scene.

Ambient Light:

An ambient light source illuminates the entire scene. It has no orientation or position and is unaffected by the surface characteristics of the illuminated objects. Because all objects are illuminated with equal strength, the position and orientation of the frame is inconsequential.

Directional Light:

A directional light has a specific orientation, but no position. It appears to illuminate all objects with equal intensity. This light source is often used to simulate the effect of distant sources, such as the sun. Rendering speed is maximum for directional lights.


Parallel Point Light:

The parallel point light is a variation of the directional light. The orientation of a parallel point light is determined by the position of the source. Whereas a directional light source has orientation but no position, a parallel point light source has both orientation and position. Rendering performance is similar to the directional source.

Point Light:

A point light source radiates light equally in all directions. This light is computationally expensive to render because retained mode must calculate a new lighting vector for every face it illuminates. On the other hand, a point light source produces a more faithful lighting effect. When visual fidelity is a concern, a point light source is the best option.

Spot Light:

A spotlight is a cone-shaped light with the source located at the cone's vertex. All objects within the cone are illuminated, but at two degrees of intensity. The central area of the cone is brightly lit; this section is called the umbra. The surrounding section, which is dimly lit, is called the penumbra. In retained mode, the angles of the umbra and penumbra can be individually specified.
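Although the descriptions above are written in terms of retained mode, the same light types exist in the Direct3D 9 immediate-mode API. The sketch below shows, under assumed positions, colors, and cone angles, how a point light and a spotlight (with its umbra and penumbra cones specified separately) might be set up; it is not taken from the attached source code.

#include <d3d9.h>
#include <d3dx9.h>

void SetupSceneLights(IDirect3DDevice9* d3ddev)
{
    // Point light: has a position and radiates equally in all directions.
    D3DLIGHT9 point;
    ZeroMemory(&point, sizeof(point));
    point.Type         = D3DLIGHT_POINT;
    point.Diffuse.r    = point.Diffuse.g = point.Diffuse.b = 1.0f;
    point.Position     = D3DXVECTOR3(0.0f, 10.0f, 0.0f);
    point.Range        = 50.0f;     // how far the light reaches
    point.Attenuation0 = 1.0f;      // constant attenuation
    d3ddev->SetLight(0, &point);
    d3ddev->LightEnable(0, TRUE);

    // Spotlight: cone-shaped. Theta is the bright inner cone (umbra) and
    // Phi the dim outer cone (penumbra); the two angles are set independently.
    D3DLIGHT9 spot;
    ZeroMemory(&spot, sizeof(spot));
    spot.Type         = D3DLIGHT_SPOT;
    spot.Diffuse.r    = spot.Diffuse.g = spot.Diffuse.b = 1.0f;
    spot.Position     = D3DXVECTOR3(0.0f, 5.0f, -5.0f);
    spot.Direction    = D3DXVECTOR3(0.0f, -1.0f, 1.0f);
    spot.Range        = 100.0f;
    spot.Falloff      = 1.0f;
    spot.Theta        = D3DX_PI / 8.0f;   // umbra (inner cone) angle
    spot.Phi          = D3DX_PI / 4.0f;   // penumbra (outer cone) angle
    spot.Attenuation0 = 1.0f;
    d3ddev->SetLight(1, &spot);
    d3ddev->LightEnable(1, TRUE);
}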


A scene can include any combination of the light sources mentioned above. Along with selecting the light sources, the developer needs to select the color of the object, which in turn is determined by its material.

A material is determined by the four types of light our object can reflect and by the sharpness of its specular highlights. These types of light are: diffuse, ambient, specular, and emissive.

Creating lights in DirectX is not a single function call; the developer needs to do the following:

1. Setting up a New Flexible Vertex Format

2. Turning the Lighting On

3. Setting the Ambient Light

4. Creating and Setting the Diffuse Light

5. Creating and Setting the Material

These steps are explained in detail in the source code attached along with this document; a minimal sketch is also given below.
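The following is a minimal sketch of those five steps in Direct3D 9; the device pointer d3ddev, the color values, and the light direction are illustrative assumptions, and the attached source code remains the full reference.

#include <d3d9.h>

// 1. A flexible vertex format that carries a normal, so vertices can be lit.
struct LitVertex
{
    float x, y, z;      // position
    float nx, ny, nz;   // vertex normal used by the lighting calculation
};
#define LIT_FVF (D3DFVF_XYZ | D3DFVF_NORMAL)

void InitLighting(IDirect3DDevice9* d3ddev)
{
    d3ddev->SetFVF(LIT_FVF);

    // 2. Turn the lighting pipeline on.
    d3ddev->SetRenderState(D3DRS_LIGHTING, TRUE);

    // 3. Set a low-intensity ambient light that fills the whole scene.
    d3ddev->SetRenderState(D3DRS_AMBIENT, D3DCOLOR_XRGB(40, 40, 40));

    // 4. Create and set a diffuse (directional) light.
    D3DLIGHT9 light;
    ZeroMemory(&light, sizeof(light));
    light.Type        = D3DLIGHT_DIRECTIONAL;
    light.Diffuse.r   = light.Diffuse.g = light.Diffuse.b = 1.0f;
    light.Direction.x = -1.0f;
    light.Direction.y = -0.3f;
    light.Direction.z = -1.0f;
    d3ddev->SetLight(0, &light);
    d3ddev->LightEnable(0, TRUE);

    // 5. Create and set the material that describes how the object reflects that light.
    D3DMATERIAL9 material;
    ZeroMemory(&material, sizeof(material));
    material.Diffuse.r = material.Diffuse.g = material.Diffuse.b = material.Diffuse.a = 1.0f;
    material.Ambient   = material.Diffuse;
    d3ddev->SetMaterial(&material);
}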

In DirectX 10 and above, the robust dynamic rendering flow is implemented through HLSL.

HLSL:

HLSL is the language we use in DirectX 10 to write these small vertex and pixel shader programs. The syntax is pretty much identical to the C language, with some pre-defined types. HLSL program files are composed of global variables, typedefs, vertex shaders, pixel shaders, and geometry shaders.
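To give a rough idea of how such an HLSL file reaches the hardware, the sketch below compiles a vertex and a pixel shader from the application side using the d3dcompiler API with a Direct3D 11 device; the file name shader.hlsl and the entry-point names are assumptions made for this example, not the attached source code.

#include <d3d11.h>
#include <d3dcompiler.h>

bool CreateShaders(ID3D11Device* device,
                   ID3D11VertexShader** vs, ID3D11PixelShader** ps)
{
    ID3DBlob* vsBlob = nullptr;
    ID3DBlob* psBlob = nullptr;
    ID3DBlob* errors = nullptr;

    // Compile the vertex shader entry point from the HLSL source file.
    if (FAILED(D3DCompileFromFile(L"shader.hlsl", nullptr, nullptr,
                                  "VertexShaderMain", "vs_5_0", 0, 0, &vsBlob, &errors)))
        return false;

    // Compile the pixel shader entry point from the same file.
    if (FAILED(D3DCompileFromFile(L"shader.hlsl", nullptr, nullptr,
                                  "PixelShaderMain", "ps_5_0", 0, 0, &psBlob, &errors)))
        return false;

    // Turn the compiled bytecode into shader objects the pipeline can use.
    device->CreateVertexShader(vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(), nullptr, vs);
    device->CreatePixelShader(psBlob->GetBufferPointer(), psBlob->GetBufferSize(), nullptr, ps);
    return true;
}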

FRAMEWORK:


When calculating lighting, the only thing the program looks at is the object's vertices. For example, when entering a game scene, it somewhat resembles this:

In this example the light has a small radius, yet it still manages to illuminate the entire triangle. This is because the small light radius touches a vertex of the triangle mesh. The problem arises when the light is not able to reach a vertex: then it is unable to light the triangle.

Shadowing is another in-game artifact that appears when lighting is not rendered properly.

All these in-game issues can be sorted out using properly tessellated geometry.

The mesh on the left might become completely black in game for the reasons explained previously. The right mesh, however, is tessellated and will react better to lighting because it contains more vertices. The lighting can be more precise, and it is less of an issue if parts of the mesh are stuck in other geometry.

*Note: Attached meshes (objects placed on one another) and complex triangulations can also lead to bad in-game lighting.

Once the vertices are done moving about, the next stage of the shader begins: the conversion of the vertices into fragments that end up as pixels on screen.

During this rasterization stage, each fragment is given to the fragment shader. The fragment shader, much like the vertex shader, is run on every single fragment. The fragment shader mainly determines what the resulting color of the fragment becomes. Its inputs are also interpolated (blended) from each vertex, based on the fragment's location between them.


A game rendered at 1080p, a resolution of 1920x1080 pixels, would be 1920 * 1080 = 2,073,600 pixels. That is per frame - usually games run at 30 or 60 frames per second. That means (1920 x 1080) x 60 for one second of time, that's a total of 124,416,000 pixels each second.

*FIND THE SOURCE CODE ATTACHED ALONG WITH THIS DOCUMENT

TEXTURING:

Texture mapping is the process of loading a picture or an image and wrapping it around 3D objects. For

instance, a green square doesn’t look much like grass, but if you apply an image of grass to the square,

things start looking a little more real.

Texturing allows us to add photorealism to our scenes by applying photographs and other images onto

polygon faces.

The format of the textures we will be using is .dds. This is the DirectDraw Surface format that DirectX uses. The tool used to produce .dds files comes with the DirectX SDK; it is under DirectX Utilities and is called the DirectX Texture Tool. You can create a new texture of any size and format, then cut and paste your image or another texture format onto it and save it as a .dds file. It is very simple to use.

To map pixels from the .dds image onto the polygon, we use what is called the texel coordinate system. This system converts the integer pixel position into a floating-point value between 0.0f and 1.0f. For example, if a texture is 256 pixels wide, then the first pixel maps to 0.0f, the 256th pixel maps to 1.0f, and the middle pixel at 128 maps to 0.5f.

In the texel coordinate system the width value is named "U" and the height value is named "V". The

width goes from 0.0 on the left to 1.0 on the right. The height goes from 0.0 on the top to 1.0 on the

bottom. For example top left would be denoted as U 0.0, V 0.0 and bottom right would be denoted as U

1.0, V 1.0.
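As a small sketch of how these texel coordinates appear in vertex data, the layout below uses a Direct3D 9 flexible vertex format and stretches a full texture across a quad built from two triangles; the positions chosen are illustrative assumptions.

#include <d3d9.h>

struct TexturedVertex
{
    float x, y, z;   // position
    float u, v;      // texel coordinates: (0,0) = top left of the image, (1,1) = bottom right
};
#define TEXTURED_FVF (D3DFVF_XYZ | D3DFVF_TEX1)

// Two triangles forming a quad, with the full texture mapped across it.
TexturedVertex quad[] =
{
    { -1.0f,  1.0f, 0.0f, 0.0f, 0.0f },   // top left     -> U 0.0, V 0.0
    {  1.0f,  1.0f, 0.0f, 1.0f, 0.0f },   // top right    -> U 1.0, V 0.0
    { -1.0f, -1.0f, 0.0f, 0.0f, 1.0f },   // bottom left  -> U 0.0, V 1.0

    {  1.0f,  1.0f, 0.0f, 1.0f, 0.0f },   // top right
    {  1.0f, -1.0f, 0.0f, 1.0f, 1.0f },   // bottom right -> U 1.0, V 1.0
    { -1.0f, -1.0f, 0.0f, 0.0f, 1.0f },   // bottom left
};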


Framework:

Texture Stages

Texture stages allow you to apply more than one texture to an object in a single rendering pass. Each

rendering pass can consist of up to eight stages, with each stage letting you apply a single texture and

control the type and amount of blending used.
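A hedged sketch of configuring two of these stages in Direct3D 9 follows, assuming two already-loaded textures and a simple modulate blend; it is not taken from the attached source code.

#include <d3d9.h>

void SetupTextureStages(IDirect3DDevice9* d3ddev,
                        IDirect3DTexture9* baseTexture,
                        IDirect3DTexture9* detailTexture)
{
    // Stage 0: modulate the base texture with the interpolated vertex (diffuse) color.
    d3ddev->SetTexture(0, baseTexture);
    d3ddev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    d3ddev->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
    d3ddev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE);

    // Stage 1: modulate the result of stage 0 with a second (detail) texture.
    d3ddev->SetTexture(1, detailTexture);
    d3ddev->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    d3ddev->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);
    d3ddev->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE);
}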

**The source code is explained in detail & attached along with this document.

BUMP MAPPING:

Bump mapping adds surface detail (bumpiness) to objects in 3D scenes, without adding more geometry

than already exists. It does so by varying the lighting of each pixel according to values in a bump map

texture. As each pixel of a triangle is rendered, a lookup is done into a texture depicting the surface

relief (a.k.a. the bump map). The values in the bump map are then used to perturb the normals for that

pixel. The new bumped-and-wiggled normal is then used during the subsequent color calculation, and

the end result is a flat surface which looks like it has bumps or depressions in it.

The proper term for the bump mapping technique we use is *normal mapping. The reason is that we use a special texture called a normal map, which is essentially a look-up table for surface normals. Each pixel in the normal map encodes the surface normal direction for the corresponding pixel on the color texture map. This technique works by using a normal specified at each pixel, instead of the normal specified at each vertex and then interpolated across the surface.

*NORMAL:

Each face in a mesh has a perpendicular unit normal vector. The vector's direction is determined by the order in which the vertices are defined and by whether the coordinate system is right- or left-handed. The face normal points away from the front side of the face. In Direct3D, only the front of a face is visible. A front face is one in which vertices are defined in clockwise order.

We get the normal from the normal specified for each vertex, which is then interpolated across the face

for the pixel shader, where we will do our normal mapping.


Once color-map textures have been applied to the scene, we use normal mapping together with lighting to give the scene a more realistic look at lower cost and with greater feasibility.

The tools that create normal maps take the x, y, z coordinates and translate them to red, green, blue

pixels with the intensity of each color indicating the angle of the normal they represent. The normal of

our polygon surface is still calculated the same way as before. However the two other normals we need

to calculate require the vertex and texture coordinates for that polygon surface. These two normals are

called the tangent and binormal.

We will need to do some precalculation to determine the binormal and tangent vector using the normal

and texture coordinates. Also note that you should never do this inside the shader as it is fairly

expensive with all the floating point math involved.

bumpNormal = (bumpMap.x * input.tangent) + (bumpMap.y * input.binormal) + (bumpMap.z * input.normal);
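The tangent and binormal used by the line above can be precomputed per triangle from its positions and texture coordinates. Below is a minimal CPU-side sketch of that precalculation using assumed vector and vertex types; it is not the attached source code.

#include <cmath>

struct Vec3 { float x, y, z; };

struct TangentVertex
{
    Vec3  position;
    float u, v;
};

static Vec3 Normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    Vec3 result = { v.x / len, v.y / len, v.z / len };
    return result;
}

void CalculateTangentBinormal(const TangentVertex& v0, const TangentVertex& v1,
                              const TangentVertex& v2, Vec3& tangent, Vec3& binormal)
{
    // Position edges of the triangle and the matching texture-coordinate deltas.
    Vec3  e1  = { v1.position.x - v0.position.x, v1.position.y - v0.position.y, v1.position.z - v0.position.z };
    Vec3  e2  = { v2.position.x - v0.position.x, v2.position.y - v0.position.y, v2.position.z - v0.position.z };
    float du1 = v1.u - v0.u, dv1 = v1.v - v0.v;
    float du2 = v2.u - v0.u, dv2 = v2.v - v0.v;

    // Solve for the surface directions along which U and V increase.
    float den = 1.0f / (du1 * dv2 - du2 * dv1);
    tangent  = Normalize(Vec3{ (e1.x * dv2 - e2.x * dv1) * den,
                               (e1.y * dv2 - e2.y * dv1) * den,
                               (e1.z * dv2 - e2.z * dv1) * den });
    binormal = Normalize(Vec3{ (e2.x * du1 - e1.x * du2) * den,
                               (e2.y * du1 - e1.y * du2) * den,
                               (e2.z * du1 - e1.z * du2) * den });
}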

Framework:


Perfect Bump Mapping:

Displacement mapping can be used as a drop-in replacement for existing bump mapping techniques. Current techniques such as normal mapping create the illusion of bumpy surfaces through better pixel shading. All these techniques work only in particular cases, and are only partially convincing even when they do work. Take the case of parallax occlusion mapping, a very advanced form of bump mapping: though it produces the illusion of overlapping geometry, it only works on flat surfaces and only in the interior of the object. True displacement mapping has none of these problems and produces accurate results from all viewing angles.

Tessellation is a method of breaking down polygons into finer pieces. For example, if you take a square

and cut it across its diagonal, you’ve “tessellated” this square into two triangles. By itself, tessellation

does little to improve realism. For example, in a game, it doesn’t really matter if a square is rendered as

two triangles or two thousand triangles—tessellation only improves realism if the new triangles are put

to use in depicting new information.

The simplest and most popular way of putting the new triangles to use is a technique called

displacement mapping. A displacement map is a texture that stores height information. When applied to

a surface, it allows vertices on the surface to be shifted up or down based on the height information.
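As a conceptual illustration only, the sketch below shows the core idea on the CPU: each vertex is pushed along its normal by a height value read from the displacement map. In DirectX 11 this work is done on the GPU after tessellation; the types, names, and the pre-sampled heights array here are assumptions for illustration.

struct DisplacedVertex
{
    float px, py, pz;   // position
    float nx, ny, nz;   // unit normal
};

// heights[i] is the value (0..1) sampled from the displacement map for vertex i.
void Displace(DisplacedVertex* vertices, const float* heights, int count, float scale)
{
    for (int i = 0; i < count; ++i)
    {
        float h = heights[i] * scale;
        vertices[i].px += vertices[i].nx * h;   // shift the vertex along its normal
        vertices[i].py += vertices[i].ny * h;
        vertices[i].pz += vertices[i].nz * h;
    }
}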

In essence, displacement mapping needs tessellation, and vice versa. With DirectX 11, tessellation and displacement mapping finally come together in a happy union, adding real-life detail to models and games.

The source code is explained & attached along with this document.

