CS361 Week 15 - Monday


Page 1:

CS361 Week 15 - Monday

Page 2:

Last time

What did we talk about last time?
- Future of graphics
- Hardware developments
- Game evolution
- Current research

Page 3:

Questions?

Page 4:

Project 4

Page 5:

Student Lecture: Overview of Material up to Exam 1

Page 6:

Week 1: Color and SharpDX

Page 7:

RGB

We will primarily focus on the RGB system for representing color

With Red, Green, and Blue components, you can combine them to make most (but not all) visible colors

Combining colors is an additive process:
- With no colors, the background is black
- Adding colors never makes a darker color
- Pure Red added to pure Green added to pure Blue makes White

RGB is a good model for computer screens

Page 8:

Luminance

If the R, G, and B values happen to be the same, the color is a shade of gray:
- 255, 255, 255 = White
- 128, 128, 128 = Gray
- 0, 0, 0 = Black

To convert a color to a shade of gray, use the following formula: Value = .3R + .59G + .11B

Based on the way the human eye perceives colors as light intensities
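As a quick illustration (not from the original slides), here's a minimal C# sketch of the grayscale conversion formula above:

```csharp
using System;

class Luminance
{
    // Convert an RGB color (0-255 per channel) to a grayscale value
    // using the perceptual weights from the slide: .3R + .59G + .11B.
    static int ToGray(int r, int g, int b)
    {
        double value = 0.3 * r + 0.59 * g + 0.11 * b;
        return (int)Math.Round(value);
    }

    static void Main()
    {
        Console.WriteLine(ToGray(255, 255, 255)); // 255 (white stays white)
        Console.WriteLine(ToGray(255, 0, 0));     // 77  (pure red is fairly dark)
        Console.WriteLine(ToGray(0, 255, 0));     // 150 (green contributes the most)
    }
}
```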

Page 9:

Brightness and Contrast

We can adjust the brightness of a picture by multiplying each pixel's R, G, and B values by a scalar b
- b ∈ [0,1) darkens
- b ∈ (1,∞) brightens

We can adjust the contrast of a picture by multiplying each pixel's R, G, and B values by a scalar c and then adding -128c + 128 to the value
- c ∈ [0,1) decreases contrast
- c ∈ (1,∞) increases contrast

After adjustments, values must be clamped to the range [0, 255] (or whatever the range is)
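Here's a small C# sketch of these adjustments, including the clamping step; the function names are just for illustration:

```csharp
using System;

class PixelAdjust
{
    // Clamp a channel value to the displayable range [0, 255].
    static int Clamp(double v) => (int)Math.Round(Math.Max(0, Math.Min(255, v)));

    // Brightness: scale each channel by b (b < 1 darkens, b > 1 brightens).
    static int Brightness(int channel, double b) => Clamp(b * channel);

    // Contrast: scale by c, then add -128c + 128 so that mid-gray (128) stays fixed.
    static int Contrast(int channel, double c) => Clamp(c * channel - 128 * c + 128);

    static void Main()
    {
        Console.WriteLine(Brightness(200, 1.5)); // 255 after clamping
        Console.WriteLine(Contrast(128, 2.0));   // 128 (mid-gray is unchanged)
        Console.WriteLine(Contrast(200, 2.0));   // 255 after clamping
    }
}
```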

Page 10:

HSV

HSV:
- Hue (which color)
- Saturation (how colorful the color is)
- Value (how bright the color is)

Hue is represented as an angle between 0° and 360°

Saturation and value are often given between 0 and 1

Saturation in HSV is not the same as in HSL

Page 11:

SharpDX basics

- LoadContent() method
- Update() method
- Draw() method
- Texture2D objects
- Sprites

Page 12:

Week 2: The GPU

Page 13:

Rendering

What do we have?
- Virtual camera (viewpoint)
- 3D objects
- Light sources
- Shading
- Textures

What do we want? 2D image

Page 14:

Graphics rendering pipeline

For API design, practical top-down problem solving, hardware design, and efficiency, rendering is described as a pipeline

This pipeline contains three conceptual stages:
- Application: produces material to be rendered
- Geometry: decides what, how, and where to render
- Rasterizer: renders the final image

Page 15:

Application stage

The application stage is the stage completely controlled by the programmer

As the application develops, many implementation changes may be made to improve performance

The output of the application stage is rendering primitives:
- Points
- Lines
- Triangles

Page 16:

Important jobs of the application stage

- Reading input
- Managing non-graphical output
- Texture animation
- Animation via transforms
- Collision detection
- Updating the state of the world in general

Page 17:

Acceleration

The Application Stage also handles a lot of acceleration

Most of this acceleration is telling the renderer what NOT to render

Acceleration algorithms:
- Hierarchical view frustum culling
- BSP trees
- Quadtrees
- Octrees

Page 18:

Geometry stage

The output of the Application Stage is polygons

The Geometry Stage processes these polygons using the following pipeline:

Model and View Transform → Vertex Shading → Projection → Clipping → Screen Mapping

Page 19:

Model Transform

Each 3D model has its own coordinate system called model space

When combining all the models in a scene together, the models must be converted from model space to world space

After that, we still have to account for the position of the camera

Page 20:

Model and View Transform

We transform the models into camera space or eye space with a view transform

Then, the camera will sit at (0,0,0), looking into negative z

Page 21:

Vertex Shading

Figuring out the effect of light on a material is called shading

This involves computing a (sometimes complex) shading equation at different points on an object

Typically, information is computed on a per-vertex basis and may include:
- Location
- Normals
- Colors

Page 22:

Projection

Projection transforms the view volume into a standardized unit cube

Vertices then have a 2D location and a z-value

There are two common forms of projection:
- Orthographic: parallel lines stay parallel, and objects do not get smaller in the distance
- Perspective: the farther away an object is, the smaller it appears

Page 23:

Clipping

Clipping processes the polygons based on their location relative to the view volume
- A polygon completely inside the view volume is unchanged
- A polygon completely outside the view volume is ignored (not rendered)
- A polygon partially inside is clipped: new vertices on the boundary of the volume are created

Since everything has been transformed into a unit cube, dedicated hardware can do the clipping in exactly the same way, every time

Page 24:

Screen mapping

Screen mapping transforms the x and y coordinates of each polygon from the unit cube to screen coordinates
- SharpDX conforms to the Windows standard of pixel (0,0) being in the upper left of the screen
- OpenGL conforms to the Cartesian system with pixel (0,0) in the lower left of the screen

Page 25:

Backface culling

Backface culling removes all polygons that are not facing toward the screen

A simple dot product is all that is needed

This step is done in hardware in SharpDX and OpenGL; you just have to turn it on

Beware: if you screw up your normals, polygons could vanish
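As an illustration (not the exact hardware test), here's a C# sketch of the dot-product idea, assuming we have the triangle's outward normal and the view direction from the camera toward the triangle:

```csharp
using System;

class BackfaceCulling
{
    // A triangle faces away from the camera when the dot product of its
    // outward normal and the view direction (camera toward the triangle)
    // is positive; such triangles can be culled.
    static bool IsBackFacing(double[] normal, double[] viewDir)
    {
        double dot = normal[0] * viewDir[0] + normal[1] * viewDir[1] + normal[2] * viewDir[2];
        return dot > 0;
    }

    static void Main()
    {
        double[] viewDir = { 0, 0, -1 };   // camera looking down -z
        Console.WriteLine(IsBackFacing(new double[] { 0, 0, 1 }, viewDir));  // False: facing the camera
        Console.WriteLine(IsBackFacing(new double[] { 0, 0, -1 }, viewDir)); // True: facing away, cull it
    }
}
```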

Page 26:

Rasterizer Stage

The goal of the Rasterizer Stage is to take all the transformed geometric data and set colors for all the pixels in the screen space

Doing so is called rasterization or scan conversion

Note that the word pixel is actually short for "picture element"

Page 27:

More pipelines

As you should expect, the Rasterizer Stage is also divided into a pipeline of several functional stages:

Triangle Setup → Triangle Traversal → Pixel Shading → Merging

Page 28:

Triangle Setup and Traversal

Setup
- Data for each triangle is computed
- This could include normals

Traversal
- Each pixel whose center is overlapped by a triangle must have a fragment generated for the part of the triangle that overlaps the pixel
- The properties of this fragment are created by interpolating data from the vertices

These are done with fixed-operation (non-customizable) hardware

Page 29:

Pixel Shading

This is where the magic happens

Given the data from the other stages, per-pixel shading (coloring) happens here

This stage is programmable, allowing for many different shading effects to be applied

Perhaps the most important effect is texturing or texture mapping

Page 30:

Texturing

Texturing is gluing a (usually) 2D image onto a polygon

To do so, we map texture coordinates onto polygon coordinates
- Pixels in a texture are called texels
- This is fully supported in hardware
- Multiple textures can be applied in some cases

Page 31:

Merging

The final screen data containing the colors for each pixel is stored in the color buffer

The merging stage is responsible for merging the colors from each of the fragments from the pixel shading stage into a final color for a pixel

Deeply linked with merging is visibility: The final color of the pixel should be the one corresponding to a visible polygon (and not one behind it)

The Z-buffer is often used for this
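Here's a minimal C# sketch of the Z-buffer idea (illustrative only, not how the hardware actually stores or merges fragments):

```csharp
using System;

class ZBufferDemo
{
    // Minimal sketch of the Z-buffer idea: a fragment only overwrites the
    // color buffer when it is closer to the camera than what is already there.
    static void MergeFragment(double[,] zBuffer, int[,] colorBuffer,
                              int x, int y, double depth, int color)
    {
        if (depth < zBuffer[y, x])   // smaller depth = closer to the camera
        {
            zBuffer[y, x] = depth;
            colorBuffer[y, x] = color;
        }
    }

    static void Main()
    {
        var zBuffer = new double[1, 1];
        var colorBuffer = new int[1, 1];
        zBuffer[0, 0] = double.PositiveInfinity;   // cleared to "infinitely far"

        MergeFragment(zBuffer, colorBuffer, 0, 0, 0.8, 0xFF0000); // far red fragment
        MergeFragment(zBuffer, colorBuffer, 0, 0, 0.3, 0x00FF00); // nearer green fragment wins
        MergeFragment(zBuffer, colorBuffer, 0, 0, 0.9, 0x0000FF); // farther blue fragment is rejected

        Console.WriteLine(colorBuffer[0, 0].ToString("X6")); // 00FF00
    }
}
```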

Page 32:

Week 3: Programmable Shading

Page 33:

More pipes!

Modern GPUs are generally responsible for the Geometry and Rasterizer Stages of the overall rendering pipeline

The following shows the color-coded functional stages inside those stages:
- Red is fully programmable
- Purple is configurable
- Blue is not programmable at all

Vertex Shader → Geometry Shader → Clipping → Screen Mapping → Triangle Setup → Triangle Traversal → Pixel Shader → Merger

Page 34:

Programmable Shaders

You can do all kinds of interesting things with programmable shading, but the technology is still evolving

Modern shader stages such as Shader Model 4.0 and 5.0 use a common-shader core

Strange as it may seem, this means that vertex, pixel, and geometry shaders use the same language

Page 35:

Vertex shader

Supported in hardware by all modern GPUs

For each vertex, it modifies, creates, or ignores:
- Color
- Normal
- Texture coordinates
- Position

It must also transform vertices from model space to homogeneous clip space

Vertices cannot be created or destroyed, and results cannot be passed from vertex to vertex
- Massive parallelism is possible

Page 36:

Geometry shader

Newest shader added to the family, and optional

Comes right after the vertex shader

Input is a single primitive; output is zero or more primitives

The geometry shader can be used to:
- Tessellate simple meshes into more complex ones
- Make limited copies of primitives

Stream output is possible

Page 37:

Pixel shader

Clipping and triangle setup are fixed in function

Everything else in determining the final color of the fragment is done here
- Because we aren't actually shading a full pixel, just a particular fragment of a triangle that covers a pixel

A lot of the work is based on the lighting model

The pixel shader cannot look at neighboring pixels
- Except that some information about gradients can be given

Multiple render targets mean that many different colors for a single fragment can be made and stored in different buffers

Page 38:

Merging stage

Fragment colors are combined into the frame buffer

This is where stencil and Z-buffer operations happen

It's not fully programmable, but there are a number of settings that can be used:
- Multiplication
- Addition
- Subtraction
- Min/max

Page 39:

Week 4: Linear Algebra

Page 40:

Vector operations

We will be interested in a number of operations on vectors, including:
- Addition
- Scalar multiplication
- Dot product
- Norm
- Cross product

Page 41:

Interpretations

A vector can either be a point in space or an arrow (direction and distance)

The norm of a vector is its distance from the origin (or the length of the arrow)

In R2 and R3, the dot product follows:

u · v = ||u|| ||v|| cos φ

where φ is the smallest angle between u and v
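Here's a small C# sketch showing how the dot product recovers the angle between two vectors:

```csharp
using System;

class DotProduct
{
    static double Dot(double[] u, double[] v) =>
        u[0] * v[0] + u[1] * v[1] + u[2] * v[2];

    static double Norm(double[] u) => Math.Sqrt(Dot(u, u));

    // Recover the smallest angle between u and v from u . v = ||u|| ||v|| cos(phi).
    static double AngleBetween(double[] u, double[] v) =>
        Math.Acos(Dot(u, v) / (Norm(u) * Norm(v)));

    static void Main()
    {
        double[] u = { 1, 0, 0 };
        double[] v = { 0, 1, 0 };
        Console.WriteLine(AngleBetween(u, v) * 180 / Math.PI); // 90 degrees
    }
}
```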

Page 42:

Cross product

The cross product of two vectors finds a vector that is orthogonal to both

For 3D vectors u and v in an orthonormal basis, the cross product w = u × v is:

wx = uy vz - uz vy
wy = uz vx - ux vz
wz = ux vy - uy vx

Page 43:

Cross product rules

Also: w ⊥ u and w ⊥ v

u, v, and w form a right-handed system

||w|| = ||u|| ||v|| sin φ

u × v = -(v × u)

(au) × (bv) = ab(u × v)

(u + v) × w = (u × w) + (v × w)
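Here's a C# sketch of the component formula for the cross product; the example checks that x × y = z in a right-handed system:

```csharp
using System;

class CrossProduct
{
    // w = u x v, following the component formula from the earlier slide.
    static double[] Cross(double[] u, double[] v) => new[]
    {
        u[1] * v[2] - u[2] * v[1],   // wx
        u[2] * v[0] - u[0] * v[2],   // wy
        u[0] * v[1] - u[1] * v[0]    // wz
    };

    static void Main()
    {
        double[] x = { 1, 0, 0 };
        double[] y = { 0, 1, 0 };
        var w = Cross(x, y);
        Console.WriteLine($"({w[0]}, {w[1]}, {w[2]})"); // (0, 0, 1): x cross y gives z
    }
}
```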

Page 44:

Matrix operations

We will be interested in a number of operations on matrices, including:
- Addition
- Scalar multiplication
- Transpose
- Trace
- Matrix-matrix multiplication
- Determinant
- Inverse

Page 45:

Matrix-matrix multiplication

Multiplication MN is legal only if M is p x q and N is q x r

Each row of M and each column of N are combined with a dot product and put in the corresponding row and column element
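Here's a short C# sketch of matrix-matrix multiplication as a triple loop of dot products:

```csharp
using System;

class MatrixMultiply
{
    // Multiply a p x q matrix M by a q x r matrix N: entry (i, j) of the
    // result is the dot product of row i of M with column j of N.
    static double[,] Multiply(double[,] m, double[,] n)
    {
        int p = m.GetLength(0), q = m.GetLength(1), r = n.GetLength(1);
        var result = new double[p, r];
        for (int i = 0; i < p; i++)
            for (int j = 0; j < r; j++)
                for (int k = 0; k < q; k++)
                    result[i, j] += m[i, k] * n[k, j];
        return result;
    }

    static void Main()
    {
        var m = new double[,] { { 1, 2 }, { 3, 4 } };
        var n = new double[,] { { 0, 1 }, { 1, 0 } };   // swaps columns
        var mn = Multiply(m, n);
        Console.WriteLine(mn[0, 0] + " " + mn[0, 1] + " / " + mn[1, 0] + " " + mn[1, 1]); // 2 1 / 4 3
    }
}
```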

Page 46:

Determinant

The determinant is a measure of the magnitude of a square matrix

We'll focus on determinants for 2 x 2 and 3 x 3 matrices

For a 2 x 2 matrix:

det(M) = |M| = m00m11 - m01m10

For a 3 x 3 matrix:

det(M) = |M| = m00m11m22 + m01m12m20 + m02m10m21 - m02m11m20 - m01m10m22 - m00m12m21
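Here's a C# sketch of the 3 x 3 determinant formula above:

```csharp
using System;

class Determinant
{
    // Determinant of a 3 x 3 matrix, expanded exactly as in the formula above.
    static double Det3(double[,] m) =>
          m[0, 0] * m[1, 1] * m[2, 2] + m[0, 1] * m[1, 2] * m[2, 0] + m[0, 2] * m[1, 0] * m[2, 1]
        - m[0, 2] * m[1, 1] * m[2, 0] - m[0, 1] * m[1, 0] * m[2, 2] - m[0, 0] * m[1, 2] * m[2, 1];

    static void Main()
    {
        var identity = new double[,] { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };
        var doubled  = new double[,] { { 2, 0, 0 }, { 0, 2, 0 }, { 0, 0, 2 } };
        Console.WriteLine(Det3(identity)); // 1
        Console.WriteLine(Det3(doubled));  // 8: uniform scaling by 2 scales volume by 2 cubed
    }
}
```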

Page 47:

Adjoint

The adjoint of a matrix is a form useful for transforming surface normals

We can also use the adjoint when finding the inverse of a matrix

We need the subdeterminant dij (the determinant of M with row i and column j deleted) to define the adjoint

The adjoint A of an arbitrarily sized matrix M has entries [A]ij = (-1)^(i+j) dji

For a 3 x 3:

adj(M) =
[  d00  -d10   d20 ]
[ -d01   d11  -d21 ]
[  d02  -d12   d22 ]

Page 48:

Multiplicative inverse of a matrix

For a square matrix M where |M| ≠ 0, there is a multiplicative inverse M-1 such that MM-1 = I

For cases up to 4 x 4, we can use the adjoint:

M-1 = (1/|M|) adj(M)

Properties of the inverse:
- (M-1)T = (MT)-1
- (MN)-1 = N-1M-1

Page 49:

Orthogonal matrices

A square matrix is orthogonal if and only if its transpose is its inverse: MMT = MTM = I

Lots of special things are true about an orthogonal matrix M:
- |M| = ±1
- M-1 = MT
- MT is also orthogonal
- ||Mu|| = ||u||
- Mu ⊥ Mv iff u ⊥ v
- If M and N are orthogonal, so is MN

An orthogonal matrix is equivalent to an orthonormal basis of vectors lined up together

Page 50:

Homogeneous notation

We add an extra value to our vectors:
- It's a 0 if it's a direction
- It's a 1 if it's a point

Now we can do a rotation, scale, or shear with a matrix (with an extra row and column):

M =
[ m00  m01  m02  0 ]
[ m10  m11  m12  0 ]
[ m20  m21  m22  0 ]
[  0    0    0   1 ]

Page 51:

Translations

Then, we multiply by a translation matrix (which doesn't affect a direction)

A 3 x 3 matrix cannot translate a vector

T =
[ 1  0  0  tx ]
[ 0  1  0  ty ]
[ 0  0  1  tz ]
[ 0  0  0  1  ]
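Here's a small C# sketch showing why the fourth (homogeneous) component matters: the same translation moves a point (w = 1) but leaves a direction (w = 0) alone:

```csharp
using System;

class HomogeneousTranslation
{
    // Apply a 4 x 4 translation matrix T(t) to a homogeneous vector.
    // Points carry w = 1 and are moved; directions carry w = 0 and are unchanged.
    static double[] Translate(double tx, double ty, double tz, double[] v) => new[]
    {
        v[0] + tx * v[3],
        v[1] + ty * v[3],
        v[2] + tz * v[3],
        v[3]
    };

    static void Main()
    {
        double[] point     = { 1, 2, 3, 1 };
        double[] direction = { 1, 2, 3, 0 };
        var p = Translate(5, 0, 0, point);
        var d = Translate(5, 0, 0, direction);
        Console.WriteLine($"point:     ({p[0]}, {p[1]}, {p[2]})"); // (6, 2, 3)
        Console.WriteLine($"direction: ({d[0]}, {d[1]}, {d[2]})"); // (1, 2, 3) unchanged
    }
}
```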

Page 52:

Lines

Explicit form (works for 2D and 3D lines): r(t) = o + td
- o is a point on the line and d is its direction vector

Implicit form (2D lines only):
- p is on L if and only if n • p + c = 0
- If p and q are both points on L, then we can describe L with n • (p - q) = 0
- Thus, n is perpendicular to L: n = (-(py - qy), (px - qx)) = (a, b)

Page 53:

Planes

Once we are in 3D, we have to talk about planes as well

The explicit form of a plane is similar to a line: p(u,v) = o + us + vt
- o is a point on the plane
- s and t are vectors that span the plane
- s × t is the normal of the plane

Page 54:

Week 5: Transforms

Page 55:

Affine transforms

Adding a vector after a linear (3 x 3) transform makes an affine transform

Affine transforms can be stored in a 4 x 4 matrix using homogeneous notation

Affine transforms:
- Translation
- Rotation
- Scaling
- Reflection
- Shearing

Page 56:

List of transforms

Notation | Name | Characteristics
T(t) | Translation matrix | Moves a point (affine)
R | Rotation matrix | Rotates points (orthogonal and affine)
S(s) | Scaling matrix | Scales along the x, y, and z axes according to s (affine)
Hij(s) | Shear matrix | Shears component i by factor s with respect to component j
E(h,p,r) | Euler transform | Orients by the Euler angles head (yaw), pitch, and roll (orthogonal and affine)
Po(s) | Orthographic projection | Parallel projects onto a plane or a volume (affine)
Pp(s) | Perspective projection | Projects with perspective onto a plane or a volume
slerp(q,r,t) | Slerp transform | Interpolates quaternions q and r with parameter t

Page 57:

Translation

Move a point from one place to another by vector t = (tx, ty, tz)

We can represent this with translation matrix T

T(t) = T(tx, ty, tz) =
[ 1  0  0  tx ]
[ 0  1  0  ty ]
[ 0  0  1  tz ]
[ 0  0  0  1  ]

Page 58:

Rotation matrices

Rx(φ) =
[ 1    0       0      0 ]
[ 0  cos φ  -sin φ    0 ]
[ 0  sin φ   cos φ    0 ]
[ 0    0       0      1 ]

Ry(φ) =
[  cos φ  0  sin φ  0 ]
[    0    1    0    0 ]
[ -sin φ  0  cos φ  0 ]
[    0    0    0    1 ]

Rz(φ) =
[ cos φ  -sin φ  0  0 ]
[ sin φ   cos φ  0  0 ]
[   0       0    1  0 ]
[   0       0    0  1 ]
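Here's a C# sketch that applies the upper-left 3 x 3 of Rz(φ) to a point:

```csharp
using System;

class Rotation
{
    // Rotate a 3D point around the z axis by angle phi (in radians),
    // using the upper-left 3 x 3 of the Rz matrix above.
    static double[] RotateZ(double phi, double[] p) => new[]
    {
        Math.Cos(phi) * p[0] - Math.Sin(phi) * p[1],
        Math.Sin(phi) * p[0] + Math.Cos(phi) * p[1],
        p[2]
    };

    static void Main()
    {
        var p = RotateZ(Math.PI / 2, new double[] { 1, 0, 0 });
        // A 90° rotation around z takes the x axis to the y axis (within rounding error).
        Console.WriteLine($"({p[0]:F3}, {p[1]:F3}, {p[2]:F3})"); // (0.000, 1.000, 0.000)
    }
}
```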

Page 59:

Scaling

Scaling is easy and can be done for all axes at the same time with matrix S

If sx = sy = sz, the scaling is called uniform (or isotropic); otherwise it is nonuniform (or anisotropic)

S(s) =
[ sx  0   0   0 ]
[ 0   sy  0   0 ]
[ 0   0   sz  0 ]
[ 0   0   0   1 ]

Page 60:

Rotation around a point

Usually all the rotations are multiplied together before translations

But if you want to rotate around a point:
- Translate so that the point lies at the origin
- Perform rotations
- Translate back

Page 61:

Shearing

A shearing transform distorts one dimension in terms of another with parameter s

Thus, there are six shearing matrices Hxy(s), Hxz(s), Hyx(s), Hyz(s), Hzx(s), and Hzy(s)

Here's an example of Hxz(s):

Hxz(s) =
[ 1  0  s  0 ]
[ 0  1  0  0 ]
[ 0  0  1  0 ]
[ 0  0  0  1 ]

Page 62:

Rigid-body transforms

A rigid-body transform preserves lengths, angles, and handedness

We can write any rigid-body transform X as a rotation matrix R multiplied by a translation matrix T(t)

X = T(t)R =
[ r00  r01  r02  tx ]
[ r10  r11  r12  ty ]
[ r20  r21  r22  tz ]
[  0    0    0   1  ]

Page 63:

Non-commutativity of transforms

This example from the book shows how the same sets of transforms, applied in different orders, can have different outcomes

Page 64:

Normal transforms

The matrix used to transform points will not always work on surface normals

- Rotation is fine
- Uniform scaling can stretch the normal (which should be unit)
- Non-uniform scaling distorts the normal

Transforming by the transpose of the adjoint always gives the correct answer

In practice, the transpose of the inverse is usually used

Page 65:

Inverses

For normals and other things, we need to be able to compute inverses

The inverse of a rigid-body transform X is X-1 = (T(t)R)-1 = R-1T(t)-1 = RTT(-t)

For a concatenation of simple transforms with known parameters, the inverse can be done by inverting the parameters and reversing the order:
- If M = T(t)R(φ) then M-1 = R(-φ)T(-t)

For orthogonal matrices, M-1 = MT

If nothing is known, use the adjoint method

Page 66:

Euler transform

We can describe orientations from some default orientation using the Euler transform

The default is usually looking down the –z axis with "up" as positive y

The new orientation is: E(h, p, r) = Rz(r)Rx(p)Ry(h)
- h is head (or yaw), like shaking your head "no"
- p is pitch, like nodding your head back and forth
- r is roll… the third dimension

Page 67:

Quaternions

Quaternions are a compact way to represent orientations

Pros:
- Compact (only four values needed)
- Do not suffer from gimbal lock
- Are easy to interpolate between

Cons:
- Are confusing
- Use three imaginary numbers
- Have their own set of operations

Page 68:

Operations

Multiplication: qr = (qv × rv + rw qv + qw rv,  qw rw - qv · rv)

Addition: q + r = (qv + rv, qw + rw)

Conjugate: q* = (-qv, qw)

Norm: n(q) = sqrt(qx² + qy² + qz² + qw²)

Identity: i = (0, 1)
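Here's a C# sketch of the multiplication, conjugate, and norm operations above (a toy Quaternion type, not SharpDX's):

```csharp
using System;

class Quaternion
{
    // A quaternion stored as (x, y, z, w), where (x, y, z) is the vector part.
    public double X, Y, Z, W;

    public Quaternion(double x, double y, double z, double w) { X = x; Y = y; Z = z; W = w; }

    // Multiplication: (qv x rv + rw*qv + qw*rv,  qw*rw - qv . rv)
    public static Quaternion Multiply(Quaternion q, Quaternion r) => new Quaternion(
        q.Y * r.Z - q.Z * r.Y + r.W * q.X + q.W * r.X,
        q.Z * r.X - q.X * r.Z + r.W * q.Y + q.W * r.Y,
        q.X * r.Y - q.Y * r.X + r.W * q.Z + q.W * r.Z,
        q.W * r.W - (q.X * r.X + q.Y * r.Y + q.Z * r.Z));

    public Quaternion Conjugate() => new Quaternion(-X, -Y, -Z, W);

    public double Norm() => Math.Sqrt(X * X + Y * Y + Z * Z + W * W);

    static void Main()
    {
        // A unit quaternion times its conjugate gives the identity (0, 0, 0, 1).
        var q = new Quaternion(0.5, 0.5, 0.5, 0.5);
        var p = Multiply(q, q.Conjugate());
        Console.WriteLine($"({p.X}, {p.Y}, {p.Z}, {p.W})  norm of q = {q.Norm()}"); // (0, 0, 0, 1)  norm of q = 1
    }
}
```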

Page 69:

Vertex blending

If we animate by moving rigid bodies around each other, joints won't look natural

To do so, we define bones and skin and have the rigid bone changes dictate blended changes in the skin

Page 70:

Morphing

Morphing interpolates between two complete 3D models
- Vertex correspondence: what if there is not a 1-to-1 correspondence between vertices?
- Interpolation: how do we combine the two models?

If there's a 1-to-1 correspondence, we use parameter s ∈ [0,1] to indicate where we are between the models and then find the new location m based on the two locations p0 and p1:

m = (1 - s)p0 + sp1

Morph targets is another technique that adds in weighted poses to a neutral model
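Here's a one-line-per-component C# sketch of the interpolation m = (1 - s)p0 + sp1:

```csharp
using System;

class Morphing
{
    // Linear interpolation between corresponding vertices of two models:
    // m = (1 - s) * p0 + s * p1, with s in [0, 1].
    static double[] Lerp(double[] p0, double[] p1, double s) => new[]
    {
        (1 - s) * p0[0] + s * p1[0],
        (1 - s) * p0[1] + s * p1[1],
        (1 - s) * p0[2] + s * p1[2]
    };

    static void Main()
    {
        double[] p0 = { 0, 0, 0 };
        double[] p1 = { 2, 4, 6 };
        var m = Lerp(p0, p1, 0.5);                       // halfway between the two models
        Console.WriteLine($"({m[0]}, {m[1]}, {m[2]})");  // (1, 2, 3)
    }
}
```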

Page 71:

Orthographic projections

An orthographic projection maintains the property that parallel lines are still parallel after projection

The most basic orthographic projection matrix simply removes all the z values

This projection is not ideal because z values are lost:
- Things behind the camera are in front
- z-buffer algorithms don't work

Po =
[ 1  0  0  0 ]
[ 0  1  0  0 ]
[ 0  0  0  0 ]
[ 0  0  0  1 ]

Page 72:

Canonical view volume

To maintain relative depths and allow for clipping, we usually set up a canonical view volume based on (l,r,b,t,n,f)

These letters simply refer to the six bounding planes of the cube: left, right, bottom, top, near, and far

Here is the (OpenGL) matrix that translates all points and scales them into the canonical view volume:

P =
[ 2/(r-l)     0          0        -(r+l)/(r-l) ]
[    0      2/(t-b)      0        -(t+b)/(t-b) ]
[    0        0        2/(f-n)    -(f+n)/(f-n) ]
[    0        0          0              1      ]

Page 73:

Perspective projection

A perspective projection does not preserve parallel lines

Lines that are farther from the camera will appear smaller

Thus, a view frustum must be normalized to a canonical view volume

Because points actually move (in x and y) based on their z distance, there is a distorting term in the w row of the projection matrix

Page 74:

Perspective projection matrix

Here is the SharpDX projection matrix

It is again different from the OpenGL version because it only uses [0,1] for z

P =
[ 2n/(r-l)     0       -(r+l)/(r-l)       0      ]
[    0      2n/(t-b)   -(t+b)/(t-b)       0      ]
[    0         0          f/(f-n)     -fn/(f-n)  ]
[    0         0             1             0     ]

Page 75:

IDEA Evaluations

Page 76:

Upcoming

Page 77:

Next time…

Review up to Exam 2

Page 78:

Reminders

Finish Project 4
- Due on Friday before midnight