1 PRABHU.S
PREPARED BY S.PRABHU AP/CSE KVCET
COMPUTER GRAPHICS
CS2401
STUDY MATERIAL
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
KARPAGA VINAYAGA COLLEGE OF ENGINEERING AND TECHNOLOGY
MADURANTAKAM
S.PRABHU ASSISTANT PROFESSOR
INDEX
SN TOPIC
1 Syllabus
2 OUTPUT PRIMITIVES
3 3D CONCEPTS
4 GRAPHICS PROGRAMMING
5 RENDERING
6 FRACTALS
7 QUESTION BANK
8 ROAD MAP
9 DIAGRAMS
CS2401 COMPUTER GRAPHICS
UNIT I 2D PRIMITIVES
Output primitives – Line, Circle and Ellipse drawing algorithms – Attributes of output primitives – Two dimensional geometric transformations – Two dimensional viewing – Line, Polygon, Curve and Text clipping algorithms.
UNIT II 3D CONCEPTS
Parallel and Perspective projections – Three dimensional object representation – Polygons, Curved lines, Splines, Quadric Surfaces – Visualization of data sets – 3D transformations – Viewing – Visible surface identification.
UNIT III GRAPHICS PROGRAMMING
Color Models – RGB, YIQ, CMY, HSV – Animations – General Computer Animation, Raster, Keyframe – Graphics programming using OPENGL – Basic graphics primitives – Drawing three dimensional objects – Drawing three dimensional scenes.
UNIT IV RENDERING
Introduction to Shading models – Flat and Smooth shading – Adding texture to faces – Adding shadows of objects – Building a camera in a program – Creating shaded objects – Rendering texture – Drawing Shadows.
UNIT V FRACTALS
Fractals and Self similarity – Peano curves – Creating images by iterated functions – Mandelbrot sets – Julia Sets – Random Fractals – Overview of Ray Tracing – Intersecting rays with other primitives – Texture – Reflections and Transparency – Boolean operations on Objects.
TEXT BOOKS:
1. Donald Hearn, Pauline Baker, Computer Graphics – C Version, Second Edition, Pearson Education, 2004.
2. F. S. Hill, Computer Graphics Using OPENGL, Second Edition, Pearson Education, 2003.
UNIT I
OUTPUT PRIMITIVES
• A picture can be described in several ways.
• A picture may be specified by the set of pixels in a raster display.
• Or we can describe the picture as a set of complex objects, such as trees and terrain or furniture and walls.
Output Primitives
• Graphics programming packages provide functions to describe a scene in terms of basic geometric structures, referred to as output primitives, and to group sets of output primitives into more complex structures.
• Each output primitive is specified with input coordinate data and other information about the way that object is to be displayed.
Simple geometric components
• Points and straight line segments are the simplest geometric components of pictures.
Additional output primitives
• Primitives that can be used to construct a picture include:
  • circles and other conic sections,
  • quadric surfaces,
  • spline curves and surfaces,
  • polygon color areas, and
  • character strings.
POINTS AND LINES: INTRODUCTION
• Shapes and colors of objects can be described internally with pixel arrays or with sets of basic geometric structures, such as straight line segments and polygon color areas.
• The scene is then displayed either by loading the pixel arrays into the frame buffer or by scan converting the basic geometric-structure specifications into pixel patterns.
• Typically, graphics programming packages provide functions to describe a scene in terms of these basic geometric structures, referred to as output primitives, and to group sets of output primitives into more complex structures.
• Each output primitive is specified with input coordinate data and other information about the way that object is to be displayed.
• Points and straight line segments are the simplest geometric components of pictures.
• Additional output primitives that can be used to construct a picture include:
  • circles and other conic sections,
  • quadric surfaces,
  • spline curves and surfaces,
  • polygon color areas, and
  • character strings.
POINTS
• Point plotting is accomplished by converting a single coordinate position furnished by an application program into appropriate operations for the output device in use.
• With a CRT monitor, for example, the electron beam is turned on to illuminate the screen phosphor at the selected location.
• How the electron beam is positioned depends on the display technology.
Random-scan (vector) system
• A random-scan system stores point-plotting instructions in the display list, and coordinate values in these instructions are converted to deflection voltages that position the electron beam at the screen locations to be plotted during each refresh cycle.
Black-and-white raster system
• A point is plotted by setting the bit value corresponding to a specified screen position within the frame buffer to 1.
• Then, as the electron beam sweeps across each horizontal scan line, it emits a burst of electrons (plots a point) whenever a value of 1 is encountered in the frame buffer.
RGB system
• The frame buffer is loaded with the codes for the intensities that are to be displayed at the screen pixel positions.
LINES
• Line drawing is accomplished by calculating intermediate positions along the line path between two specified endpoint positions.
• An output device is then directed to fill in these positions between the endpoints.
Analog display devices
• For analog devices, such as a vector pen plotter or a random-scan display, a straight line can be drawn smoothly from one endpoint to the other.
• Linearly varying horizontal and vertical deflection voltages are generated that are proportional to the required changes in the x and y directions to produce the smooth line.
Digital display devices
• Digital devices display a straight line segment by plotting discrete points between the two endpoints.
• Discrete coordinate positions along the line path are calculated from the equation of the line.
Stair-step effect (jaggies)
• For a raster video display, the line color (intensity) is loaded into the frame buffer at the corresponding pixel coordinates.
• Reading from the frame buffer, the video controller then "plots" the screen pixels.
• Screen locations are referenced with integer values, so plotted positions may only approximate actual line positions between two specified endpoints.
• For example, a computed line position of (10.48, 20.51) would be converted to pixel position (10, 21).
• Thus, rounding of coordinate values to integers causes lines to be displayed with a stair-step appearance ("the jaggies"), as in the following figure.
How are pixel positions referenced?
• Pixel positions are referenced by scan-line number and column number.
What is the getpixel() function?
• Sometimes we want to be able to retrieve the current frame-buffer intensity setting for a specified location.
• We accomplish this with the low-level function
getpixel(x, y)
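The low-level pixel functions above can be sketched over a simple frame-buffer model (a minimal illustration; the row-major list-of-lists buffer, the dimensions, and the default intensity argument are assumptions, not a specific package's API):

```python
# Minimal frame-buffer model with low-level pixel access.
# The buffer holds one intensity code per screen position.
WIDTH, HEIGHT = 640, 480
frame_buffer = [[0] * WIDTH for _ in range(HEIGHT)]

def setpixel(x, y, intensity=1):
    """Store an intensity code at screen position (x, y)."""
    frame_buffer[y][x] = intensity

def getpixel(x, y):
    """Retrieve the current frame-buffer intensity at (x, y)."""
    return frame_buffer[y][x]
```

Positions are addressed by column x and scan line y, matching the scan-line/column referencing described above.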
LINE-DRAWING ALGORITHMS
• The Cartesian slope-intercept equation for a straight line is

y = m·x + b    (1)

where m represents the slope of the line and b the y intercept.
• Given that the two endpoints of a line segment are specified at positions (x1, y1) and (x2, y2), as shown in the following figure, we can determine values for the slope m and y intercept b with the following calculations:

m = (y2 − y1) / (x2 − x1)    (2)

b = y1 − m·x1    (3)

• Algorithms for displaying straight lines are based on the line equation (1) and the calculations given in Eqns. (2) and (3).
• For any given x interval Δx along a line, we can compute the corresponding y interval Δy from Eqn. (2) as

Δy = m·Δx    (4)
• Similarly, we can obtain the x interval Δx corresponding to a specified Δy as

Δx = Δy / m    (5)

• These equations form the basis for determining deflection voltages in analog devices.
• For lines with slope magnitudes |m| < 1, Δx can be set proportional to a small horizontal deflection voltage, and the corresponding vertical deflection is then set proportional to Δy as calculated from Eqn. (4).
• For lines whose slopes have magnitudes |m| > 1, Δy can be set proportional to a small vertical deflection voltage, with the corresponding horizontal deflection voltage set proportional to Δx, calculated from Eqn. (5).
• For lines with m = 1, Δx = Δy, and the horizontal and vertical deflection voltages are equal.
• In each case, a smooth line with slope m is generated between the specified endpoints.
• On raster systems, lines are plotted with pixels, and step sizes in the horizontal and vertical directions are constrained by pixel separations.
• That is, we must "sample" a line at discrete positions and determine the nearest pixel to the line at each sampled position.
• This scan-conversion process for straight lines is illustrated in the following figure.
• The figure shows a near-horizontal line with discrete sample positions along the x axis.
DDA Algorithm
• The digital differential analyzer (DDA) is a scan-conversion line algorithm based on calculating either Δy or Δx.
• We sample the line at unit intervals in one coordinate and determine the corresponding integer values nearest the line path for the other coordinate.
• Consider first a line with positive slope, as shown in the figure.
• If the slope is less than or equal to 1, we sample at unit x intervals (Δx = 1) and compute each successive y value as

yk+1 = yk + m
• Subscript k takes integer values starting from 1, for the first point, and increases by 1 until the final endpoint is reached.
• Since m can be any real number between 0 and 1, the calculated y values must be rounded to the nearest integer.
• For lines with a positive slope greater than 1, we reverse the roles of x and y.
• That is, we sample at unit y intervals (Δy = 1) and calculate each succeeding x value as

xk+1 = xk + 1/m

• If this processing is reversed, so that the starting endpoint is at the right, then we have either Δx = −1 with yk+1 = yk − m, or Δy = −1 with xk+1 = xk − 1/m.
• That is, when the start endpoint is at the right (for the same slope range), we set Δx = −1.
• Similarly, when the absolute value of a negative slope is greater than 1, we use Δy = −1.
Advantages
• The DDA algorithm is a faster method for calculating pixel positions than direct use of the line equation y = m·x + b.
• It eliminates the multiplication: appropriate increments are applied in the x or y direction to step from pixel to pixel along the line path.
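As an illustration, the DDA sampling described above can be sketched as follows (the function name and the floor-based rounding helper are our own choices):

```python
import math

def dda_line(x1, y1, x2, y2):
    """DDA scan conversion: step by one unit in the coordinate that
    changes faster, incrementing the other by the slope (or its inverse)."""
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))           # number of unit samples
    if steps == 0:
        return [(x1, y1)]                   # degenerate case: a single point
    x_inc, y_inc = dx / steps, dy / steps   # one increment has magnitude 1
    x, y = float(x1), float(y1)
    points = []
    for _ in range(steps + 1):
        # round each sampled position to the nearest pixel
        points.append((math.floor(x + 0.5), math.floor(y + 0.5)))
        x += x_inc
        y += y_inc
    return points
```

Note that only one addition per coordinate is performed at each step; the multiplication in y = m·x + b is eliminated, as stated above.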
BRESENHAM'S LINE ALGORITHM
• An accurate and efficient raster line-generating algorithm, developed by Bresenham.
• It uses only incremental integer calculations, which can be adapted to display circles and other curves.
• The figures illustrate sections of a display screen where straight line segments are to be drawn.
• The vertical axes show scan-line positions; the horizontal axes identify pixel columns.
• Sampling at unit x intervals in these examples, we need to decide which of two possible pixel positions is closer to the line path at each sample step.
• Starting from the left endpoint shown in Fig. a, we need to determine at the next sample position whether to plot the pixel at position (11, 11) or the one at (11, 12).
• Similarly, Fig. b shows a negative-slope line path starting from the left endpoint at pixel position (50, 50).
• In this case, do we select the next pixel position as (51, 50) or as (51, 49)?
• These questions are answered with Bresenham's line algorithm by testing the sign of an integer parameter whose value is proportional to the difference between the separations of the two pixel positions from the actual line path.
• To illustrate Bresenham's approach, we first consider the scan-conversion process for lines with positive slope less than 1.
• Pixel positions along a line path are then determined by sampling at unit x intervals.
• The following figure demonstrates the kth step in this process.
• Assuming we have determined that the pixel at (xk, yk) is to be displayed, we next need to decide which pixel to plot in column xk+1.
• Our choices are the pixels at positions (xk+1, yk) and (xk+1, yk+1).
• The y coordinate on the mathematical line at pixel column position xk + 1 is calculated as

y = m(xk + 1) + b

Then

d1 = y − yk = m(xk + 1) + b − yk

And

d2 = (yk + 1) − y = yk + 1 − m(xk + 1) − b
• The difference between these two separations is

d1 − d2 = 2m(xk + 1) − 2yk + 2b − 1

• We define a decision parameter pk for the kth step in the line algorithm so that it involves only integer calculations.
• We accomplish this by substituting m = Δy/Δx, where Δy and Δx are the vertical and horizontal separations of the endpoint positions, and defining:

pk = Δx(d1 − d2) = 2Δy·xk − 2Δx·yk + c

where the constant c is independent of the pixel position.
• At step k + 1, the decision parameter is evaluated from

pk+1 = 2Δy·xk+1 − 2Δx·yk+1 + c

• Subtracting the preceding equation, we have

pk+1 = pk + 2Δy − 2Δx(yk+1 − yk)

where yk+1 − yk is either 0 or 1, depending on the sign of pk.
• The first parameter, p0, is evaluated at the starting pixel position (x0, y0) and with m evaluated as Δy/Δx:

p0 = 2Δy − Δx
ALGORITHM
1. Input the two line endpoints and store the left endpoint in (x0, y0).
2. Load (x0, y0) into the frame buffer; that is, plot the first point.
3. Calculate the constants Δx, Δy, 2Δy, and 2Δy − 2Δx, and obtain the starting value for the decision parameter as
p0 = 2Δy − Δx
4. At each xk along the line, starting at k = 0, perform the following test:
If pk < 0, the next point to plot is (xk+1, yk) and
pk+1 = pk + 2Δy
Otherwise, the next point to plot is (xk+1, yk+1) and
pk+1 = pk + 2Δy − 2Δx
5. Repeat step 4 Δx times.
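The five steps above can be sketched directly in code (a minimal version restricted, like the derivation, to lines with slope between 0 and 1 and left endpoint first; the function name is our own):

```python
def bresenham_line(x0, y0, x1, y1):
    """Bresenham scan conversion for 0 <= slope <= 1 with x0 < x1.
    Uses only integer additions and a sign test on the decision parameter."""
    dx, dy = x1 - x0, y1 - y0
    p = 2 * dy - dx                   # step 3: p0 = 2*dy - dx
    two_dy = 2 * dy
    two_dy_minus_dx = 2 * (dy - dx)
    x, y = x0, y0
    points = [(x, y)]                 # step 2: plot the first point
    for _ in range(dx):               # step 5: repeat dx times
        x += 1
        if p < 0:
            p += two_dy               # stay on the same scan line
        else:
            y += 1
            p += two_dy_minus_dx      # move up one scan line
        points.append((x, y))
    return points
```

For the endpoints (20, 10) and (30, 18), this yields the pixel sequence (20, 10), (21, 11), (22, 12), (23, 12), ..., (30, 18).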
CIRCLE-DRAWING ALGORITHMS
• A circle is defined as the set of points that are all at a given distance r from a center position (xc, yc).
• This distance relationship is expressed by the Pythagorean theorem in Cartesian coordinates as

(x − xc)² + (y − yc)² = r²    (1)

• Another way to eliminate the unequal spacing shown in the figure is to calculate points along the circular boundary using polar coordinates r and θ.
• Expressing the circle equation in parametric polar form yields the pair of equations

x = xc + r cos θ
y = yc + r sin θ
• Computation can be reduced by considering the symmetry of circles.
• The shape of the circle is similar in each quadrant.
• We can generate the circle section in the second quadrant of the xy plane by noting that the two circle sections are symmetric with respect to the y axis.
• Circle sections in the third and fourth quadrants can be obtained from sections in the first and second quadrants by considering symmetry about the x axis.
MIDPOINT CIRCLE ALGORITHM
• First we set up our algorithm to calculate pixel positions around a circle path centered at the coordinate origin (0, 0).
• Then each calculated position (x, y) is moved to its proper screen position by adding xc to x and yc to y.
• Along the circle section from x = 0 to x = y in the first quadrant, the slope of the curve varies from 0 to −1.
• Therefore, we can take unit steps in the positive x direction over this octant and use a decision parameter to determine which of the two possible y positions is closer to the circle path at each step.
• Positions in the other seven octants are then obtained by symmetry.
• To apply the midpoint method, we define a circle function:

fcircle(x, y) = x² + y² − r²    (2)
• The relative position of any point (x, y) can be determined by checking the sign of the circle function:

fcircle(x, y) < 0, if (x, y) is inside the circle boundary
fcircle(x, y) = 0, if (x, y) is on the circle boundary    (3)
fcircle(x, y) > 0, if (x, y) is outside the circle boundary
• Thus, the circle function is the decision parameter in the midpoint algorithm, and we can set up incremental calculations for this function as we did in the line algorithm.
• The figure shows the midpoint between the two candidate pixels at sampling position xk + 1.
• Our decision parameter is the circle function evaluated at the midpoint between these two pixels:

pk = fcircle(xk + 1, yk − 1/2) = (xk + 1)² + (yk − 1/2)² − r²    (4)
• Successive decision parameters are obtained using incremental calculations.
• We obtain a recursive expression for the next decision parameter by evaluating the circle function at sampling position xk+1 + 1 = xk + 2:

pk+1 = pk + 2(xk + 1) + (yk+1² − yk²) − (yk+1 − yk) + 1

where yk+1 is either yk or yk − 1, depending on the sign of pk.
• Evaluation of the terms 2xk+1 and 2yk+1 can also be done incrementally as

2xk+1 = 2xk + 2
2yk+1 = 2yk − 2

• The initial decision parameter is obtained by evaluating the circle function at the start position (x0, y0) = (0, r):

p0 = fcircle(1, r − 1/2) = 1 + (r − 1/2)² − r² = 5/4 − r
• If the radius r is specified as an integer, we can simply round p0 to p0 = 1 − r, since all increments are integers.
ALGORITHM
1. Input radius r and circle center (xc, yc), and obtain the first point on the circumference of a circle centered on the origin as
(x0, y0) = (0, r)
2. Calculate the initial value of the decision parameter as
p0 = 5/4 − r
3. At each xk position, starting at k = 0, perform the following test:
If pk < 0, the next point along the circle centered on (0, 0) is (xk+1, yk) and
pk+1 = pk + 2xk+1 + 1
Otherwise, the next point along the circle is (xk+1, yk−1) and
pk+1 = pk + 2xk+1 + 1 − 2yk+1
where 2xk+1 = 2xk + 2 and 2yk+1 = 2yk − 2.
4. Determine symmetry points in the other seven octants.
5. Move each calculated pixel position (x, y) onto the circular path centered on (xc, yc) and plot the coordinate values
x = x + xc, y = y + yc
6. Repeat steps 3 through 5 until x >= y.
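The algorithm above can be sketched as follows (the set-based collection of symmetry points and the integer start value p0 = 1 − r follow the discussion above; the function name is ours):

```python
def midpoint_circle(xc, yc, r):
    """Midpoint circle scan conversion: compute the octant from
    (0, r) to x = y, then reflect into the other seven octants."""
    points = set()
    x, y = 0, r
    p = 1 - r                         # rounded initial parameter, p0 = 5/4 - r
    while x <= y:
        # eight-way symmetry, shifted to the circle center (step 5)
        for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            points.add((xc + px, yc + py))
        x += 1                        # unit step in x
        if p < 0:
            p += 2 * x + 1            # pk+1 = pk + 2xk+1 + 1
        else:
            y -= 1
            p += 2 * x + 1 - 2 * y    # pk+1 = pk + 2xk+1 + 1 - 2yk+1
    return points
```

For r = 10 centered on the origin, the first-octant pixels produced are (0, 10), (1, 10), (2, 10), (3, 10), (4, 9), (5, 9), (6, 8), and (7, 7).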
Example:
To plot the pixel positions in the first quadrant.
ELLIPSE-DRAWING ALGORITHMS
• An ellipse is an elongated circle.
• Therefore, elliptical curves can be generated by modifying circle-drawing procedures to take into account the different dimensions of an ellipse along the major and minor axes.
Properties of Ellipses
• An ellipse is defined as the set of points such that the sum of the distances from two fixed positions (foci) is the same for all points.
• If the distances to the two foci from any point P = (x, y) on the ellipse are labeled d1 and d2, then the general equation of an ellipse can be stated as

d1 + d2 = constant

• We can rewrite the general ellipse equation in the form

Ax² + By² + Cxy + Dx + Ey + F = 0

• where the coefficients A, B, C, D, E, and F are evaluated in terms of the focal coordinates.
Major Axis
• The major axis is the straight line segment extending from one side of the ellipse to the other through the foci.
Minor Axis
• The minor axis spans the shorter dimension of the ellipse, bisecting the major axis at the halfway position (ellipse center) between the two foci.
Polar coordinates
• Using polar coordinates r and θ, we can also describe the ellipse in standard position with the parametric equations:

x = xc + rx cos θ
y = yc + ry sin θ

Symmetry considerations
• Symmetry considerations can be used to further reduce computations.
• An ellipse in standard position is symmetric between quadrants, but unlike a circle, it is not symmetric between the two octants of a quadrant.
• Thus, we must calculate pixel positions along the elliptical arc throughout one quadrant, then obtain positions in the remaining three quadrants by symmetry, as in the diagram.
Midpoint Ellipse Algorithm
• We determine points (x, y) for an ellipse in standard position centered on the origin, and then we shift the points so the ellipse is centered at (xc, yc).
• To display the ellipse in a nonstandard position, we could then rotate the ellipse about its center coordinates to reorient the major and minor axes.
• The midpoint ellipse method is applied throughout the first quadrant in two parts.
• The following figure shows the division of the first quadrant according to the slope of an ellipse with rx < ry.
• We define an ellipse function with (xc, yc) = (0, 0) as

fellipse(x, y) = ry²x² + rx²y² − rx²ry²    (1)

• which has the following properties:

fellipse(x, y) < 0, if (x, y) is inside the ellipse boundary
fellipse(x, y) = 0, if (x, y) is on the ellipse boundary
fellipse(x, y) > 0, if (x, y) is outside the ellipse boundary
• Thus, the ellipse function fellipse(x, y) serves as the decision parameter in the midpoint algorithm.
• At each sampling position, we select the next pixel along the ellipse path according to the sign of the ellipse function evaluated at the midpoint between the two candidate pixels.
• The ellipse slope is calculated from Eqn. (1) as

dy/dx = −2ry²x / (2rx²y)

• At the boundary between region 1 and region 2, dy/dx = −1 and

2ry²x = 2rx²y

• Therefore, we move out of region 1 whenever

2ry²x ≥ 2rx²y

• The following figure shows the midpoint between the two candidate pixels at sampling position xk + 1 in the first region.
• Assuming position (xk, yk) has been selected at the previous step, we determine the next position along the ellipse path by evaluating the decision parameter at this midpoint:

p1k = fellipse(xk + 1, yk − 1/2) = ry²(xk + 1)² + rx²(yk − 1/2)² − rx²ry²

• At the next sampling position (xk+1 + 1 = xk + 2), the decision parameter for region 1 is evaluated as

p1k+1 = p1k + 2ry²(xk + 1) + ry² + rx²[(yk+1 − 1/2)² − (yk − 1/2)²]

where yk+1 is either yk or yk − 1, depending on the sign of p1k.
• In region 1, the initial value of the decision parameter is obtained by evaluating the ellipse function at the start position (x0, y0) = (0, ry):

p10 = fellipse(1, ry − 1/2) = ry² − rx²ry + 1/4·rx²

• Over region 2, we sample at unit steps in the negative y direction, and the midpoint is now taken between horizontal pixels at each step.
• For this region, the decision parameter is evaluated as

p2k = fellipse(xk + 1/2, yk − 1) = ry²(xk + 1/2)² + rx²(yk − 1)² − rx²ry²
• To determine the relationship between successive decision parameters in region 2, we evaluate the ellipse function at the next sampling step yk+1 − 1 = yk − 2:

p2k+1 = p2k − 2rx²yk+1 + rx² + ry²[(xk+1 + 1/2)² − (xk + 1/2)²]

• When we enter region 2, the initial position (x0, y0) is taken as the last position selected in region 1, and the initial decision parameter in region 2 is then

p20 = fellipse(x0 + 1/2, y0 − 1) = ry²(x0 + 1/2)² + rx²(y0 − 1)² − rx²ry²
ALGORITHM
1. Input rx, ry, and ellipse center (xc, yc), and obtain the first point on the circumference of an ellipse centered on the origin as
(x0, y0) = (0, ry)
2. Calculate the initial value of the decision parameter in region 1 as
p10 = ry² − rx²ry + 1/4·rx²
3. At each xk position in region 1, starting at k = 0, perform the following test:
If p1k < 0, the next point along the ellipse centered on (0, 0) is (xk+1, yk) and
p1k+1 = p1k + 2ry²xk+1 + ry²
Otherwise, the next point along the ellipse is (xk+1, yk−1) and
p1k+1 = p1k + 2ry²xk+1 − 2rx²yk+1 + ry²
with
2ry²xk+1 = 2ry²xk + 2ry² and 2rx²yk+1 = 2rx²yk − 2rx²
and continue until 2ry²x >= 2rx²y.
4. Calculate the initial value of the decision parameter in region 2 using the last point (x0, y0) calculated in region 1 as
p20 = ry²(x0 + 1/2)² + rx²(y0 − 1)² − rx²ry²
5. At each yk position in region 2, starting at k = 0, perform the following test:
If p2k > 0, the next point along the ellipse centered on (0, 0) is (xk, yk−1) and
p2k+1 = p2k − 2rx²yk+1 + rx²
Otherwise, the next point along the ellipse is (xk+1, yk−1) and
p2k+1 = p2k + 2ry²xk+1 − 2rx²yk+1 + rx²
using the same incremental calculations for x and y as in region 1.
6. Determine symmetry points in the other three quadrants.
7. Move each calculated pixel position (x, y) onto the elliptical path centered on (xc, yc) and plot the coordinate values
x = x + xc, y = y + yc
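A sketch of the two-region procedure in code (the four-way symmetry helper and the running terms px = 2ry²x and py = 2rx²y mirror the incremental calculations above; the names are our own):

```python
def midpoint_ellipse(xc, yc, rx, ry):
    """Midpoint ellipse scan conversion: trace the first quadrant in
    two regions, reflecting each pixel into all four quadrants."""
    rx2, ry2 = rx * rx, ry * ry
    points = set()

    def plot(x, y):
        # four-way symmetry about the ellipse center
        for sx in (1, -1):
            for sy in (1, -1):
                points.add((xc + sx * x, yc + sy * y))

    # Region 1: |slope| < 1, unit steps in x
    x, y = 0, ry
    px, py = 0, 2 * rx2 * y           # running 2*ry^2*x and 2*rx^2*y
    p = ry2 - rx2 * ry + 0.25 * rx2   # initial parameter p1_0
    while px < py:
        plot(x, y)
        x += 1
        px += 2 * ry2
        if p < 0:
            p += ry2 + px
        else:
            y -= 1
            py -= 2 * rx2
            p += ry2 + px - py
    # Region 2: |slope| > 1, unit steps in negative y
    p = ry2 * (x + 0.5) ** 2 + rx2 * (y - 1) ** 2 - rx2 * ry2   # p2_0
    while y >= 0:
        plot(x, y)
        y -= 1
        py -= 2 * rx2
        if p > 0:
            p += rx2 - py
        else:
            x += 1
            px += 2 * ry2
            p += rx2 - py + px
    return points
```

For rx = 8 and ry = 6 centered on the origin, the first-quadrant pixels are (0, 6), (1, 6), (2, 6), (3, 6), (4, 5), (5, 5), (6, 4), (7, 3), (8, 2), (8, 1), and (8, 0).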
A plot of the selected positions around the ellipse boundary within the first
quadrant is shown in Fig. 3-23.
ATTRIBUTES OF OUTPUT PRIMITIVES
• Any parameter that affects the way a primitive is to be displayed is referred to as an attribute parameter.
• Some attribute parameters, such as color and size, determine the fundamental characteristics of a primitive.
• Others specify how the primitive is to be displayed under special conditions.
• For example, lines can be dotted or dashed, fat or thin, and blue or orange.
LINE ATTRIBUTES
• Basic attributes of a straight line segment are its type, its width, and its color.
• In some graphics packages, lines can also be displayed using selected pen or brush options.
Line Type
• Possible selections for the line-type attribute include solid lines, dashed lines, and dotted lines.
• We modify a line-drawing algorithm to generate such lines by setting the length and spacing of displayed solid sections along the line path.
• To set the line-type attribute in a program, a user invokes the function
setLineType(lt);
• where parameter lt is assigned a positive integer value of 1, 2, 3, or 4 to generate lines that are solid, dashed, dotted, or dash-dotted, respectively.
Solid Line
Dotted Line
Dashed Line
Dash-Dotted Line
Line Width
• Implementation of line-width options depends on the capabilities of the output device.
• A heavy line on a video monitor could be displayed as adjacent parallel lines, whereas a pen plotter might require pen changes.
• A line-width command is used to set the current line-width value in the attribute list.
• This value is then used by line-drawing algorithms to control the thickness of lines.
• We set the line-width attribute with the command:
SetLinewidthScaleFactor(lw);
• Parameter lw is assigned a positive number to indicate the relative width of the line to be displayed.
• A value of 1 specifies a standard-width line.
• Values greater than 1 produce lines thicker than the standard.
• For raster implementations, a standard-width line is generated with single pixels at each sample position, as in the Bresenham algorithm.
• Other-width lines are displayed by plotting additional pixels along adjacent parallel line paths.
• Other methods for producing thick lines include displaying the line as a filled rectangle or generating the line with a selected pen or brush pattern.
Pen and Brush Options
• With some packages, lines can be displayed with pen or brush selections.
• Options in this category include shape, size, and pattern.
• Some possible pen or brush shapes are given in the following figure.
Line Color
• When a system provides color (or intensity) options, a parameter giving the current color index is included in the list of system-attribute values.
• A polyline routine displays a line in the current color by setting this color value in the frame buffer at pixel locations along the line path, using the setpixel procedure.
• The number of color choices depends on the number of bits available per pixel in the frame buffer.
• The function is
SetPolylineColorIndex(lc)
• where parameter lc is an integer value representing the color.
CURVE ATTRIBUTES
• Parameters for curve attributes are the same as those for line segments.
• We can display curves with varying colors, widths, dot-dash patterns, and available pen or brush options.
• Methods for adapting curve-drawing algorithms to accommodate attribute selections are similar to those for line drawing.
• One method for displaying thick curves is to fill in the area between two parallel curve paths, whose separation distance is equal to the desired width.
COLOR AND GRAYSCALE LEVELS
• Various color and intensity-level options can be made available to a user, depending on the capabilities of a particular system.
• Options are numerically coded with values ranging from 0 through the positive integers.
• For CRT monitors, these color codes are then converted to intensity-level settings for the electron beams.
• Color information can be stored in the frame buffer in two ways:
  • we can store color codes directly in the frame buffer, or
  • we can put the color codes in a separate table and use pixel values as an index into this table.
Direct storage scheme
• With the direct storage scheme, whenever a particular color code is specified in an application program, the corresponding binary value is placed in the frame buffer for each component pixel in the output primitives to be displayed in that color.
• A minimum number of colors can be provided in this scheme with 3 bits of storage per pixel, as shown in the table.
• Each of the three bit positions is used to control the intensity level (either on or off) of the corresponding electron gun in an RGB monitor:
  • the leftmost bit controls the red gun,
  • the middle bit controls the green gun, and
  • the rightmost bit controls the blue gun.
• Adding more bits per pixel to the frame buffer increases the number of color choices.
• With 6 bits per pixel, 64 color values are available for each screen pixel.
• With a resolution of 1024 by 1024, a full-color (24 bits per pixel) RGB system needs 3 megabytes of storage for the frame buffer.
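The storage figure quoted above follows from a simple calculation (the helper name is an illustration):

```python
def framebuffer_bytes(width, height, bits_per_pixel):
    """Frame-buffer storage requirement in bytes."""
    return width * height * bits_per_pixel // 8

# 1024 x 1024 at 24 bits per pixel -> 3 * 1024 * 1024 bytes = 3 MB
full_color = framebuffer_bytes(1024, 1024, 24)
```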
• Color tables are an alternate means for providing extended color capabilities without requiring large frame buffers.
• In particular, many systems use color tables to reduce frame-buffer storage requirements.
Grayscale
• With monitors that have no color capability, color functions can be used in an application program to set the shades of gray, or grayscale, for displayed primitives.
• Numeric values over the range from 0 to 1 can be used to specify grayscale levels, which are then converted to appropriate binary codes for storage in the raster.
• This allows the intensity settings to be easily adapted to systems with differing grayscale capabilities.
• The table lists the specifications for intensity codes for a four-level grayscale system.
• In this example, any intensity input value near 0.33 would be stored as the binary value 01 in the frame buffer, and pixels with this value would be displayed as dark gray.
• With 3 bits per pixel, we can accommodate 8 gray levels.
• With 8 bits per pixel, we would have 256 shades of gray.
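The intensity-to-code mapping described above can be sketched as follows (the function name is ours; it assumes intensities in [0, 1] mapped to the nearest of 2^bits levels):

```python
def grayscale_code(intensity, bits):
    """Map an intensity in [0, 1] to the nearest binary code on a
    system with 2**bits gray levels (0 = black, max code = white)."""
    max_code = (1 << bits) - 1
    return round(intensity * max_code)
```

On the four-level (2-bit) system above, an input near 0.33 maps to code 1 (binary 01, dark gray).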
AREA-FILL ATTRIBUTES
• Options for filling a defined region include a choice between a solid color and a patterned fill.
• These fill options can be applied to polygon regions or to areas defined with curved boundaries.
• In addition, areas can be painted using various brush styles, colors, and transparency parameters.
Fill Styles
• Areas are displayed with three basic fill styles:
  • hollow with a color border,
  • filled with a solid color, or
  • filled with a specified pattern or design.
• A basic fill style is selected with a fill-style setting function.
• Another value for fill style is hatch, which is used to fill an area with selected hatching patterns: parallel lines or crossed lines.
CHARACTER ATTRIBUTES
• The appearance of displayed characters is controlled by attributes such as font, size, color, and orientation.
• Attributes can be set both for entire character strings (text) and for individual characters defined as marker symbols.
Text Attributes
• There are a great many text options that can be made available to graphics programmers.
• First of all, there is the choice of font (or typeface), which is a set of characters with a particular design style, such as Arial, Courier, Impact, Times New Roman, and various special symbol groups.
• The characters in a selected font can also be displayed in assorted styles:
  • boldface,
  • underline, and
  • italics.
• The corresponding function for setting the font is
SetTextFont();
• Color settings for displayed text are stored in the system attribute list and set with
SetTextColorIndex(tc)
• where tc specifies the color code.
• We can adjust text size by scaling the overall dimensions (height and width) of characters or by scaling only the character width.
2D TRANSFORMATION
• The basic geometric transformations are translation, rotation, and scaling.
• Other transformations that are often applied to objects include reflection and shear.
Translation
• A translation is applied to an object by repositioning it along a straight-line path from one coordinate location to another.
• We translate a two-dimensional point by adding translation distances tx and ty to the original coordinate position (x, y) to move the point to a new position (x', y').
• The translation distance pair (tx, ty) is called a translation vector or shift vector.
• We can express the translation equations as a single matrix equation by using column vectors to represent coordinate positions and the translation vector:
  P = | x |    P' = | x' |    T = | tx |
      | y |         | y' |        | ty |
• This allows us to write the two-dimensional translation equations in the matrix form:
  P' = P + T
• Translation is a rigid-body transformation that moves objects without deformation.
• That is, every point on the object is translated by the same amount.
• A straight line segment is translated by applying the transformation equation to each of the line endpoints and redrawing the line between the new endpoint positions.
• Polygons are translated by adding the translation vector to the coordinate position of each vertex and regenerating the polygon using the new set of vertex coordinates and the current attribute settings.
• The following figure illustrates the application of a specified translation vector to move an object from one position to another.
• Similar methods are used to translate curved objects.
• To change the position of a circle or ellipse, we translate the center coordinates and redraw the figure in the new location.
• We translate other curves (for example, splines) by displacing the coordinate positions defining the objects, then reconstructing the curve paths using the translated coordinate points.
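The translation rule above can be sketched in a few lines of Python (an illustrative sketch; the helper name is ours, not from the text):

```python
def translate_polygon(vertices, tx, ty):
    """Translate every vertex of a polygon by the shift vector (tx, ty)."""
    return [(x + tx, y + ty) for (x, y) in vertices]

# A triangle moved 2 units right and 3 units up keeps its shape and size,
# because every vertex is shifted by the same amount (rigid-body motion).
triangle = [(0, 0), (1, 0), (0, 1)]
moved = translate_polygon(triangle, 2, 3)
```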
ROTATION
• A two-dimensional rotation is applied to an object by repositioning it along a circular path in the xy plane.
• To generate a rotation, we specify a rotation angle θ and the position (xr, yr) of the rotation point (or pivot point) about which the object is to be rotated.
• Positive values for the rotation angle define counterclockwise rotations about the pivot point, as in the figure, and negative values rotate objects in the clockwise direction.
• This transformation can also be described as a rotation about a rotation axis that is perpendicular to the xy plane and passes through the pivot point.
• We first determine the transformation equations for rotation of a point position P when the pivot point is at the coordinate origin.
• The angular and coordinate relationships of the original and transformed point positions are shown in the figure.
• In this figure, r is the constant distance of the point from the origin, angle φ is the original angular position of the point from the horizontal, and θ is the rotation angle.
• Using standard trigonometric identities, we can express the transformed coordinates in terms of angles θ and φ as
  x' = r cos(φ + θ) = r cos φ cos θ - r sin φ sin θ
  y' = r sin(φ + θ) = r cos φ sin θ + r sin φ cos θ     (1)
• The original coordinates of the point in polar coordinates are
  x = r cos φ ,  y = r sin φ     (2)
• Substituting equations 2 into 1, we obtain the rotation equations
  x' = x cos θ - y sin θ
  y' = x sin θ + y cos θ     (3)
• We can write the rotation equations in the matrix form:
  P' = R · P     (4)
where the rotation matrix is
  R = | cos θ   -sin θ |
      | sin θ    cos θ |
• When coordinate positions are represented as row vectors instead of column vectors, the matrix product in rotation equation 4 is transposed, so that the transformed row coordinate vector [x' y'] is calculated as
  [x' y'] = [x y] · R^T
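As a quick check of the rotation equations, here is a small Python sketch (the function name is ours) that rotates a point about an arbitrary pivot by first translating the pivot to the origin:

```python
import math

def rotate_point(x, y, theta, xr=0.0, yr=0.0):
    """Rotate (x, y) by theta radians counterclockwise about the pivot (xr, yr)."""
    c, s = math.cos(theta), math.sin(theta)
    dx, dy = x - xr, y - yr          # coordinates relative to the pivot
    return (xr + dx * c - dy * s,    # x' = x cos(theta) - y sin(theta)
            yr + dx * s + dy * c)    # y' = x sin(theta) + y cos(theta)
```

A 90-degree (π/2) rotation of (1, 0) about the origin lands on (0, 1), matching the counterclockwise convention in the text.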
SCALING
• A scaling transformation alters the size of an object.
• This operation can be carried out for polygons by multiplying the coordinate values (x, y) of each vertex by scaling factors sx and sy to produce the transformed coordinates (x', y'):
  x' = x · sx ,  y' = y · sy     (5)
• Scaling factor sx scales objects in the x direction, while sy scales in the y direction.
• The transformation equations 5 can also be written in the matrix form:
  | x' |   | sx  0  |   | x |
  | y' | = | 0   sy | · | y |     (6)
or
  P' = S · P     (7)
• where S is the 2 by 2 scaling matrix in Eq. 6.
• Any positive numeric values can be assigned to the scaling factors sx and sy.
• Values less than 1 reduce the size of objects; values greater than 1 produce an enlargement.
• Specifying a value of 1 for both sx and sy leaves the size of objects unchanged.
• When sx and sy are assigned the same value, a uniform scaling is produced that maintains relative object proportions.
• Unequal values for sx and sy result in a differential scaling.
• The following figure shows the changing of a square (a) into a rectangle (b) with scaling factors sx = 2 and sy = 1.
• The next figure illustrates scaling a line by assigning the value 0.5 to both sx and sy in Eq. 6.
• Both the line length and the distance from the origin are reduced by a factor of 1/2.
MATRIX REPRESENTATIONS AND HOMOGENEOUS COORDINATES
• Many graphics applications involve sequences of geometric transformations.
• An animation, for example, might require an object to be translated and rotated at each increment of the motion.
• In design and picture-construction applications, we perform
  - translations,
  - rotations, and
  - scaling
  to fit the picture components into their proper positions.
• Here we consider how matrix representations can be used so that such transformation sequences can be processed efficiently.
• The basic transformations can be expressed in the general matrix form
  P' = M1 · P + M2     (1)
with coordinate positions P and P' represented as column vectors.
• Matrix M1 is a 2 by 2 array containing multiplicative factors, and M2 is a two-element column matrix containing translational terms.
• For translation, M1 is the identity matrix.
COMPOSITE TRANSFORMATIONS
• With the matrix representations of the previous section, we can set up a matrix for any sequence of transformations as a composite transformation matrix by calculating the matrix product of the individual transformations.
• Forming products of transformation matrices is often referred to as a concatenation, or composition, of matrices.
HOMOGENEOUS COORDINATES
• The term homogeneous is used in mathematics to refer to the effect of this representation on Cartesian equations.
• When a Cartesian point (x, y) is converted to a homogeneous representation (xh, yh, h), equations containing x and y, such as f(x, y) = 0, become homogeneous equations in the three parameters xh, yh, and h.
• Expressing positions in homogeneous coordinates allows us to represent all geometric transformation equations as matrix multiplications.
• Coordinates are represented with three-element column vectors, and transformation operations are written as 3 by 3 matrices.
• For translation, we have
  | x' |   | 1  0  tx |   | x |
  | y' | = | 0  1  ty | · | y |     (2)
  | 1  |   | 0  0  1  |   | 1 |
• which we can write in the abbreviated form
  P' = T(tx, ty) · P     (3)
• with T(tx, ty) as the 3 by 3 translation matrix in Eq. 2.
• Similarly, rotation transformation equations about the coordinate origin are now written as
  | x' |   | cos θ  -sin θ  0 |   | x |
  | y' | = | sin θ   cos θ  0 | · | y |
  | 1  |   | 0       0      1 |   | 1 |
or as
  P' = R(θ) · P
• Finally, a scaling transformation relative to the coordinate origin is now expressed as the matrix multiplication
  | x' |   | sx  0   0 |   | x |
  | y' | = | 0   sy  0 | · | y |
  | 1  |   | 0   0   1 |   | 1 |
or as
  P' = S(sx, sy) · P
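The three homogeneous transformation matrices, together with matrix multiplication for composing them, can be sketched in Python (helper names are ours):

```python
import math

def T(tx, ty):                       # 3x3 homogeneous translation matrix
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def R(theta):                        # rotation about the coordinate origin
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def S(sx, sy):                       # scaling relative to the origin
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def matmul(A, B):                    # compose two 3x3 matrices: A . B
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(M, point):                 # transform a 2D point (x, y)
    x, y = point
    return (M[0][0] * x + M[0][1] * y + M[0][2],
            M[1][0] * x + M[1][1] * y + M[1][2])
```

Because all three matrices are 3 by 3, any sequence of transformations collapses into a single composite matrix before it is applied to the vertices, which is exactly the efficiency gain the text is after.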
OTHER TRANSFORMATIONS
• Basic transformations such as translation, rotation, and scaling are included in most graphics packages.
• Some packages provide a few additional transformations that are useful in certain applications.
• Two such transformations are
  - reflection and
  - shear.
REFLECTION
• A reflection is a transformation that produces a mirror image of an object.
• The mirror image for a two-dimensional reflection is generated relative to an axis of reflection by rotating the object 180° about the reflection axis.
• We can choose an axis of reflection in the xy plane or perpendicular to the xy plane.
• When the reflection axis is a line in the xy plane, the rotation path about this axis is in a plane perpendicular to the xy plane.
• For reflection axes that are perpendicular to the xy plane, the rotation path is in the xy plane.
• Following are examples of some common reflections.
• Reflection about the line y = 0 (the x axis) is accomplished with the transformation matrix
  | 1   0  0 |
  | 0  -1  0 |
  | 0   0  1 |
• This transformation keeps x values the same but "flips" the y values of coordinate positions.
• The resulting orientation of an object after it has been reflected about the x axis is shown in the figure.
• A reflection about the y axis flips x coordinates while keeping y coordinates the same.
• The matrix for this transformation is
  | -1  0  0 |
  |  0  1  0 |
  |  0  0  1 |
• The following figure illustrates the change in position of an object that has been reflected about the line x = 0.
• We flip both the x and y coordinates of a point by reflecting relative to an axis that is perpendicular to the xy plane and that passes through the coordinate origin.
• This transformation, referred to as a reflection relative to the coordinate origin, has the matrix representation
  | -1   0  0 |
  |  0  -1  0 |
  |  0   0  1 |
• An example of reflection about the origin is shown in the figure.
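The three common reflections just listed amount to simple sign flips, as this Python sketch shows (function names are ours):

```python
def reflect_x_axis(x, y):   # about the line y = 0: keep x, flip y
    return (x, -y)

def reflect_y_axis(x, y):   # about the line x = 0: flip x, keep y
    return (-x, y)

def reflect_origin(x, y):   # about the coordinate origin: flip both
    return (-x, -y)
```

Note that reflecting about the origin is the same as reflecting about both axes in turn, which is also a rotation by 180°.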
SHEAR
• A transformation that distorts the shape of an object, such that the transformed shape appears as if the object were composed of internal layers that slide over each other, is called a shear.
• Two common shearing transformations are those that shift coordinate x values and those that shift y values.
• An x-direction shear relative to the x axis is produced with the transformation matrix
  | 1  shx  0 |
  | 0  1    0 |
  | 0  0    1 |
• which transforms coordinate positions as
  x' = x + shx · y ,  y' = y
• Any real number can be assigned to the shear parameter shx.
• A coordinate position (x, y) is then shifted horizontally by an amount proportional to its distance (y value) from the x axis (y = 0).
• Setting shx to 2, for example, changes the square in the following figure into a parallelogram.
• Negative values for shx shift coordinate positions to the left.
• We can generate x-direction shears relative to other reference lines y = yref with
  | 1  shx  -shx · yref |
  | 0  1     0          |
  | 0  0     1          |
• A y-direction shear relative to the line x = xref is generated with the transformation matrix
  | 1    0  0           |
  | shy  1  -shy · xref |
  | 0    0  1           |
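The x-direction shear relative to a reference line can be sketched in Python (an illustrative sketch; the function name is ours):

```python
def shear_x(x, y, shx, yref=0.0):
    """x-direction shear relative to the line y = yref:
    x' = x + shx * (y - yref), y' = y."""
    return (x + shx * (y - yref), y)

# With shx = 2, the unit square's top edge slides 2 units to the
# right while the bottom edge (on y = 0) stays put: a parallelogram.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
parallelogram = [shear_x(x, y, 2) for (x, y) in square]
```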
TRANSFORMATION FUNCTIONS
• Separate functions are convenient for simple transformation operations, and a composite function can provide a method for specifying complex transformation sequences.
• Individual commands for generating the basic transformation matrices are
  translate (translateVector, matrixTranslate)
  rotate (theta, matrixRotate)
  scale (scaleVector, matrixScale)
  composeMatrix (matrix2, matrix1, matrixOut)
• Each of these functions produces a 3 by 3 transformation matrix that can then be used to transform coordinate positions expressed as homogeneous column vectors.
• Parameter translateVector is a pointer to the pair of translation distances tx and ty.
• Similarly, parameter scaleVector specifies the pair of scaling values sx and sy.
• The rotate and scale matrices (matrixRotate and matrixScale) transform with respect to the coordinate origin.
• A composite transformation matrix to perform a combined scaling, rotation, and translation is produced with the function
  buildTransformationMatrix (referencePoint, translateVector, theta, scaleVector, matrix)
TWO DIMENSIONAL VIEWING
• A graphics package allows a user to specify which part of a defined picture is to be displayed and where that part is to be placed on the display device.
• Transformations from world to device coordinates involve translation, rotation, and scaling operations, as well as procedures for deleting those parts of the picture that are outside the limits of a selected display area.
• A world-coordinate area selected for display is called a window.
• An area on a display device to which a window is mapped is called a viewport.
• The window defines what is to be viewed; the viewport defines where it is to be displayed.
• In general, the mapping of a part of a world-coordinate scene to device coordinates is referred to as a viewing transformation.
• Sometimes the two-dimensional viewing transformation is simply referred to as the window-to-viewport transformation or the windowing transformation.
• The following figure illustrates the mapping of a picture section that falls within a rectangular window onto a designated rectangular viewport.
Viewing-Transformation
• Some graphics packages that provide window and viewport operations allow only standard rectangles.
• But a more general approach is to allow the rectangular window to have any orientation.
• In this case, we carry out the viewing transformation in several steps, as indicated in the figure.
• First, we construct the scene in world coordinates using the output primitives and attributes.
• Next, to obtain a particular orientation for the window, we can set up a two-dimensional viewing-coordinate system in the world-coordinate plane and define a window in the viewing-coordinate system.
• The viewing-coordinate reference frame provides a method for setting up arbitrary orientations for rectangular windows.
• Once the viewing reference frame is established, we can transform descriptions in world coordinates to viewing coordinates.
• We then define a viewport in normalized coordinates (in the range from 0 to 1) and map the viewing-coordinate description of the scene to normalized coordinates.
• At the final step, all parts of the picture that lie outside the viewport are clipped, and the contents of the viewport are transferred to device coordinates.
• The following figure illustrates a rotated viewing-coordinate reference frame and the mapping to normalized coordinates.
WINDOW-TO-VIEWPORT COORDINATE TRANSFORMATION
• Once object descriptions have been transferred to the viewing reference frame, we choose the window extents in viewing coordinates and select the viewport limits in normalized coordinates.
• Object descriptions are then transferred to normalized device coordinates.
• We do this using a transformation that maintains the same relative placement of objects in normalized space as they had in viewing coordinates.
• If a coordinate position is at the center of the viewing window, for instance, it will be displayed at the center of the viewport.
• The following figure illustrates the window-to-viewport mapping.
• A point at position (xw, yw) in the window is mapped into position (xv, yv) in the associated viewport.
• To maintain the same relative placement in the viewport as in the window, we require that
  (xv - xvmin) / (xvmax - xvmin) = (xw - xwmin) / (xwmax - xwmin)
  (yv - yvmin) / (yvmax - yvmin) = (yw - ywmin) / (ywmax - ywmin)
• Solving these expressions for the viewport position (xv, yv), we have
  xv = xvmin + (xw - xwmin) · sx
  yv = yvmin + (yw - ywmin) · sy
• where the scaling factors are
  sx = (xvmax - xvmin) / (xwmax - xwmin)
  sy = (yvmax - yvmin) / (ywmax - ywmin)
• The above equations can also be derived with a set of transformations that converts the window area into the viewport area.
• This conversion is performed with the following sequence of transformations:
  1. Perform a scaling transformation using the fixed-point position (xwmin, ywmin) that scales the window area to the size of the viewport.
  2. Translate the scaled window area to the position of the viewport.
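The mapping equations above translate directly into Python (an illustrative sketch; the function name and tuple layout are ours):

```python
def window_to_viewport(xw, yw, window, viewport):
    """Map (xw, yw) from a window to a viewport, each given as a
    (xmin, ymin, xmax, ymax) tuple, preserving relative placement."""
    xwmin, ywmin, xwmax, ywmax = window
    xvmin, yvmin, xvmax, yvmax = viewport
    sx = (xvmax - xvmin) / (xwmax - xwmin)   # scaling factors from the text
    sy = (yvmax - yvmin) / (ywmax - ywmin)
    return (xvmin + (xw - xwmin) * sx,
            yvmin + (yw - ywmin) * sy)
```

As the text requires, the center of the window maps to the center of the viewport, and the window's lower-left corner maps to the viewport's lower-left corner.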
CLIPPING OPERATIONS
• Generally, any procedure that identifies those portions of a picture that are either inside or outside of a specified region of space is referred to as a clipping algorithm, or simply clipping.
• The region against which an object is to be clipped is called a clip window.
• For the viewing transformation, we want to display only those picture parts that are within the window area.
• Everything outside the window is discarded.
• Clipping algorithms can be applied in world coordinates, so that only the contents of the window interior are mapped to device coordinates.
• Alternatively, the complete world-coordinate picture can be mapped first to device coordinates, or normalized device coordinates, then clipped against the viewport boundaries.
• We consider algorithms for clipping the following primitive types:
  - point clipping
  - line clipping (straight-line segments)
  - area clipping (polygons)
  - curve clipping
  - text clipping
LINE CLIPPING
• The following figure illustrates possible relationships between line positions and a standard rectangular clipping region.
• A line-clipping procedure involves several parts.
• First, we can test a given line segment to determine whether it lies completely inside the clipping window.
• If it does not, we try to determine whether it lies completely outside the window.
• Finally, if we cannot identify a line as completely inside or completely outside, we must perform intersection calculations with one or more clipping boundaries.
• We process lines through the "inside-outside" tests by checking the line endpoints.
• A line with both endpoints inside all clipping boundaries, such as the line from P1 to P2, is saved.
• A line with both endpoints outside any one of the clip boundaries (line P3P4 in the figure) is outside the window.
• All other lines cross one or more clipping boundaries, and may require calculation of multiple intersection points.
• To minimize calculations, we try to devise clipping algorithms that can efficiently identify outside lines and reduce intersection calculations.
• For a line segment with endpoints (x1, y1) and (x2, y2) and one or both endpoints outside the clipping rectangle, the parametric representation is
  x = x1 + u(x2 - x1)
  y = y1 + u(y2 - y1) ,  0 ≤ u ≤ 1
• The parametric representation can be used to determine values of parameter u for intersections with the clipping boundary coordinates.
• If the value of u for an intersection with a rectangle boundary edge is outside the range 0 to 1, the line does not enter the interior of the window at that boundary.
• If the value of u is within the range from 0 to 1, the line segment does indeed cross into the clipping area.
• This method can be applied to each clipping boundary edge in turn to determine whether any part of the line segment is to be displayed.
• Line segments that are parallel to window edges can be handled as special cases.
• Clipping line segments with these parametric tests requires a good deal of computation, and faster approaches to clipping are possible.
• A number of efficient line clippers have been developed.
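For a vertical clip boundary, the parametric test reduces to one division, as in this Python sketch (the function name is ours):

```python
def u_at_vertical_boundary(x1, y1, x2, y2, xb):
    """Parameter u at which segment (x1,y1)-(x2,y2) meets the line x = xb.
    A value of u in [0, 1] means the segment actually reaches the boundary."""
    return (xb - x1) / (x2 - x1)

u = u_at_vertical_boundary(0, 0, 10, 4, 5)   # crossing x = 5 at u = 0.5
y_cross = 0 + u * (4 - 0)                    # y coordinate of the crossing
```

Horizontal boundaries work the same way with the roles of x and y exchanged; segments parallel to the boundary (x2 = x1 here) are the special cases the text mentions and must be handled before dividing.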
COHEN-SUTHERLAND LINE CLIPPING
• This is one of the oldest and most popular line-clipping procedures.
• Generally, the method speeds up the processing of line segments by performing initial tests that reduce the number of intersections that must be calculated.
• Every line endpoint in a picture is assigned a four-digit binary code, called a region code, that identifies the location of the point relative to the boundaries of the clipping rectangle.
• Regions are set up in reference to the boundaries as shown in the following figure.
• Each bit position in the region code is used to indicate one of the four relative coordinate positions of the point with respect to the clip window:
  - to the left,
  - right,
  - top, or
  - bottom.
• By numbering the bit positions in the region code as 1 through 4 from right to left, the coordinate regions can be correlated with the bit positions as
  bit 1: left
  bit 2: right
  bit 3: below
  bit 4: above
• A value of 1 in any bit position indicates that the point is in that relative position; otherwise, the bit position is set to 0.
• If a point is within the clipping rectangle, the region code is 0000.
• A point that is below and to the left of the rectangle has a region code of 0101.
• Bit values in the region code are determined by comparing endpoint coordinate values (x, y) to the clip boundaries.
• Bit 1 is set to 1 if x < xwmin.
• The other three bit values can be determined using similar comparisons.
• For languages in which bit manipulation is possible, region-code bit values can be determined with the following two steps:
  (1) Calculate differences between endpoint coordinates and clipping boundaries.
  (2) Use the resultant sign bit of each difference calculation to set the corresponding value in the region code:
    - bit 1 is the sign bit of x - xwmin;
    - bit 2 is the sign bit of xwmax - x;
    - bit 3 is the sign bit of y - ywmin;
    - bit 4 is the sign bit of ywmax - y.
• Once we have established region codes for all line endpoints, we can quickly determine which lines are completely inside the clip window and which are clearly outside.
• Any lines that are completely contained within the window boundaries have a region code of 0000 for both endpoints, and we accept these lines.
• Any lines that have a 1 in the same bit position in the region codes for each endpoint are completely outside the clipping rectangle, and we reject these lines.
• For example, we would discard a line that has a region code of 1001 for one endpoint and a code of 0101 for the other endpoint.
• Both endpoints of this line are left of the clipping rectangle, as indicated by the 1 in the first bit position of each region code.
• A method that can be used to test lines for total clipping is to perform the logical AND operation on both region codes.
• If the result is not 0000, the line is completely outside the clipping region.
• Lines that cannot be identified as completely inside or completely outside a clip window by these tests are checked for intersection with the window boundaries.
• As shown in the figure, such lines may or may not cross into the window interior.
• We begin the clipping process for a line by comparing an outside endpoint to a clipping boundary to determine how much of the line can be discarded.
• Then the remaining part of the line is checked against the other boundaries, and we continue until either the line is totally discarded or a section is found inside the window.
• We set up our algorithm to check line endpoints against clipping boundaries in the order left, right, bottom, top.
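The region-code bookkeeping described above can be sketched in Python (constant and function names are ours):

```python
LEFT, RIGHT, BELOW, ABOVE = 1, 2, 4, 8      # bits 1-4, numbered right to left

def region_code(x, y, xwmin, ywmin, xwmax, ywmax):
    """Four-bit Cohen-Sutherland region code for a point."""
    code = 0
    if x < xwmin: code |= LEFT
    if x > xwmax: code |= RIGHT
    if y < ywmin: code |= BELOW
    if y > ywmax: code |= ABOVE
    return code

def trivially_accepted(c1, c2):
    return c1 == 0 and c2 == 0              # both endpoints inside: 0000

def trivially_rejected(c1, c2):
    return (c1 & c2) != 0                   # shared outside bit: logical AND
```

A point below and to the left of the window gets code 0101, matching the example in the text; only lines that pass neither trivial test need intersection calculations.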
POLYGON CLIPPING
• To clip polygons, we need to modify the line-clipping procedures.
• A polygon boundary processed with a line clipper may be displayed as a series of unconnected line segments, depending on the orientation of the polygon to the clipping window.
• What we really want to display is a bounded area after clipping, as in the figure.
• For polygon clipping, we require an algorithm that will generate one or more closed areas that are then scan converted for the appropriate area fill.
• The output of a polygon clipper should be a sequence of vertices that defines the clipped polygon boundaries.
Sutherland-Hodgeman Polygon Clipping
• We can correctly clip a polygon by processing the polygon boundary as a whole against each window edge.
• This could be accomplished by processing all polygon vertices against each clip rectangle boundary in turn.
• Beginning with the initial set of polygon vertices, we could first clip the polygon against the left rectangle boundary to produce a new sequence of vertices.
• The new set of vertices could then be successively passed to a right boundary clipper, a bottom boundary clipper, and a top boundary clipper, as in the figure.
• At each step, a new sequence of output vertices is generated and passed to the next window boundary clipper.
• There are four possible cases when processing vertices in sequence around the perimeter of a polygon.
• As each pair of adjacent polygon vertices is passed to a window boundary clipper, we make the following tests:
  1. If the first vertex is outside the window boundary and the second vertex is inside, both the intersection point of the polygon edge with the window boundary and the second vertex are added to the output vertex list.
  2. If both input vertices are inside the window boundary, only the second vertex is added to the output vertex list.
  3. If the first vertex is inside the window boundary and the second vertex is outside, only the edge intersection with the window boundary is added to the output vertex list.
  4. If both input vertices are outside the window boundary, nothing is added to the output list.
• These four cases are illustrated in the following figure for successive pairs of polygon vertices.
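One stage of this clipper, against the left boundary, can be sketched in Python with the four cases above (the function name is ours; the other three boundaries follow the same pattern):

```python
def clip_against_left(vertices, xmin):
    """One Sutherland-Hodgeman stage: clip a closed polygon against the
    left boundary x = xmin, applying the four output cases in the text."""
    out = []
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        inside1, inside2 = x1 >= xmin, x2 >= xmin
        if inside1 != inside2:               # edge crosses: emit intersection
            u = (xmin - x1) / (x2 - x1)
            out.append((xmin, y1 + u * (y2 - y1)))
        if inside2:                          # second vertex inside: emit it
            out.append((x2, y2))
    return out
```

The full clipper runs the same loop for the right, bottom, and top boundaries, feeding each stage's output vertex list to the next, exactly as the pipeline in the text describes.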
• Once all vertices have been processed for one clip window boundary, the output list of vertices is clipped against the next window boundary.
• We illustrate this method by processing the area in the following figure against the left window boundary.
• Vertices 1 and 2 are found to be on the outside of the boundary.
• Moving along to vertex 3, which is inside, we calculate the intersection and save both the intersection point and vertex 3.
• Vertices 4 and 5 are determined to be inside, and they also are saved.
• Moving along to vertex 6 from 5, we need to find the intersection, and it is saved.
• Using the five saved points, we would repeat the process for the next window boundary.
CURVE CLIPPING
• Areas with curved boundaries can be clipped with methods similar to those discussed for line clipping.
• Curve-clipping procedures will involve nonlinear equations, however, and this requires more processing than for objects with linear boundaries.
• The bounding rectangle for a circle or other curved object can be used first to test for overlap with a rectangular clip window.
• If the bounding rectangle for the object is completely inside the window, we save the object.
• If the rectangle is determined to be completely outside the window, we discard the object.
• In either case, no further computation is necessary.
• But if the bounding-rectangle test fails, we can look for other computation-saving approaches.
• For a circle, we can use the coordinate extents of individual quadrants and then octants for preliminary testing before calculating curve-window intersections.
• For an ellipse, we can test the coordinate extents of individual quadrants.
• The following figure illustrates circle clipping against a rectangular window.
• Similar procedures can be applied when clipping a curved object against a general polygon clip region.
• On the first pass, we can clip the bounding rectangle of the object against the bounding rectangle of the clip region.
• If the two regions overlap, we will need to solve the simultaneous line-curve equations to obtain the clipping intersection points.
TEXT CLIPPING
• There are several techniques that can be used to provide text clipping in a graphics package.
• The clipping technique used will depend on the methods used to generate characters and the requirements of a particular application.
• The simplest method for processing character strings relative to a window boundary is to use the all-or-none string-clipping strategy shown in the figure.
• If all of the string is inside a clip window, we keep it.
• Otherwise, the string is discarded.
• This procedure is implemented by considering a bounding rectangle around the text pattern.
• The boundary positions of the rectangle are then compared to the window boundaries, and the string is rejected if there is any overlap.
• This method produces the fastest text clipping.
• An alternative to rejecting an entire character string that overlaps a window boundary is to use the all-or-none character-clipping strategy.
• Here we discard only those characters that are not completely inside the window.
• In this case, the boundary limits of individual characters are compared to the window.
• Any character that either overlaps or is outside a window boundary is clipped.
• A final method for handling text clipping is to clip the components of individual characters.
• We now treat characters in much the same way that we treated lines.
• If an individual character overlaps a clip window boundary, we clip off the parts of the character that are outside the window.
• Outline character fonts formed with line segments can be processed in this way using a line-clipping algorithm.
• Characters defined with bit maps would be clipped by comparing the relative position of the individual pixels in the character grid patterns to the clipping boundaries.
EXTERIOR CLIPPING
• So far we have considered only procedures for clipping a picture to the interior of a region by eliminating everything outside the clipping region.
• What is saved by these procedures is inside the region.
• In some cases, we want to do the reverse; that is, we want to clip a picture to the exterior of a specified region.
• The picture parts to be saved are those that are outside the region.
• This is referred to as exterior clipping.
• A typical example of the application of exterior clipping is in multiple-window systems.
• To correctly display the screen windows, we often need to apply both internal and external clipping.
• The following figure illustrates a multiple-window display.
• Objects within a window are clipped to the interior of that window.
• When other higher-priority windows overlap these objects, the objects are also clipped to the exterior of the overlapping windows.
• Exterior clipping is also used in other applications that require overlapping pictures.
• Examples include the design of page layouts in advertising or publishing applications, or adding labels or design patterns to a picture.
• The technique can also be used for combining graphs, maps, or schematics.
• For these applications, we can use exterior clipping to provide a space for an insert into a larger picture.
UNIT II
3D CONCEPTS
• To obtain a display of a three-dimensional scene that has been modeled in world coordinates, we must first set up a coordinate reference for the "camera".
• This coordinate reference defines the position and orientation for the plane of the camera film, which is the plane we want to use to display a view of the objects in the scene.
• Object descriptions are then transferred to the camera reference coordinates and projected onto the selected display plane.
• We can then display the objects in wireframe (outline) form, as in the figure.
• Or we can apply lighting and surface-rendering techniques to shade the visible surfaces.
Parallel Projection
• One method for generating a view of a solid object is to project points on the object surface along parallel lines onto the display plane.
• By selecting different viewing positions, we can project visible points on the object onto the display plane to obtain different two-dimensional views of the object, as in the figure.
• In a parallel projection, parallel lines in the world-coordinate scene project into parallel lines on the two-dimensional display plane.
• This technique is used in engineering and architectural drawings to represent an object with a set of views that maintain relative proportions of the object.
• The appearance of the solid object can then be reconstructed from the major views.
Perspective Projection
Perspective: "The appearance of things relative to one another as determined by their distance from the viewer."
• Another method for generating a view of a three-dimensional scene is to project points to the display plane along converging paths.
• This causes objects farther from the viewing position to be displayed smaller than objects of the same size that are nearer to the viewing position.
• In a perspective projection, parallel lines in a scene that are not parallel to the display plane are projected into converging lines.
• Scenes displayed using perspective projections appear more realistic, since this is the way that our eyes and a camera lens form images.
• In the perspective projection view shown in Fig., parallel lines appear to converge to a distant point in the background, and distant objects appear smaller than objects closer to the viewing position.
Depth Cueing
• Depth information is important so that we can easily identify, for a particular viewing direction, which is the front and which is the back of displayed objects.
• Following figure illustrates the ambiguity that can result when a wireframe object is displayed without depth information.
• The wireframe representation of the pyramid in (a) contains no depth information to indicate whether the viewing direction is (b) downward from a position above the apex or (c) upward from a position below the base.
• There are several ways in which we can include depth information in the two-dimensional representation of solid objects.
• A simple method for indicating depth with wireframe displays is to vary the intensity of objects according to their distance from the viewing position.
• Following figure shows a wireframe object displayed with depth cueing.
• The lines closest to the viewing position are displayed with the highest intensities, and lines farther away are displayed with decreasing intensities.
PROJECTIONS
• Once world-coordinate descriptions of the objects in a scene are converted to viewing coordinates, we can project the three-dimensional objects onto the two-dimensional view plane.
• There are two basic projection methods:
• Parallel Projection
• Perspective Projection
Parallel Projection
• In a parallel projection, coordinate positions are transformed to the view plane along parallel lines.
• A parallel projection preserves relative proportions of objects.
• This is the method used in drafting to produce scale drawings of three-dimensional objects.
• Accurate views of the various sides of an object are obtained with a parallel projection.
• But this does not give us a realistic representation of the appearance of a three-dimensional object.
Orthographic parallel projection
• We can specify a parallel projection with a projection vector that defines the direction for the projection lines.
• When the projection is perpendicular to the view plane, we have an orthographic parallel projection.
• Otherwise, we have an oblique parallel projection.
• Following figure illustrates the two types of parallel projections.
• Some graphics packages, such as GL on Silicon Graphics workstations, do not provide for oblique projections.
• In this package, for example, a parallel projection is specified by simply giving the boundary edges of a rectangular parallelepiped.
• Orthographic projections are most often used to produce the front, side, and top views of an object, as shown in Fig.
• Front, side, and rear orthographic projections of an object are called elevations.
• And a top orthographic projection is called a plan view.
• Engineering and architectural drawings commonly employ these orthographic projections, because lengths and angles are accurately depicted and can be measured from the drawings.
Perspective Projection
• For a perspective projection, object positions are transformed to the view plane along lines that converge to a point called the projection reference point (or center of projection).
• A perspective projection, on the other hand, produces realistic views but does not preserve relative proportions.
• Projections of distant objects are smaller than the projections of objects of the same size that are closer to the projection plane.
• The projected view of an object is determined by calculating the intersection of the projection lines with the view plane.
• To obtain a perspective projection of a three-dimensional object, we transform points along projection lines that meet at the projection reference point.
• Suppose we set the projection reference point at position zprp along the zv axis, and we place the view plane at zvp, as shown in Fig.
• We can write equations describing coordinate positions along this perspective projection line in parametric form as
• Parameter u takes values from 0 to 1.
• Coordinate position (x', y', z') represents any point along the projection line.
• When u = 0, we are at position P = (x, y, z).
• At the other end of the line, u = 1 and we have the projection reference point coordinates (0, 0, zprp).
• On the view plane, z' = zvp and we can solve the z' equation for parameter u at this position along the projection line:
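The derivation above can be sketched directly in code. This is a minimal sketch, assuming (as in the text) the projection reference point at (0, 0, zprp) on the zv axis and the view plane at z = zvp:

```python
def perspective_project(x, y, z, zprp, zvp):
    """Project (x, y, z) onto the view plane z = zvp toward the
    projection reference point (0, 0, zprp)."""
    # Solve z' = z + (zprp - z)*u = zvp for the parameter u.
    u = (zvp - z) / (zprp - z)
    # Substitute u into x' = x - x*u and y' = y - y*u.
    xp = x * (1 - u)
    yp = y * (1 - u)
    return xp, yp

# With zprp = 4 and zvp = 0, the point (2, 2, -4) projects to (1, 1):
print(perspective_project(2.0, 2.0, -4.0, zprp=4.0, zvp=0.0))
```

Note that as z moves farther from the view plane, (1 − u) shrinks, which is exactly the foreshortening of distant objects described above.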
3D REPRESENTATION
• Representation schemes for solid objects are often divided into two broad categories:
1. Boundary representations
2. Space-partitioning representations
Boundary representations
Boundary representations (B-reps) describe a three-dimensional object as a set of surfaces that separate the object interior from the environment.
Typical examples of boundary representations are polygon facets and spline patches.
Space-partitioning representations
Space-partitioning representations are used to describe interior properties by partitioning the spatial region containing an object into a set of small, nonoverlapping, contiguous solids (usually cubes).
A common space-partitioning description for a three-dimensional object is an octree representation.
POLYGON SURFACES
• The most commonly used boundary representation for a three-dimensional graphics object is a set of surface polygons that enclose the object interior.
• Many graphics systems store all object descriptions as sets of surface polygons.
• This simplifies and speeds up the surface rendering and display of objects, since all surfaces are described with linear equations.
• For this reason, polygon descriptions are often referred to as "standard graphics objects."
• In some cases, a polygonal representation is the only one available, but many packages allow objects to be described with other schemes, such as spline surfaces, that are then converted to polygonal representations for processing.
• A polygon representation for a polyhedron precisely defines the surface features of the object.
• But for other objects, surfaces are tessellated (or tiled) to produce the polygon-mesh approximation.
• Following figure shows a wireframe representation of a cylinder with back (hidden) lines removed.
• Such representations are common in design and solid-modeling applications, since the wireframe outline can be displayed quickly to give a general indication of the surface structure.
• Realistic renderings are produced by interpolating shading patterns across the polygon surfaces to eliminate or reduce the presence of polygon edge boundaries.
• And the polygon-mesh approximation to a curved surface can be improved by dividing the surface into smaller polygon facets.
Polygon Tables
• We specify a polygon surface with a set of vertex coordinates and associated attribute parameters.
• This information for each polygon is placed into tables that are then used in the subsequent processing, display, and manipulation of the objects in a scene.
• Polygon data tables can be organized into two groups:
1. geometric tables and
2. attribute tables.
Geometric tables
• These contain vertex coordinates and parameters to identify the spatial orientation of the polygon surfaces.
Attribute tables
• These include parameters specifying the degree of transparency of the object and its surface reflectivity and texture characteristics.
• A convenient organization for storing geometric data is to create three lists:
1. a vertex table,
2. an edge table, and
3. a polygon table.
Vertex table
• Coordinate values for each vertex in the object are stored in the vertex table.
Edge table
• The edge table contains pointers back into the vertex table to identify the vertices for each polygon edge.
Polygon table
• The polygon table contains pointers back into the edge table to identify the edges for each polygon.
• This scheme is illustrated in Fig for two adjacent polygons on an object surface.
Tessellated: fit together exactly, as identical shapes (tiled).
• In addition, individual objects and their component polygon faces can be assigned object and facet identifiers for easy reference.
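The three linked tables can be sketched as plain dictionaries. This is a minimal illustration for two adjacent triangles; the labels V1–V4, E1–E5, and S1–S2 are hypothetical names, not part of any standard API:

```python
# Geometric data for two adjacent triangles S1 = (V1, V2, V3) and
# S2 = (V3, V2, V4) sharing the edge E2 = (V2, V3).
vertex_table = {
    "V1": (0.0, 0.0, 0.0),
    "V2": (1.0, 0.0, 0.0),
    "V3": (1.0, 1.0, 0.0),
    "V4": (2.0, 0.5, 0.0),
}
# Each edge points back into the vertex table.
edge_table = {
    "E1": ("V1", "V2"),
    "E2": ("V2", "V3"),
    "E3": ("V3", "V1"),
    "E4": ("V3", "V4"),
    "E5": ("V4", "V2"),
}
# Each polygon points back into the edge table.
polygon_table = {
    "S1": ("E1", "E2", "E3"),
    "S2": ("E2", "E4", "E5"),  # the shared edge E2 is stored only once
}

def polygon_vertices(surface):
    """Collect the vertices of a surface by chasing the pointers."""
    verts = []
    for edge in polygon_table[surface]:
        for v in edge_table[edge]:
            if v not in verts:
                verts.append(v)
    return verts

print(polygon_vertices("S1"))  # ['V1', 'V2', 'V3']
```

Storing shared edges once is the point of the scheme: moving V2 in the vertex table updates both polygons automatically.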
Plane Equations
• To produce a display of a three-dimensional object, we must process the input data representation for the object through several procedures.
• These processing steps include transformation of the modeling and world-coordinate descriptions to viewing coordinates, then to device coordinates; identification of visible surfaces; and the application of surface-rendering procedures.
• For some of these processes, we need information about the spatial orientation of the individual surface components of the object.
• This information is obtained from the vertex coordinate values and the equations that describe the polygon planes.
• The equation for a plane surface can be expressed in the form
Ax + By + Cz + D = 0
• where (x, y, z) is any point on the plane, and the coefficients A, B, C, and D are constants describing the spatial properties of the plane.
• We can obtain the values of A, B, C, and D by solving a set of three plane equations using the coordinate values for three noncollinear points in the plane.
• That is, we solve the following set of simultaneous linear plane equations for the ratios A/D, B/D, and C/D:
• The solution for this set of equations can be obtained in determinant form, using Cramer's rule, as
• Expanding the determinants, we can write the calculations for the plane coefficients in the form
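The expanded-determinant calculation can be written out directly. A minimal sketch, using the standard expanded form of Cramer's rule for three noncollinear vertices (the test plane below is illustrative):

```python
def plane_coefficients(p1, p2, p3):
    """Plane Ax + By + Cz + D = 0 through three noncollinear points,
    using the expanded determinant (Cramer's rule) form."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    a = y1 * (z2 - z3) + y2 * (z3 - z1) + y3 * (z1 - z2)
    b = z1 * (x2 - x3) + z2 * (x3 - x1) + z3 * (x1 - x2)
    c = x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)
    d = (-x1 * (y2 * z3 - y3 * z2)
         - x2 * (y3 * z1 - y1 * z3)
         - x3 * (y1 * z2 - y2 * z1))
    return a, b, c, d

# The plane through (1,0,0), (0,1,0), (0,0,1) is x + y + z - 1 = 0:
print(plane_coefficients((1, 0, 0), (0, 1, 0), (0, 0, 1)))
```

(A, B, C) is the plane's normal vector, which is the quantity needed later for visible-surface tests such as back-face detection.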
POLYGON MESHES
• Some graphics packages provide several polygon functions for modeling objects.
• A single plane surface can be specified with a function such as "fillArea".
• But when object surfaces are to be tiled, it is more convenient to specify the surface facets with a mesh function.
Triangle strip
• One type of polygon mesh is the triangle strip.
• Given the coordinates for n vertices, this function produces n − 2 connected triangles, as shown in the above figure.
Quadrilateral mesh
• Another similar function is the quadrilateral mesh, which generates a mesh of (n − 1) by (m − 1) quadrilaterals, given the coordinates for an n by m array of vertices.
• Above figure shows 20 vertices forming a mesh of 12 quadrilaterals.
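Both counts can be checked with a small sketch (the vertex labels are hypothetical):

```python
def triangle_strip(vertices):
    """Given n vertices, return the n - 2 connected triangles of a
    strip: each new vertex forms a triangle with the previous two."""
    return [(vertices[i], vertices[i + 1], vertices[i + 2])
            for i in range(len(vertices) - 2)]

def quad_mesh_count(n, m):
    """Number of quadrilaterals in a mesh over an n-by-m vertex array."""
    return (n - 1) * (m - 1)

strip = triangle_strip(["V1", "V2", "V3", "V4", "V5"])
print(len(strip))             # 5 vertices give 3 connected triangles
print(quad_mesh_count(4, 5))  # 20 vertices -> 12 quadrilaterals
```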
Problem
• When polygons are specified with more than three vertices, it is possible that the vertices may not all lie in one plane.
• This can be due to numerical errors or to errors in selecting coordinate positions for the vertices.
Solution
• One way to handle this situation is simply to divide the polygons into triangles.
• Another approach that is sometimes taken is to approximate the plane parameters A, B, and C.
• We can do this with averaging methods, or we can project the polygon onto the coordinate planes.
• Using the projection method, we take
A proportional to the area of the polygon projection on the yz plane,
B proportional to the projection area on the xz plane, and
C proportional to the projection area on the xy plane.
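One well-known formulation of this projected-area approach is Newell's method, sketched below; the example polygon is illustrative only:

```python
def approximate_plane_normal(vertices):
    """Approximate plane parameters (A, B, C) for a possibly
    non-planar polygon: each component is proportional to the signed
    area of the polygon's projection on the opposite coordinate plane
    (Newell's method)."""
    a = b = c = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1, z1 = vertices[i]
        x2, y2, z2 = vertices[(i + 1) % n]  # wrap around to close it
        a += (y1 - y2) * (z1 + z2)  # projection on the yz plane
        b += (z1 - z2) * (x1 + x2)  # projection on the xz plane
        c += (x1 - x2) * (y1 + y2)  # projection on the xy plane
    return a, b, c

# A unit square lying in the z = 0 plane has its normal along z:
print(approximate_plane_normal([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]))
```

For a planar polygon this reduces to the exact plane normal; for a slightly non-planar one it gives a stable average, which is why it is preferred over picking any three vertices.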
CURVED LINES AND SURFACES
• Displays of three-dimensional curved lines and surfaces can be generated from an input set of mathematical functions defining the objects or from a set of user-specified data points.
• When functions are specified, a package can project the defining equations for a curve to the display plane and plot pixel positions along the path of the projected function.
• For surfaces, a functional description is often tessellated to produce a polygon-mesh approximation to the surface.
• Usually, this is done with triangular polygon patches to ensure that all vertices of any polygon are in one plane.
• Polygons specified with four or more vertices may not have all vertices in a single plane.
• Curve and surface equations can be expressed in either a parametric or a nonparametric form.
QUADRIC SURFACES
• A frequently used class of objects are the quadric surfaces, which are described with second-degree equations (quadratics).
• They include
• spheres,
• ellipsoids,
• tori,
• paraboloids, and
• hyperboloids.
• Quadric surfaces, particularly spheres and ellipsoids, are common elements of graphics scenes, and they are often available in graphics packages as primitives from which more complex objects can be constructed.
SPHERE
• In Cartesian coordinates, a spherical surface with radius r centered on the coordinate origin is defined as the set of points (x, y, z) that satisfy the equation
x² + y² + z² = r²
• We can also describe the spherical surface in parametric form, using latitude and longitude angles:
x = r cos φ cos θ,  y = r cos φ sin θ,  z = r sin φ
• The above figure shows the parametric coordinate position (r, θ, φ) on the surface of a sphere with radius r.
ELLIPSOID
• An ellipsoidal surface can be described as an extension of a spherical surface, where the radii in three mutually perpendicular directions can have different values.
• The Cartesian representation for points over the surface of an ellipsoid centered on the origin is
(x/rx)² + (y/ry)² + (z/rz)² = 1
• A parametric representation for the ellipsoid can be written similarly in terms of the latitude angle θ and the longitude angle φ, as in Eqn 2.
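A short sketch relating the parametric and Cartesian forms (using θ for longitude and φ for latitude, as above; with rx = ry = rz = r the sphere equations are recovered):

```python
from math import cos, sin, pi, isclose

def ellipsoid_point(rx, ry, rz, theta, phi):
    """Point on an ellipsoid from longitude theta and latitude phi."""
    x = rx * cos(phi) * cos(theta)
    y = ry * cos(phi) * sin(theta)
    z = rz * sin(phi)
    return x, y, z

# Every parametric point satisfies the Cartesian ellipsoid equation
# (x/rx)^2 + (y/ry)^2 + (z/rz)^2 = 1:
x, y, z = ellipsoid_point(2.0, 3.0, 4.0, theta=pi / 3, phi=pi / 6)
print(isclose((x / 2.0) ** 2 + (y / 3.0) ** 2 + (z / 4.0) ** 2, 1.0))
```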
SPLINE
• A spline is a flexible strip used to produce a smooth curve through a designated set of points.
• Several small weights are distributed along the length of the strip to hold it in position on the drafting table as the curve is drawn.
• The term spline curve originally referred to a curve drawn in this manner.
• In computer graphics, the term spline curve refers to any composite curve formed with polynomial sections satisfying specified continuity conditions at the boundary of the pieces.
• Splines are used in graphics applications to design curve and surface shapes, to digitize drawings for computer storage, and to specify the animation paths for the objects or the camera in a scene.
• Typical CAD applications for splines include the design of automobile bodies, aircraft and spacecraft surfaces, and ship hulls.
• The above figure shows a set of six control points interpolated with piecewise continuous polynomial sections.
Spline Specifications
• There are three equivalent methods for specifying a particular spline representation:
1. We can state the set of boundary conditions that are imposed on the spline; or
2. We can state the matrix that characterizes the spline; or
3. We can state the set of blending functions (or basis functions) that determine how specified geometric constraints on the curve are combined to calculate positions along the curve path.
VISUALIZATION OF DATA SETS
• The use of graphical methods as an aid in scientific and engineering analysis is commonly referred to as scientific visualization.
• This involves the visualization of data sets and processes that may be difficult or impossible to analyze without graphical methods.
• For example, visualization techniques are needed to deal with the output of high-volume data sources such as
• supercomputers,
• satellite and spacecraft scanners,
• radio-astronomy telescopes, and
• medical scanners.
• Similar methods employed by commerce, industry, and other nonscientific areas are sometimes referred to as business visualization.
• Data sets are classified according to their spatial distribution and according to data type.
• Two-dimensional data sets have values distributed over a surface, and three-dimensional data sets have values distributed over the interior of
• a cube,
• a sphere, or
• some other region of space.
• Data types include
• scalars,
• vectors,
• tensors, and
• multivariate data.
Visual Representations for Scalar Fields
• A scalar quantity is one that has a single value.
• Scalar data sets contain values that may be distributed in time, as well as over spatial positions.
• Also, the data values may be functions of other scalar parameters.
• Some examples of physical scalar quantities are
• energy,
• density,
• mass,
• temperature,
• pressure,
• charge,
• resistance,
• reflectivity, and
• frequency.
• A common method for visualizing a scalar data set is to use graphs or charts that show the distribution of data values.
• If the data are distributed over a surface, we could plot the data values as vertical bars rising up from the surface, or we can interpolate the data values to display a smooth surface.
Pseudo-color methods
• Pseudo-color methods are also used to distinguish different values in a scalar data set, and color-coding techniques can be combined with graph and chart methods.
• To color code a scalar data set, we choose a range of colors and map the range of data values to the color range.
• For example, blue could be assigned to the lowest scalar value, and red could be assigned to the highest value.
• Following figure gives an example of a color-coded surface plot.
• Color coding a data set can be tricky, because some color combinations can lead to misinterpretations of the data.
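A minimal sketch of such a blue-to-red color coding; the linear ramp used here is one simple choice among many possible color maps:

```python
def pseudo_color(value, vmin, vmax):
    """Map a scalar to an RGB color: the lowest value -> blue,
    the highest value -> red, linearly interpolated in between."""
    t = (value - vmin) / (vmax - vmin)  # normalize to [0, 1]
    t = max(0.0, min(1.0, t))           # clamp out-of-range data
    return (t, 0.0, 1.0 - t)            # (red, green, blue)

print(pseudo_color(0.0, 0.0, 100.0))    # pure blue: (0.0, 0.0, 1.0)
print(pseudo_color(100.0, 0.0, 100.0))  # pure red:  (1.0, 0.0, 0.0)
```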
• Contour plots are used to display isolines (lines of constant scalar value) for a data set distributed over a surface.
• The isolines are spaced at some convenient interval to show the range and variation of the data values over the region of space.
• The isolines are usually plotted as straight-line sections across each cell, as illustrated in Fig.
Visual Representations for Vector Fields
• A vector quantity V in three-dimensional space has three scalar values (Vx, Vy, Vz), one for each coordinate direction, and a two-dimensional vector has two components (Vx, Vy).
• Another way to describe a vector quantity is by giving its magnitude |V| and its direction as a unit vector u.
• As with scalars, vector quantities may be functions of position, time, and other parameters.
• Some examples of physical vector quantities are
• velocity,
• acceleration,
• force,
• electric fields,
• magnetic fields,
• gravitational fields, and
• electric current.
• One way to visualize a vector field is to plot each data point as a small arrow that shows the magnitude and direction of the vector.
• This method is most often used with cross-sectional slices, as in Fig.
• Magnitudes for the vector values can be shown by varying the lengths of the arrows, or we can make all arrows the same size but give the arrows different colors according to a selected color coding for the vector magnitudes.
• We can also represent vector values by plotting field lines or streamlines.
• Field lines are commonly used for electric, magnetic, and gravitational fields.
• The magnitude of the vector values is indicated by the spacing between field lines, and the direction is the tangent to the field, as shown in Fig.
• Streamlines can be displayed as wide arrows.
Visual Representations for Tensor Fields
• A tensor quantity in three-dimensional space has nine components and can be represented with a 3 by 3 matrix.
• Actually, this representation is used for a second-order tensor; higher-order tensors do occur in some applications, particularly general relativity.
• Some examples of physical, second-order tensors are stress and strain in a material subjected to external forces, conductivity (or resistivity) of an electrical conductor, and the metric tensor, which gives the properties of a particular coordinate space.
• The stress tensor in Cartesian coordinates, for example, can be represented as a 3 by 3 matrix of normal and shear stress components.
• Tensor quantities are frequently encountered in anisotropic materials, which have different properties in different directions.
Visual Representations for Multivariate Data Fields
• In some applications, at each grid position over some region of space, we may have multiple data values, which can be a mixture of scalar, vector, and even tensor values.
• As an example, for a fluid-flow problem, we may have fluid velocity (three components), temperature, and density values at each three-dimensional position.
• Thus, we have five scalar values to display at each position, and the situation is similar to displaying a tensor field.
• A method for displaying multivariate data fields is to construct graphical objects, sometimes referred to as glyphs, with multiple parts.
• Each part of a glyph represents a physical quantity.
• The size and color of each part can be used to display information about scalar magnitudes.
• To give directional information for a vector field, we can use a wedge, a cone, or some other pointing shape for the glyph part representing the vector.
• An example of the visualization of a multivariate data field using a glyph structure at selected grid positions is shown in Fig.
3D TRANSFORMATION
• Methods for geometric transformations and object modeling in three dimensions are extended from two-dimensional methods by including considerations for the z coordinate.
TRANSLATION
• In a three-dimensional homogeneous coordinate representation, a point is translated from position P = (x, y, z) to position P' = (x', y', z') with the matrix operation P' = T(tx, ty, tz) · P. (Eq. 1)
• Parameters tx, ty, and tz, specifying translation distances for the coordinate directions x, y, and z, are assigned any real values.
• The matrix representation in Eq. 1 is equivalent to the three equations
x' = x + tx,  y' = y + ty,  z' = z + tz
• An object is translated in three dimensions by transforming each of the defining points of the object.
• For an object represented as a set of polygon surfaces, we translate each vertex of each surface and redraw the polygon facets in the new position.
• We obtain the inverse of the translation matrix in Eq. 1 by negating the translation distances tx, ty, and tz.
• This produces a translation in the opposite direction, and the product of a translation matrix and its inverse produces the identity matrix.
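A minimal sketch of applying the 4 × 4 homogeneous translation matrix, and of its inverse, to a single point:

```python
def translate(point, tx, ty, tz):
    """Apply the homogeneous translation matrix T(tx, ty, tz)
    to a point given as (x, y, z)."""
    t = [[1, 0, 0, tx],
         [0, 1, 0, ty],
         [0, 0, 1, tz],
         [0, 0, 0, 1]]
    x, y, z = point
    p = (x, y, z, 1)  # homogeneous coordinates
    # Multiply the matrix by the column vector; drop the final 1.
    return tuple(sum(t[r][c] * p[c] for c in range(4)) for r in range(3))

print(translate((1, 2, 3), 5, -1, 2))  # (6, 1, 5)
# The inverse translation T(-tx, -ty, -tz) undoes the move:
print(translate(translate((1, 2, 3), 5, -1, 2), -5, 1, -2))  # (1, 2, 3)
```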
ROTATION
• To generate a rotation transformation for an object, we must designate an axis of rotation (about which the object is to be rotated) and the amount of angular rotation.
• Unlike two-dimensional applications, where all transformations are carried out in the xy plane, a three-dimensional rotation can be specified around any line in space.
• Following figures illustrate that positive rotation directions about the coordinate axes are counterclockwise, when looking toward the origin from a positive coordinate position on each axis.
Coordinate-Axes Rotations
• The two-dimensional z-axis rotation equations are easily extended to three dimensions:
x' = x cos θ − y sin θ
y' = x sin θ + y cos θ
z' = z     (Eqs. 1)
• Parameter θ specifies the rotation angle.
• In homogeneous coordinate form, the three-dimensional z-axis rotation equations are expressed as a 4 × 4 matrix, which we can write more compactly as P' = Rz(θ) · P. (Eq. 2)
• Following figure illustrates rotation of an object about the z axis.
• Transformation equations for rotations about the other two coordinate axes can be obtained with a cyclic permutation of the coordinate parameters x, y, and z in Eqs. 1.
• That is, we use the replacements
x → y → z → x     (3)
• as illustrated in the following figure.
• Substituting permutations 3 in Eqs. 1, we get the equations for an x-axis rotation:
y' = y cos θ − z sin θ
z' = y sin θ + z cos θ
x' = x
• which can be written in the homogeneous coordinate form P' = Rx(θ) · P.
• Cyclic permutation of the Cartesian-coordinate axes produces the three sets of coordinate-axis rotation equations.
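The z-axis equations and their cyclic permutation for the x axis can be sketched directly (angles in radians):

```python
from math import cos, sin, pi, isclose

def rotate_z(point, theta):
    """Rotate (x, y, z) about the z axis by angle theta (Eqs. 1)."""
    x, y, z = point
    return (x * cos(theta) - y * sin(theta),
            x * sin(theta) + y * cos(theta),
            z)

def rotate_x(point, theta):
    """x-axis rotation from the cyclic permutation x -> y -> z -> x."""
    x, y, z = point
    return (x,
            y * cos(theta) - z * sin(theta),
            y * sin(theta) + z * cos(theta))

# A quarter turn about z carries the x axis onto the y axis,
# leaving z unchanged:
x, y, z = rotate_z((1.0, 0.0, 2.0), pi / 2)
print(round(x, 9), round(y, 9), z)
```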
SCALING
• The matrix expression for the scaling transformation of a position P = (x, y, z) relative to the coordinate origin can be written as P' = S · P. (Eq. 1)
• where scaling parameters sx, sy, and sz are assigned any positive values.
• Explicit expressions for the coordinate transformations for scaling relative to the origin are
x' = x · sx,  y' = y · sy,  z' = z · sz     (Eq. 2)
• Scaling an object with transformation Eq. 1 changes the size of the object and repositions the object relative to the coordinate origin.
• Also, if the transformation parameters are not all equal, relative dimensions in the object are changed.
• We preserve the original shape of an object with a uniform scaling (sx = sy = sz).
• The result of scaling an object uniformly with each scaling parameter set to 2 is shown in Fig.
• Scaling with respect to a selected fixed position (xf, yf, zf) can be represented with the following transformation sequence:
1. Translate the fixed point to the origin.
2. Scale the object relative to the coordinate origin using Eq. 1.
3. Translate the fixed point back to its original position.
• This sequence of transformations is demonstrated in the following figure.
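The three steps above can be sketched as a minimal translate-scale-translate sequence applied to a single point:

```python
def scale_about_point(point, sx, sy, sz, fixed):
    """Scale a point relative to a selected fixed position
    (xf, yf, zf) using the translate-scale-translate sequence."""
    x, y, z = point
    xf, yf, zf = fixed
    # 1. translate the fixed point to the origin
    x, y, z = x - xf, y - yf, z - zf
    # 2. scale relative to the coordinate origin (Eq. 1)
    x, y, z = x * sx, y * sy, z * sz
    # 3. translate the fixed point back to its original position
    return x + xf, y + yf, z + zf

# The fixed point itself does not move:
print(scale_about_point((1.0, 2.0, 3.0), 2, 2, 2, fixed=(1.0, 2.0, 3.0)))
# Other points move away from it under uniform scaling by 2:
print(scale_about_point((2.0, 2.0, 3.0), 2, 2, 2, fixed=(1.0, 2.0, 3.0)))
```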
• The matrix representation for an arbitrary fixed-point scaling can then be expressed as the concatenation of these translate-scale-translate transformations:
T(xf, yf, zf) · S(sx, sy, sz) · T(−xf, −yf, −zf)     (Eq. 3)
• We form the inverse scaling matrix for either Eq. 1 or Eq. 3 by replacing the scaling parameters sx, sy, and sz with their reciprocals.
• The inverse matrix generates an opposite scaling transformation, so the concatenation of any scaling matrix and its inverse produces the identity matrix.
OTHER TRANSFORMATIONS
• In addition to translation, rotation, and scaling, there are various additional transformations that are often useful in three-dimensional graphics applications.
• Two of these are
• reflection and
• shear.
REFLECTIONS
• A three-dimensional reflection can be performed relative to a selected reflection axis or with respect to a selected reflection plane.
• In general, three-dimensional reflection matrices are set up similarly to those for two dimensions.
• Reflections relative to a given axis are equivalent to 180° rotations about that axis.
• Reflections with respect to a plane are equivalent to 180° rotations in four-dimensional space.
• When the reflection plane is a coordinate plane (either xy, xz, or yz), we can think of the transformation as a conversion between left-handed and right-handed systems.
• An example of a reflection that converts coordinate specifications from a right-handed system to a left-handed system (or vice versa) is shown in Fig.
• This transformation changes the sign of the z coordinates, leaving the x- and y-coordinate values unchanged.
• The matrix representation for this reflection of points relative to the xy plane is the identity matrix with the z-scaling entry set to −1, that is, diag(1, 1, −1, 1).
• Transformation matrices for inverting x and y values are defined similarly, as reflections relative to the yz plane and xz plane, respectively.
• Reflections about other planes can be obtained as a combination of rotations and coordinate-plane reflections.
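A minimal sketch of the xy-plane reflection (equivalent to multiplying by diag(1, 1, −1, 1), i.e. negating only the z coordinate):

```python
def reflect_xy(point):
    """Reflection relative to the xy plane: the sign of the z
    coordinate changes; x and y are left unchanged."""
    x, y, z = point
    return (x, y, -z)

p = (1.0, 2.0, 3.0)
print(reflect_xy(p))              # (1.0, 2.0, -3.0)
print(reflect_xy(reflect_xy(p)))  # applying it twice restores the point
```

Applying the reflection twice restoring the point mirrors the fact that the reflection matrix is its own inverse.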
SHEARS
• Shearing transformations can be used to modify object shapes.
• They are also useful in three-dimensional viewing for obtaining general projection transformations.
• In two dimensions, we discussed transformations relative to the x or y axes to produce distortions in the shapes of objects.
• In three dimensions, we can also generate shears relative to the z axis.
• As an example of three-dimensional shearing, the following transformation produces a z-axis shear:
x' = x + a · z,  y' = y + b · z,  z' = z
• Parameters a and b can be assigned any real values.
• The effect of this transformation matrix is to alter x- and y-coordinate values by an amount that is proportional to the z value, while leaving the z coordinate unchanged.
• Boundaries of planes that are perpendicular to the z axis are thus shifted by an amount proportional to z.
• An example of the effect of this shearing matrix on a unit cube is shown in Fig, for shearing values a = b = 1.
• Shearing matrices for the x axis and y axis are defined similarly.
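A minimal sketch of the z-axis shear applied to corners of a unit cube:

```python
def shear_z(point, a, b):
    """z-axis shear: x and y are offset proportionally to z,
    while z itself is unchanged."""
    x, y, z = point
    return (x + a * z, y + b * z, z)

# With a = b = 1 the top face (z = 1) of a unit cube slides by (1, 1)
# while the base (z = 0) stays put:
print(shear_z((0.0, 0.0, 1.0), 1, 1))  # (1.0, 1.0, 1.0)
print(shear_z((0.0, 0.0, 0.0), 1, 1))  # (0.0, 0.0, 0.0)
```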
VIEWING PIPELINE
• The steps for computer generation of a view of a three-dimensional scene are somewhat analogous to the processes involved in taking a photograph.
• To take a snapshot, we first need to position the camera at a particular point in space.
• Then we need to decide on the camera orientation (in Fig).
• Finally, when we snap the shutter, the scene is cropped to the size of the "window" (aperture) of the camera, and light from the visible surfaces is projected onto the camera film.
• Following figure shows the general processing steps for modeling and converting a world-coordinate description of a scene to device coordinates.
• Once the scene has been modeled, world-coordinate positions are converted to viewing coordinates.
• The viewing-coordinate system is used in graphics packages as a reference for specifying the observer viewing position and the position of the projection plane, which we can think of in analogy with the camera film plane.
• Next, projection operations are performed to convert the viewing-coordinate description of the scene to coordinate positions on the projection plane, which will then be mapped to the output device.
• Objects outside the specified viewing limits are clipped from further consideration, and the remaining objects are processed through visible-surface identification and surface-rendering procedures to produce the display within the device viewport.
VIEWING COORDINATES
• Generating a view of an object in three dimensions is similar to photographing the object.
• We can walk around and take its picture from any angle, at various distances, and with varying camera orientations.
• Whatever appears in the viewfinder is projected onto the flat film surface.
• The type and size of the camera lens determines which parts of the scene appear in the final picture.
• These ideas are incorporated into three-dimensional graphics packages so that views of a scene can be generated, given the spatial position, orientation, and aperture size of the "camera".
• To obtain a series of views of a scene, we can keep the view reference point fixed and change the direction of N, as shown in Fig.
• This corresponds to generating views as we move around the viewing-coordinate origin.
• In interactive applications, the normal vector N is the viewing parameter that is most often changed.
• By changing only the direction of N, we can view a scene from any direction except along the line of V.
• To obtain either of the two possible views along the line of V, we would need to change the direction of V.
Transformation from World to Viewing Coordinates
1. Translate the view reference point to the origin of the world-coordinate system.
2. Apply rotations to align the xv, yv, and zv axes with the world xw, yw, and zw axes, respectively.
• If the view reference point is specified at world position (x0, y0, z0), this point is translated to the world origin with the matrix transformation T(-x0, -y0, -z0).
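The translation step above can be sketched in C. This is a minimal illustration only; the function names and the column-vector convention (a point stored as [x y z 1]) are our own assumptions, not from the textbook.

```c
#include <assert.h>

/* Build the homogeneous translation T(-x0, -y0, -z0) that moves the
   view reference point (x0, y0, z0) to the world origin. */
void makeViewTranslation(double T[4][4], double x0, double y0, double z0) {
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            T[i][j] = (i == j) ? 1.0 : 0.0;   /* start from the identity */
    T[0][3] = -x0;
    T[1][3] = -y0;
    T[2][3] = -z0;
}

/* Apply a 4x4 matrix to a homogeneous column vector [x y z 1]. */
void transformPoint(const double M[4][4], const double p[4], double out[4]) {
    for (int i = 0; i < 4; i++) {
        out[i] = 0.0;
        for (int j = 0; j < 4; j++)
            out[i] += M[i][j] * p[j];
    }
}
```

Applying this matrix to the view reference point itself yields the origin, which is exactly what step 1 requires before the rotations of step 2.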
VISIBLE SURFACE DETECTION
• A major consideration in the generation of realistic graphics displays is identifying those parts of a scene that are visible from a chosen viewing position.
• There are many approaches we can take to solve this problem, and numerous algorithms have been devised for different types of applications.
• Some methods require more memory, some involve more processing time, and some apply only to special types of objects.
• The various algorithms are referred to as visible-surface detection methods.
• Sometimes these methods are also referred to as hidden-surface elimination methods.
1. Back-face detection
2. Depth-buffer method
3. A-buffer method
4. Scan-line method
5. Depth-sorting method
6. BSP-tree method
7. Area-subdivision method
8. Octree methods
9. Ray-casting method
10. Curved surfaces
11. Wireframe methods
BACK-FACE DETECTION
• A fast and simple object-space method for identifying the back faces of a polyhedron is based on the "inside-outside" tests.
• A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if
Ax + By + Cz + D < 0
• When an inside point is along the line of sight to the surface, the polygon must be a back face (we are inside that face and cannot see the front of it from our viewing position).
• We can simplify this test by considering the normal vector N to a polygon surface, which has Cartesian components (A, B, C).
• In general, if V is a vector in the viewing direction from the eye (or "camera") position, as shown in Fig, then this polygon is a back face if
V . N > 0
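The back-face test reduces to one dot product. The following C sketch is our own illustration (the function name and argument layout are assumptions, not from the textbook):

```c
#include <assert.h>

/* A polygon with plane parameters (A, B, C, D) has normal N = (A, B, C).
   For a viewing-direction vector V, the polygon is a back face when
   V . N > 0. */
int isBackFace(double Vx, double Vy, double Vz,
               double A, double B, double C) {
    return (Vx * A + Vy * B + Vz * C) > 0.0;
}
```

For example, with the eye looking along the negative z-axis, a face whose normal also points along negative z is turned away from the viewer and is reported as a back face.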
DEPTH-BUFFER METHOD
• A commonly used image-space approach to detecting visible surfaces is the depth-buffer method, which compares surface depths at each pixel position on the projection plane.
• This procedure is also referred to as the z-buffer method, since object depth is usually measured from the view plane along the z axis of a viewing system.
• Each surface of a scene is processed separately, one point at a time across the surface.
• The method is usually applied to scenes containing only polygon surfaces, because depth values can be computed very quickly and the method is easy to implement.
• But the method can be applied to nonplanar surfaces.
• With object descriptions converted to projection coordinates, each (x, y, z) position on a polygon surface corresponds to the orthographic projection point (x, y) on the view plane.
• Therefore, for each pixel position (x, y) on the view plane, object depths can be compared by comparing z values.
• The following figure shows three surfaces at varying distances along the orthographic projection line from position (x, y) in a view plane taken as the xv, yv plane.
• Surface S1 is closest at this position, so its surface intensity value at (x, y) is saved.
• As implied by the name of this method, two buffer areas are required.
• A depth buffer is used to store depth values for each (x, y) position as surfaces are processed, and the refresh buffer stores the intensity values for each position.
• Initially, all positions in the depth buffer are set to 0 (minimum depth), and the refresh buffer is initialized to the background intensity.
• Each surface listed in the polygon tables is then processed, one scan line at a time, calculating the depth (z value) at each (x, y) pixel position.
• The calculated depth is compared to the value previously stored in the depth buffer at that position.
• If the calculated depth is greater than the value stored in the depth buffer, the new depth value is stored, and the surface intensity at that position is determined and placed in the same xy location in the refresh buffer.
• We summarize the steps of a depth-buffer algorithm as follows:
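The steps above can be sketched in C. This is a minimal illustration under the text's conventions (depth buffer initialized to 0 as the minimum depth, a greater z meaning a nearer point); the names, the tiny 4x4 buffer, and the plotting interface are our own assumptions.

```c
#include <assert.h>

#define W 4
#define H 4

/* Depth buffer (one z per pixel) and refresh buffer (one intensity per
   pixel).  Following the text, larger z means "closer to the viewer". */
static double depthBuf[W * H];
static double frameBuf[W * H];

/* Step 1: depth buffer to 0 (minimum depth), refresh buffer to the
   background intensity. */
void zbufClear(double background) {
    for (int i = 0; i < W * H; i++) {
        depthBuf[i] = 0.0;
        frameBuf[i] = background;
    }
}

/* Step 2: store a surface point only if it is nearer than what is
   already recorded at pixel (x, y). */
void zbufPlot(int x, int y, double z, double intensity) {
    int i = y * W + x;
    if (z > depthBuf[i]) {
        depthBuf[i] = z;
        frameBuf[i] = intensity;
    }
}
```

Plotting several surfaces at the same pixel in any order leaves the intensity of the nearest one in the refresh buffer, which is the whole point of the method.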
A-Buffer
• An extension of the ideas in the depth-buffer method is the A-buffer method.
• The A-buffer method represents an antialiased, area-averaged, accumulation-buffer method.
• A drawback of the depth-buffer method is that it can only find one visible surface at each pixel position.
• In other words, it deals only with opaque surfaces and cannot accumulate intensity values for more than one surface, as is necessary if transparent surfaces are to be displayed.
• The A-buffer method expands the depth buffer so that each position in the buffer can reference a linked list of surfaces.
• Thus, more than one surface intensity can be taken into consideration at each pixel position, and object edges can be antialiased.
UNIT III
COLOR MODELS
• A color model is a method for explaining the properties or behavior of color within some particular context.
Chromaticity diagram:
• The chromaticity diagram is a convenient coordinate-space representation of all the colors and the mixtures of colors.
How colors are represented here:
• The various colors are represented along the perimeter of the curve.
• The corners represent the three primary colors.
Uses of Chromaticity Diagram
• Comparing color gamuts for different sets of primaries.
• Identifying complementary colors.
• Determining dominant wavelength and purity of a given color.
Hue
• This is the predominant spectral color of the received light.
• The color itself is its hue or tint.
• Green leaves have a green hue; a red apple has a red hue.
Saturation:
• This is the spectral purity of the color light.
• Saturated colors are vivid, intense, and deep.
RGB COLOR MODEL
• In this color model, the three primaries Red, Green and Blue are used.
• Here a color C is expressed as
C = RR + GG + BB
• We can represent this model in a unit cube as shown in the following figure,
• The origin represents black.
• The vertex with coordinates (1, 1, 1) is white.
• The magenta vertex is obtained by adding red and blue.
• The yellow vertex is obtained by adding green and red, and so on.
Additive model
• Intensities of the primary colors are added to produce other colors.
• Each color point within the bounds of the cube can be represented as the triple (R, G, B), where values for R, G, and B are assigned in the range from 0 to 1.
• Shades of gray are represented along the main diagonal of the cube from the origin (black) to the white vertex.
YIQ COLOR MODEL
• In the YIQ color model, luminance (brightness) information is contained in the Y parameter, while chromaticity information (hue and purity) is incorporated into the I and Q parameters.
• A combination of red, green, and blue intensities is chosen for the Y parameter to yield the standard luminosity curve.
• Since Y contains the luminance information, black-and-white television monitors use only the Y signal.
• The largest bandwidth in the NTSC video signal (about 4 MHz) is assigned to the Y information.
• Parameter I contains orange-cyan hue information that provides the flesh-tone shading, and occupies a bandwidth of approximately 1.5 MHz.
• Parameter Q carries green-magenta hue information in a bandwidth of about 0.6 MHz.
• An RGB signal can be converted to a television signal using an NTSC encoder, which converts RGB values to YIQ values, then modulates and superimposes the I and Q information on the Y signal.
• The conversion from RGB values to YIQ values is accomplished with a matrix transformation.
NTSC signals
• An NTSC video signal can be converted to an RGB signal using an NTSC decoder, which separates the video signal into the YIQ components, then converts to RGB values.
• We convert from YIQ space to RGB space with the inverse matrix transformation.
RGB into YIQ
• An RGB signal can be converted to a television signal using an NTSC encoder, which converts RGB values to YIQ values.
• This conversion from RGB values to YIQ values is accomplished with a matrix transformation.
YIQ into RGB
• An NTSC video signal can be converted to an RGB signal using an NTSC decoder.
• The decoder separates the video signal into the YIQ components, then converts to RGB values.
• We can convert from YIQ space to RGB space with the inverse matrix transformation.
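The matrices referred to above were shown as figures in the original. As a sketch, the conversions can be written out with commonly quoted NTSC coefficients; published values differ slightly in the third decimal place, so treat the exact numbers as an assumption, and the function names are our own.

```c
#include <math.h>

/* RGB -> YIQ with commonly quoted NTSC coefficients. */
void rgbToYiq(double r, double g, double b,
              double *y, double *i, double *q) {
    *y = 0.299 * r + 0.587 * g + 0.114 * b;
    *i = 0.596 * r - 0.275 * g - 0.321 * b;
    *q = 0.212 * r - 0.523 * g + 0.311 * b;
}

/* YIQ -> RGB with the (approximate) inverse matrix. */
void yiqToRgb(double y, double i, double q,
              double *r, double *g, double *b) {
    *r = y + 0.956 * i + 0.621 * q;
    *g = y - 0.272 * i - 0.647 * q;
    *b = y - 1.106 * i + 1.703 * q;
}
```

Note that any gray (R = G = B) has I = Q = 0, which is why a black-and-white monitor can display the Y signal alone.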
CMY COLOR MODEL
• In this model cyan, magenta, and yellow (CMY) are used as the primary colors.
• This model is useful for describing color output to hard-copy devices.
Video Monitors Vs Printers, Plotters:
• Video monitors produce a color pattern by combining light from the screen phosphors.
• Hard-copy devices such as plotters, by contrast, produce a color picture by coating a paper with color pigments.
Subtractive process
• It is a subtractive process.
• As we have noted, cyan can be formed by adding green and blue light.
• Therefore, when white light is reflected from cyan-colored ink, the reflected light must have no red component.
• That is, red light is absorbed, or subtracted, by the ink.
• Similarly, magenta ink subtracts the green component from incident light, and yellow subtracts the blue component.
• A unit cube representation for the CMY model is illustrated in Fig.
• In the CMY model, point (1, 1, 1) represents black, because all components of the incident light are subtracted.
• The origin represents white light.
• Equal amounts of each of the primary colors produce grays, along the main diagonal of the cube.
• A combination of cyan and magenta ink produces blue light, because the red and green components of the incident light are absorbed.
• Other color combinations are obtained by a similar subtractive process.
Printing Process
• The printing process often used with the CMY model generates a color point with a collection of four ink dots (just as an RGB monitor uses a collection of three phosphor dots).
o Three dots are used for the primary colors (cyan, magenta, and yellow).
o One dot is black.
• A black dot is included because the combination of cyan, magenta, and yellow inks typically produces dark gray instead of black.
Conversion of RGB into CMY
• We can express the conversion from an RGB representation to a CMY representation with the matrix transformation, where white is represented in the RGB system as the unit column vector.
Conversion of CMY into RGB
• Similarly, we convert from a CMY color representation to an RGB representation with the matrix transformation, where black is represented in the CMY system as the unit column vector.
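The two matrix transformations above amount to subtracting each component from 1. A minimal sketch (function names are our own):

```c
/* CMY = (1, 1, 1) - RGB: each ink subtracts its complementary primary
   from white light. */
void rgbToCmy(double r, double g, double b,
              double *c, double *m, double *y) {
    *c = 1.0 - r;
    *m = 1.0 - g;
    *y = 1.0 - b;
}

/* RGB = (1, 1, 1) - CMY: the inverse transformation. */
void cmyToRgb(double c, double m, double y,
              double *r, double *g, double *b) {
    *r = 1.0 - c;
    *g = 1.0 - m;
    *b = 1.0 - y;
}
```

For example, pure red (1, 0, 0) converts to the ink mix (0, 1, 1): no cyan, full magenta, full yellow.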
HSV COLOR MODEL
• Instead of a set of color primaries, the HSV model uses color descriptions that have a more intuitive appeal to a user.
• To give a color specification, a user selects a spectral color and the amounts of white and black that are to be added to obtain different shades, tints, and tones.
• Color parameters in this model are hue (H), saturation (S), and value (V).
• The three-dimensional representation of the HSV model is derived from the RGB cube.
• If we imagine viewing the cube along the diagonal from the white vertex to the origin (black), we see an outline of the cube that has the hexagon shape shown in Fig.
• The boundary of the hexagon represents the various hues, and it is used as the top of the HSV hexcone.
• In the hexcone, saturation is measured along a horizontal axis.
• Value is along a vertical axis through the center of the hexcone.
• Hue is represented as an angle about the vertical axis, ranging from 0° at red through 360°.
• Vertices of the hexcone are separated by 60° intervals.
• Yellow is at 60°, green at 120°, and cyan opposite red at H = 180°.
• Complementary colors are 180° apart.
ANIMATION
• Computer animation generally refers to any time sequence of visual changes in a scene.
• In addition to changing object position with translations or rotations, a computer-generated animation could display time variations in object size, color, transparency, or surface texture.
• Computer animations can also be generated by changing camera parameters, such as position, orientation, and focal length.
• And we can produce computer animations by changing lighting effects or other parameters and procedures associated with illumination and rendering.
DESIGN OF ANIMATION SEQUENCES
• In general, an animation sequence is designed with the following steps:
1. Storyboard layout
2. Object definitions
3. Key-frame specifications
4. Generation of in-between frames
Storyboard Layout
• The storyboard is an outline of the action.
• It defines the motion sequence as a set of basic events that are to take place.
• Depending on the type of animation to be produced, the storyboard could consist of a set of rough sketches or it could be a list of the basic ideas for the motion.
Object Definition
• An object definition is given for each participant in the action.
• Objects can be defined in terms of basic shapes, such as polygons or splines.
• In addition, the associated movements for each object are specified along with the shape.
Keyframe
• A keyframe is a detailed drawing of the scene at a certain time in the animation sequence.
• Within each key frame, each object is positioned according to the time for that frame.
• Some key frames are chosen at extreme positions in the action.
• Others are spaced so that the time interval between key frames is not too great.
• More key frames are specified for intricate motions than for simple, slowly varying motions.
Generation of in-between frames
• In-betweens are the intermediate frames between the key frames.
• The number of in-betweens needed is determined by the media to be used to display the animation.
• Film requires 24 frames per second, and graphics terminals are refreshed at the rate of 30 to 60 frames per second.
• Typically, time intervals for the motion are set up so that there are from three to five in-betweens for each pair of key frames.
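In-between positions are typically obtained by linear interpolation between corresponding vertices of two key frames. A minimal sketch (the struct and function names are our own assumptions):

```c
typedef struct { double x, y; } Vec2;

/* One in-between position of a vertex between key frames a and b:
   t = 0 gives frame a, t = 1 gives frame b.
   For n in-betweens use t = i / (n + 1.0) for i = 1 .. n. */
Vec2 inbetween(Vec2 a, Vec2 b, double t) {
    Vec2 p;
    p.x = a.x + t * (b.x - a.x);
    p.y = a.y + t * (b.y - a.y);
    return p;
}
```

With three in-betweens per key-frame pair, a vertex would be sampled at t = 0.25, 0.5, and 0.75 between the two key positions.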
RASTER ANIMATIONS
• On raster systems, we can generate real-time animation in limited applications using raster operations.
• Two-dimensional rotations in multiples of 90° are also simple to perform, although we can rotate rectangular blocks of pixels through arbitrary angles using antialiasing procedures.
• To rotate a block of pixels, we need to determine the percent of area coverage for those pixels that overlap the rotated block.
• Sequences of raster operations can be executed to produce real-time animation of either two-dimensional or three-dimensional objects, as long as we restrict the animation to motions in the projection plane.
• Then no viewing or visible-surface algorithms need be invoked.
• We can also animate objects along two-dimensional motion paths using the color-table transformations.
• Here we predefine the object at successive positions along the motion path, and set the successive blocks of pixel values to color-table entries.
• We set the pixels at the first position of the object to "on" values, and we set the pixels at the other object positions to the background color.
• The animation is then accomplished by changing the color-table values so that the object is "on" at successive positions along the animation path as the preceding position is set to the background intensity (Fig).
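One step of this color-table technique can be sketched as follows. This is a simplified illustration under our own assumptions (one table entry per predefined object position, made-up names); a real system would update hardware lookup-table entries.

```c
#define NPOS 5   /* object pre-drawn at 5 successive positions */

/* One animation step: turn the colour-table entry for the current
   position back to the background value and light up the next one.
   Only the colour table changes; the frame buffer is untouched. */
void stepColorTable(double table[NPOS], int *cur, double on, double bg) {
    table[*cur] = bg;              /* erase the preceding position */
    *cur = (*cur + 1) % NPOS;
    table[*cur] = on;              /* show the object at the next position */
}
```

Because only a handful of table entries change per frame, the object appears to move without any pixels being redrawn.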
KEY-FRAME SYSTEMS
• We generate each set of in-betweens from the specification of two (or more) key frames.
• Motion paths can be given with a kinematic description as a set of spline curves, or the motions can be physically based by specifying the forces acting on the objects to be animated.
• For complex scenes, we can separate the frames into individual components or objects called cels (celluloid transparencies), a term borrowed from cartoon animation.
• Given the animation paths, we can interpolate the positions of individual objects between any two times.
• With complex object transformations, the shapes of objects may change over time.
• Examples are clothes, facial features, magnified detail, evolving shapes, exploding or disintegrating objects, and transforming one object into another object.
• If all surfaces are described with polygon meshes, then the number of edges per polygon can change from one frame to the next.
• Thus, the total number of line segments can be different in different frames.
MORPHING
• Transformation of object shapes from one form to another is called morphing, which is a shortened form of metamorphosis.
• Morphing methods can be applied to any motion or transition involving a change in shape.
• Given two key frames for an object transformation, we first adjust the object specification in one of the frames so that the number of polygon edges (or the number of vertices) is the same for the two frames.
• This preprocessing step is illustrated in Fig.
• A straight-line segment in key frame k is transformed into two line segments in key frame k + 1.
• Since key frame k + 1 has an extra vertex, we add a vertex between vertices 1 and 2 in key frame k to balance the number of vertices (and edges) in the two key frames.
• Using linear interpolation to generate the in-betweens, we transition the added vertex in key frame k into vertex 3' along the straight-line path shown in Fig. 16-7.
• An example of a triangle linearly expanding into a quadrilateral is given in Fig. 16-8.
• Figures 16-9 and 16-10 show examples of morphing in television advertising.
OPENGL
(Open Graphics Library)
OpenGL is the premier environment for developing portable, interactive 2D and 3D graphics applications.
Advantages:
• OPENGL is a truly open, vendor-neutral, multiplatform graphics standard.
• Stable.
• Reliable and portable.
• Scalable.
• Easy to use.
• Well documented.
Features:
• It supports 3D transformations.
• It supports different color models.
• It supports lighting (flat shading, Gouraud shading, Phong shading).
• It supports rendering.
• It supports different modeling techniques.
• It supports other special effects (atmospheric fog, α-blending, motion blur).
OPENGL OPERATION:
GLUT (OpenGL Utility Toolkit):
• It is a window-system-independent toolkit for writing OPENGL programs.
• It implements a simple windowing application programming interface (API) for OPENGL.
• GLUT provides a portable API: one can write a single OPENGL program that works across all PC and workstation OS platforms.
Sample Program:
void main (int argc, char** argv)
{
glutInit (&argc, argv);
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize (640, 480);
glutInitWindowPosition (100, 150);
glutCreateWindow ("my first attempt");
glutDisplayFunc (myDisplay);
glutReshapeFunc (myReshape);
glutMouseFunc (myMouse);
glutKeyboardFunc (myKeyboard);
myInit ();
glutMainLoop ();
}
• glutInit (&argc, argv):
o It initializes the OPENGL Utility Toolkit. Its arguments are the standard ones for parsing information about the command line.
• glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB):
o This function specifies how the display should be initialized.
o The argument indicates a single display buffer with the RGB color model.
• glutInitWindowSize (640, 480):
o This function specifies that the screen window should initially be 640 pixels wide by 480 pixels high.
• glutInitWindowPosition (100, 150):
o This function specifies that the window's upper-left corner should be positioned on the screen 100 pixels over from the left edge and 150 pixels down from the top.
• glutCreateWindow ("my first attempt"):
o This function actually opens and displays the screen window, putting the title "my first attempt" in its title bar.
• glutDisplayFunc (myDisplay):
o myDisplay is registered as the function to be called whenever the system determines that a window should be redrawn on the screen.
• glutReshapeFunc (myReshape):
o Screen windows can be reshaped by the user, usually by dragging a corner of the window to a new position with the mouse; myReshape is registered as the function to be called when this happens.
• glutMouseFunc (myMouse):
o When one of the mouse buttons is pressed or released, a mouse event is issued.
o myMouse is registered as the function to be called when a mouse event occurs.
• glutKeyboardFunc (myKeyboard):
o This command registers the function myKeyboard() with the event of pressing or releasing a key on the keyboard.
BASIC GRAPHICS PRIMITIVES
• OPENGL provides tools for drawing all of the output primitives such as points, lines, polygons, and polylines.
• They are defined by one or more vertices.
• To draw objects in OPENGL, you pass it a list of vertices.
• The list occurs between the two OPENGL function calls glBegin () and glEnd ().
glBegin (GL_POINTS);
glVertex2i (100, 50);
glVertex2i (100, 130);
glVertex2i (150, 130);
glEnd ();
Format of OPENGL commands:
OPENGL Data Types:
Suffix  Data type                Typical C/C++ type              OPENGL type name
b       8-bit integer            signed char                     GLbyte
s       16-bit integer           short                           GLshort
i       32-bit integer           int or long                     GLint, GLsizei
f       32-bit floating point    float                           GLfloat, GLclampf
d       64-bit floating point    double                          GLdouble, GLclampd
ub      8-bit unsigned number    unsigned char                   GLubyte, GLboolean
us      16-bit unsigned number   unsigned short                  GLushort
ui      32-bit unsigned number   unsigned int or unsigned long   GLuint, GLenum, GLbitfield
• The size of a point can be set with glPointSize(), which takes one floating-point argument.
• The color of a drawing can be specified using
glColor3f(red, green, blue);
• where the values of red, green and blue vary between 0.0 and 1.0.
• To draw a line between (40, 100) and (202, 96) we use
glBegin (GL_LINES);
glVertex2i (40, 100);
glVertex2i (202, 96);
glEnd ();
• A polyline is a collection of line segments joined end to end.
• In OPENGL, a polyline is called a line strip and is drawn by specifying the vertices in turn, between glBegin (GL_LINE_STRIP) and glEnd ().
glBegin (GL_LINE_STRIP);
glVertex2i (20, 10);
glVertex2i (50, 10);
glVertex2i (20, 80);
glVertex2i (50, 80);
glEnd ();
glFlush ();
• Lines can also be drawn using moveto () and lineto ().
• To draw an aligned rectangle:
glRecti (GLint x1, GLint y1, GLint x2, GLint y2);
Other Graphics Primitives in OPENGL:
GL_TRIANGLES: Takes the listed vertices three at a time and draws a separate triangle for each.
GL_QUADS: Takes the vertices four at a time and draws a separate quadrilateral for each.
GL_TRIANGLE_STRIP: Draws a series of triangles based on triples of vertices: V0, V1, V2, then V2, V1, V3, then V2, V3, V4, etc.
GL_TRIANGLE_FAN: Draws a series of connected triangles based on triples of vertices: V0, V1, V2, then V0, V2, V3, then V0, V3, V4, etc.
GL_QUAD_STRIP: Draws a series of quadrilaterals based on foursomes of vertices: first V0, V1, V3, V2, then V2, V3, V5, V4, then V4, V5, V7, V6, etc.
Example:
• The following code fragment specifies a 3D polygon to be drawn, in this case a simple square.
• Note that the same square could also have been drawn using the GL_QUADS and GL_QUAD_STRIP primitives.
GLfloat p1[3] = {0, 0, 1};
GLfloat p2[3] = {1, 0, 1};
GLfloat p3[3] = {1, 1, 1};
GLfloat p4[3] = {0, 1, 1};
glBegin (GL_POLYGON);
glVertex3fv (p1);
glVertex3fv (p2);
glVertex3fv (p3);
glVertex3fv (p4);
glEnd ();
DRAWING 3D SCENES WITH OPENGL
Viewing process & Graphics Pipeline
• All of our 2D drawing so far has actually used a special case of 3D viewing, based on a simple "parallel projection".
• We have been using the "camera" shown below.
• The "eye" that is viewing the scene looks along the z-axis at the "window".
• The view volume of the camera is a rectangular parallelepiped.
• Its four side walls are determined by the border of the window.
• The other two walls are determined by a near plane and a far plane.
• Points lying inside the view volume are projected onto the window along lines parallel to the z-axis.
• We ignore the z component of those points, so the 3D point (x1, y1, z1) is projected to (x1, y1, 0).
• Points lying outside the view volume are clipped off.
• A separate viewport transformation maps the projected points from the window to the viewport on the display device.
• The following figure shows a camera immersed in a scene.
• The scene consists of a block.
• The image produced by the camera is also shown.
• The graphics pipeline implemented by OpenGL does its major work through matrix transformations.
• The three important matrices are
i. Model view matrix
ii. Projection matrix
iii. Viewport matrix
Model View matrix:
οΏ½ It basically provides what we have calling the CT,
οΏ½ It combines two effects.
β’ sequence of modelling transformations applied to objects
β’ transformation that orients and positions the camera in space.
οΏ½ The model view matrix is a single matrix in the actual pipeline.
οΏ½ The modelling matrix is applied and then the viewing matrix.
οΏ½ So the model matrix is in fact the produced VM.
VM = Viewing matrix.
M = modelling matrix.
Projection matrix:
• It scales and shifts each vertex in a particular way, so that all vertices that lie inside the view volume will lie inside a standard cube.
• The projection matrix effectively squashes the view volume into the cube centred at the origin.
• The projection matrix also reverses the sense of the z-axis, so that increasing values of z correspond to increasing depth of a point from the eye.
• The following figure shows how the block is transformed into a different block.
• Clipping is now performed, which eliminates the portion of the block that lies outside the standard cube.
Viewport matrix:
• Finally, the viewport matrix maps the surviving portion of the block into a "3D viewport".
• This matrix maps the standard cube into a block shape whose x and y values extend across the viewport, and whose z component extends from 0 to 1.
• This is shown in the following figure.
DRAWING THREE DIMENSIONAL OBJECTS
1. 3D Viewing Pipeline
• The world-coordinate area selected for display is called a window.
• An area on a display device to which a window is mapped is called a viewport.
• The window defines what is to be viewed; the viewport defines where it is to be displayed.
• Viewports are typically defined within the unit square.
• This provides a means for separating the viewing and other transformations from specific output-device requirements, so that the graphics package is largely device-independent.
OPENGL functions for setting up transformations
Modeling Transformation (Model View Matrix) – glTranslatef (), glRotatef (), glScalef ()
Viewing Transformation (Model View Matrix) – gluLookAt ()
Projection Transformation (Projection Matrix) – glFrustum (), gluPerspective (), glOrtho (), gluOrtho2D ()
Viewport Transformation – glViewport ()
To apply transformations in the 2D case, OPENGL uses:
• glScaled (sx, sy, 1.0):
– Postmultiply CT by a matrix that performs a scaling by sx in x and by sy in y;
– put the result back into CT (current transformation). No scaling in z is done.
• glTranslated (dx, dy, 0):
– Postmultiply CT by a matrix that performs a translation by dx in x and by dy in y;
– put the result back into CT (current transformation). No translation in z is done.
• glRotated (angle, 0, 0, 1):
– Postmultiply CT by a matrix that performs a rotation through angle degrees about the z-axis. Put the result back into CT.
– To initialize the CT to the identity transformation, OPENGL provides glLoadIdentity ().
3D Viewing – Model View Matrix
Code:
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();
// Viewing transform
gluLookAt (eyeX, eyeY, eyeZ, lookAtX, lookAtY, lookAtZ, upX, upY, upZ);
// Model transform
glTranslatef (delX, delY, delZ);
glRotatef (angle, i, j, k);
glScalef (multX, multY, multZ);
UNIT IV
INTRODUCTION TO SHADING MODEL
• A shading model dictates how light is scattered or reflected from a surface.
• A shading model frequently used in graphics has two types of light sources:
o Point light sources
o Ambient light
• These light sources "shine" on the various surfaces of the objects.
• The incident light interacts with the surface in three different ways:
o Some is absorbed by the surface and converted into heat.
o Some is reflected from the surface.
o Some is transmitted into the interior of the object, as in the case of a piece of glass.
Black body
• If all the incident light is absorbed, the object appears black and is known as a black body.
• We focus on the part of the light that is reflected or scattered from the surface.
• Some of this reflected light travels to and reaches the eye, causing the object to be seen.
• There are two types of reflection of incident light:
o Diffuse scattering
o Specular reflection
DIFFUSE SCATTERING
• It occurs when some of the incident light penetrates the surface slightly and is re-radiated uniformly in all directions.
• Scattered light interacts strongly with the surface, so its color is usually affected by the nature of the material out of which the surface is made.
Computing the diffuse component
• Suppose that light falls from a point source onto one side of the facet.
• A fraction of the light is reradiated diffusely in all directions from that side.
• Some fraction of the reradiated part reaches the eye, with an intensity denoted by Id.
• The following figure (a) shows the cross section of a point source illuminating a facet S.
• In figure (b), the facet is turned partially away from the light source through an angle θ.
Lambert's Law
• The area subtended is now only the fraction cos(θ).
• So the brightness of S is reduced by that same fraction.
• This relationship between brightness and surface orientation is often called Lambert's law.
• For the intensity of the diffuse component, we can adopt the expression

        Id = Is pd (s·m) / (|s||m|)

• where Is is the intensity of the light source,
• pd is the diffuse reflection coefficient,
• s is the vector toward the light source, and m is the normal vector to the surface.
• The following figure shows how a sphere appears when it reflects diffuse light, for six different reflection coefficients.
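The diffuse term above can be sketched in a few lines of code. This is an illustrative helper (the names Vec3 and diffuseIntensity are not from the text), with the dot product clamped at zero for facets turned away from the source, as Lambert's law requires:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static double len(const Vec3& v) { return std::sqrt(dot(v, v)); }

// Diffuse intensity Id = Is * pd * (s.m)/(|s||m|), clamped to 0 when
// the facet faces away from the light (Lambert's law).
double diffuseIntensity(double Is, double pd, const Vec3& s, const Vec3& m)
{
    double cosTheta = dot(s, m) / (len(s) * len(m));
    if (cosTheta < 0.0) cosTheta = 0.0;   // facet turned away: no diffuse light
    return Is * pd * cosTheta;
}
```

For example, with the light directly along the normal the full Is·pd is returned, and the value falls off as cos(θ) as the facet turns away.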
SPECULAR REFLECTION
• Real objects do not scatter light uniformly in all directions.
• So a specular component is added to the shading model.
• Specular reflection causes highlights, which can add significantly to the realism of a picture when objects are shiny.
FLAT SHADING
• When a face is flat (like the roof of a barn) and the light sources are quite distant, the diffuse light component varies little over different points on the roof.
• In such cases it is reasonable to use the same color for every pixel covered by the face.
• Flat shading is established in OpenGL by using the command:
        glShadeModel(GL_FLAT);
• The following figure shows a buckyball and a sphere rendered by means of flat shading.
• The individual faces are clearly visible on both objects.
• Edges between faces actually appear more pronounced than they would be on an actual physical object, due to a phenomenon in the eye known as lateral inhibition.
• Specular highlights are rendered poorly with flat shading.
• This is because an entire face is filled with a color that was computed at only one vertex.
SMOOTH SHADING
• Smooth shading attempts to de-emphasize edges between faces by computing colors at more points on each face.
• The two principal types of smooth shading are
  1. Gouraud Shading
  2. Phong Shading
• OpenGL does only Gouraud shading.
GOURAUD SHADING
• Computationally speaking, Gouraud shading is modestly more expensive than flat shading.
• Gouraud shading is established in OpenGL with the use of the function:
        glShadeModel(GL_SMOOTH);
• The following figure shows a buckyball and a sphere rendered by means of Gouraud shading.
• The buckyball looks the same as when it was rendered with flat shading.
• This is because the same color is associated with each vertex of a face, so interpolation changes nothing.
• But the sphere looks much smoother.
• The edges of the faces are replaced by a smoothly varying color across the object.
• The following figure suggests how Gouraud shading reveals the "underlying" surface approximated by the mesh.
• The polygonal surface is shown in cross section, with vertices V1, V2, etc.
• The imaginary smooth surface is suggested as well.
• Properly computed vertex normals m1, m2, etc., are perpendicular to this imaginary surface, so that the correct normals will be used for shading.
• The color is then made to vary smoothly between vertices.
• Gouraud shading does not picture highlights well.
• Highlights are better reproduced by using Phong shading.
PHONG SHADING
• Greater realism can be achieved, particularly with regard to highlights on shiny objects, by a better approximation of the normal vector to the face at each pixel.
• This type of shading is called Phong shading.
• When computing Phong shading, we find the normal vector at each point on the face of the object, and we apply the shading model there to find the color.
• We compute the normal vector at each pixel by interpolating the normal vectors at the vertices of the polygon.
• The following figure shows a projected face, with normal vectors m1, m2, m3 and m4 indicated at the four vertices.
• For the scan line ys, the vectors mleft and mright are found by linear interpolation.
• For instance:
• This interpolated vector must be normalized to unit length before it is used in the shading formula.
• Once mleft and mright are known, they are interpolated to form a normal vector at each x along the scan line.
• The following figure shows an object rendered by using Gouraud shading and the same object rendered by using Phong shading.
• In Phong shading, the direction of the normal vector varies smoothly from point to point and more closely approximates that of the underlying surface.
• The production of specular highlights is much more faithful than with Gouraud shading.
• It produces a more realistic rendering.
Drawback:
• Phong shading is relatively slow.
• More computation is required per pixel.
• Phong shading can take six to eight times longer than Gouraud shading.
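The per-pixel step of Phong shading, interpolating two vertex normals and renormalizing, can be sketched as below. Vec3 and interpNormal are illustrative names (not OpenGL calls), and the fraction f plays the role of the position along the scan line:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Linearly interpolate between two vertex normals a and b (f in [0,1]),
// then normalize to unit length, as Phong shading does at each pixel.
Vec3 interpNormal(const Vec3& a, const Vec3& b, double f)
{
    Vec3 m{ a.x + f*(b.x - a.x), a.y + f*(b.y - a.y), a.z + f*(b.z - a.z) };
    double len = std::sqrt(m.x*m.x + m.y*m.y + m.z*m.z);
    return Vec3{ m.x/len, m.y/len, m.z/len };   // unit-length normal
}
```

Note that the normalization is what distinguishes this from plain color interpolation: halfway between (1,0,0) and (0,1,0) the raw average (0.5, 0.5, 0) is shorter than unit length and must be rescaled before use in the shading formula.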
Why is OpenGL not set up to do Phong shading?
• Because OpenGL applies the shading model once per vertex, right after the modelview transformation.
• Normal vector information is not passed to the rendering stage following the perspective transformation and division.
ADDING TEXTURE TO FACES:
• The realism of an image is greatly enhanced by adding surface texture to the various faces of a mesh object.
• The basic function is
        texture(s, t)
• This function produces a color or intensity value for each value of s and t between 0 and 1.
TYPES:
• There are numerous sources of textures.
• The most common textures are
  - Bitmap textures
  - Procedural textures
BITMAP TEXTURES:
• Textures are often formed from bitmap representations of images, such as a digitized photo, clip art, or an image computed previously in some program.
TEXELS:
• A texture formed from a bitmap consists of an array, say txtr[c][r], of color values, often called texels.
• If the array has C columns and R rows, the indices c and r vary from 0 to C-1 and 0 to R-1, respectively.
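The mapping from continuous texture coordinates (s, t) in [0, 1] to integer texel indices can be sketched as below; texelCol and texelRow are hypothetical helper names, not part of any texture library:

```cpp
// Map a texture coordinate in [0, 1] to a texel index in an array with
// C columns (or R rows). The value 1.0 is clamped to the last index so
// every coordinate in the closed interval lands inside the array.
int texelCol(double s, int C)
{
    int c = (int)(s * C);
    return (c > C - 1) ? C - 1 : c;
}

int texelRow(double t, int R)
{
    int r = (int)(t * R);
    return (r > R - 1) ? R - 1 : r;
}
```

With a 256-column texture, s = 0 selects column 0, s = 0.5 selects column 128, and s = 1.0 clamps to column 255.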
PROCEDURAL TEXTURE:
• Alternatively, we can define a texture by a mathematical function or procedure; for example, the following spherical shape.
• It can be generated by the function:
float fakeSphere(float s, float t)
{
    float r = sqrt((s-0.5) * (s-0.5) + (t-0.5) * (t-0.5));
    if (r < 0.3)
        return 1 - r/0.3;   // sphere intensity
    else
        return 0.2;         // dark background
}
PASTING THE TEXTURE ONTO A FLAT SURFACE:
• Since texture space itself is flat, it is simplest to paste the texture onto a flat surface.
• Example:
  To define a quadrilateral face and to position a texture on it, four texture coordinates and four 3D points are passed to OpenGL functions:
        glBegin(GL_QUADS);
            glTexCoord2f(0.0, 0.0); glVertex3f(1.0, 2.5, 1.5);
            glTexCoord2f(0.0, 0.6); glVertex3f(1.0, 3.7, 1.5);
            glTexCoord2f(0.8, 0.6); glVertex3f(2.0, 3.7, 1.5);
            glTexCoord2f(0.8, 0.0); glVertex3f(2.0, 2.5, 1.5);
        glEnd();
MAPPING A SQUARE TO A RECTANGLE:
• The above figure shows the common case in which the four corners of the texture square are associated with the four corners of a rectangle.
Producing a repeated texture
• The above figure shows the use of texture coordinates that tile the texture, making it repeat.
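The arithmetic behind such tiling is simply wrapping a coordinate outside [0, 1) back into the unit square. This is a sketch of the idea (comparable in spirit to GL_REPEAT, but not an OpenGL call itself):

```cpp
#include <cmath>

// Wrap a texture coordinate back into [0, 1), so that coordinates
// greater than 1 (or negative) re-use the same texture repeatedly.
double wrapCoord(double s)
{
    return s - std::floor(s);   // e.g. 2.3 -> 0.3, -0.25 -> 0.75
}
```

So a quad given texture coordinates running from 0 to 3 samples the same texture three times across its width.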
ADDING SHADOWS OF OBJECTS:
• Shadows make an image much more realistic; they show how the objects are positioned with respect to each other.
• The following figure shows a cube and a sphere with and without shadows.
• Shadows are absent in figure A, so it is impossible to see how far above the plane the cube and the sphere are floating.
• The shadows in figure B give useful hints as to the positions of the objects.
• Generally, a shadow conveys a lot of information.
• Computing the shape of a shadow can be costly.
• In the above figure, the shape of the shadow is determined by the projections of each of the faces of the box onto the plane of the floor.
• This provides the key for drawing the shadows.
SHADOW BUFFER:
• This method for drawing shadows uses a variant of the depth buffer that performs the removal of hidden surfaces.
• An auxiliary second depth buffer, called a shadow buffer, is employed for each light source. This requires a lot of memory.
• The rendering of shadows is done in two stages:
  i. Loading the buffer.
  ii. Rendering the scene.
BUILDING A CAMERA IN A PROGRAM
• In order to have fine control over camera movements, we create and manipulate our own camera in a program.
• We create a "camera" class that knows how to do all the things a camera does.
• Doing this is very simple and the payoff is high.
• In a program, we create a camera object called, say, cam and adjust it with functions such as the following:
        cam.set(eye, look, up);   // initialize the camera
        cam.slide(-1, 0, -2);     // slide the camera forward and to the left
        cam.roll(30);             // roll it through 30 degrees
        cam.yaw(20);              // yaw it through 20 degrees
        etc.
• The following code shows the basic definition of the Camera class:
class Camera
{
  private:
    point3 eye;
    vector3 u, v, n;
    double viewAngle, aspect, nearDist, farDist;
    void setModelViewMatrix();
  public:
    Camera();
    void set(point3 eye, point3 look, vector3 up);
    void roll(float angle);
    void pitch(float angle);
    void yaw(float angle);
    void slide(float delU, float delV, float delN);
    void setShape(float vAng, float asp, float nearD, float farD);
};
• Here point3 and vector3 are the basic data types.
• The utility routine setModelViewMatrix() communicates the modelview matrix to OpenGL.
• It is used only by member functions of the class and needs to be called after each change is made to the camera's position.
• The following code shows a possible implementation of this routine:
void Camera :: setModelViewMatrix(void)
{
    float m[16];
    vector3 eVec(eye.x, eye.y, eye.z);   // a vector version of eye
    m[0]=u.x;  m[4]=u.y;  m[8]=u.z;   m[12]=-eVec.dot(u);
    m[1]=v.x;  m[5]=v.y;  m[9]=v.z;   m[13]=-eVec.dot(v);
    m[2]=n.x;  m[6]=n.y;  m[10]=n.z;  m[14]=-eVec.dot(n);
    m[3]=0;    m[7]=0;    m[11]=0;    m[15]=1.0;
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(m);   // load OpenGL's modelview matrix
}
• The camera can be moved in two basic ways:
  i. It can "slide" in three dimensions.
  ii. It can be rotated about any of its three coordinate axes.
Sliding the Camera:
• Sliding a camera means to move it along one of its own axes, that is, in the u, v or n direction, without rotating it.
  movement along n : forward or backward
  movement along u : left or right
  movement along v : up or down
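A sketch of the slide operation, assuming unit-length axis vectors. The names Vec3 and slideEye are illustrative stand-ins for the point3/vector3 types used in the Camera class; the eye simply moves by the given amounts along u, v and n:

```cpp
struct Vec3 { double x, y, z; };

// Slide the eye along the camera's own axes without rotating:
// eye' = eye + delU*u + delV*v + delN*n
Vec3 slideEye(Vec3 eye, Vec3 u, Vec3 v, Vec3 n,
              double delU, double delV, double delN)
{
    eye.x += delU*u.x + delV*v.x + delN*n.x;
    eye.y += delU*u.y + delV*v.y + delN*n.y;
    eye.z += delU*u.z + delV*v.z + delN*n.z;
    return eye;
}
```

With the default axes, cam.slide(-1, 0, -2) moves the eye one unit along -u (to the left) and two units along -n (forward).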
Rotating the Camera:
• We want to roll, pitch or yaw the camera.
• Each of these involves a rotation of the camera about one of its own axes.
• To roll the camera is to rotate it about its own n-axis.
• This means that both the directions u and v must be rotated, as shown in the figure.
• We form two new axes u' and v' that lie in the same plane as u and v.
• The functions pitch() and yaw() are implemented in a similar fashion.
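The rotation of u and v about n can be sketched as below; Vec3 and rollAxes are illustrative names. The new axes follow u' = cos(a)u + sin(a)v and v' = -sin(a)u + cos(a)v:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Roll: rotate the camera's u and v axes about its n-axis by `degrees`.
// Pitch and yaw rotate the other two axis pairs in the same fashion.
void rollAxes(Vec3& u, Vec3& v, double degrees)
{
    double a  = 3.14159265358979323846 / 180.0 * degrees;
    double cs = std::cos(a), sn = std::sin(a);
    Vec3 t = u;   // remember the old u
    u = Vec3{  cs*t.x + sn*v.x,  cs*t.y + sn*v.y,  cs*t.z + sn*v.z };
    v = Vec3{ -sn*t.x + cs*v.x, -sn*t.y + cs*v.y, -sn*t.z + cs*v.z };
}
```

A 90-degree roll sends u to the old v direction and v to the old -u direction, which is a quick sanity check on the signs.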
CREATING SHADED OBJECTS
• Shading is a process used in drawing for depicting levels of darkness on paper by applying media more densely or with a darker shade for darker areas, and less densely or with a lighter shade for lighter areas.
• There are various techniques of shading, including cross hatching, where perpendicular lines of varying closeness are drawn in a grid pattern to shade an area.
• The closer the lines are together, the darker the area appears.
• Likewise, the farther apart the lines are, the lighter the area appears.
• Light patterns, such as objects having light and shaded areas, help when creating the illusion of depth on paper.
• Fly a camera through space looking at various polygonal mesh objects.
• Include ambient, diffuse and specular light components.
• Provide a keystroke that switches between flat and smooth shading.
SHADING METHODS
1. Circulism:
  - This is a very popular shading method among artists.
  - The idea is to draw very tiny circles that overlap and intertwine.
  - Building up tone can be tedious, but the results are worth it.
  - This shading method is great for rendering a realistic skin texture.
  - Use a light touch and build up tone.
2. Blended circulism:
  - Graphite is scribbled onto the paper just as in the last method.
  - Using a blending stump, the graphite is blended in small circular motions.
  - This shading method is also great for skin textures.
3. Dark Blacks:
  - If you want dark blacks, try using charcoal.
  - For a dark tone, apply the graphite to the paper.
  - Any time you deal with dark tones and graphite, there will be a shine that results.
  - This happens because the tooth of the paper absorbs the graphite quickly and there are extra layers left on top. Glare/shine is a reality when working with graphite.
4. Loose Cross Hatching:
  - It is simple and effective.
  - It looks good, too.
  - The basic idea of crosshatching is to overlap lines.
  - Start by drawing a set of diagonal lines next to each other.
  - Then rotate the drawing 90 degrees and draw another set of diagonal lines that overlap the first set.
  - This can be repeated numerous times to build up tone. Crosshatching can be as tight or as loose as you like.
5. Tight Cross Hatching:
  - Using the ideas from loose crosshatching, this shading method takes it a little further.
  - Tone is built up through repetition and a soft touch.
  - This shading method works really well for animals.
  - It's not perfect, and some paper tooth will show through.
6. Powder Shading:
  - Powder shading is a sketching shading method.
  - In this style, stumping powder and paper stumps are used to draw a picture.
  - The stumping powder is smooth and doesn't have any shiny particles.
  - The poster created with powder shading looks more beautiful than the original. The paper to be used should have small grains on it so that the powder remains on the paper.
RENDERING THE TEXTURE:
• Rendering the texture in a face F is similar to Gouraud shading: it proceeds across the face pixel by pixel.
• For each pixel it must determine the corresponding texture coordinates (s, t) and set the pixel to the proper color.
• The following figure shows the camera taking a snapshot of a face F with texture pasted onto it, and the rendering in progress.
• The scan line y is being filled from xleft to xright.
• For each x along this scan line, it must compute the correct position on the face, p(xs, ys), and from that obtain the correct position (s*, t*) within the texture.
• The following diagram shows the incremental calculation of texture coordinates.
DRAWING SHADOWS
• Make one of the objects in the scene a flat planar surface, on which the shadows of other objects are seen.
• A simple way of drawing a drop shadow of a rectangular object is to draw a gray or black area underneath, and offset from, the object.
• In general, a drop shadow is a copy in black or gray of the object, drawn in a slightly different position. Realism may be increased by:
  i. Darkening the colors of the pixels where the shadow is cast instead of making them gray. This can be done by alpha blending the shadow with the area it is cast on.
  ii. Softening the edges of the shadow. This can be done by adding Gaussian blur to the shadow's alpha channel before blending.
• Shadows are one of the most important visual cues that we have for understanding the spatial relationships between objects.
• Unfortunately, even modern computer graphics technology has a difficult time drawing realistic shadows at an interactive frame rate.
• One trick that you can use is to pre-render shadows and then apply them to the scene as a textured polygon.
• This allows the creation of soft shadows and allows the computer to maintain a high frame rate while drawing shadows.
Step 1: Activate and position the shadows
  - First, activate the shadows and position them using SketchUp's Shadows toolbar.
Step 2: Draw the Shadows Only
  - Next, render the shadows without the geometry.
  - To do this, create two pages in SketchUp.
  - Put the objects in the scene in a different layer than Layer0.
  - This lets you toggle the visibility of the layer containing the objects.
  - Have the first page draw the shadows and show the layer containing the objects in the scene.
  - Have the second page hide the layer containing the objects in the scene.
  - When the user moves from the first page to the second page, the objects will disappear, leaving only the shadows.
Step 3: Draw the Shadows from Above
  - Next, position the camera to view the shadows from directly above so that we can use the resulting image to draw the shadows onto a ground plane polygon.
Step 4: Soften the Shadows
  - The shadows that are rendered by SketchUp always have hard edges.
  - In order to make the shadows look more realistic, we can soften them using software such as Photoshop or Gimp that includes an image blur tool.
  - When you create the shadow image, you can use the "alpha" channel of the image to make portions of the image transparent.
Step 5: Create a new Shadow Material
  - Next, create a new material that uses the soft shadow image from the previous step as a texture.
  - If the image that we created in the previous step has an alpha channel, then the alpha channel will be used to carve out transparent areas in the shadow material.
Step 6: Apply the material to a ground polygon
  - Last, create a ground polygon that underlies the objects in the scene and apply the shadow material to it.
  - This will create a semi-transparent polygon whose dark patches are the shadow areas. Since the shadows are pre-computed, you should turn off the Shadow option in SketchUp.
  - In computer graphics, shading refers to the process of altering a color based on its lights and its distance from lights to create a photorealistic effect.
  - Shading is performed during the rendering process.
Shadow Mapping:
• Shadow mapping is just one of many different ways of producing shadows in our graphics applications.
• Shadow mapping is an image-space technique, working automatically with the objects created.
Advantages:
  - No knowledge or processing of the scene geometry is required.
  - Only a single texture is required to hold shadowing information for each light.
  - Avoids the high fill requirement of shadow volumes.
Disadvantages:
  - Aliasing, especially when using small shadow maps.
  - The scene geometry must be rendered once per light in order to generate the shadow map for a spotlight.
UNIT V
FRACTALS & SELF SIMILARITY
Fractal:
• A fractal is a rough or fragmented geometric shape that can be split into parts,
• each of which is a reduced copy of the whole.
• Such a property is called self-similarity.
Self Similarity:
• A self-similar object is exactly or approximately similar to a part of itself.
• Self-similarity is a typical property of fractals.
• Computers are particularly good at repetition.
• They will do something again and again without complaint.
• Recursion often makes a difficult geometric task extremely simple.
• Among other things, it lets one decompose or refine shapes into ever smaller ones, conceptually ad infinitum.
  [ad infinitum = to infinity; continuing forever, without limit]
Self-Similar Curves:
• Many curves and pictures have the property called self-similarity.
• Some curves are exactly self-similar,
• and some curves are statistically self-similar.
Exactly Self-Similar:
• If a region is enlarged, the enlargement looks exactly like the original.
Statistically Self-Similar:
• The wiggles and irregularities in the curve are the same "on the average".
Example:
• Nature provides examples that mimic statistical self-similarity.
• The classic example is a coastline.
Mandelbrot:
• The mathematician Benoit Mandelbrot brought together and popularized investigations into the nature of self-similarity.
• He called the various forms of self-similar curves fractals.
• A line is one-dimensional and a plane is two-dimensional.
• But there are "creatures" in between them.
• We shall define curves that are infinite in length yet lie inside a finite rectangle.
[Stirred: being excited or provoked to the expression of an emotion.]
Koch Curve:
• Very complex curves can be fashioned recursively by repeatedly "refining" a simple curve.
• The simplest example perhaps is the Koch curve,
• discovered by the mathematician Helge von Koch.
• This curve stirred great interest in the mathematical world because it produces an infinitely long line within a region of finite area.
• Successive generations of the Koch curve are denoted by K0, K1, K2, ...
TWO GENERATIONS OF THE KOCH CURVE
• The zeroth-generation shape K0 is just a horizontal line of unit length.
• The curve K1 is shown in the above figure.
• To create K1, divide the line K0 into three equal parts and replace the middle section with a triangular "bump" having sides of length 1/3.
• The total length of the line is evidently 4/3.
• The second-order curve K2 is formed by building a bump on each of the four line segments of K1.
• In this process, each segment is increased in length by a factor of 4/3.
• So the total length of the curve is 4/3 larger than that of the previous generation.
• Thus Ki has total length (4/3)^i,
• which increases as i increases.
• As i tends to infinity, the length of the curve becomes infinite.
Koch Snowflake:
THE FIRST FEW GENERATIONS OF THE KOCH SNOWFLAKE
• It is formed out of three Koch curves joined together.
• The perimeter of the ith-generation shape Si is three times the length of a simple Koch curve, and so is 3(4/3)^i.
• The following figure shows the third, fourth and fifth generations of the Koch snowflake.
KOCH SNOWFLAKE: S3, S4, AND S5
PEANO CURVES (OR) SPACE-FILLING CURVES
• Peano curves are fractal-like structures that are drawn through a recursive process.
• Some of the curves shown below are space-filling curves, or Peano curves.
• Such curves have a fractal dimension of 2.
• They completely fill a region of space.
Example:
• The two most famous Peano curves are the Hilbert and Sierpinski curves.
• Some low-order Hilbert curves are shown below.
CREATING AN IMAGE BY ITERATED FUNCTIONS
• Another way to approach infinity is to apply a transformation to a picture again and again and examine what results.
• This technique provides another fascinating way to create fractal shapes.
• This idea was developed by Barnsley, in which an image can be represented by a handful of numbers.
Experimental Copier
• We take an initial image I0
• and put it through a special "photocopier" that produces a new image I1, as shown in the figure.
Making New "Copies" From Old
• I1 is not just a simple copy of I0;
• rather, it is a superposition of several reduced versions of I0.
• We then take I1 and feed it back into the copier again, to produce image I2.
• We repeat this process forever, obtaining a sequence of images I0, I1, I2, ..., called the orbit of I0.
Sierpinski Copier
• Consider a specific example of a copier that we might call the supercopier, or S-copier.
• It superimposes three smaller versions of whatever image is fed into it.
• The figure shows what one pass through the S-copier produces when the input is the letter F.
• These three smaller images could just as well overlap.
• The following figure shows the first few iterates that the S-copier produces.
The first part of the orbit of I0 for the S-copier
• The figure suggests that the iterates converge to the Sierpinski triangle.
• At each iteration the individual component F's become one-half as large, and they triple in number.
• As more and more iterations are made, the F's approach dots in size, and these dots are arranged in a Sierpinski triangle.
• The final image does not depend on the shape of the F at all, but only on the nature of the supercopier.
How does the S-copier make the images?
• It contains three lenses,
• each of which reduces the input image to one-half its size
• and moves it to a new position.
• These three reduced and shifted images are superposed on the printed output.
• Scaling and shifting are easily done by affine transformations.
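The three "lenses" above can be written as three affine maps, each halving a point's coordinates and shifting it toward one corner of a triangle. A common way to render the resulting attractor is the "chaos game": repeatedly apply a randomly chosen map. This sketch assumes particular corner positions for illustration (Pt, applyMap and chaosGame are hypothetical names):

```cpp
#include <cstdlib>

struct Pt { double x, y; };

// One "lens" of the S-copier: halve the point and shift it toward
// corner k of a triangle with assumed corners (0,0), (1,0), (0.5,1).
Pt applyMap(Pt p, int k)
{
    static const Pt corner[3] = { {0.0, 0.0}, {1.0, 0.0}, {0.5, 1.0} };
    return Pt{ 0.5 * (p.x + corner[k].x), 0.5 * (p.y + corner[k].y) };
}

// Chaos game: iterate a random choice of the three maps; after many
// iterations the point lies (very nearly) on the Sierpinski triangle.
Pt chaosGame(Pt p, int iters, unsigned seed)
{
    std::srand(seed);
    for (int i = 0; i < iters; i++)
        p = applyMap(p, std::rand() % 3);
    return p;
}
```

Plotting the point produced at every iteration (rather than only the last one) fills in the Sierpinski triangle, whatever the starting point, matching the observation that the final image does not depend on the initial shape.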
MANDELBROT SETS
• The Mandelbrot set is a mathematical set of points whose boundary generates a distinctive and easily recognisable two-dimensional fractal shape.
• The set is closely related to the Julia sets,
• and it generates similarly complex shapes.
• It is named after the mathematician Benoit Mandelbrot.
Iteration Theory
• Julia and Mandelbrot sets arise from a branch of analysis known as iteration theory (or dynamical systems theory).
• This theory asks what happens when one iterates a function endlessly.
Mandelbrot Sets and Iterated Function Systems
• A view of the Mandelbrot set is shown in the following figure.
• It is the black inner portion.
• It appears to consist of a cardioid along with a number of wart-like circles glued to it.
• In actuality, its border is astoundingly complicated.
• Its complexity can be explored by zooming in on a portion of the border and computing a close-up view.
• In theory, the zooming can be repeated forever.
• The border is "infinitely complex";
• in fact, it is a fractal curve.
• Each point in the figure is shaded or colored according to the outcome of an experiment run on an IFS.
• The IFS of interest is shown in the figure.
The iterated function system for Julia and Mandelbrot sets
• It uses the particularly simple function

        f(z) = z^2 + c

  where c is some constant.
• That is, the system produces each "output" by squaring its input and adding c.
• We assume that the process begins with the starting value s.
• So the system generates the sequence of values, or orbit:

        d1 = (s)^2 + c
        d2 = ((s)^2 + c)^2 + c
        d3 = (((s)^2 + c)^2 + c)^2 + c
        d4 = ((((s)^2 + c)^2 + c)^2 + c)^2 + c

• The orbit depends on two ingredients:
  i. the starting point s
  ii. the given value of c.
JULIA SETS
• The Mandelbrot set and Julia sets are extremely complicated sets of points in the complex plane.
• There is a different Julia set, denoted Jc, for each value of c.
• A closely related variation is the filled-in Julia set, denoted Kc, which is easier to define.
The Filled-in Julia Set Kc
• Consider the iterated function system above.
• Now c is set to a fixed, chosen value,
• and we examine what happens for different starting points.
Drawing Filled-in Julia Sets
• The process of drawing a filled-in Julia set is almost identical to that for the Mandelbrot set.
• We again choose a window in the complex plane and associate pixels with points in the window.
• However, the pixels correspond to different values of the starting point.
• A single value of c is chosen, and then the orbit for each pixel position is examined to see whether it explodes.
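The same iteration serves here, with the roles reversed: c is fixed for the whole image and each pixel supplies a different starting point s. A sketch (inFilledJulia is an illustrative name):

```cpp
#include <complex>

// Filled-in Julia set test: c is fixed and each pixel supplies a
// different starting point s; return true if the orbit of s under
// f(z) = z*z + c stays bounded (|z| <= 2) for maxIter steps.
bool inFilledJulia(std::complex<double> s, std::complex<double> c, int maxIter)
{
    std::complex<double> z = s;
    for (int i = 0; i < maxIter; i++) {
        z = z * z + c;
        if (std::abs(z) > 2.0)
            return false;   // orbit explodes: s is outside Kc
    }
    return true;            // orbit stayed bounded: s is (likely) in Kc
}
```

For example, with c = 0 the filled-in Julia set is simply the unit disk: s = 0.5 stays bounded, while s = 1.5 escapes immediately.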
RANDOM FRACTALS
• The fractal shapes seen so far are completely deterministic.
• They are completely predictable (even though they are very complicated).
• In graphics, the term fractal has become widely associated with randomly generated curves and surfaces that exhibit a degree of self-similarity.
• These curves are used to produce "naturalistic" shapes for representing objects such as ragged mountains, grass and fire.
Fractalizing a segment
• The simplest random fractal is formed by recursively roughening, or fractalizing, a line segment.
• At each step, each line segment is replaced with an "elbow" whose midpoint is displaced by a random amount.
[Figure: segment S from A to B with midpoint M; point C is displaced from M along the perpendicular L by a random amount, and S is replaced with the elbow A-C-B.]
• The above figure shows this process applied to the line segment S having the endpoints A and B.
• S is replaced by two line segments, from A to C and from C to B.
• For a fractal curve, point C is randomly chosen along the perpendicular bisector L of S.
Stages of Fractalization
• There are three stages of fractalization.
First Stage:
• The midpoint of AB is perturbed to form point C.
Note: [Perturbed: displaced slightly; in mathematics, perturbation methods give approximate solutions to problems that cannot be solved exactly.]
Second Stage:
• Each of the two segments has its midpoint perturbed to form points D and E.
Third Stage:
• In the final stage, new points F, ..., I are added.
Calculation of fractalization in a program
[Figure repeated: point C is displaced from the midpoint M of segment S = AB by a random amount along the perpendicular L.]
• Line L passes through the midpoint M of segment S and is perpendicular to it.
• Any point C along L has the parametric form

        C(t) = M + (B - A)⊥ t,    where M = (A + B)/2

• For most fractal curves, t is modeled as a Gaussian random variable with zero mean and some standard deviation.
• The routine fract() shown below generates curves that are approximations of actual fractals.
• This routine recursively replaces each segment with a random elbow.
void fract(Point2 A, Point2 B, double stdDev)
{
    // generate a fractal curve from A to B
    double xDiff = A.x - B.x, yDiff = A.y - B.y;
    Point2 C;
    if (xDiff * xDiff + yDiff * yDiff < minLenSq)
        cvs.lineTo(B.x, B.y);       // segment short enough: just draw it
    else
    {
        stdDev *= factor;           // scale stdDev by factor
        double t = 0;
        // make an approximately Gaussian t from 12 uniform samples
        for (int i = 0; i < 12; i++)
            t += rand() / 32768.0;
        t = (t - 6) * stdDev;       // shift the mean to 0 and scale
        C.x = 0.5 * (A.x + B.x) - t * (B.y - A.y);
        C.y = 0.5 * (A.y + B.y) + t * (B.x - A.x);
        fract(A, C, stdDev);        // fractalize the two halves
        fract(C, B, stdDev);
    }
}
Drawing a Fractal Curve
double minLenSq, factor;    // global variables

void drawFractal(Point2 A, Point2 B)
{
    double beta, stdDev;    // beta, minLenSq and stdDev are set here
    factor = pow(2.0, (1.0 - beta) / 2.0);
    cvs.moveTo(A);
    fract(A, B, stdDev);
}
OVERVIEW OF RAY TRACING
• Ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane
• and simulating the effect of its encounters with virtual objects.
• This technique is used to produce a very high degree of visual realism,
• usually higher than that of "scanline rendering",
• but at a greater computational cost.
Introduction:
• Ray tracing (ray casting) provides a related but even more powerful approach to rendering scenes.
• The following figure gives the basic idea.
• Think of the frame buffer as a simple array of pixels positioned in space, with the eye looking through it into the scene.
• The general question is: what does the eye see through this pixel?
• A ray of light arrives at the eye through the pixel from some point P in the scene.
• The colour of the pixel is determined by the light that emanates along the ray from point P in the scene.
Reverse Process
• In reality the process is reversed:
• a ray is cast from the eye through the centre of the pixel and out into the scene.
• Its path is traced to see which object it hits first, and at what point.
• This process automatically solves the hidden surface problem:
• the first surface hit by the ray is the closest object to the eye.
• Using the description of the light sources in the scene, the shading model is applied to the point first hit,
• and the components of light are computed.
• The resulting colour is then displayed in the pixel.
Features of Ray Tracing:
οΏ½ Some of the interesting visual effects are easily incorporated
β’ Shadowing
β’ Reflection
β’ Refraction
οΏ½ That provides dazzling realism that are difficult to create by any other method.
οΏ½ It ability to work comfortably with richer geometric primitives such as
β’ Spheres
β’ Cones and
β’ Cylinders.
200 PRABHU.S
PREPARED BY S.PRABHU AP/CSE KVCET
OVERVIEW OF THE RAY-TRACING PROCESS
• The following code segment shows the basic steps in a ray tracer.
define the objects and light sources in the scene
set up the camera
for (int r = 0; r < nRows; r++)
for (int c = 0; c < nCols; c++)
{
1. Build the rc-th ray.
2. Find all intersections of the rc-th ray with objects in the scene.
3. Identify the intersection that lies closest to, and in front of, the eye.
4. Compute the "hit point" where the ray hits this object, and the normal
vector at that point.
5. Find the colour of the light returning to the eye along the ray from the
point of intersection.
6. Place the colour in the rc-th pixel.
}
• The scene to be ray traced is made up of geometric objects and light sources.
• A typical scene may contain
• Spheres,
• Cones,
• Boxes,
• Cylinders etc.
• The objects are described in some fashion and stored in an object list.
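Step 1 of the loop, building the rc-th ray, can be sketched as follows. The camera layout (eye, orthonormal basis u, v, n, view-plane distance N, half-window sizes W and H) and the particular direction formula are assumptions for illustration, not a definitive implementation.

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };
Vec3 scale(double s, Vec3 a) { return { s * a.x, s * a.y, s * a.z }; }
Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

// Hypothetical camera: eye position, orthonormal basis (u, v, n),
// view-plane distance N and half-window sizes W, H.
struct Camera {
    Vec3 eye, u, v, n;
    double N, W, H;
};

// Direction of the ray through the centre of pixel (r, c) on an
// nRows x nCols screen:
//   dir = -N*n + W*(2c/nCols - 1)*u + H*(2r/nRows - 1)*v
Vec3 rayDirection(const Camera& cam, int r, int c, int nRows, int nCols)
{
    double ux = cam.W * (2.0 * c / nCols - 1.0);
    double vy = cam.H * (2.0 * r / nRows - 1.0);
    return add(scale(-cam.N, cam.n),
               add(scale(ux, cam.u), scale(vy, cam.v)));
}
```

For a camera looking down the negative z axis, the ray through the window centre points straight along -n.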
Computing the hit point:
• When all the objects have been tested, the object with the smallest hit time is the
closest to the eye, and the location of the hit point on that object is found.
Computing the colour:
• The colour of the light that the object sends back in the direction of the eye is
computed and stored in the pixel.
• The following figure shows a simple scene consisting of some cylinders, spheres and
cones.
• The snowman consists mainly of spheres.
• Two light sources are also shown.
Object List:
• Descriptions of all the objects are stored in an object list.
• This is a linked list of descriptive records, as shown below.
• The ray that is shown intersects a sphere, a cylinder and two cones.
• All the other objects are missed.
• The object with the smallest hit time, a cylinder in this scene, is identified.
– The hit spot Phit is easily found from the ray equation:
Phit = eye + dir(r,c) · thit   (hit spot)
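Evaluating the ray equation at the hit time is a one-liner; a minimal sketch (the names are illustrative):

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

// Point along the ray eye + dir * t; the hit spot is the point at t = tHit.
Vec3 pointAt(Vec3 eye, Vec3 dir, double t)
{
    return { eye.x + dir.x * t, eye.y + dir.y * t, eye.z + dir.z * t };
}
```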
INTERSECTING A RAY WITH AN OBJECT
• Consider the following code:
Scene scn; // create a scene
scn.read("myScene.dat"); // read the SDL scene file
• Objects in the scene are created and placed in a list.
• Each object is an instance of a generic shape such as a sphere or a cone.
• The following figure shows some of the generic shapes we shall be ray tracing.
Some common generic shapes used in ray tracing
• The implicit form of the generic sphere is
F(x, y, z) = x² + y² + z² − 1
• For convenience we use the notation F(p):
F(p) = |p|² − 1
Generic Cylinder
F(x, y, z) = x² + y² − 1, for 0 < z < 1
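Intersecting a ray with the generic sphere amounts to solving |eye + dir·t|² = 1, a quadratic in t. A minimal sketch (function and struct names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Smallest positive hit time of the ray eye + dir*t with the generic
// unit sphere F(p) = |p|^2 - 1; returns -1 when the ray misses.
double hitGenericSphere(Vec3 eye, Vec3 dir)
{
    // Substituting the ray into F gives a t^2 + 2 b t + c = 0.
    double a = dot(dir, dir);
    double b = dot(dir, eye);
    double c = dot(eye, eye) - 1.0;
    double disc = b * b - a * c;
    if (disc < 0) return -1.0;       // no real roots: the ray misses
    double root = std::sqrt(disc);
    double t = (-b - root) / a;      // try the nearer root first
    if (t > 0) return t;
    t = (-b + root) / a;
    return (t > 0) ? t : -1.0;
}
```

A ray starting at (0, 0, 3) and aimed at the origin first hits the unit sphere at t = 2.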
ADDING SURFACE TEXTURE
• Computer-generated images can be made much more lively and realistic by
painting textures on various surfaces.
• The following figure shows a ray-traced scene with several examples of textures.
• OpenGL is used to render each face.
• For each face F, a pair of texture coordinates is attached to each vertex of the face.
• OpenGL then "paints" each pixel inside the face with the colour of the
corresponding point within a texture image.
• Two principal types of texture can be used:
i. Solid texture
ii. Image texture
Solid Texture
• Solid texture is sometimes called "3D texture".
• The object is considered to be carved out of a block of solid material that itself has
texturing.
• The ray tracer reveals the colour of the texture at each point on the surface of the
object.
• The texture is represented by a function texture(x, y, z) that produces an (r, g, b)
colour value at every point in space.
Example:
• Imagine a 3D checkerboard made up of alternating red and black cubes stacked
up throughout all of space.
• We position one of the "cubelets" with a vertex at (0, 0, 0) and give it the size S = (S.x, S.y, S.z).
• All the other cubes have this same size (a width of S.x, a height of S.y, etc.)
and are placed adjacent to one another in all three dimensions.
• It is easy to write an expression for such a checkerboard texture:
jump(x, y, z) = ((int)(A + x/S.x) + (int)(A + y/S.y) + (int)(A + z/S.z)) % 2
• The following figure shows a generic sphere and a generic cube "composed" of
material with this solid texture.
Ray tracing of some objects with checkerboard solid texture
• The colour of the material is the colour of the texture.
• Notice that the sphere and the cube are clearly made up of solid "cubelets".
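The jump() expression above translates directly into code. A minimal sketch, where the value of the offset A (a large constant that keeps the arguments positive for negative coordinates) is an assumption:

```cpp
#include <cassert>

// Checkerboard solid texture: returns 0 or 1 depending on which
// "cubelet" of size (sx, sy, sz) the point (x, y, z) lies in.
int jump(double x, double y, double z, double sx, double sy, double sz)
{
    const double A = 1000.0;  // assumed large positive offset
    return ((int)(A + x / sx) + (int)(A + y / sy) + (int)(A + z / sz)) % 2;
}
```

Moving by one cubelet width along any axis flips the value; moving by two widths restores it.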
REFLECTIONS & TRANSPARENCY
• One of the great strengths of the ray-tracing method is the ease with which it can
handle both reflection and refraction of light.
• This allows us to build scenes of exquisite realism, containing
• Mirrors
• Fishbowls
• Lenses etc.
• There can be multiple reflections, in which light bounces off several shiny
surfaces before reaching the eye.
• These processes require the spawning and tracing of additional rays.
• The following figure shows a ray emanating from the eye,
– in the direction dir
– and hitting a surface at the point Ph.
• When the surface is mirror-like or transparent (or both), the light I that reaches the
eye may have five components:
I = Iamb + Idiff + Ispec + Irefl + Itran
where
Iamb : ambient component
Idiff : diffuse component
Ispec : specular component
Irefl : reflected light component, arising from the light IR
Itran : transmitted light component, arising from the light IT
• The first three are the familiar
• ambient,
• diffuse and
• specular contributions.
• The diffuse and specular parts arise from the light sources in the environment that
illuminate the point Ph.
• The following figure shows how the number of contributions of light grows at
each contact point.
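The three familiar local components can be sketched as a single intensity for one light source. The material coefficients ka, kd, ks, the exponent f, and the use of a halfway vector for the specular term are illustrative assumptions:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Local (ambient + diffuse + specular) intensity at a hit point, for one
// light source; all direction vectors are assumed to be unit length.
double localIntensity(Vec3 normal, Vec3 toLight, Vec3 halfway,
                      double ka, double kd, double ks, double f)
{
    double amb  = ka;
    double diff = kd * std::fmax(0.0, dot(normal, toLight));
    double spec = ks * std::pow(std::fmax(0.0, dot(normal, halfway)), f);
    return amb + diff + spec;
}
```

When the light grazes the surface (toLight perpendicular to the normal), only the ambient term survives.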
• I is the sum of three components:
• the reflected component R1,
• the transmitted component T1 and
• the local component L1.
Local component:
• The local component is simply the sum of the usual ambient, diffuse and specular
reflections at Ph.
• Local components depend only on actual light sources.
• They are not computed by casting secondary rays.
• Figure (b) abstracts the various light components into a "tree" of light
contributions.
• The transmitted components arrive on the left branches.
• The reflected components arrive on the right branches.
• At each node a local component must also be added, but for simplicity it is not
shown.
The refraction of light:
• When a ray of light strikes a transparent object, a portion of the ray penetrates the
object, as shown in the figure.
• The ray changes direction from dir to t if the speed of light is different in
medium 1 than in medium 2.
• If the angle of incidence of the ray is θ1, Snell's law states that the angle of
refraction θ2 satisfies
sin(θ2) / sin(θ1) = C2 / C1
• where C1 is the speed of light in medium 1 and C2 is the speed of light in medium 2.
• Only the ratio C2/C1 is important.
• It is often called the index of refraction of medium 2 with respect to medium 1.
• If θ1 equals zero, light hitting the interface at right angles is not bent.
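Snell's law can be sketched as a small function computing the refraction angle from the incidence angle and the two speeds; total internal reflection (where the asin argument would exceed 1) is deliberately not handled in this sketch:

```cpp
#include <cassert>
#include <cmath>

// Snell's law: sin(theta2) / sin(theta1) = C2 / C1, where C1, C2 are the
// speeds of light in medium 1 and medium 2. Angles are in radians.
double refractionAngle(double theta1, double c1, double c2)
{
    return std::asin((c2 / c1) * std::sin(theta1));
}
```

A ray at normal incidence (theta1 = 0) is not bent; when light enters a slower medium (C2 < C1) the ray bends toward the normal.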
BOOLEAN OPERATIONS ON OBJECTS
• In CSG, complex shapes are defined by set operations (also called
Boolean operations) on simpler shapes.
• Objects such as lenses and hollow fishbowls are easily formed by combining the
generic shapes.
• Such objects are variously called compound objects, Boolean objects or CSG
objects.
• The ray-tracing method extends in a very organized way to compound objects.
• It is one of the great strengths of ray tracing that it fits so naturally with CSG
models.
• We look at examples of three Boolean operators:
• Union
• Intersection
• Difference
• The following figure shows compound objects built from spheres.
• Fig (a) is a lens shape constructed as the intersection of two spheres.
• That is, a point is in the lens if and only if it lies in both spheres.
• Symbolically, the lens L is the intersection of the spheres S1 and S2, written as
L = S1 ∩ S2
• Fig (b) shows a bowl, constructed using the difference operation.
• Applying the difference operation is analogous to removing material, to cutting or
carving.
• The bowl is specified by
B = (S1 − S2) − C
• The solid globe S1 is "hollowed out" by removing all the points of the inner sphere S2.
• The top is then opened by removing all points in the cone C.
UNION OF FOUR PRIMITIVES
• A point is in the union of two sets A and B, denoted A ∪ B, if it is in A or in B or in
both.
• The following figure shows a rocket constructed as the union of two cones and two
cylinders.
• That is, R = C1 ∪ C2 ∪ C3 ∪ C4.
• Cone C1 rests on cylinder C2.
• Cone C3 is partially embedded in C2 and rests on the fatter cylinder C4.
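The point-membership rules behind these operators can be sketched with simple inside-tests; the sphere centres and radii below are illustrative, not the scene from the figures:

```cpp
#include <cassert>

struct Pt3 { double x, y, z; };

// Inside-test for a sphere of radius r centred at (cx, cy, cz).
bool inSphere(Pt3 p, double cx, double cy, double cz, double r)
{
    double dx = p.x - cx, dy = p.y - cy, dz = p.z - cz;
    return dx * dx + dy * dy + dz * dz <= r * r;
}

// Lens (intersection of S1 and S2): a point is in the lens
// iff it lies in both spheres.
bool inLens(Pt3 p)
{
    return inSphere(p, 0, 0, -0.5, 1.0) && inSphere(p, 0, 0, 0.5, 1.0);
}

// Hollow shell (difference S1 - S2): in the outer sphere
// but not in the inner one.
bool inShell(Pt3 p)
{
    return inSphere(p, 0, 0, 0, 1.0) && !inSphere(p, 0, 0, 0, 0.8);
}
```

Union works the same way with ||; a ray tracer applies these set rules to the hit intervals along the ray rather than to single points.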
QUESTION BANK
UNIT I
2D PRIMITIVES
PART A
1.Define Output Primitives
• Graphics programming packages provide functions to describe a scene in terms of
these basic geometric structures, referred to as output primitives.
2.What are simple geometric components?
• Points and straight line segments are the simplest geometric components of
pictures.
3.What are additional output primitives?
• Additional output primitives that can be used to construct a picture include
• circles and other conic sections,
• quadric surfaces,
• spline curves and surfaces,
• polygon color areas, and
• character strings.
4.Define Random-scan system or Vector system.
• It stores point-plotting instructions in the display list, and coordinate values in
these instructions are converted to deflection voltages that position the electron
beam at the screen locations to be plotted during each refresh cycle.
6.How is a straight line drawn in analog display devices?
• For analog devices, such as a vector pen plotter or a random-scan display, a
straight line can be drawn smoothly from one endpoint to the other.
• Linearly varying horizontal and vertical deflection voltages are generated that are
proportional to the required changes in the x and y directions to produce the
smooth line.
7.How is a straight line drawn in digital display devices?
• Digital devices display a straight line segment by plotting discrete points between
the two endpoints.
• Discrete coordinate positions along the line path are calculated from the equation
of the line.
8.What is the stair-step effect (jaggies)?
• For a raster video display, the line color (intensity) is loaded into the frame
buffer at the corresponding pixel coordinates.
• Reading from the frame buffer, the video controller then "plots" the screen pixels.
• Screen locations are referenced with integer values, so plotted positions may only
approximate the actual line positions between the two specified endpoints.
9.How are pixel positions referenced?
• Pixel positions are referenced by scan-line number and column number.
10.What is the getpixel() function?
• Sometimes we want to be able to retrieve the current frame-buffer intensity setting
for a specified location.
• We accomplish this with the low-level function
getpixel(x, y)
11.What are the line equations?
Slope-intercept equation:
y = m·x + b
where m is the slope of the line and b is the y intercept.
12.What are the circle equations?
General form:
(x − xc)² + (y − yc)² = r²
Circle equation in polar form:
x = xc + r cosθ,  y = yc + r sinθ
Circle midpoint-method equation:
fcircle(x, y) = x² + y² − r²
13.What are the ellipse equations?
General ellipse equation:
((x − xc)/rx)² + ((y − yc)/ry)² = 1
Ellipse equation in polar form:
x = xc + rx cosθ,  y = yc + ry sinθ
Ellipse midpoint-method equation:
fellipse(x, y) = ry²x² + rx²y² − rx²ry²
14.Define Ellipse, or properties of an ellipse.
• An ellipse is an elongated circle.
• An ellipse is defined as the set of points such that the sum of the distances from
two fixed positions (foci) is the same for all points.
15.What are the major and minor axes of an ellipse?
Major axis
• The major axis is the straight line segment extending from one side of the ellipse
to the other through the foci.
Minor axis
• The minor axis spans the shorter dimension of the ellipse, bisecting the major axis
at the halfway position (the ellipse center) between the two foci.
16.What is an attribute parameter?
• Any parameter that affects the way a primitive is to be displayed is referred to as
an attribute parameter.
• Some attribute parameters are
• color and
• size.
17.What are the basic attributes of a line?
• The basic attributes of a straight line segment are
• its type,
• its width, and
• its color.
18.What are the line-type attributes?
• Line-type attributes are
Solid line
Dotted line
Dashed line
Dash-dotted line
19.Define direct storage scheme.
• With the direct storage scheme, whenever a particular color code is specified in an
application program, the corresponding binary value is placed in the frame buffer
for each component pixel in the output primitives to be displayed in that color.
• A minimum number of colors can be provided in this scheme with 3 bits of storage
per pixel.
20.Define Grayscale.
• With monitors that have no color capability, color functions can be used in an
application program to set the shades of gray, or grayscale, for displayed
primitives.
• Numeric values over the range from 0 to 1 can be used to specify grayscale levels,
which are then converted to appropriate binary codes for storage in the raster.
• This allows the intensity settings to be easily adapted to systems with differing
grayscale capabilities.
21.Tabulate the four-level grayscale system.
Intensity code   Stored binary value   Displayed grayscale
0                00                    Black
1                01                    Dark gray
2                10                    Light gray
3                11                    White
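The mapping from a [0, 1] intensity to one of the four 2-bit grayscale codes can be sketched as follows; the equal-width bins are an illustrative assumption:

```cpp
#include <cassert>

// Map an intensity in [0, 1] to one of four 2-bit grayscale codes
// (0 = black, 1 = dark gray, 2 = light gray, 3 = white).
int grayCode(double intensity)
{
    if (intensity < 0) intensity = 0;
    if (intensity > 1) intensity = 1;
    int code = (int)(intensity * 4.0);  // 4 equal bins over [0, 1]
    return (code > 3) ? 3 : code;       // intensity == 1 falls in the top bin
}
```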
22. What are area-fill attributes?
• Options for filling a defined region include a choice between a solid color and a
patterned fill.
• These fill options can be applied to polygon regions or to areas defined with
curved boundaries.
• In addition, areas can be painted using various brush styles, colors, and
transparency parameters.
23.What are the types of fill styles?
• Areas are displayed with three basic fill styles:
• hollow with a color border,
• filled with a solid color, or
• filled with a specified pattern or design.
24.What are character attributes?
• The appearance of displayed characters is controlled by attributes such as
• font,
• size,
• color, and orientation.
25.List the styles of characters.
• The characters in a selected font can also be displayed with assorted styles:
• Bold face
• Underline
• Italics
26.What are the transformations available in 2D?
• The basic geometric transformations are
• translation,
• rotation, and
• scaling.
• Other transformations that are often applied to objects include
• reflection and
• shear.
27.What is translation?
• A translation is applied to an object by repositioning it along a straight-line path
from one coordinate location to another.
28.What is rotation?
• A two-dimensional rotation is applied to an object by repositioning it along a
circular path in the xy plane.
• To generate a rotation, we specify a rotation angle θ and the position (xr, yr) of the
rotation point (or pivot point) about which the object is to be rotated.
29.What is scaling?
• A scaling transformation alters the size of an object.
• This operation can be carried out for polygons by multiplying the coordinate
values (x, y) of each vertex by scaling factors sx and sy to produce the
transformed coordinates (x', y'):
x' = x · sx,  y' = y · sy
30.What is differential scaling?
• When sx and sy are assigned the same value, a uniform scaling is produced that
maintains relative object proportions.
• Unequal values for sx and sy result in a differential scaling.
31.What is reflection?
• A reflection is a transformation that produces a mirror image of an object.
• The mirror image for a two-dimensional reflection is generated relative to an axis
of reflection by rotating the object 180° about the reflection axis.
32.What is shear?
• A transformation that distorts the shape of an object such that the transformed
shape appears as if the object were composed of internal layers that had been
caused to slide over each other is called a shear.
• Two common shearing transformations are those that shift coordinate x values and
those that shift y values.
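The three basic 2D transformations above can be sketched as point operations (rotation here is about the origin; the names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct P2 { double x, y; };

// Translation: shift by (tx, ty).
P2 translate(P2 p, double tx, double ty) { return { p.x + tx, p.y + ty }; }

// Scaling: multiply each coordinate by its scaling factor.
P2 scalePt(P2 p, double sx, double sy)   { return { p.x * sx, p.y * sy }; }

// Rotation by angle a (radians) about the origin:
//   x' = x cos a - y sin a,  y' = x sin a + y cos a
P2 rotate(P2 p, double a)
{
    return { p.x * std::cos(a) - p.y * std::sin(a),
             p.x * std::sin(a) + p.y * std::cos(a) };
}
```

Rotating (1, 0) by 90° gives (0, 1), as expected.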
PART B
1.Explain in detail line drawing algorithms with examples.
2.Explain in detail circle drawing algorithms with examples.
3.Explain in detail ellipse drawing algorithms.
4.Explain in detail the attributes of output primitives.
5.What are the 2D transformations available? Explain any two.
6.Explain in detail 2D viewing.
7.What is line clipping? Explain in detail Cohen-Sutherland line clipping.
8.What is polygon clipping? Explain in detail Sutherland-Hodgman polygon
clipping.
UNIT II
3D CONCEPTS
PART A
1.What is meant by perspective?
Perspective: "the appearance of things relative to one another as determined by their
distance from the viewer."
2.What is depth cueing?
• Depth information is important so that we can easily identify, for a particular
viewing direction, which is the front and which is the back of displayed objects.
3.What are the types of projections?
• There are two basic projection methods:
• Parallel projection
• Perspective projection
4.What is parallel projection?
• In a parallel projection, coordinate positions are transformed to the view plane
along parallel lines.
• A parallel projection preserves relative proportions of objects.
5.What is perspective projection?
• In a perspective projection, object positions are transformed to the view plane
along lines that converge to a point called the projection reference point (or center
of projection).
• A perspective projection produces realistic views but does not
preserve relative proportions.
6.What are the types of parallel projections?
There are two types of parallel projections:
1. Orthographic parallel projection
2. Oblique parallel projection
7.Differentiate parallel projection and perspective projection.
Sn | Parallel Projection | Perspective Projection
1 | Preserves relative proportions of objects. | Does not preserve relative proportions of objects.
2 | Does not give a realistic representation of 3D objects. | Gives a realistic representation of 3D objects.
3 | Coordinate positions are transformed to the view plane along parallel lines. | Object positions are transformed to the view plane along lines that converge to the projection reference point.
4 | All objects appear the same size. | Projections of distant objects are smaller than those of closer objects.
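The contrast in row 4 can be sketched directly: a parallel (orthographic) projection ignores depth, while a perspective projection divides by it. The view-plane placement below is an illustrative assumption:

```cpp
#include <cassert>
#include <cmath>

struct P3 { double x, y, z; };
struct P2 { double x, y; };

// Orthographic (parallel) projection onto the z = 0 plane:
// x and y are kept regardless of depth.
P2 projectParallel(P3 p) { return { p.x, p.y }; }

// Perspective projection with the centre of projection at the origin and
// the view plane at z = d: similar triangles give x' = d*x/z, y' = d*y/z.
P2 projectPerspective(P3 p, double d)
{
    return { d * p.x / p.z, d * p.y / p.z };
}
```

A point at depth 10 keeps its size under the parallel projection but shrinks by a factor of 10 under the perspective one.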
8.What is the projection reference point?
• For a perspective projection, object positions are transformed to the view plane
along lines that converge to a point called the projection reference point (or center
of projection).
9.What are the types of 3D representations?
• Representation schemes for solid objects are often divided into two broad
categories:
1. Boundary representations
2. Space-partitioning representations
10.What are boundary representations?
• Boundary representations (B-reps) describe a three-dimensional object as a set of
surfaces that separate the object interior from the environment.
• Typical examples of boundary representations are polygon facets and spline
patches.
11. What is space-partitioning representation?
• Space-partitioning representations are used to describe interior properties, by
partitioning the spatial region containing an object into a set of small, non-
overlapping, contiguous solids (usually cubes).
• A common space-partitioning description for a three-dimensional object is an octree
representation.
12.What is a polygon table?
• We specify a polygon surface with a set of vertex coordinates and associated
attribute parameters.
• The information for each polygon is placed into tables that are used in the
subsequent processing, display, and manipulation of the objects in a scene.
13.What are the types of polygon tables?
• Polygon data tables can be organized into two groups:
1. geometric tables and
2. attribute tables.
14.What is a geometric table?
• It contains vertex coordinates and parameters to identify the spatial orientation of
the polygon surfaces.
15.What is an attribute table?
• It includes parameters specifying the degree of transparency of the object and its
surface reflectivity and texture characteristics.
16.What are the lists created by the geometric table?
• A convenient organization for storing geometric data is to create three lists:
1. a vertex table,
2. an edge table, and
3. a polygon table.
17.What is a polygon mesh?
• Some graphics packages provide several polygon functions for modeling objects.
• A single plane surface can be specified with a function such as "fillArea".
• But when object surfaces are to be tiled, it is more convenient to specify the
surface facets with a mesh function.
18.What is a triangle strip?
• One type of polygon mesh is the triangle strip.
• This function produces n − 2 connected triangles, as shown in the above figure.
19.What is a quadrilateral mesh?
• Another similar function is the quadrilateral mesh,
• which generates a mesh of (n − 1) by (m − 1) quadrilaterals, given the coordinates
for an n by m array of vertices.
• The above figure shows 20 vertices forming a mesh of 12 quadrilaterals.
20.What is a spline?
• A spline is a flexible strip used to produce a smooth curve through a designated set
of points.
• Several small weights are distributed along the length of the strip to hold it in
position on the drafting table as the curve is drawn.
• The term spline curve originally referred to a curve drawn in this manner.
21.What are the spline specifications?
• There are three equivalent methods for specifying a particular spline representation:
1. We can state the set of boundary conditions that are imposed on the spline; or
2. We can state the matrix that characterizes the spline; or
3. We can state the set of blending functions (or basis functions) that determine
how specified geometric constraints on the curve are combined to calculate
positions along the curve path.
22.Give some examples of scalar quantities.
• energy,
• temperature,
• pressure,
• frequency.
23.Give some examples of vector quantities.
• velocity,
• force,
• electric fields,
• electric current.
24. Define pseudo-color method.
• Pseudo-color methods are also used to distinguish different values in a scalar data
set, and color-coding techniques can be combined with graph and chart methods.
• To color code a scalar data set, we choose a range of colors and map the range of
data values to the color range.
• For example, blue could be assigned to the lowest scalar value, and red could be
assigned to the highest value.
25.What are the transformations available in 3D?
The basic transformations are
1. Translation
2. Scaling
3. Rotation
And the other two transformations are
1. Shear
2. Reflection
26.What is 3D translation?
• In a three-dimensional homogeneous coordinate representation, a point is
translated from position P = (x, y, z) to position P' = (x', y', z') with the matrix operation
P' = T · P, where
T = | 1 0 0 tx |
    | 0 1 0 ty |
    | 0 0 1 tz |
    | 0 0 0 1  |
27.What is 3D rotation?
• To generate a rotation transformation for an object, we must designate an axis of
rotation (about which the object is to be rotated) and the amount of angular
rotation.
28.What is 3D shear?
• Shearing transformations can be used to modify object shapes.
• They are also useful in three-dimensional viewing for obtaining general projection
transformations.
• In two dimensions, we discussed transformations relative to the x or y axes to
produce distortions in the shapes of objects.
• In three dimensions, we can also generate shears relative to the z axis.
29.What are visible-surface detection methods?
• A major consideration in the generation of realistic graphics displays is identifying
those parts of a scene that are visible from a chosen viewing position.
• Some methods require more memory, some involve more processing time, and
some apply only to special types of objects.
• The various algorithms are referred to as visible-surface detection methods.
• Sometimes these methods are also referred to as hidden-surface elimination
methods.
30.What are the visible-surface detection methods? <write any 4>
1. Back-face detection
2. Depth-buffer method
3. A-buffer method
4. Scan-line method
5. Depth-sorting method
6. BSP-tree method
7. Area-subdivision method
8. Octree methods
9. Ray-casting method
10. Curved surfaces
11. Wireframe methods
31.What is the z-buffer method?
• A commonly used image-space approach to detecting visible surfaces is the depth-
buffer method, which compares surface depths at each pixel position on the
projection plane.
• This procedure is also referred to as the z-buffer method.
32.What is the A-buffer method?
• An extension of the ideas in the depth-buffer method is the A-buffer method.
• The A-buffer method represents an antialiased, area-averaged, accumulation-
buffer method.
PART B
1.What is projection? Explain its types.
2.How are polygon surfaces represented in 3D?
3.How are curved surfaces represented in 3D?
4.How are data sets visualized?
5.What are the 3D transformations available? Explain any two.
6.What are the viewing pipeline and viewing coordinates?
7.What are the types of visible-surface detection methods? Explain any two.
UNIT III
GRAPHICS PROGRAMMING
PART A
1.What is a color model?
• A color model is a method for explaining the properties or behavior of color within some
particular context.
2.What are the uses of the chromaticity diagram?
• Comparing color gamuts for different sets of primaries.
• Identifying complementary colors.
• Determining the dominant wavelength and purity of a given color.
3.Draw the RGB unit cube.
4.Draw the CMY color model unit cube.
5. What is meant by the subtractive process?
• In the CMY color model, cyan can be formed by adding green and blue light.
• Therefore, when white light is reflected from cyan-colored ink, the reflected light
must have no red component.
• That is, red light is absorbed, or subtracted, by the ink.
6. What are the dots used in the printing process?
• The printing process often used with the CMY model generates a color point with a
collection of four ink dots (just as an RGB monitor uses a collection of three phosphor dots):
• three dots are used for the primary colors (cyan, magenta, and yellow),
• and one dot is black.
7. Why is the black dot included in the printing process?
• A black dot is included because the combination of cyan, magenta, and yellow inks
typically produces dark gray instead of black.
8. How to convert RGB into CMY?
• We can express the conversion from an RGB representation to a CMY representation
with the matrix transformation
(C, M, Y) = (1, 1, 1) − (R, G, B)
• where white is represented in the RGB system as the unit column vector.
9. How to convert CMY into RGB?
• We convert from a CMY color representation to an RGB representation
with the matrix transformation
(R, G, B) = (1, 1, 1) − (C, M, Y)
• where black is represented in the CMY system as the unit column vector.
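Both conversions are simple complements, which can be sketched as follows (the struct layout is illustrative):

```cpp
#include <cassert>

struct Color { double a, b, c; };  // holds either (R, G, B) or (C, M, Y)

// CMY = (1, 1, 1) - RGB : the subtractive complement.
Color rgbToCmy(Color rgb) { return { 1 - rgb.a, 1 - rgb.b, 1 - rgb.c }; }

// RGB = (1, 1, 1) - CMY : the inverse conversion.
Color cmyToRgb(Color cmy) { return { 1 - cmy.a, 1 - cmy.b, 1 - cmy.c }; }
```

Pure red (1, 0, 0) in RGB becomes (0, 1, 1) in CMY: magenta and yellow inks together absorb everything except red.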
10.Draw the HSV hexcone.
11. What is computer animation?
• Computer animation generally refers to any time sequence of visual changes in a
scene.
• In addition to changing object position with translations or rotations, a computer-generated
animation could display time variations in object size, color,
transparency, or surface texture.
12. What are the steps for designing animation sequences?
• In general, an animation sequence is designed with the following steps:
1. Storyboard layout
2. Object definitions
3. Key-frame specifications
4. Generation of in-between frames
13. What is storyboard layout?
• The storyboard is an outline of the action.
• It defines the motion sequence as a set of basic events that are to take place.
• Depending on the type of animation to be produced, the storyboard could consist
of a set of rough sketches or it could be a list of the basic ideas for the motion.
14. Object Definition
• An object definition is given for each participant in the action.
• Objects can be defined in terms of basic shapes, such as polygons or splines.
• In addition, the associated movements for each object are specified along with the
shape.
15. Keyframe
• A keyframe is a detailed drawing of the scene at a certain time in the animation
sequence.
• Within each key frame, each object is positioned according to the time for that
frame.
• Some key frames are chosen at extreme positions in the action.
16. Generation of in-between frames
• In-betweens are the intermediate frames between the key frames.
• The number of in-betweens needed is determined by the media to be used to
display the animation.
• Film requires 24 frames per second, and graphics terminals are refreshed at the rate
of 30 to 60 frames per second.
17.What is morphing?
• Transformation of object shapes from one form to another is called morphing,
which is a shortened form of metamorphosis.
• Morphing methods can be applied to any motion or transition involving a change
in shape.
18. What are the matrices in the graphics pipeline of OpenGL?
• The three important matrices are
i. the modelview matrix,
ii. the projection matrix, and
iii. the viewport matrix.
19.What is the modelview matrix?
• The modelview matrix is a single matrix in the actual pipeline.
• It combines two effects:
• the sequence of modelling transformations applied to objects, and
• the transformation that orients and positions the camera in space.
• The modelview matrix is in fact the product VM, where
V = viewing matrix and
M = modelling matrix.
20.What is the projection matrix?
• It scales and shifts each vertex in a particular way,
• so that all the vertices that lie inside the view volume end up inside a standard cube.
• The projection matrix effectively squashes the view volume into the cube centred
at the origin.
• The projection matrix also reverses the sense of the z-axis.
21.What is the viewport matrix?
• The viewport matrix maps the surviving portion of the block into a "3D viewport".
• This matrix maps the standard cube into a block shape
• whose x and y values extend across the viewport, and
• whose z component extends from 0 to 1.
22.List the modelview matrix transformation functions.
– glTranslatef()
– glRotatef()
– glScalef()
23.List out the Projection matrix transformation functions.
β glFrustum ()
β gluPerspective ()
β glOrtho ()
β gluOrtho2D ()
24.What are the 2D transformation functions in OpenGL?
οΏ½ glScaled(sx, sy, 1.0);
οΏ½ glTranslated(dx, dy, 0);
οΏ½ glRotated(angle, 0, 0, 1);
PART B
1.What is color model? Explain any two.
2.What is animation? Explain design animation sequences.
3.Explain in detail about OpenGL programming.
4.How basic graphics primitives are achieved in OpenGL?
5.How to draw 3D scenes in OpenGL?
6.How to draw 3D objects in OpenGL?
UNIT IV
RENDERING
UNIT IV
1.What is Shading Model?
οΏ½ A shading model dictates how light is scattered or reflected from a surface.
οΏ½ A shading model frequently used in graphics has two types of light sources.
β’ Point light sources
β’ Ambient light
2.How many ways does the incident light interact with the surface?
οΏ½ The incident light interacts with the surface in three different ways.
β’ Some is absorbed by the surface and converted into heat.
β’ Some is reflected from the surface.
β’ Some is transmitted into the interior of the object, as in the case of a piece of
glass.
3.What is Black body?
οΏ½ If all the incident light is absorbed, the object appears black and is known as a
black body.
4.What are the types of reflection of incident light?
οΏ½ There are two types of reflection of incident light.
β’ Diffuse scattering
β’ Specular reflection
5.What is Diffuse Scattering?
οΏ½ It occurs when some of the incident light penetrates the surface slightly and is
re-radiated uniformly in all directions.
οΏ½ Scattered light interacts strongly with the surface, so its color is usually affected by
the nature of the material out of which the surface is made.
6.What is Lambertβs law?
οΏ½ When the surface S is tilted away from the light by angle θ, the area it subtends
is only the fraction cos(θ) of what it was.
οΏ½ So the brightness of S is reduced by that same fraction.
οΏ½ This relationship between brightness and surface orientation is often called
Lambert's law.
7.What is diffuse reflection coefficient?
οΏ½ For the intensity of the diffuse component, we can adopt the expression
Id = Is ρd (s · m) / (|s||m|)
οΏ½ where Is is the intensity of the light source,
οΏ½ and ρd is the diffuse reflection coefficient.
8.What is Specular reflection?
οΏ½ Real objects do not scatter light uniformly in all directions.
οΏ½ So a specular component is added to the shading model.
οΏ½ Specular reflection causes highlights, which can add significantly to the realism of
a picture when objects are shiny.
9.What are the commands used in OpenGL for shadings?
οΏ½ Flat shading is established in OpenGL with the command
glShadeModel(GL_FLAT);
οΏ½ Gouraud shading is established in OpenGL with the command
glShadeModel(GL_SMOOTH);
10.What are the types of Shading Models?
There are two main types:
β’ Flat shading
β’ Smooth shading
Smooth shading is further divided into Gouraud shading and Phong shading.
11.What is lateral inhibition?
οΏ½ Edges between faces appear more pronounced than they would on an actual
physical object, due to a phenomenon in the eye known as lateral inhibition.
12.What is Gouraud Shading?
οΏ½ Computationally speaking, Gouraud shading is only modestly more expensive than
flat shading.
οΏ½ Gouraud shading is established in OpenGL with the function
o glShadeModel(GL_SMOOTH);
13. What is Phong Shading?
οΏ½ Greater realism can be achieved, particularly with regard to highlights on shiny
objects, by approximating the normal vector to the face at each pixel.
οΏ½ This type of shading is called Phong shading.
14.What are the drawbacks of Phong shading?
οΏ½ Phong shading is relatively slow.
οΏ½ More computation is required per pixel.
οΏ½ Phong shading can take six to eight times longer than Gouraud Shading.
15.Why OpenGL is not setup to do Phong Shading?
οΏ½ Because OpenGL applies the shading model once per vertex, right after the
modelview transformation.
οΏ½ Normal vector information is not passed to the rendering stage that follows the
perspective transformation and division.
15.What is Texture?
οΏ½ The realism of an image is greatly enhanced by adding surface texture to the
various faces of a mesh object.
οΏ½ A texture can be uniform, such as a brick wall, or irregular, such as wood grain
or marble.
16.What are the types of Texture?
οΏ½ There are numerous sources of textures.
οΏ½ The most common textures are
β’ Bitmap textures
β’ Procedural texture
17.What is the basic function used in Texture?
οΏ½ Basic function is
texture (s, t)
οΏ½ This function produces a color or intensity value for each value of s and t
between 0 and 1.
18.What are Bitmap textures?
οΏ½ Textures are often formed from bitmap representations of images, such as a
digitized photo, clip art, or an image computed previously in some program.
19.What are Texels?
οΏ½ A texture formed from a bitmap consists of an array, say txtr[c][r], of color
values, often called texels.
οΏ½ If the array has C columns and R rows, the indices c and r vary from 0 to C-1
and 0 to R-1 respectively.
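Sampling a texel array means mapping the texture coordinates (s, t) to a row and column index. A minimal nearest-texel sketch follows; the function name and the 4x4 array size are illustrative assumptions, not from the text.

```c
#define TEX_C 4   /* columns (hypothetical size for the sketch) */
#define TEX_R 4   /* rows */

/* Nearest-texel lookup: map s and t in [0,1] to indices into an
   array txtr[row][column] of intensity values. */
float texel(float txtr[TEX_R][TEX_C], float s, float t)
{
    int c = (int)(s * TEX_C);        /* column index from s */
    int r = (int)(t * TEX_R);        /* row index from t    */
    if (c > TEX_C - 1) c = TEX_C - 1;   /* clamp s = 1.0 edge */
    if (r > TEX_R - 1) r = TEX_R - 1;   /* clamp t = 1.0 edge */
    return txtr[r][c];
}
```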
20. What is Procedural Texture?
οΏ½ Alternatively, we can define a texture by a mathematical function or procedure;
for example, the following spherical shape.
21.Write an OpenGL program to generate a Procedural Texture.
float fakeshape (float s, float t)
{
float r = sqrt((s-0.5) * (s-0.5) + (t-0.5) * (t-0.5));
if (r < 0.3)
return 1 - r/0.3; //sphere intensity
else
return 0.2; //dark background
}
22.What are the advantages of adding shadows?
οΏ½ Shadows make an image much more realistic.
οΏ½ It shows how the objects are positioned with respect to each other.
οΏ½ Using it we can identify the position of light source.
23. What is Shadow Buffer?
οΏ½ One method for drawing shadows uses a variant of the depth buffer that
performs the removal of hidden surfaces.
οΏ½ In this method, an auxiliary second depth buffer, called the shadow buffer, is
employed for each light source. This requires a lot of memory.
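The shadow-buffer test itself is a per-point depth comparison against the light's buffer. The sketch below shows just that comparison; the names and the small bias term (a common trick to avoid self-shadowing) are illustrative assumptions.

```c
/* A point is in shadow if it lies farther from the light source than
   the depth already recorded in the light's shadow buffer for that
   direction -- i.e. something nearer the light blocks it. */
int inShadow(double shadowBufDepth, double pointDepthFromLight)
{
    const double bias = 1e-3;   /* tolerance against self-shadowing */
    return pointDepthFromLight > shadowBufDepth + bias;
}
```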
24.What is meant by Sliding the Camera?
οΏ½ Sliding a camera means moving it along one of its own axes,
οΏ½ that is, in the u, v or n direction, without rotating it.
movement along βnβ : forward or backward
movement along βuβ : left or right
movement along βvβ : up or down
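Sliding moves the eye by given amounts along the camera's own u, v and n vectors. A sketch of such a `slide` routine follows; the `Camera` struct here is a stripped-down illustration, not the full camera class from the text.

```c
typedef struct { double x, y, z; } Vec3;

/* Minimal camera state: eye position plus its own u, v, n axes. */
typedef struct { Vec3 eye, u, v, n; } Camera;

/* Move the eye by delU, delV, delN along the camera's own axes:
   delN moves forward/backward, delU left/right, delV up/down. */
void slide(Camera *cam, double delU, double delV, double delN)
{
    cam->eye.x += delU * cam->u.x + delV * cam->v.x + delN * cam->n.x;
    cam->eye.y += delU * cam->u.y + delV * cam->v.y + delN * cam->n.y;
    cam->eye.z += delU * cam->u.z + delV * cam->v.z + delN * cam->n.z;
}
```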
25. What is meant by Rotating the Camera?
οΏ½ We want to roll, pitch or yaw the camera.
οΏ½ Each of these involves a rotation of the camera about one of its own axes.
οΏ½ To roll the camera is to rotate it about its own n-axis.
26.What are the shading methods available?
1. Circulism
2. Blended circulism
3. Dark blacks
4. Loose cross hatching
5. Tight cross hatching
6. Powder shading
PART B
1. What is shading model? Explain it with types.
2. What is texture? How to add texture to faces?
3. Explain about the adding shadows to objects.
4. How to build a camera in your program?
5. What are the methods to create shaded objects?
6. Explain about drawing shadows.
UNIT V
FRACTALS
1.What is Fractal?
οΏ½ A fractal is a rough or fragmented geometric shape that can be split into parts,
οΏ½ each of which is a reduced copy of the whole.
οΏ½ Such a property is called Self Similarity.
2.What is Self Similarity?
οΏ½ Self Similarity is a typical property of fractals.
οΏ½ A self similar object is exactly or approximately similar to a part of itself.
3.What is Koch Curve?
οΏ½ Very complex curves can be fashioned recursively by repeatedly "refining" a simple
curve.
οΏ½ The simplest example perhaps is the Koch curve,
οΏ½ which was discovered by the mathematician Helge von Koch.
οΏ½ This curve stirred great interest in the mathematical world because it produces an
infinitely long line within a region of finite area.
4.What is Koch Snowflake?
οΏ½ It is formed out of three Koch curves joined together.
οΏ½ The perimeter of the i-th generation shape Si is three times the length of a simple
Koch curve, and so is 3(4/3)^i, which grows without limit as i increases.
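The perimeter formula can be checked numerically. A sketch follows (the function name is illustrative): each generation replaces every edge with 4 edges of 1/3 the length, so the perimeter gains a factor of 4/3 per generation, starting from a triangle of perimeter 3.

```c
/* Perimeter of the i-th generation Koch snowflake for a unit triangle:
   P_i = 3 * (4/3)^i, computed by repeated multiplication. */
double kochPerimeter(int i)
{
    double p = 3.0;                 /* S0: equilateral triangle, side 1 */
    for (int k = 0; k < i; ++k)
        p *= 4.0 / 3.0;             /* each edge -> 4 edges of 1/3 length */
    return p;
}
```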
5.What are peano curves?
Peano curves are fractal-like structures that are drawn through a recursive process.
6.What are the types of peano curves?
οΏ½ The two most famous Peano curves are the Hilbert and Sierpinski curves.
οΏ½ Some low order Hilbert curves are shown below
7. How does the S-Copier make the images?
οΏ½ It contains 3 lenses,
οΏ½ each of which reduces the input image to one-half its size
οΏ½ and moves it to a new position.
οΏ½ These three reduced and shifted images are superposed on the printed output.
οΏ½ Scaling and shifting are easily done by affine transformations.
8.What are Mandelbrot sets?
οΏ½ The Mandelbrot set is a mathematical set of points, whose boundary generates a
distinctive and easily recognisable two dimensional fractal shape.
οΏ½ The set is closely related to the Julia Set.
οΏ½ It generates similarly complex shapes.
οΏ½ This is named after the mathematician Benoit Mandelbrot.
9.What is Iteration Theory?
οΏ½ Julia and Mandelbrot sets arise from a branch of analysis known as iteration
theory (or dynamical systems theory).
οΏ½ This theory asks what happens when one iterates a function endlessly.
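For the Mandelbrot set the iterated function is f(z) = z² + c, starting from z = 0: c belongs to the set when the orbit stays bounded. A sketch of the standard escape-time iteration follows (names are illustrative).

```c
/* Iterate f(z) = z^2 + c from z = 0 and count the steps until |z| > 2
   (|z|^2 > 4), after which the orbit is known to escape to infinity.
   Returns maxIter if the orbit survives, i.e. c is presumed in the set. */
int escapeCount(double cr, double ci, int maxIter)
{
    double zr = 0.0, zi = 0.0;
    for (int k = 0; k < maxIter; ++k) {
        double t = zr*zr - zi*zi + cr;      /* real part of z^2 + c */
        zi = 2.0*zr*zi + ci;                /* imaginary part       */
        zr = t;
        if (zr*zr + zi*zi > 4.0) return k;  /* escaped              */
    }
    return maxIter;
}
```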
10.What are Random Fractals?
οΏ½ The fractal shapes seen so far are completely deterministic:
οΏ½ they are completely predictable (even though they are very complicated).
οΏ½ In graphics, the term fractal has also become widely associated with randomly
generated curves and surfaces that exhibit a degree of self similarity.
οΏ½ These curves are used to produce "naturalistic" shapes for representing objects
such as ragged mountains, grass and fire.
11.What are the stages of Fractalization?
οΏ½ There are three stages of fractalization.
First Stage:
οΏ½ The midpoint of AB is perturbed to form point C.
Second Stage:
οΏ½ Each of the two segments has its midpoint perturbed to form points D and E.
Third Stage:
οΏ½ At the final stage, new points F...........I are added.
12. How to Calculate fractalization in a program?
οΏ½ Line L passes through the midpoint M of segment S and is perpendicular to it.
οΏ½ A point C along L has the parametric form
C(t) = M + (B - A)^⊥ t
where M = (A + B)/2 and (B - A)^⊥ is perpendicular to the segment AB.
οΏ½ For most fractal curves, t is modelled as a Gaussian random variable with zero
mean and some standard deviation.
13. Write the program for drawing a Fractal Curve.
double MinLenSq, factor; //global variables
void drawfractal(Point2 A, Point2 B)
{
double beta = 0.7, stdDev = 0.1; //illustrative roughness and perturbation values
factor = pow(2.0, (1.0 - beta) / 2.0);
cvs.moveTo(A);
fract(A, B, stdDev); //recursively fractalize the segment AB
}
14.What is Ray Tracing?
οΏ½ Ray tracing is a technique for generating an image by tracing the path of light
through pixels in an image plane
οΏ½ and simulating the effect of its encounters with virtual objects.
οΏ½ This technique can produce a very high degree of visual realism,
οΏ½ usually higher than that of scanline rendering,
οΏ½ but at a greater computational cost.
15.What are the features of Ray Tracing?
οΏ½ Some interesting visual effects are easily incorporated:
β’ Shadowing
β’ Reflection
β’ Refraction
16.How to compute hit point and color in Ray Tracing?
Computing the hit point:
οΏ½ When all objects have been tested, the object with the smallest hit time is the
closest, and the location of the hit point on that object is found.
Computing the color:
οΏ½ The color of the light received by the object, in the direction of the eye, is
computed and stored in the pixel.
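Choosing the closest object reduces to finding the smallest positive hit time among the intersection results. A sketch with illustrative names:

```c
/* Given n candidate hit times (one per object; non-positive means the
   ray missed that object or hit behind the origin), return the index
   of the closest object and store its hit time in *tHit.
   Returns -1 if every object was missed. */
int closestHit(const double t[], int n, double *tHit)
{
    int best = -1;
    for (int i = 0; i < n; ++i)
        if (t[i] > 0.0 && (best < 0 || t[i] < *tHit)) {
            best = i;
            *tHit = t[i];
        }
    return best;
}
```

The hit point itself is then ray origin + tHit * ray direction.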
17.What is βobject listβ in Ray Tracing method?
οΏ½ Descriptions of all the objects are stored in an object list.
οΏ½ This is a linked list of descriptive records as shown below.
οΏ½ The ray that is shown intersects a sphere, cylinder and two cones.
οΏ½ All the other objects are missed.
18.What is Solid Texture?
οΏ½ Solid texture is sometimes called β3D textureβ.
οΏ½ The object is considered to be carved out of a block of solid material that itself has
texturing.
οΏ½ The ray tracer reveals the colour of the texture at each point on the surface of the
object.
19.What are Compound Objects? (or)
What are Boolean Objects? (or)
What are CSG Objects?
οΏ½ Complex shapes are defined by set operations (also called Boolean operations) on
simpler shapes.
οΏ½ Objects such as lenses and hollow fishbowls are easily formed by combining the
generic shapes.
οΏ½ Such objects are variously called compound objects (or) Boolean objects (or) CSG
objects.
20.What are the three Boolean operators?
β’ Union
β’ Intersection
β’ Difference
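When ray tracing CSG objects, the three operators reduce to simple logic on the inside/outside tests of the two component solids. A sketch (function names are illustrative):

```c
/* Given the results of inside-tests for point p against solids A and B
   (1 = inside, 0 = outside), membership in the compound object is: */
int insideUnion(int inA, int inB)        { return inA || inB; }
int insideIntersection(int inA, int inB) { return inA && inB; }
int insideDifference(int inA, int inB)   { return inA && !inB; } /* A - B */
```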
PART B
1. How to create an image by iterated functions?
2. What are Mandelbrot sets and Julia sets?
3. Explain about Random fractals.
4. Explain in detail about ray tracing.
5. Explain about reflections and transparency.
6. How are the Boolean operations applied to objects?
ROAD MAP
UNIT I
Output primitives
β Definition
β Simple geometric components
β Additional output primitives
Line
β Stair step effect (jaggies)
Line-drawing algorithms
β Slope-intercept equation
DDA algorithm
Bresenham algorithm
β derivations
β algorithm
β problem
Circle algorithm
β General form
β Polar form
β Midpoint circle algorithm
β Theory & derivation
β Algorithm
β Problem
Ellipse algorithm
β General form
β Polar form
β Midpoint ellipse algorithm
β Theory & derivation
β Algorithm
β Problem
Attributes of output primitives
β Attribute parameter
β Line attributes
β’ Type,
o Solid lines,
o Dashed lines,
o And dotted lines
β’ Width
β’ color.
β Pen and brush options
β’ Diagram
β Line color
β Curve attributes
β Color and grayscale levels
β Direct storage scheme
β’ Table 8 color code
β Grayscale
β’ Table(4 level grayscale level)
β Area-fill attributes
β’ Fill styles
β Character attributes
β’ Font
β’ Size
β’ Color
β’ Diagram
2D transformation
β Translation,
β’ Diagram
β’ Equation
β’ Matrix format
β Rotation
β’ Diagram
β’ Equation
β’ Matrix format
β Scaling
β’ Diagram
β’ Equation
β’ Differential scaling.
β’ Matrix format
β Reflection
β’ Diagrams
β’ Definition
β’ Matrix format
β Shear
β’ Diagram
β’ Equation
β’ Matrix format
Transformation functions
Translate (translateVector, matrixTranslate)
Rotate (theta, matrixrotate)
Scale (scalevector, matrixscale)
composematrix (matrix2, matrix1, matrixout)
UNIT II
3D Concepts
β Depth cueing
Projections
β Parallel projection
Diagram
Orthographic parallel projection
oblique parallel projection
diagrams
β Perspective projection
Diagrams
Equations
3D Representation
β Boundary representations
β Space-partitioning representation
β Polygon surfaces
β’ Polygon tables
β’ Geometric tables
οΏ½ Attribute tables
1. A vertex table,
2. An edge table, and
3. A polygon table.
β Plane equations
β Polygon meshes
β’ Triangle strip
β’ Quadrilateral mesh
β Problem
β Solution
Curved lines and surfaces
β Quadric surfaces
β’ Sphere
β’ Ellipsoid
β’ Spline
οΏ½ Spline specifications
Visualization of data sets
β Visual representations for scalar fields
β’ diagrams
β’ Pseudo-color methods
β Visual representations for vector fields
β’ Diagrams
β Visual representations for tensor fields
β’ Diagrams
3D Transformation
β Translation
β’ Diagram
β’ Equation
β’ Matrix form
β Rotation
β’ Diagram
β’ Coordinate-axes rotations
β’ Equation
β’ Matrix form
β Scaling
β’ Diagram
β’ Equation
β’ Matrix form
β Shear
β’ Diagram
β’ Matrix form
β Reflection
β’ Diagram
β’ Matrix
Viewing pipeline
β Diagram
β Viewing coordinates
β Diagram
β Matrix
Visible surface detection
[ totally 11 methods ]
1. Back-face detection
β’ Equations
β’ Diagrams
2. Depth-buffer method
β’ Algorithm
3. A-buffer method
UNIT III
Color models
β Chromaticity diagram
β Colors representation
β Diagram
β Uses of chromaticity diagram
RGB color model
β Equation
β Unit cube diagram
β Explanation
β Additive model
YIQ color model
β Explanation
β NTSC signals
β RGB into YIQ
β YIQ into RGB
CMY color model
β Video monitors vs printers, plotters:
β Subtractive process
β Unit cube diagram
β Explanation
β Printing process
β Conversion of RGB into CMY
β Conversion of CMY into RGB
HSV color model
β Explanation
β Diagrams
Animation
β Design of animation sequences
1. Storyboard layout
2. Object definitions
3. Key-frame specifications
4. Generation of in-between frames
Raster animations
β Explanation
β Diagrams
Key-frame systems
β Morphing
β Diagrams
OPENGL
β Advantages:
β Features:
β OpenGL operation
β Diagram
β Glut
β Sample program
β Glut functions
Basic graphics primitives
β Sample code
β Format of OpenGL commands
β OpenGL data types
β Sample code
β Other graphics primitives in OpenGL
β Example
Drawing 3d scenes with OpenGL
β Viewing process & graphics pipeline
β Diagrams
β Important matrices
i. Model view matrix
ii. Projection matrix
Diagram
iii. Viewport matrix
Diagram
Drawing three dimensional objects
β 3d viewing pipeline
β Diagram
β OpenGL functions for setting up transformations
β 3d viewing β model view matrix
β Sample code
UNIT IV
Introduction to Shading Model
β Light sources
β Black body
β types of reflection
β’ Diffuse scattering
οΏ½ Computing the diffuse component
οΏ½ Diagram
οΏ½ Lambertβs Law
οΏ½ diffuse reflection coefficient
β’ Specular reflection
Flat Shading
β OpenGL function
β Diagram
β lateral inhibition
Smooth Shading
β Types
1. Gouraud Shading
i. OpenGL function
ii. Diagrams
2. Phong Shading
i. Diagrams
ii. Drawback:
Adding texture to faces
β Diagram
β Functions
β Types
β’ Bitmap textures
Texels
β’ Procedural texture
Fakeshape() function
Pasting The Texture On To A Flat Surface
β Sample code
β Diagram
β Mapping a square to rectangle:
β Diagrams
Adding Shadows Of Objects
β Diagrams
β advantages
β Shadow Buffer
Building a Camera in a Program
β Camera functions
β Camera class
β setModelViewMatrix() function
β Sliding the Camera
β Rotating the Camera
Creating Shaded Objects
β Diagrams
β Shading Methods
1. Circulism
2. Blended circulism
3. Dark Blacks
4. Loose Cross Hatching
5. Tight Cross Hatching
6. Powder Shading
Rendering The Texture
β Diagram
β Explanation
Drawing Shadows
β Diagram
β Explanation
β Steps
Step 1: Activate and position the shadows
Step 2: Draw the Shadows Only
Step 3: Draw the Shadows from Above
Step 4: Soften the Shadows
Step 5: Create a new Shadow Material
Step 6: Apply the material to a ground polygon
Shadow Mapping:
Advantages:
Disadvantages
UNIT V
Fractals & self similarity
β Fractal:
β Self similarity:
β Self- similar curves:
β Exactly self similar:
β Statistically self similar:
β Mandelbrot:
β Koch curve:
o Two generations of the koch curve
o Koch snowflake
Peano curves (or) space-filling curves
β Diagram
β Example
Creating an image by iterated functions
β Experimental copier
β Sierpinski copier
o Diagrams
Mandelbrot sets
β Iteration theory
β Mandelbrot sets and iterated function systems
β Diagram
Julia sets
β Diagrams
β Drawing filled-in julia sets
Random fractals
β Fractalizing a segment
o Diagram
β Stages of fractalization
o First stage:
οΏ½ Diagram
o Second stage:
οΏ½ Diagram
o Third stage:
οΏ½ Diagram
β Calculation of fractalization in a program
β’ Diagram
β’ Equation
β’ Fract() function
β’ Drawing a fractal curve
β’ Drawfractal() function
Ray tracing
β Introduction
β Diagram
β Reverse process
β Features of ray tracing:
Overview of the ray-tracing process
β Sample code
β Computing hit point:
β Computing color:
o Diagram
β Object list:
o Diagram
Adding surface texture
β Solid texture
β Example:
β Diagram
Reflections & transparency
β Diagrams
β Equation
β Local component
β The refraction of light:
o Diagram
o Equation
Boolean operations on objects
β CSG objects
β Boolean operators
β’ Union
β’ Intersection
β’ Difference
β Diagram
β Equations
β Union of four primitives
β Diagram
β Equations
DIAGRAMS
UNIT I
Stair step Effect (jaggies)
Line
y= m.x + b
Circle
Ellipse
Line Types
Pen Brush Options
Fill Styles
Hatch Fill
Character attribute
UNIT II
Parallel Projection
Perspective Projection
Polygon Table
Triangle strip
Quadrilateral mesh
Sphere
ELLIPSOID
SPLINE
Translation
Rotation
Scaling
οΏ½ This sequence of transformations is demonstrated in the following figure.
Reflection
Shear
Viewing Pipeline
Viewing Coordinates
Transformation from World to Viewing Coordinates
UNIT III
Chromaticity diagram
RGB color Model
CMY COLOR MODEL
HSV COLOR MODEL
Morphing
OpenGL operation:
Other Graphics Primitives in OPENGL
DRAWING 3D SCENES WITH OPENGL
Projection matrix
Viewport matrix
DRAWING THREE DIMENSIONAL OBJECTS
3D Viewing β Model View Matrix
UNIT IV
Flat Shading
Gouraud Shading
Phong Shading
ADDING TEXTURE TO FACES:
PROCEDURAL TEXTURE
PASTING THE TEXTURE ON TO A FLAT SURFACE:
MAPPING A SQUARE TO RECTANGLE
ADDING SHADOWS OF OBJECTS
Rotating the Camera
CREATING SHADED OBJECTS
RENDERING THE TEXTURE
DRAWING SHADOWS
UNIT V
TWO GENERATIONS OF THE KOCH CURVE
THE FIRST FEW GENERATIONS OF THE KOCH SNOWFLAKE
KOCH SNOWFLAKE, S3, S4, AND S5
PEANO CURVES (OR) SPACE-FILLING CURVES
Low order Hilbert curve
Experimental Copier
Sierpinski Copier
S-Copier Iterated output
MANDELBROT SETS
IFS System
JULIA SETS
Random Fractal
First Stage
Second Stage
Third Stage
Ray Tracing
Compute color in Ray Tracing
Object list
INTERSECTION OF A RAY WITH AN OBJECT
ADDING SURFACE TEXTURE
Reflection and Transparency
Contributions of light at each part and its tree structure
The refraction of light:
Compound Objects built from sphere
Union of 4 primitives
β¦β¦β¦β¦β¦β¦β¦β¦β¦β¦β¦β¦β¦β¦β¦β¦β¦.THE ENDβ¦β¦β¦β¦β¦β¦β¦β¦β¦β¦β¦β¦β¦β¦β¦β¦β¦β¦..