8/8/2019 Graphics 4
http://slidepdf.com/reader/full/graphics-4 1/16
CAP405: COMPUTER GRAPHICS
Homework Title / No: 4 Course Code:CAP405
Course Instructor: MR. SANJAY SOOD
DOA: 24/4/ 2010 DOS: 8/5/ 2010
Student’s Roll No:-RD3803B50 Section No. : D3803
Declaration:
I declare that this assignment is my individual work. I have not copied from any other student's work or from any other source except where due acknowledgment is made explicitly in the text, nor has any part been written for me by another person.
Student’s Signature:
SANDEEP SAINI
Evaluator’s comments: ________________________________________________________________
Marks obtained: ___________ out of ______________________
Part A
1) While clipping a polygon, it is said that Sutherland-Hodgman is a better method than the Weiler-Atherton polygon clipping algorithm. Perform clipping on a polygon and justify the above statement.
Clipping refers to an optimization where the computer only draws things that might be visible to the viewer.

Two methods are commonly used for clipping a polygon:
- Sutherland-Hodgman polygon clipping
- Weiler-Atherton polygon clipping

Sutherland-Hodgman algorithm
The Sutherland–Hodgman algorithm is used for clipping
polygons. It works by extending each line of the convex
clip polygon in turn and selecting only vertices from the
subject polygon that are on the visible side.
The algorithm begins with an input list of all vertices in the
subject polygon. Next, one side of the clip polygon is
extended infinitely in both directions, and the path of the
subject polygon is traversed. Vertices from the input list are
inserted into an output list if they lie on the visible side of
the extended clip polygon line, and new vertices are added
to the output list where the subject polygon path crosses the extended clip polygon line.
This process is repeated iteratively for each clip polygon
side, using the output list from one stage as the input list
for the next. Once all sides of the clip polygon have been
processed, the final generated list of vertices defines a new
single polygon that is entirely visible. Note that if the subject polygon was concave at vertices outside the clipping polygon, the new polygon may have coincident (i.e. overlapping) edges; this is acceptable for rendering, but not for other applications such as computing shadows.
[Figure: all steps of clipping a concave polygon 'W' by a 5-sided convex polygon]
The Weiler-Atherton algorithm overcomes this by returning a set of divided polygons, but it is more complex and computationally more expensive, so Sutherland-Hodgman is used for many rendering applications.
Sutherland–Hodgman can also be extended into 3D space
by clipping the polygon paths based on the boundaries of
planes defined by the viewing space.
Weiler-Atherton clipping algorithm
The Weiler-Atherton algorithm is capable of clipping a concave polygon with interior holes to the boundaries of another concave polygon, also with interior holes. The polygon to be clipped is called the subject polygon (SP) and the clipping region is called the clip polygon (CP). The new boundaries created by clipping the SP against the CP are identical to portions of the CP. No new edges are created; hence, the number of resulting polygons is minimized.
The algorithm describes both the SP and the CP by a circular list of vertices. The exterior boundaries of the polygons are described clockwise, and the interior boundaries or holes are described counter-clockwise. When traversing the vertex list, this convention ensures that the inside of the polygon is always to the right. The boundaries of the SP and the CP may or may not intersect. If they intersect, the intersections occur in pairs: one intersection occurs when the SP edge enters the inside of the CP and one when it leaves. Fundamentally, the algorithm starts at an entering intersection and follows the exterior boundary of the SP clockwise until an intersection with the CP is found. At the intersection a right turn is made, and the exterior of the CP is followed clockwise until an intersection with the SP is found. Again, at the intersection, a right turn is made, with the SP now being followed. The process is continued until the starting point is reached. Interior boundaries of the SP are followed counter-clockwise.
A more formal statement of the algorithm is:

• Determine the intersections of the subject and clip polygons - Add each intersection to the SP and CP vertex lists. Tag each intersection vertex and establish a bidirectional link between the SP and CP lists for each intersection vertex.

• Process nonintersecting polygon borders - Establish two holding lists: one for boundaries which lie inside the CP and one for boundaries which lie outside. Ignore CP boundaries which are outside the SP. CP boundaries inside the SP form holes in the SP; consequently, a copy of the CP boundary goes on both the inside and the outside holding lists. Place the boundaries on the appropriate holding list.

• Create two intersection vertex lists - One, the entering list, contains only the intersections for the SP edge entering the inside of the CP. The other, the leaving list, contains only the intersections for the SP edge leaving the inside of the CP. The intersection type will alternate along the boundary, so only one determination is required for each pair of intersections.
• Perform the actual clipping - Polygons inside the CP are found using the following procedure:

o Remove an intersection vertex from the entering list. If the list is empty, the process is complete.
o Follow the SP vertex list until an intersection is found. Copy the SP list up to this point to the inside holding list.
o Using the link, jump to the CP vertex list.
o Follow the CP vertex list until an intersection is found. Copy the CP vertex list up to this point to the inside holding list.
o Jump back to the SP vertex list.
o Repeat until the starting point is again reached. At this point, the new inside polygon has been closed.

Polygons outside the CP are found using the same procedure, except that the initial intersection vertex is obtained from the leaving list and the CP vertex list is followed in the reverse direction. The polygon lists are copied to the outside holding list.
2) Write a procedure for area-Subdivision algorithm
for visible surface.
The area-subdivision method takes advantage of area coherence in a scene by locating those view areas that represent part of a single surface.

The total viewing area is successively divided into smaller and smaller rectangles until each small area is simple, i.e. it is a single pixel, is covered wholly by a part of a single visible surface, or is covered by no surface at all.

The procedure to determine whether we should subdivide an area into smaller rectangles is:

1. First classify each surface according to its relation with the area:
Surrounding surface - a single surface completely encloses the area
Overlapping surface - a single surface that is partly inside and partly outside the area
Inside surface - a single surface that is completely inside the area
Outside surface - a single surface that is completely outside the area.

To speed up classification, we can use the bounding rectangles of surfaces for early confirmation or rejection of the type a surface belongs to.

2. Check the result from step 1; if any of the following conditions is true, then no subdivision of this area is needed:
a. All surfaces are outside the area.
b. Only one inside, overlapping, or surrounding surface is in the area.
c. A surrounding surface obscures all other surfaces within the area boundaries.

For cases b and c, the color of the area can be determined from that single surface.
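As a rough sketch of this procedure, the following models surfaces as axis-aligned rectangles at constant depth (a simplification chosen only to keep the example short; a real implementation classifies projected polygons). All names are invented for illustration:

```cpp
#include <vector>
#include <algorithm>

struct Rect { int x0, y0, x1, y1; };               // half-open pixel region
struct Surface { Rect r; int depth; int color; };   // smaller depth = nearer

static bool outside(const Rect& s, const Rect& a) {
    return s.x1 <= a.x0 || s.x0 >= a.x1 || s.y1 <= a.y0 || s.y0 >= a.y1;
}
static bool surrounds(const Rect& s, const Rect& a) {
    return s.x0 <= a.x0 && s.y0 <= a.y0 && s.x1 >= a.x1 && s.y1 >= a.y1;
}
static void fill(const Rect& a, int color, std::vector<int>& frame, int w) {
    for (int y = a.y0; y < a.y1; ++y)
        for (int x = a.x0; x < a.x1; ++x) frame[y * w + x] = color;
}

// Resolve the area `a` of a w-pixel-wide frame, subdividing when not simple.
void subdivide(const Rect& a, const std::vector<Surface>& surfs,
               std::vector<int>& frame, int w, int bg) {
    std::vector<Surface> rel;                       // surfaces touching `a`
    for (size_t i = 0; i < surfs.size(); ++i)
        if (!outside(surfs[i].r, a)) rel.push_back(surfs[i]);

    if (rel.empty()) { fill(a, bg, frame, w); return; }          // case a

    if (rel.size() == 1) {                                       // case b
        fill(a, bg, frame, w);
        Rect c = { std::max(rel[0].r.x0, a.x0), std::max(rel[0].r.y0, a.y0),
                   std::min(rel[0].r.x1, a.x1), std::min(rel[0].r.y1, a.y1) };
        fill(c, rel[0].color, frame, w);
        return;
    }

    // case c: a surrounding surface that is in front of everything else
    const Surface* sur = 0;
    for (size_t i = 0; i < rel.size(); ++i)
        if (surrounds(rel[i].r, a) && (!sur || rel[i].depth < sur->depth))
            sur = &rel[i];
    if (sur) {
        bool obscures = true;
        for (size_t i = 0; i < rel.size(); ++i)
            if (rel[i].depth < sur->depth) obscures = false;
        if (obscures) { fill(a, sur->color, frame, w); return; }
    }

    if (a.x1 - a.x0 == 1 && a.y1 - a.y0 == 1) {    // single pixel: nearest wins
        const Surface* best = &rel[0];
        for (size_t i = 1; i < rel.size(); ++i)
            if (rel[i].depth < best->depth) best = &rel[i];
        fill(a, best->color, frame, w);
        return;
    }

    // Not simple: split into (up to) four quadrants and recurse.
    int mx = (a.x1 - a.x0 > 1) ? (a.x0 + a.x1) / 2 : a.x1;
    int my = (a.y1 - a.y0 > 1) ? (a.y0 + a.y1) / 2 : a.y1;
    Rect q[4] = { {a.x0, a.y0, mx, my}, {mx, a.y0, a.x1, my},
                  {a.x0, my, mx, a.y1}, {mx, my, a.x1, a.y1} };
    for (int i = 0; i < 4; ++i)
        if (q[i].x0 < q[i].x1 && q[i].y0 < q[i].y1)
            subdivide(q[i], surfs, frame, w, bg);
}
```

Running this on a background surface with a nearer rectangle in front of it fills the overlap with the nearer color and everything else with the background surface's color.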
3)Write a program that allows a user to design a
picture from a menu of basic shapes by dragging
each selected shape into position with a pick device.
#include <dos.h>
#include <graphics.h>
#include<stdio.h>
#include<conio.h>
#include<iostream.h>
union REGS in, out;
int cirrad1=0,cirrad2;
void detectmouse ( )
{
in.x.ax = 0;
int86 (0X33,&in,&out);
if (out.x.ax == 0)
printf ("\nMouse failed to initialize");
else
printf ("\nMouse successfully initialized");
}
void showmousetext ( )
{
in.x.ax = 1;
int86 (0X33,&in,&out);
}
void showmousegraphics ( )
{
in.x.ax = 1;
int86 (0X33,&in,&out);
getch ();
closegraph ();
}
void hidemouse ( )
{
in.x.ax = 2;
int86 (0X33,&in,&out);
}
void draw ()
{
   int x = 0, y = 0;
   while (out.x.bx != 2)            /* right button ends the loop */
   {
      in.x.ax = 3;                  /* INT 33h function 3: read position and buttons */
      int86 (0X33, &in, &out);
      if (out.x.bx == 1)            /* left button held: drag the shapes here */
      {
         x = out.x.cx;
         y = out.x.dx;
         cleardevice ();
         setcolor (10);
         circle (x, y, cirrad1);    /* first circle */
         circle (x, y, cirrad2);    /* second circle */
         rectangle (x - 45, y - 30, x + 45, y + 30);
         line (x, y, x + 34, y + 23);
      }
      delay (10);
   }
   getch ( );
}
int main ( )
{
cout<<"There will be 2 circles followed by a rectangle and then a line";
cout<<"\nEnter the radii of the two circles ";
cin>>cirrad1;
cin>>cirrad2;
clrscr( );
int gdriver = DETECT, gmode, errorcode;
initgraph(&gdriver, &gmode, "d:\\tc\\bgi");
detectmouse ( );
showmousetext ( );
draw ( );
hidemouse ( );
getch ( );
return 0;
}
Part B
4) Design the scan-line algorithm for the removal of
hidden lines from a scene.
Scanline algorithms have a variety of applications in
computer graphics and related fields. These notes contain
some details and hints concerning the programming
assignments relating to scan conversion of polygons,
hidden feature elimination, and shaded rendering. The
most important thing to keep in mind when implementing an algorithm at the pixel level is to have a clear paradigm for how geometry specified in floating-point precision is mapped onto integer-valued pixels, and to stick to that model religiously!
It is assumed for both the polygonal case and the
parametric curve case that the objects to be drawn have
been transformed to a screen space with X going to the
right, Y going up and Z going into the screen. Furthermore,
the perspective transformation is assumed to have been
performed on all objects so that an orthographic projection
of X and Y onto the screen is appropriate. In the case of
parametric curved surfaces this serves to alter the form of the functions somewhat but the processing performed
upon those functions remains the same. A scan line
algorithm basically consists of two nested loops, one for
the Y coordinate going down the screen and one for the X
coordinate going across each scan line of the current Y. For
each execution of the Y loop, a plane is defined by the
eyepoint and the scan line on the screen. All objects to be
drawn are intersected with this plane. The result is a set of
line segments in XZ, one (or more) for each potentially
visible polygon on that scan line. These line segments are
then processed by the X scan loop. For each execution of
this loop a scan ray is defined by the eyepoint and a
picture element on the screen. All segments are
intersected with this ray to yield a set of points, one for
each potentially visible polygon at that picture element.
These points are then sorted by their Z position. The point
with the smallest Z is deemed visible and an intensity is
computed from it. The processing during the X scan is,
then, fundamentally the same as the processing during the
Y scan except for the change in dimensionality. During the
Y scan, 3D Polygons are intersected with a plane to
produce 2D line segments. During the X scan, 2D line
segments are intersected with a line to produce 1D points.
For each scan line:
1. Find the intersections of the scan line with all edges of the polygon.
2. Sort the intersections by increasing x coordinate.
3. Fill in all pixels between pairs of intersections.

Problem: calculating intersections is slow.
Solution: incremental computation / coherence.
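The per-scan-line steps above can be sketched as a routine that computes the intersection spans for one scan line; the function name and the half-open edge rule are illustrative choices, not part of any standard API:

```cpp
#include <vector>
#include <algorithm>

struct Pt { double x, y; };

// Return the sorted x coordinates where the scan line y = yscan crosses the
// polygon's edges. Pixels are then filled between xs[0]-xs[1], xs[2]-xs[3], ...
std::vector<double> scanIntersections(const std::vector<Pt>& poly,
                                      double yscan) {
    std::vector<double> xs;
    for (size_t i = 0; i < poly.size(); ++i) {
        Pt a = poly[i], b = poly[(i + 1) % poly.size()];
        // Half-open rule [min, max) avoids counting a shared vertex twice.
        if ((a.y <= yscan && b.y > yscan) || (b.y <= yscan && a.y > yscan)) {
            double t = (yscan - a.y) / (b.y - a.y);
            xs.push_back(a.x + t * (b.x - a.x));
        }
    }
    std::sort(xs.begin(), xs.end());     // step 2: sort by increasing x
    return xs;
}
```

For a convex polygon this yields one span per scan line; for a concave one it yields several, which is exactly why the sort step is needed.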
5)Suppose you are given an image. How will you
detect the presence of Hidden surfaces and remove
hindrance from the image?
We can detect the presence of hidden surfaces with the help of visible-surface detection. There are two approaches, called the object-space method and the image-space method. An object-space method compares objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should label as visible.

In an image-space algorithm, visibility is decided point by point at each pixel position on the projection plane. Most visible-surface algorithms use the image-space method, although the object-space method can be used effectively to locate visible surfaces in some cases. Line-display algorithms, on the other hand, generally use the object-space method to identify visible lines in wireframe displays, but many image-space visible-surface algorithms can be adapted easily to visible-line detection.
Hidden surface removal algorithms:

Considering the rendering pipeline, the projection, the clipping, and the rasterization steps are handled differently by the following algorithms:
1. Z-buffering: - During rasterization the depth/Z value of each pixel is checked against an existing depth value. If the current pixel is behind the pixel in the Z-buffer, the pixel is rejected; otherwise it is shaded and its depth value replaces the one in the Z-buffer. Z-buffering supports dynamic scenes easily, and is currently implemented efficiently in graphics hardware; this is the current standard. The cost of using Z-buffering is that it uses up to 4 bytes per pixel, and that the rasterization algorithm needs to check each rasterized sample against the z-buffer. The z-buffer can also suffer from artefacts due to precision errors, although this is far less common now that commodity hardware supports 24-bit and higher precision buffers.
2. Ray tracing: - attempts to model the path of light rays to a viewpoint by tracing rays from the viewpoint into the scene. Although not a hidden surface removal algorithm as such, it implicitly solves the hidden surface removal problem by finding the nearest surface along each view ray. Effectively this is equivalent to sorting all the geometry on a per-pixel basis.
3. The Warnock algorithm: - divides the screen into smaller areas and sorts triangles within these. If there is ambiguity (i.e., polygons overlap in depth extent within these areas), then further subdivision occurs. At the limit, subdivision may occur down to the pixel level.
4. Coverage buffers and surface buffers: - faster than z-buffers and commonly used in games in the Quake I era. Instead of storing the Z value per pixel, they store a list of already-displayed segments per line of the screen. New polygons are then cut against already-displayed segments that would hide them. An S-buffer can display unsorted polygons, while a C-buffer requires polygons to be displayed from the nearest to the furthest. Because a C-buffer has no overdraw, it makes the rendering a bit faster. They were commonly used with BSP trees, which would give the polygon sorting.
5. Painter's algorithm: - sorts polygons by their barycentre and draws them back to front. This produces few artefacts when applied to scenes with polygons of similar size forming smooth meshes and back-face culling turned on. The cost here is the sorting step and the fact that visual artefacts can occur.
6. Binary space partitioning: - (BSP) divides a scene along planes corresponding to polygon boundaries. The subdivision is constructed in such a way as to provide an unambiguous depth ordering from any point in the scene when the BSP tree is traversed. The disadvantage here is that the BSP tree is created with an expensive pre-process, which means it is less suitable for scenes consisting of dynamic geometry. The advantage is that the data is pre-sorted and error-free, ready for the previously mentioned algorithms. Note that the BSP is not a solution to HSR by itself, only a help.
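The ordering step of the painter's algorithm (item 5 above) can be sketched as follows. The `Poly` structure and function names are invented for illustration, and larger z is taken here to mean farther from the viewer:

```cpp
#include <vector>
#include <algorithm>

// A polygon reduced to what the sort needs: its vertex depths and an id.
struct Poly { std::vector<double> zs; int id; };

// Depth of the polygon's barycentre (mean of its vertex depths).
double barycentreZ(const Poly& p) {
    double s = 0;
    for (size_t i = 0; i < p.zs.size(); ++i) s += p.zs[i];
    return s / p.zs.size();
}

// Sort back to front: the farthest polygon is drawn first, so nearer
// polygons painted later cover it, resolving visibility by overdraw.
void paintersOrder(std::vector<Poly>& polys) {
    std::sort(polys.begin(), polys.end(),
              [](const Poly& a, const Poly& b) {
                  return barycentreZ(a) > barycentreZ(b);
              });
}
```

This is also where the algorithm's weakness lives: two polygons whose barycentre order differs from their true visibility order (e.g. long interpenetrating polygons) produce the visual artefacts mentioned above.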
6) Is Z-buffer better than other hidden surface
algorithm? Give reasons.
Z-buffering is the management of image depth coordinates in three-dimensional (3D) graphics, usually done in hardware, sometimes in software. It is one solution to the visibility problem, which is the problem of deciding which elements of a rendered scene are visible, and which are hidden. The painter's algorithm is another common solution which, though less efficient, can also handle non-opaque scene elements. Z-buffering is also known as depth buffering.

When an object is rendered by a 3D graphics card, the
depth of a generated pixel (z coordinate) is stored in
a buffer (the z-buffer or depth buffer). This buffer is usually
arranged as a two-dimensional array (x-y) with one
element for each screen pixel. If another object of the
scene must be rendered in the same pixel, the graphics
card compares the two depths and chooses the one closer
to the observer. The chosen depth is then saved to the z-buffer, replacing the old one. In the end, the z-buffer will
allow the graphics card to correctly reproduce the usual
depth perception: a close object hides a farther one. This is
called z-culling.
The granularity of a z-buffer has a great influence on the scene quality: a 16-bit z-buffer can result
in artifacts (called "z-fighting") when two objects are very
close to each other. A 24-bit or 32-bit z-buffer behaves
much better, although the problem cannot be entirely
eliminated without additional algorithms. An 8-bit z-buffer
is almost never used since it has too little precision.
Z-buffer data in the area of video editing permits one to
combine 2D video elements in 3D space, permitting virtual sets, "ghostly passing through wall" effects, and complex
effects like mapping of video on surfaces. An application
for Maya, called IPR, permits one to perform post-rendering
texturing on objects, utilizing multiple buffers like z-
buffers, alpha, object id, UV coordinates and any data
deemed as useful to the post-production process, saving
time otherwise wasted in re-rendering of the video.
Z-buffer data obtained from rendering a surface from a
light's POV permits the creation of shadows in a scanline
renderer, by projecting the z-buffer data onto the ground
and affected surfaces below the object. This is the same
process used in non-raytracing modes by the free and open-source 3D application Blender.
Z Buffer Algorithm
1. Clear the color buffer to the background color
2. Initialize all xy coordinates in the Z buffer to one
3. For each fragment of each surface, compare depth
values to those already stored in the Z buffer
- Calculate the distance from the projection plane
for each xy position on the surface
- If the distance is less than the value currently
stored in the Z buffer:
Set the corresponding position in the color buffer to the color of
the fragment
Set the value in the Z buffer to the distance to that object
- Otherwise:
Leave the color and Z buffers unchanged
The reasons why the z-buffer is better than other hidden-surface algorithms:
- Z-buffer testing can increase application performance.
- Software buffers are much slower than specialized hardware depth buffers.
- The number of bitplanes associated with the Z buffer determines its precision or resolution.