February 2011
Master of Computer Application (MCA) – Semester 3
MC0072 – Computer Graphics – 4 Credits
(Book ID: B0810) Assignment Set – 2
1. Write a short note on the following:
A) Video mixing
Ans: The video controller provides the facility of video mixing, in which it accepts information from two images simultaneously: one from the frame buffer and the other from a television camera, recorder or other video source. This is illustrated in fig. 2.7. The video controller merges the two received images to form a composite image.
Fig. 2.7: Video mixing – the frame buffer and an external video signal source both feed the video controller, which drives the monitor
There are two types of video mixing. In the first, a graphics image is set into a video image. Here mixing is accomplished with hardware that treats a designated pixel value in the frame buffer as a flag to indicate that the video signal should be shown instead of the signal from the frame buffer; normally the designated pixel value corresponds to the background color of the frame buffer image.
In the second type of mixing, the video image is placed on top of the frame buffer image. Here, wherever the background color of the video image appears, the frame buffer is shown; otherwise the video image is shown.
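A minimal Python sketch of the first kind of mixing (the helper and the flag value are illustrative, not from the text):

# Flag-pixel video mixing: wherever the frame buffer holds the designated
# flag value (assumed here to be its background color), the external video
# signal is shown instead of the frame buffer signal.
BACKGROUND = (0, 0, 0)   # designated flag value (illustrative)

def mix(frame_buffer, video_frame, flag=BACKGROUND):
    # Build the composite image pixel by pixel.
    return [[video_px if fb_px == flag else fb_px
             for fb_px, video_px in zip(fb_row, video_row)]
            for fb_row, video_row in zip(frame_buffer, video_frame)]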
B) Frame buffer
Ans: A frame buffer is a video output device that drives a video display from a memory buffer containing a
complete frame of data.
The information in the memory buffer typically consists of color values for every pixel (point that can
be displayed) on the screen. Color values are commonly stored in 1‐bit binary (monochrome), 4‐bit
palettized, 8‐bit palettized, 16‐bit high color and 24‐bit true color formats. An additional alpha
channel is sometimes used to retain information about pixel transparency. The total amount of memory required to drive the frame buffer depends on the resolution of the output signal, and on the color depth and palette size.
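As a worked example (resolution chosen for illustration), a 640 x 480 output at 24-bit true color requires

$$640 \times 480 \times 24\ \text{bits} = 921{,}600\ \text{bytes} = 900\ \text{KB},$$

while the same resolution at 8-bit palettized color needs only 307,200 bytes plus a small 256-entry palette (256 x 3 = 768 bytes).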
Frame buffers differ significantly from the vector displays that were common prior to the advent of
the frame buffer. With a vector display, only the vertices of the graphics primitives are stored. The
electron beam of the output display is then commanded to move from vertex to vertex, tracing an
analog line across the area between these points. With a frame buffer, the electron beam (if the
display technology uses one) is commanded to trace a left‐to‐right, top‐to‐bottom path across the
entire screen, the way a television renders a broadcast signal. At the same time, the color information
for each point on the screen is pulled from the frame buffer, creating a set of discrete picture
elements (pixels).
The term "frame buffer" has also entered into colloquial usage to refer to a backing store of graphical
information. The key feature that differentiates a frame buffer from memory used to store graphics –
the output device – is lost in this usage.
Display modes
Frame buffers used in personal and home computing often had sets of defined "modes" under which
the frame buffer could operate. These modes would automatically reconfigure the hardware to
output different resolutions, color depths, memory layouts and refresh rate timings.
In the world of Unix machines and operating systems, such conveniences were usually eschewed in
favor of directly manipulating the hardware settings. This manipulation was far more flexible in that
any resolution, color depth and refresh rate was attainable – limited only by the memory available to
the frame buffer.
An unfortunate side‐effect of this method was that the display device could be driven beyond its
capabilities. In some cases this resulted in hardware damage to the display. More commonly, it
simply produced garbled and unusable output. Modern CRT monitors fix this problem through the
introduction of "smart" protection circuitry. When the display mode is changed, the monitor attempts
to obtain a signal lock on the new refresh frequency. If the monitor is unable to obtain a signal lock, or
if the signal is outside the range of its design limitations, the monitor will ignore the frame buffer
signal and possibly present the user with an error message.
LCD monitors tend to contain similar protection circuitry, but for different reasons. Since the LCD
must digitally sample the display signal (thereby emulating an electron beam), any signal that is out of
range cannot be physically displayed on the monitor.
Color palette
Frame buffers have traditionally supported a wide variety of color modes. Due to the expense of
memory, most early frame buffers used 1‐bit (2 color), 2‐bit (4 color), 4‐bit (16 color) or 8‐bit (256
color) color depths. The problem with such small color depths is that a full range of colors cannot be
produced. The solution to this problem was to add a lookup table to the frame buffers. Each "color"
stored in frame buffer memory would act as a color index; this scheme was sometimes called
"indexed color".
The lookup table served as a palette that contained data to define a limited number (such as 256) of
different colors. However, each of those [256] colors, itself, was defined by more than 8 bits, such as
24 bits, eight of them for each of the three primary colors. With 24 bits available, colors can be defined far more subtly and exactly, covering the full gamut the display can show. While having a limited total number of colors in an image is somewhat restrictive, the colors can be well chosen, and this scheme is markedly superior to direct 8-bit color.
The data from the frame buffer in this scheme determined which of the [256] colors in the palette was used for the current pixel, and the data stored in the lookup table (sometimes called the "LUT") went to three digital-to-analog converters to create the video signal for the display. The frame buffer's output data, instead of providing relatively crude primary-color data, thus served as an index – a number – choosing one entry in the lookup table: the index determined which palette entry to use, and the data in that entry determined precisely what color to display for the current pixel.
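A minimal Python sketch of this indexed-color path (the palette values and names are illustrative, not from any particular hardware):

# Indexed color: the frame buffer stores small indices; the lookup table
# (LUT) maps each index to a full 24-bit RGB triple sent to the three DACs.
palette = [(0, 0, 0)] * 256        # 256 entries, each an (R, G, B) triple
palette[1] = (255, 0, 0)           # entry 1: red (illustrative)
palette[2] = (255, 165, 0)         # entry 2: orange (illustrative)

frame_buffer_row = [2, 1, 1, 0]    # one scanline of 8-bit pixel indices

# What the video controller would emit for that scanline:
row_colors = [palette[index] for index in frame_buffer_row]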
Memory access
While frame buffers are commonly accessed via a memory mapping directly to the CPU memory
space, this is not the only method by which they may be accessed. Frame buffers have varied widely
in the methods used to access memory. Some of the most common are:
• Mapping the entire frame buffer to a given memory range.
• Port commands to set each pixel, range of pixels or palette entry.
• Mapping a memory range smaller than the frame buffer memory, then bank switching as
necessary.
The frame buffer organization may be chunky (packed pixel) or planar.
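As a rough illustration of the bank-switching method above, a Python sketch (sizes and names are invented for the example; real hardware exposes a fixed CPU address window and selects the bank through an I/O port):

# Bank switching: the CPU sees only a small window ("bank") of video memory;
# a bank register selects which part of the frame buffer the window maps to.
BANK_SIZE = 64 * 1024                  # 64 KB window (illustrative)
frame_buffer = bytearray(256 * 1024)   # 256 KB of video memory
bank = 0                               # currently selected bank

def select_bank(n):
    global bank
    bank = n

def write_window(offset, value):
    # A CPU write at `offset` inside the window lands in the selected bank.
    frame_buffer[bank * BANK_SIZE + offset] = value

select_bank(3)
write_window(0, 0xFF)   # actually writes frame_buffer[3 * 64 * 1024]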
Virtual frame buffers
Many systems attempt to emulate the function of a frame buffer device, often for reasons of
compatibility. The two most common "virtual" frame buffers are the Linux frame buffer device (fbdev)
and the X Virtual Framebuffer (Xvfb). The X Virtual Framebuffer was added to the X Window System
distribution to provide a method for running X without a graphical frame buffer. While the original
reasons for this are lost to history, it is often used on modern systems to support programs such as
the Sun Microsystems JVM that do not allow dynamic graphics to be generated in a headless
environment.
The Linux frame buffer device was developed to abstract the physical method for accessing the
underlying frame buffer into a guaranteed memory map that is easy for programs to access. This
increases portability, as programs are not required to deal with systems that have disjointed memory
maps or require bank switching.
Page flipping
Since frame buffers are often designed to handle more than one resolution, they often contain more
memory than is necessary to display a single frame at lower resolutions. Since this memory can be
considerable in size, a trick was developed to allow for new frames to be written to video memory
without disturbing the frame that is currently being displayed. The concept works by telling the frame
buffer to use a specific chunk of its memory to display the current frame. While that memory is being
displayed, a completely separate part of memory is filled with data for the next frame. Once the
secondary buffer is filled (often referred to as the "back buffer"), the frame buffer is instructed to look
at the secondary buffer instead. The primary buffer (often referred to as the "front buffer") becomes
the secondary buffer, and the secondary buffer becomes the primary. This switch is usually done
during the vertical blanking interval to prevent the screen from "tearing" (i.e., half the old frame is
shown, and half the new frame is shown).
Most modern frame buffers are manufactured with enough memory to perform this trick even at high
resolutions. As a result, it has become a standard technique used by PC game programmers.
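A minimal Python sketch of this double-buffering scheme, with the vertical-blank wait reduced to a stub (all names are illustrative):

# Page flipping: draw into the back buffer while the front buffer is
# displayed, then swap their roles during the vertical blanking interval.
front = bytearray(640 * 480)   # currently displayed (illustrative 8 bpp size)
back = bytearray(640 * 480)    # being filled with the next frame

def wait_for_vblank():
    pass   # stub: real code waits for the vertical blanking interval

def flip():
    global front, back
    wait_for_vblank()            # swapping only during vblank avoids tearing
    front, back = back, front    # the hardware now scans out the other buffer

# Render loop sketch: draw the next frame into `back`, then call flip().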
Graphics accelerators
As the demand for better graphics increased, hardware manufacturers created a way to decrease the
amount of CPU time required to fill the frame buffer. This is commonly called a "graphics accelerator"
in the Unix world.
Common graphics drawing commands (many of them geometric) are sent to the graphics accelerator
in their raw form. The accelerator then rasterizes the results of the command to the frame buffer.
This method can save from thousands to millions of CPU cycles per command, as the CPU is freed to
do other work.
While early accelerators focused on improving the performance of 2D GUI systems, most modern
accelerators focus on producing 3D imagery in real time. A common design is to send commands to
the graphics accelerator using a library such as OpenGL. The OpenGL driver then translates those
commands to instructions for the accelerator's graphics processing unit (GPU). The GPU uses those
microinstructions to compute the rasterized results. Those results are bit blitted to the frame buffer.
The frame buffer's signal is then produced in combination with built‐in video overlay devices (usually
used to produce the mouse cursor without modifying the frame buffer's data) and any analog special
effects that are produced by modifying the output signal. An example of such analog modification was
the anti-aliasing technique used by the 3dfx Voodoo cards. These cards added a slight blur to the output signal that made the aliasing of the rasterized graphics much less obvious.
Popular manufacturers of 3D graphics accelerators are Nvidia and ATI Technologies.
C) Color table
Ans: In color displays, 24 bits per pixel are commonly used, where 8 bits represent 256 levels for each color. Here it is necessary to read 24 bits for each pixel from the frame buffer, which is very time consuming. To avoid this, the video controller uses a look-up table (LUT) to store many entries of pixel values in RGB format. With this facility, it is now necessary only to read the index into the look-up table from the frame buffer for each pixel. This index specifies one of the entries in the look-up table. The specified entry in the look-up table is then used to control the intensity or color of the CRT.
Usually, a look-up table has 256 entries. Therefore, the index into the look-up table has 8 bits, and hence for each pixel the frame buffer has to store 8 bits instead of 24 bits. Fig. 2.6 shows the organization of a color (video) look-up table.
Organization of a Video look-up table
There are several advantages in storing color codes in a lookup table. Use of a color table can provide a "reasonable" number of simultaneous colors without requiring large frame buffers. For most applications, 256 or 512 different colors are sufficient for a single picture. Also, table entries can be changed at any time, allowing a user to experiment easily with different color combinations in a design, scene, or graph without changing the attribute settings for the graphics data structure. In visualization and image-processing applications, color tables are a convenient means for setting color thresholds so that all pixel values above or below a specified threshold can be set to the same color. For these reasons, some systems provide both capabilities for color-code storage, so that a user can elect either to use color tables or to store color codes directly in the frame buffer.
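The saving is easy to quantify. For an illustrative 1024 x 1024 display:

$$\text{direct 24-bit: } 1024 \times 1024 \times 3\ \text{bytes} = 3\ \text{MB}, \qquad \text{indexed: } 1024 \times 1024 \times 1\ \text{byte} + 256 \times 3\ \text{bytes} \approx 1\ \text{MB}.$$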
Display technology
The image is shown on a screen (also called a monitor), which is an output peripheral device that provides a visual representation. This information comes from the computer, but in an "indirect" way: the processor does not send information directly to the monitor, but processes the information coming from its random access memory (RAM), then sends it to a graphics card that converts the information into electrical impulses, which it then sends to the monitor.
Computer monitors are usually cathode-ray tubes, i.e. a glass tube in which an electron gun emits electrons that are then directed by a magnetic field towards a screen on which small phosphorescent elements (luminophores) are laid out, constituting points (pixels) that emit light when the electrons hit them.
The pixel concept
An image consists of a set of points called pixels (the word pixel is an abbreviation of PICture ELement). The pixel is thus the smallest component of a digital image, and the entire set of these pixels is contained in a two-dimensional table constituting the image.
Since the screen sweep is carried out from left to right and from top to bottom, it is usual to indicate the pixel located at the top left-hand corner of the image using the coordinates [0,0]; this means that the directions of the image axes are the following:
• The direction of the X‐axis is from left to right
• The direction of the Y‐axis is from top to bottom, contrary to the conventional notation in
mathematics, where the direction of the Y‐axis is upwards.
Definition and resolution
The number of points (pixels) constituting the image, that is, its “dimensions” (the number of columns
of the image multiplied by its number of rows) is known as the definition. An image 640 pixels wide
and 480 pixels high will have a definition of 640 by 480 pixels, which is written as 640x480.
On the other hand, the resolution, a term often confused with the “definition”, is determined by the
number of points per unit of area, expressed in dots per inch (DPI), an inch being equivalent to 2.54
cm. The resolution thus makes it possible to establish the relationship between the number of pixels
of an image and the actual size of its representation on a physical support. A resolution of 300 dpi
thus means 300 columns and 300 lines of pixels per square inch, which yields 90,000 pixels per square inch. The 72 dpi reference resolution gives a pixel of 1"/72 (an inch divided by 72), that is to say 0.353 mm, corresponding to a typographical point (the Anglo-Saxon typographical unit).
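For example (figures chosen only for illustration), a 640x480 image printed at 300 dpi measures

$$\frac{640}{300} \approx 2.13\ \text{in} \;\times\; \frac{480}{300} = 1.6\ \text{in}, \quad \text{i.e. about } 5.4\ \text{cm} \times 4.1\ \text{cm}.$$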
Colour models
An image is thus represented by a two-dimensional table in which each cell is a pixel. To represent an image by means of a computer, it is thus enough to create a pixel table in which each cell contains a value. The value stored in a cell is coded on a certain number of bits which determine the colour or the intensity of the pixel. This is called the coding depth (sometimes also called the colour depth). There are several coding depth standards:
• Black and white bitmap: by storing one bit in each cell, it is possible to define two colours (black or
white).
• Bitmap with 16 colours or 16 levels of grey: by storing 4 bits in each cell, it is possible to define 2^4 intensities for each pixel, that is, 16 degrees of grey ranging from black to white or 16 different colours.
• Bitmap with 256 colours or 256 levels of grey: by storing a byte in each cell, it is possible to define 2^8 intensities, that is, 256 degrees of grey ranging from black to white or 256 different colours.
• Colour palette (colourmap): thanks to this method it is possible to define a palette, or colour table, containing all the colours that can appear in the image, each with an associated index. The number of bits reserved for the coding of each index of the palette determines the number of colours which can be used. Thus, by coding the indexes on 8 bits it is possible to define 256 usable colours; each cell of the two-dimensional table that represents the image will then contain a number indicating the index of the colour to be used. An image whose colours are coded according to this technique is called an indexed colour image.
• "True Colours" or "real colours": this representation allows an image to be represented by defining
each component (RGB, for red, green and blue). Each pixel is represented by a set comprising the
three components, each one coded on a byte, that is, on the whole 24 bits (16 million colours). It
is possible to add a fourth component, making it possible to add information regarding
transparency or texture; each pixel is then coded on 32 bits.
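A small Python sketch of how such a 32-bit "true colour plus alpha" pixel can be packed and unpacked (the byte order shown is one common choice, not a universal standard):

# Pack an RGBA pixel into one 32-bit integer, 8 bits per component.
def pack_rgba(r, g, b, a):
    return (a << 24) | (r << 16) | (g << 8) | b

def unpack_rgba(pixel):
    return ((pixel >> 16) & 0xFF,   # red
            (pixel >> 8) & 0xFF,    # green
            pixel & 0xFF,           # blue
            (pixel >> 24) & 0xFF)   # alpha

pixel = pack_rgba(255, 165, 0, 128)              # semi-transparent orange
assert unpack_rgba(pixel) == (255, 165, 0, 128)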
2. Describe the following with respect to methods of generating characters:
A) Stroke method
Ans: This method uses small line segments to generate a character. A small series of line segments is drawn, like the strokes of a pen, to form the character as shown in the figure below.
Stroke method
We can build our own stroke-method character generator through calls to a line-drawing algorithm. Here it is necessary to decide which line segments are needed for each character and then to draw these segments using the line-drawing algorithm.
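A minimal Python sketch of such a stroke character generator, assuming some line-drawing routine is available (the stroke list for 'L' and all names are illustrative):

# Stroke method: each character is a list of line segments (pen strokes),
# drawn by repeated calls to a line-drawing algorithm.
STROKES = {
    'L': [((0, 10), (0, 0)), ((0, 0), (6, 0))],   # two strokes for 'L'
}

def draw_line(x0, y0, x1, y1):
    pass   # stub: e.g. Bresenham's algorithm would set the pixels here

def draw_char(ch, ox, oy):
    # Draw every stroke of the character, offset to position (ox, oy).
    for (x0, y0), (x1, y1) in STROKES[ch]:
        draw_line(ox + x0, oy + y0, ox + x1, oy + y1)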
B) Starbust method
Ans: In this method a fixed pattern of line segments is used to generate characters. As shown in fig. 5.20, there are 24 line segments. Out of these 24 line segments, the segments required to display a particular character are highlighted. This method of character generation is called the starbust method because of its characteristic appearance.
The figure shows the starbust patterns for the characters A and M. The pattern for a particular character is stored in the form of a 24-bit code, each bit representing one line segment. A bit is set to one to highlight its line segment; otherwise it is set to zero. For example, the 24-bit code for character A is 0011 0000 0011 1100 1110 0001 and for character M is 0000 0011 0000 1100 1111 0011.
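A small Python sketch of decoding such a code (the 24-bit value for 'A' is taken from the text; the bit-to-segment numbering is an assumption, since it depends on the hardware convention):

# Starbust method: a segment is highlighted when its bit in the 24-bit code
# is set to one. Bit positions are counted here from the leftmost bit.
CODE_A = 0b001100000011110011100001   # code for 'A' as given above

def segments_to_draw(code):
    return [i for i in range(24) if (code >> (23 - i)) & 1]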
This method of character generation has some disadvantages:
1. 24 bits are required to represent each character, hence more memory is needed.
2. Code-conversion software is required to display a character from its 24-bit code.
3. Character quality is poor; it is worst for curve-shaped characters.
C) Bitmap method
The third method for character generation is the bitmap method. It is also called the dot-matrix method because characters are represented by an array of dots in matrix form: a two-dimensional array having columns and rows. A 5 x 7 array is commonly used to represent characters, as shown in the figure below. However, 7 x 9 and 9 x 13 arrays are also used. Higher-resolution devices such as inkjet or laser printers may use character arrays that are over 100 x 100.
Character A in 5 x 7 dot matrix format
Each dot in the matrix is a pixel. The character is placed on the screen by copying pixel values from the character array into some portion of the screen's frame buffer. The value stored for each pixel controls its displayed intensity.
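A minimal Python sketch of placing such a character, assuming the frame buffer is a simple 2-D array (the glyph and names are illustrative):

# Bitmap method: copy the character's dot-matrix array into the frame
# buffer at position (x, y); each 1 in the matrix becomes a lit pixel.
CHAR_A_5x7 = [
    [0,0,1,0,0],
    [0,1,0,1,0],
    [1,0,0,0,1],
    [1,1,1,1,1],
    [1,0,0,0,1],
    [1,0,0,0,1],
    [1,0,0,0,1],
]

def blit_char(frame_buffer, glyph, x, y, intensity=255):
    for row, bits in enumerate(glyph):
        for col, bit in enumerate(bits):
            if bit:
                frame_buffer[y + row][x + col] = intensity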
3. Discuss the homogeneous coordinates for translation, rotation and scaling
Ans: For translation: Consider translating a 2D line drawing by an amount Tx along the x axis and Ty along the y axis. The translation equations may be written as:

$$x' = x + T_x, \qquad y' = y + T_y \qquad (5)$$
We wish to write Equations 5 as a single matrix equation. This requires that we find a 2 by 2 matrix

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}$$

such that $x \cdot a + y \cdot c = x + T_x$. From this it is clear that a = 1 and c = 0, but there is no way to obtain the $T_x$ term required in the first equation of Equations 5. Similarly we must have $x \cdot b + y \cdot d = y + T_y$, so b = 0 and d = 1, and there is no way to obtain the $T_y$ term required in the second equation of Equations 5. No 2 by 2 matrix can therefore represent a translation.
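The standard resolution of this difficulty, stated here to complete the argument, is to use homogeneous coordinates: represent the point as the one-row, three-column matrix [ x y 1 ], so that translation becomes a matrix product just like rotation and scaling:

$$[\,x' \;\; y' \;\; 1\,] = [\,x \;\; y \;\; 1\,] \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ T_x & T_y & 1 \end{bmatrix} = [\,x + T_x \;\;\; y + T_y \;\;\; 1\,]$$

Rotation and scaling embed in the upper-left 2 by 2 block of such a 3 by 3 matrix, so all three transformations, and any composition of them, become single matrix products.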
For Rotation: Suppose we wish to rotate a figure around the origin of our 2D coordinate system. The figure below shows the point (x,y) being rotated θ degrees (by convention, the counter-clockwise direction is positive) about the origin.
Rotating a Point About the Origin
The equations for the changes in the x and y coordinates are:

$$x' = x\cos\theta - y\sin\theta, \qquad y' = x\sin\theta + y\cos\theta \qquad (1)$$
If we consider the coordinates of the point (x,y) as a one-row, two-column matrix [ x y ] and the rotation matrix

$$R = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}$$

then, given the J definition for matrix product, mp =: +/ . *, we can write Equations (1) as the matrix equation

$$[\,x' \;\; y'\,] = [\,x \;\; y\,]\; R \qquad (2)$$
We can define a J monad, rotate, which produces the rotation matrix. This monad is applied to an
angle, expressed in degrees. Positive angles are measured in a counter‐clockwise direction by
convention.
rotate =: monad def '2 2 $ 1 1 _1 1 * 2 1 1 2 o. (o. y.) % 180'
rotate 90
0 1
_1 0
rotate 360
1 _2.44921e_16
2.44921e_16 1
We can rotate the square of Figure 1 (here square holds the 5-by-2 matrix of its vertex coordinates) by:
square mp rotate 90
0 0
0 10
_10 10
_10 0
0 0
producing the square shown in the figure below.
The Square, Rotated 90 Degrees
For Scaling: Next we consider the problem of scaling (changing the size of) a 2D line drawing. Size changes are always made from the origin of the coordinate system. The equations for the changes in the x and y coordinates are:

$$x' = x \cdot S_x, \qquad y' = y \cdot S_y \qquad (3)$$

As before, we consider the coordinates of the point (x,y) as a one-row, two-column matrix [ x y ] and the scale matrix

$$S = \begin{bmatrix} S_x & 0 \\ 0 & S_y \end{bmatrix}$$

Then we can write Equations (3) as the matrix equation

$$[\,x' \;\; y'\,] = [\,x \;\; y\,]\; S \qquad (4)$$
We next define a J monad, scale, which produces the scale matrix. This monad is applied to a list of
two scale factors for x and y respectively.
scale =: monad def '2 2 $ (0 { y.),0,0,(1 { y.)'
scale 2 3
2 0
0 3
We can now scale the square of Figure 1 by:
square mp scale 2 3
0 0
20 0
20 30
0 30
0 0
producing the rectangle shown in the figure below.
Scaling a Square
4. Describe the following with respect to Projection:
A) Parallel Projection
Ans: In parallel projection, the z coordinate is discarded and parallel lines from each vertex on the object are extended until they intersect the view plane. The point of intersection is the projection of the vertex. We connect the projected vertices by line segments which correspond to connections on the original object.
Parallel projection of an object to the view plane
As shown in the figure above, a parallel projection preserves relative proportions of objects but does not produce realistic views.
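For the common special case of a view plane at z = 0 with the direction of projection along the z axis, the projection equations reduce to (a standard textbook formulation, not reproduced from the figure):

$$x_p = x, \qquad y_p = y, \qquad z_p = 0$$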
B) Perspective Projection
Ans: The perspective projection, on the other hand, produces realistic views but does not preserve relative proportions. In perspective projection, the lines of projection are not parallel. Instead, they all converge at a single point called the center of projection or projection reference point. The object positions are transformed to the view plane along these converging projection lines, and the projected view of an object is determined by calculating the intersection of the converging projection lines with the view plane, as shown in the figure below.
Perspective projection of an object to the view plane
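With the center of projection at the origin and the view plane at distance d along the z axis, similar triangles give the standard perspective equations (again a textbook formulation rather than something taken from the figure):

$$x_p = \frac{d}{z}\,x, \qquad y_p = \frac{d}{z}\,y$$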
C) Types of Parallel Projections
Ans: Parallel projections are basically categorized into two types, depending on the relation between the
direction of projection and the normal to the view plane. When the direction of the projection is
normal (perpendicular) to the view plane, we have an orthographic parallel projection. Otherwise, we
have an oblique parallel projection. Figure above illustrates the two types of parallel projection.
Orthographic Projection
The orthographic projection can display more than one face of an object. Such an orthographic projection is called an axonometric orthographic projection. It uses projection planes (view planes) that are not normal to a principal axis. Axonometric projections resemble the perspective projection in this way, but differ in that the foreshortening is uniform rather than being related to the distance from the center of projection. Parallelism of lines is preserved but angles are not. The most commonly used axonometric orthographic projection is the isometric projection.
The isometric projection can be generated by aligning the view plane so that it intersects each coordinate axis in which the object is defined at the same distance from the origin. As shown in the figure below, the isometric projection is obtained by aligning the projection vector with the cube diagonal. It uses the useful property that all three principal axes are equally foreshortened, allowing measurements along the axes to be made to the same scale (hence the name: iso for equal, metric for measure).
Isometric projection of an object onto a viewing plane
Oblique Projection
An oblique projection is obtained by projecting points along parallel lines that are not perpendicular to the projection plane. Notice that the view plane normal and the direction of projection are not the same. Oblique projections are further classified as cavalier and cabinet projections. For the cavalier projection, the direction of projection makes a 45° angle with the view plane. As a result, the projection of a line perpendicular to the view plane has the same length as the line itself; that is, there is no foreshortening.
Cavalier Projections of the unit cube
When the direction of projection makes an angle of arctan(2) ≈ 63.4° with the view plane, the resulting view is called a cabinet projection. For this angle, lines perpendicular to the viewing surface are projected at one-half their actual length. Cabinet projections appear more realistic than cavalier projections because of this reduction in the length of perpendiculars. The figure below shows examples of cabinet projections for a cube.
Cabinet projections of the Unit Cube
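Both oblique cases fit one standard pair of equations. If α is the angle the projectors make with the view plane, φ is the angle the projected z axis makes in the view plane, and L = 1/tan α, then a point (x, y, z) projects to:

$$x_p = x + z\,L\cos\varphi, \qquad y_p = y + z\,L\sin\varphi$$

so L = 1 for the cavalier projection (α = 45°, no foreshortening) and L = 1/2 for the cabinet projection (α = arctan 2 ≈ 63.4°, perpendiculars halved).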