
Computer Graphics Unit1


INTRODUCTION:

The old adage “one picture is worth a thousand words” can be updated for this computer age into “one picture is worth many kilobytes of data”. The computer is an information-processing machine, a tool for storing, manipulating and correlating data. We are able to generate, collect and process information on a scope never before possible. This information can help us make decisions, understand our world, and control its operation. But as the volume of information increases, a problem arises: how can this information be efficiently and effectively transferred between machine and human? The machine can easily generate tables of numbers hundreds of pages long, but such a printout may be worthless if the human reader cannot absorb it. Computer graphics addresses this problem.

Computer graphics is the study of techniques to improve communication between human and machine. A graph may replace a huge table of numbers and allow the reader to note the relevant patterns and characteristics at a glance. Giving the computer the ability to express its data in pictorial form greatly increases its ability to provide information to the human user.

Thus, computer graphics is the use of a computer to define, store, manipulate, interrogate and present pictorial output. Computer graphics is in daily use in the fields of science, engineering, medicine, entertainment, advertising, the graphic arts, the fine arts, business, education and training.

Overview of Computer Graphics:

Computer graphics is the art or science of producing graphical images with the aid of a computer.

It is the use of a computer to define, store, manipulate, interrogate and present pictorial output. Computer graphics can be classified in two ways:

1. Passive Computer Graphics
2. Interactive Computer Graphics

Passive Computer Graphics:

Computer graphics is the use of a computer to define, store, manipulate, interrogate and present pictorial output. This is essentially a passive operation. The computer prepares and presents stored information to an observer in the form of pictures. The observer has no direct control over the picture being presented. The application may be as simple as the presentation of the graph of a single function or as complex as the simulation of the automatic reentry and landing of a space vehicle or an aircraft.


Interactive Computer Graphics:

Interactive computer graphics involves two-way communication between computer and user. Graphical input and output devices make it convenient for input from the user to alter the output presented by the machine. A dialogue can be established through the graphics medium. This is termed interactive computer graphics because the user interacts with the machine. Computer graphics allows communication through pictures, charts and diagrams.

Applications of Computer Graphics:

Computer graphics is in daily use in the fields of science, engineering, medicine, entertainment, advertising, the graphic arts, the fine arts, business, education and training. The list of applications is enormous, and some of them are described below:

1. Computer Aided Design:

In computer aided design (CAD), computer graphics is used to design components and systems of mechanical, electrical, electro-mechanical, and electronic devices, including structures such as buildings, automobile bodies, airplane and ship hulls, very large scale integrated (VLSI) chips, optical systems and telephone and computer networks.

The electronics industry is dependent on the use of computer graphics. A typical integrated circuit used in a computer is so complex that it would take an engineer weeks to draw by hand and an equally long time to redraw in the event of a major modification. Using interactive computer graphics systems, an engineer can draw the circuit in much less time, and can then use the computer to help check the design and to make modifications to it in less time as well.

Architects can use interactive graphics methods to lay out floor plans, including the positioning of rooms, doors, windows, stairs, shelves, counters, and other building features. Working from the display of a building layout on a video monitor, an electrical designer can try out arrangements for wiring, electrical outlets, and fire-warning systems.

2. Presentation Graphics:

Another major application area of computer graphics is presentation graphics, used to produce illustrations for reports or to generate 35-mm slides or transparencies for use with projectors. Presentation graphics is commonly used to summarize financial, statistical, mathematical, scientific, and economic data for research reports, managerial reports, consumer information bulletins, and other types of reports. Workstation devices and service bureaus exist for producing such presentation materials.


3. Computer Art:

Computer graphics methods are widely used in both fine art and commercial art applications. Artists use a variety of computer methods, including specially developed software, symbolic mathematics packages, CAD packages, desktop publishing software, and animation packages that provide facilities for designing object shapes and specifying object motions.

4. Entertainment:

Computer graphics methods are commonly used in making motion pictures, music videos, and television shows. Music videos use graphics in several ways: graphics objects can be combined with the live action, or graphics and image-processing techniques can be used to produce a transformation of one person or object into another (morphing).

LINES:

A point (a position in a plane) can be specified with an ordered pair of numbers (x, y), where x is the horizontal distance from the origin and y is the vertical distance. Two points specify a line. A line is described by an equation such that if a point (x, y) satisfies the equation, then the point is on the line. If the two points used to specify the line are (x1, y1) and (x2, y2), then an equation for the line is given by

(y – y1) / (x – x1) = (y2 – y1) / (x2 – x1) ….. (1.1)

The slope between any point on the line and (x1, y1) is the same as the slope between (x2, y2) and (x1, y1).

There are many equivalent forms for this equation. Multiplying by the denominators gives the forms

(x – x1)(y2 – y1) = (y – y1)(x2 – x1) ….. (1.2)

Solving for y, we have

y = [(y2 – y1) / (x2 – x1)] (x – x1) + y1 ….. (1.3)

or y = mx + b ….. (1.4)


where m = (y2 – y1) / (x2 – x1)

and b = y1 – mx1

This is called the slope-intercept form of the line. The slope m is the change in height divided by the change in width for two points on the line. The intercept b is the height at which the line crosses the y-axis.
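As a quick illustrative sketch (not part of the original text), the slope-intercept form can be computed directly from two given points:

```python
# Compute the slope-intercept form y = m*x + b of the line
# through two points, following equations (1.3) and (1.4).

def slope_intercept(x1, y1, x2, y2):
    """Return (m, b) for the line through (x1, y1) and (x2, y2).

    Assumes the line is not vertical (x1 != x2), since a vertical
    line has no slope-intercept form.
    """
    m = (y2 - y1) / (x2 - x1)  # slope: change in height / change in width
    b = y1 - m * x1            # intercept: height where the line crosses the y-axis
    return m, b

# Example: the line through (1, 1) and (3, 5) is y = 2x - 1.
print(slope_intercept(1, 1, 3, 5))  # (2.0, -1.0)
```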

A different form of the line equation, called the general form, may be found by multiplying out the factors in equation (1.2) and collecting them on one side of the equals sign:

(y2 – y1)x – (x2 – x1)y + (x2y1 – x1y2) = 0 ….. (1.5)

or rx + sy + t = 0 ….. (1.6)

where r = y2 – y1, s = –(x2 – x1), t = x2y1 – x1y2

The values for r, s, and t are sometimes chosen so that

r² + s² = 1 ……(1.7)

Comparing equations (1.4) and (1.6), we have

m = - r / s

b = -t / s

We can determine where two lines will cross. By the two lines crossing we mean that they share some point in common; that point will satisfy both of the equations for the two lines. The problem is to find this point. Suppose the equations for two lines in slope-intercept form are:

line 1: y = m1x + b1 ……(1.9)

line 2: y = m2x + b2

If there is a common point (xi, yi) shared by both lines, then

yi = m1xi + b1 and yi = m2xi + b2


will both be true. Equating the two expressions for yi gives

m1xi + b1 = m2xi + b2

Solving for xi gives

xi = (b2 – b1) / (m1 – m2)

Substituting this into the equation for either line 1 or line 2 gives

yi = ( b2m1 – b1m2 ) / ( m1 – m2 )

Therefore, the point (xi, yi) is the intersection point. Two parallel lines will have the same slope. Since such lines do not intersect, the above expressions result in a division by zero; when no intersection point exists, we cannot solve for it.
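The intersection calculation above can be sketched in a few lines of code (an illustrative example, not from the original text), including the parallel-line check:

```python
def intersect(m1, b1, m2, b2):
    """Intersection point of y = m1*x + b1 and y = m2*x + b2.

    Returns None for parallel lines (m1 == m2), where the
    formulas would divide by zero.
    """
    if m1 == m2:
        return None
    xi = (b2 - b1) / (m1 - m2)
    yi = (b2 * m1 - b1 * m2) / (m1 - m2)
    return xi, yi

# y = 2x + 1 and y = -x + 4 cross at (1, 3).
print(intersect(2, 1, -1, 4))  # (1.0, 3.0)
print(intersect(1, 0, 1, 5))   # None (parallel lines)
```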

Pixel:

The pixel (a word invented from "picture element") is the basic unit of programmable color on a computer display or in a computer image. Think of it as a logical - rather than a physical - unit. The physical size of a pixel depends on how you've set the resolution for the display screen. If you've set the display to its maximum resolution, the physical size of a pixel will equal the physical size of the dot pitch (let's just call it the dot size) of the display. If, however, you've set the resolution to something less than the maximum resolution, a pixel will be larger than the physical size of the screen's dot (that is, a pixel will use more than one dot).

The specific color that a pixel describes is some blend of three components of the color spectrum - RGB. Up to three bytes of data are allocated for specifying a pixel's color, one byte for each major color component. A true color or 24-bit color system uses all three bytes. However, many color display systems use only one byte (limiting the display to 256 different colors).
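As a hypothetical sketch of the byte layout described above (the packing order is an assumption; real systems vary), the three one-byte components of a 24-bit pixel can be packed into, and recovered from, a single integer:

```python
# Pack one byte per color component into a single 24-bit
# "true color" pixel value, and unpack it again.

def pack_rgb(r, g, b):
    """Pack three 0-255 components into one 24-bit integer."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(pixel):
    """Recover the (r, g, b) components from a 24-bit pixel value."""
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

print(hex(pack_rgb(255, 128, 0)))  # 0xff8000
print(unpack_rgb(0xFF8000))        # (255, 128, 0)
```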

A bitmap is a file that indicates a color for each pixel along the horizontal axis or row (called the x coordinate) and a color for each pixel along the vertical axis (called the y coordinate). A Graphics Interchange Format file, for example, contains a bitmap of an image (along with other data).

Screen image sharpness is sometimes expressed as dpi (dots per inch). (In this usage, the term dot means pixel, not dot as in dot pitch.) Dots per inch is determined by both the physical screen size and the resolution setting. A given image will have lower resolution - fewer dots per inch - on a larger screen as the same data is spread out over a larger physical area. On the same size screen, the image will have lower resolution if the resolution setting is made lower - resetting from


800 by 600 pixels per horizontal and vertical line to 640 by 480 means fewer dots per inch on the screen and an image that is less sharp. (On the other hand, individual image elements such as text will be larger in size.)

Pixel has generally replaced an earlier contraction of picture element, pel.

Vector:

A vector is a mathematical object that has magnitude and direction, and satisfies the laws of vector addition. Vectors are used to represent physical quantities that have a magnitude and direction associated with them. For example:

- The velocity of an object is a vector. The direction of the vector specifies the direction of travel, and the magnitude specifies the speed.

- The force acting on an object is a vector. The direction of the vector specifies the line of action of the force, and the magnitude specifies how large the force is.

- Other examples of vectors include position, acceleration, electric field, electric current flow, heat flow, and the normal to a surface. Examples of quantities that are not vectors include mass, temperature, electric potential, volume, and energy. These can all be described by their magnitude only (they have no direction) and so are scalars.

A vector is often represented pictorially as an arrow (the arrow's length is its magnitude, and it points in the vector's direction) and symbolically by an underlined letter, by bold type, or by an arrow symbol over a variable. The magnitude of a vector a is denoted |a|. There are two special cases of vectors: the unit vector, which has |a| = 1, and the null vector, which has |a| = 0.
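The definitions above can be illustrated with a small sketch (an example of our own, not from the original text) treating a 2-D vector as a pair of components:

```python
import math

# A 2-D vector has a magnitude and a direction, and vectors
# add component-wise.

def magnitude(v):
    """Length of a 2-D vector, |v|."""
    return math.hypot(v[0], v[1])

def add(u, v):
    """Vector addition: components add independently."""
    return (u[0] + v[0], u[1] + v[1])

velocity = (3.0, 4.0)
print(magnitude(velocity))          # 5.0 (the speed)
print(add((1.0, 2.0), (3.0, 4.0)))  # (4.0, 6.0)

# A unit vector has magnitude 1; the null vector has magnitude 0.
print(magnitude((1.0, 0.0)), magnitude((0.0, 0.0)))  # 1.0 0.0
```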


Display Devices:

In most applications of computer graphics the quality of the displayed image is very important. Typically, the primary output device in a graphics system is a video monitor. The operation of most video monitors is based on the standard cathode-ray tube (CRT) design, but several other technologies exist. Display devices are the CRTs and other displays that are part of computer terminals, computer consoles, and microcomputers. They are designed to project, show, exhibit, or display soft-copy information (alphanumeric or graphic symbology).

The information displayed on a display device screen is not permanent; that is where the term soft copy comes from. The information is available for viewing only as long as it is on the display screen. Two types of display devices used with personal/microcomputers are raster scan CRTs and flat panel displays.

Cathode Ray Tube:

Cathode ray tube employing electromagnetic focus and deflection


Cutaway rendering of a color CRT

The cathode ray tube or CRT, invented by Karl Ferdinand Braun, is the display device that was traditionally used in most computer displays, video monitors, televisions and oscilloscopes. The CRT developed from Philo Farnsworth's work was used in all television sets until the late 20th century and the advent of plasma screens, LCDs, DLP, OLED displays, and other technologies. As a result of this technology, television continues to be referred to as "The Tube" well into the 21st century, even when referring to non-CRT sets.

Apparatus description

The earliest version of the CRT was a cold-cathode diode, a modification of the Crookes tube with a phosphor-coated screen, sometimes called a Braun tube. The first version to use a hot cathode was developed by J. B. Johnson (who gave his name to the term Johnson noise) and H. W. Weinhart of Western Electric and became a commercial product in 1922.

Cathode rays exist in the form of streams of high-speed electrons emitted by heating the cathode inside a vacuum tube. The released electrons form a beam within the cathode ray tube due to the voltage difference applied between the two electrodes, and the direction of this beam is then altered by either a magnetic or an electric field to sweep over the surface of the fluorescent screen (anode), which is covered with phosphorescent material (often transition metals or rare earths). Light is emitted at the instant that electrons hit the surface of that material.

In the case of a television or modern computer monitor, the entire front area of the tube is scanned in a fixed pattern called a raster, and a picture is created by modulating the intensity of the electron beam according to the program's video signal. The beam in all


modern TV sets is scanned with a magnetic field applied to the neck of the tube by a "magnetic yoke", a set of coils driven by electronic circuits. This use of electromagnets to change the electron beam's direction is known as "magnetic deflection".

In the case of an oscilloscope, the intensity of the electron beam is kept constant, and the picture is drawn by steering the beam along an arbitrary path. Usually, the horizontal deflection is proportional to time, and the vertical deflection is proportional to the signal. The tube for this kind of use is longer and narrower, and deflection is done by applying an electrical field via deflection plates built into the tube's neck. The use of an electrical field (so-called "electrostatic deflection") allows the electron beam to be steered much more rapidly than with a magnetic field, where the inductance of the electromagnets imposes relatively severe limits on the frequency range that can be accurately reproduced.

The electron beam source is the electron gun, producing the stream of electrons by thermionic emission and then focusing it to a thin beam. The gun was often mounted slightly off-axis, as it accelerated not only electrons but also ions resulting from outgassing of the internal tube components and from an imperfect vacuum. The ions are heavier than electrons, therefore they are less likely to be deflected by the magnetic field from the deflection coils, and in older constructions with in-axis guns they were bombarding the phosphor in the center of the screen and causing its deterioration; some very old black and white TV sets show browning of the center of the screen, known as ion burn. The combination of an off-axis mounting of electron guns and permanent magnets bending the electron beam back in the desired direction forms an ion trap; the ions were not deflected enough, so they struck the neck of the tube instead of the screen and harmlessly dissipated. This system was later replaced with aluminium coating of the phosphor.

The internal side of the phosphor layer is often covered with a layer of aluminium. The phosphors are usually poor electrical conductors, which leads to deposition of residual charge on the screen, effectively decreasing the energy of the impacting electrons due to electrostatic repulsion (an effect known as "sticking"). The aluminium layer is connected to the conductive layer inside the tube, disposing of this charge. It also reflects the phosphor light in the desired direction towards the viewer, and protects the phosphor from ion bombardment.

Electron Gun

Graphical displays for early computers used vector monitors, a type of CRT similar to the oscilloscope. Here, the beam would trace straight lines between arbitrary points, repeatedly refreshing the display as quickly as possible. Vector monitors were used in many computer displays as well as by some late 1970s to mid 1980s arcade games such as Asteroids. Vector displays for computers did not noticeably suffer the display artifacts


of aliasing and pixelization, but were limited in that they could display only a shape's outline, and only a very small amount of rather largely-drawn text. (Because the speed of refresh was roughly inversely proportional to how many vectors needed to be drawn, "filling" an area using many individual vectors was impractical as was the display of a large amount of text.) Some vector monitors are capable of displaying several colors using either an ordinary tri-color CRT or two phosphor layers (so called "penetration color"). In these dual-layer tubes, by controlling the strength of the electron beam, electrons could be made to reach (and illuminate) either or both phosphor layers, typically producing green, orange, or red.

Other graphical displays used storage tubes including Direct View Bistable Storage Tubes (DVBSTs). These CRTs inherently stored the image and did not require periodic refreshing.

Some displays for early computers (those that needed to display more text than was practical using vectors, or required high speed for photographic output) used Charactron CRTs. These used a perforated metal character mask ("Stencil") to shape a wide electron beam to form a selected character shape on the screen. The electronics could quickly select a character on the mask with one set of deflection circuits, while selecting the position to display the character at with a second set of deflection circuits, and then just turn on the beam briefly to draw that character. Graphics could still be drawn by selecting the unneeded position on the mask corresponding to the code for a space (when drawing a space the beam was simply kept off), which had a small round hole in the center instead of being solid, and draw this as with other displays.

Many of these various types of early computer display CRTs use "slow" or long persistence phosphor, to reduce flicker for the operator.

Aperture grille CRT close-up

Color tubes use three different materials which specifically emit red, green, and blue light, closely packed together in strips (in aperture grille designs) or clusters (in shadow mask CRTs). There are three electron guns, one for each color, and each gun can reach only the dots of one color, as the grille or mask absorbs electrons that would otherwise hit the wrong phosphor.


The outer glass allows the light generated by the phosphor out of the monitor, but (for color tubes) it must block dangerous X-rays generated by the impact of the high energy electron beam. For this reason, the glass is made of leaded glass (sometimes called "lead crystal"). Because of this and other shielding, and protective circuits designed to prevent the anode voltage rising too high, the X-ray emission of modern CRTs is well within safety limits.

CRTs have a pronounced triode characteristic, which results in significant gamma (a nonlinear relationship between beam current and light intensity). In early televisions, screen gamma was an advantage because it acted to compress the screen contrast. The gamma characteristic exists today in all digital video systems. However, in some systems where a linear response is required, as in desktop publishing, gamma correction is applied.

CRT displays accumulate static electrical charge on the screen, unless protective measures are taken. This charge does not pose a safety hazard, but can lead to significant degradation of image quality through attraction of dust particles to the surface of the screen. Unless the display is regularly cleaned with a dry cloth or special cleaning tissue (using ordinary household cleaners may damage anti-glare protective layer on the screen), after a few months the brightness and clarity of the image drops significantly.

The high voltage (E.H.T.) used for accelerating the electrons is provided by a transformer. For CRTs used in televisions, this is usually a flyback transformer that steps up the line (horizontal) deflection supply to as much as 32,000 volts for a color tube. (Monochrome tubes may operate at a somewhat lower voltage and specialty CRTs may operate at much lower voltages.) The output of the transformer is rectified and the pulsating output voltage is smoothed by a capacitor formed by the tube itself: the accelerating anode being one plate, the glass being the dielectric, and the earthed coating on the outside of the tube being the other plate. Before all-glass tubes, the structure between the screen and the electron gun was made from a heavy metal cone which served as the accelerating anode. Smoothing of the E.H.T. was then done with a massive capacitor, external to the tube itself.

Video Display Devices

Raster Scan Displays

Raster scan CRTs (TV scan video monitors or display monitors) are used extensively in the display of alphanumeric data and graphics. They are used primarily in non-tactical display applications such as SNAP II user terminals and desktop computers.


The raster is a series of horizontal lines crossing the face of the CRT screen (fig. 2-28). Each horizontal line is made up of one trace of the electron beam from left to right. The raster starts at the top left corner of the CRT screen. As each horizontal line is completed, the blanked electron beam is rapidly returned, or retraced, to the left of the screen.

Raster or TV scan.

Vertical deflection moves the beam down, and the horizontal sweep repeats. When the vertical sweep reaches the bottom line of the raster, a vertical blanked retrace returns the sweep to the starting position of the raster, and the process is repeated.

Each completed raster scan is referred to as a field; two fields make up a frame. The display rate of fields and frames determines the amount of flicker in the display that is perceived by the human eye. A frame is made up of approximately 525 horizontal lines (the actual number varies from device to device), with each field containing half of them. A frame consists of the interlaced lines of two fields; the horizontal lines of the two fields are interlaced to smooth out the display. A display rate of 30 frames per second produces a smooth, flicker-free raster and corresponding display on the screen.


Computer graphics can use many different output devices, such as monitors, printers, and plotters, but the most common display device is the Cathode Ray Tube (CRT) monitor.

In a CRT the focusing system acts like a light lens with a focal length such that the center of focus is the screen. The horizontal and vertical deflectors allow the electron beam to be focused on any spot on the screen. The screen is coated with a special organic compound called a phosphor. For color systems there are groups of three different phosphors, one to produce red shades, one for green shades, and one for blue shades.

Electrons hit the screen phosphor molecules and cause a ground state to singlet excited state transition. Most of the phosphors relax back to the ground state by emitting a photon of light which is called fluorescence. This happens very rapidly so that all of the molecules which fluoresce do so in under a millisecond. Some of the molecules convert from an excited singlet state to an excited triplet state in a process called inter-system crossing. This is quantum mechanically forbidden, which means it still happens but only by a relatively small fraction of the phosphors and is a slower process. These phosphors then emit light, called phosphorescence, that decays slower but still rapidly (in about 15-20 milliseconds) so there is the need to refresh the screen by redrawing the image.

The physical process is depicted below:

Phosphors are characterized by color (usually red, green, or blue) and persistence, which is the time for the emitted light to decay to 10 % of the initial intensity. High persistence phosphors allow for a lower refresh rate to avoid flicker, e.g., the original IBM PC monochrome monitor had a high persistence phosphor. This allowed it to have good resolution for text with inexpensive electronics. However, this is poor for animation since a "trail" is left with moving objects.

Low persistence phosphors are good for animation but require a high refresh rate to prevent flicker. A refresh rate of 50 - 60 Hz is usually sufficient to prevent flicker, but some systems refresh at even higher rates such as 72-76 Hz.

The user can vary the voltage on the control grid to attenuate the electron flow. This is done with the "intensity" knob:


high voltage --> few electrons (most are repelled)
low or zero voltage --> high number of electrons

The maximum number of points that can be displayed without overlap is called the resolution, and it is usually given as the number of horizontal points by the number of vertical points. These points are called pixels (picture elements); e.g., a monitor might have a resolution of 1024 X 768 pixels.

The maximum resolution may be determined by the characteristics of the monitor for a random scan system, or by a combination of the monitor and graphics card memory for a raster scan system. For a random scan system the resolution can be up to 4096 X 4096.

The aspect ratio is the ratio of vertical points to horizontal points needed to produce equal-length lines in both directions. The term is also sometimes used for the ratio of the horizontal dimension to the vertical dimension of the screen. Examples:

For a monitor with an 8-inch by 6-inch display area and a resolution of 640 X 480 pixels:

horizontal --> 640/8 = 80 pixels/inch
vertical --> 480/6 = 80 pixels/inch

Therefore, it has "square" pixels, i.e., the same size in the vertical and horizontal directions.

If the resolution is 320 X 200 pixels, then:

horizontal --> 320/8 = 40 pixels/inch
vertical --> 200/6 = 33 1/3 pixels/inch

So the size of a horizontal pixel does not equal the size of a vertical pixel. We must correct for this in our image display, e.g. line or object drawing. Otherwise all of the drawings will be distorted.
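A short sketch of the pixels-per-inch arithmetic above (the 8-inch by 6-inch display area is an assumption carried over from the worked example):

```python
# Pixels per inch in each direction; square pixels require the
# two values to be equal, otherwise drawings must be corrected
# for the distortion.

def pixels_per_inch(h_res, v_res, width_in=8, height_in=6):
    """Horizontal and vertical pixel densities for a display area."""
    return h_res / width_in, v_res / height_in

print(pixels_per_inch(640, 480))  # (80.0, 80.0) -> square pixels
print(pixels_per_inch(320, 200))  # horizontal 40.0, vertical 33 1/3
```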

Computer graphics systems can be random scan or raster scan.


In a Random Scan system, also called a vector, stroke-writing, or calligraphic system, the electron beam directly draws the picture.

Advantages of random scan:

- very high resolution, limited only by the monitor
- easy animation: just draw at different positions
- requires little memory (just enough to hold the display program)

Disadvantages:

- requires an "intelligent" electron beam, i.e., processor controlled
- limited screen density before flicker appears; can't draw a complex image
- limited color capability (very expensive)

A large technology advance was the modified random scan system, called the Direct View Storage Tube (DVST) system from Tektronix in the late 1960's.

The primary electron gun draws the picture by knocking out electrons from the storage grid, producing a positively charged pattern. Low speed electrons from the flood gun are


attracted to the storage grid, and pass through the positively charged pattern spots (past the collector grid) to hit the screen. So once a picture is drawn, it stays until erased by putting a charge over all of the storage grid so that the electrons hit all of the screen (producing a green flash on a green-screen monitor). There is no animation or color capability, but a very complex, high-density image is possible. The DVST was much cheaper than a regular random scan system since it did not require a built-in CPU (the host computer could be used to draw the image once).

Note that a Pen - Plotter and Laser Light Show are also random scan display devices.

A Raster Scan device scans the screen from top to bottom in a regular pattern. This is common TV technology.

The electron beam is turned on/off so the image is a collection of dots which are painted on screen one row (scan line or raster line) at a time. Note: A raster is a matrix of pixels covering the screen (output display) area and consists of raster lines.

Question: How to tell the electron beam when to turn on/off?

Answer: We must store the pattern in a special graphics memory area called a frame buffer (also called a bit map), where each memory location corresponds to a pixel. A display processor scans this memory and turns the electron beam on or off depending on whether the bit is 1 or 0. We may have an interlaced scan, i.e., the first scan does the even lines 0, 2, 4, ... and the second scan does the odd lines 1, 3, 5, ... The refresh rate must be at least about 45 Hz, or else the display flickers.
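A minimal sketch of such a monochrome frame buffer (our own illustration, assuming one bit per pixel packed eight pixels to a byte):

```python
# Monochrome frame buffer: 1 bit/pixel, so each byte of the
# bit map holds eight consecutive pixels.

class FrameBuffer:
    def __init__(self, width, height):
        self.width = width
        self.buf = bytearray(width * height // 8)

    def set_pixel(self, x, y, on):
        """Set the bit for pixel (x, y): beam on (1) or off (0)."""
        index, bit = divmod(y * self.width + x, 8)
        if on:
            self.buf[index] |= 1 << bit
        else:
            self.buf[index] &= ~(1 << bit)

    def get_pixel(self, x, y):
        index, bit = divmod(y * self.width + x, 8)
        return (self.buf[index] >> bit) & 1

fb = FrameBuffer(640, 480)
print(len(fb.buf))           # 38400 bytes for 640 X 480 monochrome
fb.set_pixel(10, 20, True)
print(fb.get_pixel(10, 20))  # 1
```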

Look at the memory requirements for a monochrome system, where a pixel is either on or off: 640 X 480 resolution --> 307,200 bits/8 bits/byte = 38,400 bytes. In the 1960's and 70's, memory was very expensive (>= $1,000,000/Mbyte) so there were few raster scan


systems, mostly random scan. Today with memory inexpensive, most graphics systems (99%) are raster scan.

There may be several bits per pixel for color or gray-scale intensity; e.g., 8 bits/pixel (0 --> 255) gives 256 colors. This is sometimes called 8 bit planes. So the memory requirement for 1024 x 768 with 8 bits/pixel is 768 Kbytes.
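These memory figures can be verified with a few lines of Python (illustrative arithmetic only; the helper name is my own):

```python
# Frame-buffer size in bytes for a given resolution and color depth.
def frame_buffer_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

mono_vga = frame_buffer_bytes(640, 480, 1)     # 38,400 bytes
color_8bit = frame_buffer_bytes(1024, 768, 8)  # 786,432 bytes = 768 Kbytes
```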

Look at speed requirements for memory access:

At 60 Hz refresh, a 1024 x 768 x 8-bit frame is 768 Kbytes = 786,432 bytes. Reading 786,432 bytes in 1/60 sec = 1.67 x 10^-2 sec means about 2 x 10^-8 sec per byte, i.e., roughly 20 ns per memory access.
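The same numbers give the per-byte access time as a back-of-envelope sketch:

```python
# Per-byte memory access time implied by a 60 Hz refresh of a
# 1024 x 768, 8-bits-per-pixel frame buffer.
frame_bytes = 1024 * 768             # 786,432 bytes per frame
frame_time = 1 / 60                  # about 1.67e-2 seconds per refresh
per_byte = frame_time / frame_bytes  # about 2.1e-8 s, i.e. roughly 20 ns
```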

Also, look at a rough estimate of the horizontal scanning rate: refresh frequency x number of scan lines. Example: for an IBM VGA, 60 x 480 = 28,800 Hz, roughly 30 kHz; for 1024 x 768, 60 x 768 = 46,080 Hz, roughly 46 kHz.

PCs all use raster scan, and they can easily do color and shading.

Look at the color CRT: it uses different phosphors, usually red, green, and blue, and combines them.

Two methods can be used. First method (random scan): beam penetration -- two layers of phosphor (red and green). Low-speed electrons excite the red layer, very high-speed electrons excite the green layer, and intermediate speeds excite both, giving yellow or orange -- so the color is controlled by the electron-beam voltage.

Second method (raster scan): a shadow mask with three electron guns: red, green, and blue.


So by combining different intensities from the guns we can get different colors (more later).

Example:

R G B   color
0 0 0   black
0 0 1   blue
0 1 0   green
0 1 1   cyan
1 0 0   red
1 0 1   magenta
1 1 0   yellow
1 1 1   white
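The eight on/off gun combinations in the table map naturally onto a 3-bit code. A sketch (the table and function names are my own, for illustration):

```python
# The eight on/off RGB gun combinations, encoded as 3-bit values.
COLOR_NAMES = {
    0b000: "black",   0b001: "blue",
    0b010: "green",   0b011: "cyan",
    0b100: "red",     0b101: "magenta",
    0b110: "yellow",  0b111: "white",
}

def color_name(r, g, b):
    """r, g, b are 0 (gun off) or 1 (gun on)."""
    return COLOR_NAMES[(r << 2) | (g << 1) | b]
```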

More expensive systems will have multiple levels of each color, not just on/off. They do this with Digital to Analog Converters (DACs). This converts a digital number, e.g., from 0 to 255, to a voltage, e.g., from 0.0 to 5.0 volts. So, for example, a color intensity of 100 would be converted to a voltage of about 100/255 * 5.0 = 1.96 volts. This is the voltage that would be applied to that color's electron gun so about 40% of the maximum possible value of that color's phosphors would be hit by electrons.
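The worked example above can be sketched directly (the helper name and defaults are my own):

```python
# DAC sketch: map a digital intensity (0..255) to an electron-gun voltage.
def dac_voltage(level, v_max=5.0, max_level=255):
    return level / max_level * v_max

v100 = dac_voltage(100)   # about 1.96 volts, as in the text
v255 = dac_voltage(255)   # 5.0 volts, full intensity
```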

Note : Printers are also raster scan devices and can give fairly high quality output (dot matrix or laser).

Look at the frame buffer for VGA 640 x 480 monochrome resolution --> 1 bit/pixel (on/off): 640/8 = 80 bytes/row x 480 rows = 38,400 bytes. The central processor performs the scan conversion of the scene and writes 1's and 0's into the frame buffer; the display processor then scans the frame buffer and turns the electron gun on and off.
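The central processor's write into that frame buffer amounts to bit addressing. A minimal sketch (names and layout assumed for illustration):

```python
# 640 x 480 monochrome frame buffer: 80 bytes per row, 1 bit per pixel.
BYTES_PER_ROW = 640 // 8                  # 80
frame = bytearray(BYTES_PER_ROW * 480)    # 38,400 bytes

def set_pixel(x, y, on=True):
    """Write pixel (x, y); the display processor later reads this bit."""
    index = y * BYTES_PER_ROW + x // 8
    mask = 0x80 >> (x % 8)                # most significant bit = leftmost pixel
    if on:
        frame[index] |= mask
    else:
        frame[index] &= ~mask & 0xFF
```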

Overall Hardware System for interactive graphics.


For random scan, the host system generates a display file of graphics commands which is executed by the display processor. For cheap raster scan systems, e.g. PCs, the host processor performs the scan conversion (from the mathematical model to a frame buffer image); the display processor then just reads the frame buffer and turns the electron beams on and off. More expensive raster scan systems have a graphics processor that performs some of the scan conversion.

Direct-View Storage Tubes

DVBST was an acronym used by Tektronix to describe their line of "Direct-View Bi-stable Storage Tubes". These were cathode ray tubes (CRTs) that stored information written to them using an analog technique inherent in the CRT, based upon the secondary emission of electrons from the phosphor screen itself. (See the discussion of "Analogue Storage" in the oscilloscope article.) The resulting image was visible as continuously glowing patterns on the face of the CRT.

DVBST technology was anticipated by Andrew Haeff of the (United States) Naval Research Laboratory and by Williams and Kilburn in the late 1940s. Tektronix's Bob Anderson reduced the science and technology to practice in the late 1950s, yielding a reliable and simple DVST.

DVBSTs were used for analog oscilloscopes (first in the 564 oscilloscope, then the 601 monitor, the 611 monitor, and the 7613 plug-in mainframe oscilloscope, all from Tektronix) and for computer terminals such as the archetypal Tek 4010 (the "mean green flashin' machine") and its several successors, including the Tektronix 4014. Portions of the screen are individually written to by a conventional electron gun and "flooded" by a wide, low-velocity electron gun. Erasure required erasing the entire screen in a bright flash of green light, leading to the nickname.

Some DVBST implementations also allowed the "write-through" of a small amount of dynamically refreshed, non-stored data. This allowed the display of cursors, graphic elements under construction, and the like on computer terminals.

Flat panel displays encompass a growing number of technologies enabling video displays that are lighter and much thinner than traditional television and video displays using cathode ray tubes, usually less than 10 cm (4 inches) thick. These include:

Flat panel displays requiring continuous refresh:

Plasma displays
Liquid crystal displays (LCDs)
Digital light processing (DLP) displays
Liquid crystal on silicon (LCOS) displays
Organic light-emitting diode (OLED) displays
Surface-conduction electron-emitter displays (SEDs)
Field emission displays (FEDs)
Nano-emissive displays (NEDs)

Only the first five of these displays are commercially available today, though OLED displays are beginning deployment only in small sizes (mainly in cellular telephones). SEDs are promised for release in 2006, while the FEDs and NEDs are (as of November 2005) in the prototype stage.

Bistable flat panel displays (or electronic paper):

e-ink displays
Gyricon displays
Iridigm displays
magink displays

Bistable displays are beginning deployment in niche markets (magink displays in outdoor advertising, e-ink and Gyricon displays in in-store advertising).

Flat panel displays balance their smaller footprint and trendy modern look against high costs and, in many cases, inferior images compared with traditional CRTs. In many applications, specifically modern portable devices such as laptops, cellular phones, and digital cameras, these disadvantages are outweighed by the portability requirements.


A field emission display (FED) is a type of flat panel display that uses phosphor coatings as the emissive medium.

Field emission displays are very similar to cathode ray tubes; however, they are only a few millimeters thick. Instead of a single electron gun, a FED uses a large array of fine metal tips or carbon nanotubes (the most efficient electron emitters known), one positioned behind each phosphor dot, to emit electrons through a process known as field emission. A similar technology, to be commercialized in 2005, is the SED (surface-conduction electron-emitter) display.

Like LCDs, FEDs are energy efficient and could provide a flat panel technology featuring lower power consumption than existing LCD and plasma display technologies. They can also be cheaper to make, as they have fewer total components. As yet, however, there are no consumer production models available in the United States, although small demo panels have been produced.

In 2001, Candescent had spent $600 million on producing FEDs with non-carbon material, but the effort was abandoned, with assets sold to Canon in August 2004, two months after filing for voluntary reorganization under Chapter 11. Advance Nanotech, in collaboration with the University of Bristol, has developed a similar panel that relies on specially doped diamond dust. Carbon Nanotechnologies claimed production would start in late 2006.

The Plasma display

History

The Plasma display panel was invented at the University of Illinois at Urbana-Champaign by Donald L. Bitzer and H. Gene Slottow in 1964 for the PLATO Computer System. The original monochrome (usually orange or green) panels enjoyed a surge of popularity in the early 1970s because the displays were rugged and needed neither memory nor refresh circuitry. There followed a long period of sales decline in the late 1970s as semiconductor memory made CRT displays incredibly cheap. Nonetheless, plasma's relatively large screen size and thin profile made the displays attractive for high-profile placement such as lobbies and stock exchanges. In 1992, Fujitsu introduced the world's first 21-inch full color display. It was a hybrid based on the plasma display created at the University of Illinois at Urbana-Champaign and NHK STRL, achieving superior brightness.

In 1997, Pioneer started selling the first plasma TV to the public.


Screen sizes have increased since the 21-inch display of 1992. The largest plasma display in the world was shown at the CES (Consumer Electronics Show) in Las Vegas in 2006. It measured 103 inches and was made by Matsushita Electric Industrial (Panasonic).

Today the superior brightness and viewing angle of color plasma panels, when compared to LCD, have made these displays one of the most popular forms of display for HDTV.

General characteristics

Plasma displays are bright (1000 lx or higher for the module), have a wide color gamut, and can be produced in fairly large sizes, up to 200 cm (80 inches) diagonally. They have a very high "dark-room" contrast, creating the "perfect black" desirable for watching movies. The display panel is only 6 cm (2 1/2 inches) thick, while the total thickness, including electronics, is less than 10 cm (4 inches). Plasma displays use as much power per square meter as a CRT or an AMLCD television; by 2004 the cost had come down to US$1,900 or less for the popular 42-inch (107 cm) diagonal size, making them very attractive for home-theatre use. Real-life measurements of plasma power consumption find it to be much less than that normally quoted by manufacturers; nominal measurements indicate 150 watts for a 50-inch screen. The lifetime of the latest generation of PDPs is estimated at 60,000 hours to half life when displaying video. Half life is the point where the picture has degraded to half of its original brightness, which is considered the end of the functional life of the display. So at an average of 2-1/2 hours a day, a PDP will last approximately 65 years.
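The half-life arithmetic above checks out:

```python
# Half-life estimate from the figures quoted in the paragraph above.
half_life_hours = 60_000
hours_per_day = 2.5
years = half_life_hours / (hours_per_day * 365)
# years is roughly 65, matching the "approximately 65 years" estimate
```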

Competing displays include the cathode ray tube, OLED, AMLCD, DLP, SED-TV and field emission flat panel displays. The main advantage of plasma display technology is that a very wide screen can be produced using extremely thin materials. Since each pixel is lit individually, the image is very bright and looks good from almost every angle. Because many plasma displays still have a lower resolution, the image quality is often not quite up to the standards of good LCD displays or cathode ray tube sets, but it certainly meets most people's expectations. Also, most cheaper consumer displays appear to have insufficient color depth - a moving dithering pattern may be easily noticeable to a discerning viewer over flat areas or smooth gradients; expensive high-resolution panels are much better at managing the problem.

With prices starting around US$2,000 and going all the way up past US$20,000 (as of 2004), these sets did not sell as quickly as older technologies like CRT. But as prices fall and technology advances, they have started to seriously compete against the CRT sets. Some 42" sets fell below $1,500 at major retailers like Best Buy and Costco during the 2005 Christmas season, and many of the retailers reported that plasma TVs were among the hottest selling items for that season.


Functional details

The xenon and neon gas in a plasma television is contained in hundreds of thousands of tiny cells positioned between two plates of glass. Long electrodes are also sandwiched between the glass plates, on both sides of the cells. The address electrodes sit behind the cells, along the rear glass plate. The transparent display electrodes, which are surrounded by an insulating dielectric material and covered by a magnesium oxide protective layer, are mounted above the cell, along the front glass plate.

In a monochrome plasma panel, control circuitry charges the electrodes that cross paths at a cell, causing the plasma to ionize and emit photons between the electrodes. The ionizing state can be maintained by applying a low-level voltage between all the horizontal and vertical electrodes - even after the ionizing voltage is removed. To erase a cell all voltage is removed from a pair of electrodes. This type of panel has inherent memory and does not use phosphors. A small amount of nitrogen is added to the neon to increase hysteresis.

To ionize the gas in a color panel, the plasma display's computer charges the electrodes that intersect at that cell thousands of times in a small fraction of a second, charging each cell in turn. When the intersecting electrodes are charged (with a voltage difference between them), an electric current flows through the gas in the cell. The current creates a rapid flow of charged particles, which stimulates the gas atoms to release ultraviolet photons.

The phosphors in a plasma display give off colored light when they are excited. Every pixel is made up of three separate subpixel cells, each with different colored phosphors. One subpixel has a red light phosphor, one subpixel has a green light phosphor and one subpixel has a blue light phosphor. These colors blend together to create the overall color of the pixel. By varying the pulses of current flowing through the different cells, the control system can increase or decrease the intensity of each subpixel color to create hundreds of different combinations of red, green and blue. In this way, the control system can produce colors across the entire visible spectrum. Plasma displays use the same phosphors as CRTs, accounting for the extremely accurate color reproduction.

Contrast ratio claims

Contrast ratio indicates the difference between the brightest part of a picture and the darkest part of a picture, measured in discrete steps, at any given moment. The implication is that a higher contrast ratio means more picture detail. Contrast ratios for plasma displays are often advertised as high as 5000:1. On the surface, this is a great thing. In reality, there are no standardised tests for contrast ratio, meaning each manufacturer can publish virtually any number that they like. To illustrate, some manufacturers will measure contrast with the front glass removed, which accounts for some of the wild claims regarding their advertised ratios. For reference, the page you're reading now (on a computer monitor) is actually about 50:1. A printed page is about 80:1. A really good print at a movie theater will be about 500:1.

Non-emissive Displays:

Non-emissive displays (or non emitters) use optical effects to convert sunlight or light from some other source into graphic patterns.

Example : Liquid Crystal Display

Liquid Crystal Display

Introduction

We all know them from wrist watches, pocket calculators, flat screens for laptop computers, and more recently even TVs. For more than twenty years, liquid crystals have driven the development of flat optical displays and screens. In the near future, more and more space- and energy-hungry cathode ray tube (CRT) displays will be replaced by flat screens with low energy consumption, and liquid crystal displays (LCDs) will take a major share. The worldwide LCD market was estimated to grow from US$6 billion in 1993 to US$16 billion in shipment value in 2002 (Stanford Resources 1995).

In the following, the main principles and operation conditions of modern LCDs will be explained briefly.


Fig. 5: The Twisted Nematic effect

The main breakthrough in the development of LCDs was the invention of the so-called twisted nematic effect by Schadt and Helfrich in 1971. It is still the most prominent type of liquid crystal light modulator in modern displays. Fig. 5 shows the principle. A nematic liquid crystal is filled between two glass plates, separated by thin spacers and coated inside with transparent electrodes and orientation layers. The orientation layer usually consists of a polymer (e.g. polyimide) which has been rubbed unidirectionally, e.g. with a soft tissue. As a result, the liquid crystal molecules are fixed with their alignment more or less parallel to the plates, pointing along the rubbing direction, with the rubbing directions of the upper and lower plates enclosing an angle of 90 deg. Consequently, a homogeneous twist deformation in alignment is achieved. The polarization of a linearly polarized light wave is then guided by the resulting quarter of a birefringent helix, provided the orientation is not disturbed by an electric field. The transmitted wave may therefore pass a crossed exit polarizer, and the modulator appears bright. If, however, an AC voltage of a few volts is applied, the resulting electric field forces the molecules to align themselves along the field direction and the twist deformation is unwound. Now the polarization of a light wave is not affected and cannot pass the crossed exit polarizer; the modulator appears dark. Obviously, the inverse switching behaviour can be obtained with parallel polarizers. Note further that gray-scale modulation is achieved easily by varying the voltage between the threshold for reorientation (a result of the elastic properties of LCs) and the saturation field.
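The gray-scale behaviour just described (bright below the reorientation threshold, dark at saturation, intermediate in between) can be illustrated with a deliberately crude toy model. The linear ramp and the threshold/saturation voltages here are assumptions for illustration, not device physics:

```python
# Toy model of TN-cell transmission between crossed polarizers.
def tn_transmission(v, v_th=1.0, v_sat=3.0):
    """Relative transmission in [0, 1]; a linear ramp is assumed."""
    if v <= v_th:
        return 1.0   # undisturbed helix guides the polarization: bright
    if v >= v_sat:
        return 0.0   # twist fully unwound: dark
    return 1.0 - (v - v_th) / (v_sat - v_th)   # gray scale in between
```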

Fig. 6: Electrode configurations of LCDs

To display information with a liquid crystal modulator, the transparent indium tin oxide (ITO) electrodes are structured by photolithographic techniques. Typical electrode configurations are shown in Fig. 6. For simple numerical and alphanumerical displays, as required for watches or simple pocket calculators, segmented electrodes are sufficient. All segments are placed on one plate of the display, with a common counter-electrode on the opposite plate, and can be addressed separately. If more complex data or graphics are desired, a matrix arrangement of electrodes is more convenient. Obviously, addressing matrix displays is more sophisticated than addressing segmented electrodes. For large matrices it is impossible to address each pixel separately with its own supply line, since the space between pixels is limited and the number of connections increases as R x C, where R and C are the numbers of rows and columns, respectively.
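The R x C scaling is easy to see with a quick count (function names are illustrative):

```python
# Connection counts: one lead per pixel versus shared row/column stripes.
def direct_connections(rows, cols):
    return rows * cols      # grows as R x C -- impractical for large panels

def matrix_connections(rows, cols):
    return rows + cols      # crossed stripe electrodes, as in Fig. 7

direct = direct_connections(480, 640)   # 307,200 leads
matrix = matrix_connections(480, 640)   # 1,120 leads
```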

Fig. 7: Passive matrix displays

The number of electrical connections in a matrix display can be reduced by using crossed row and column stripe electrodes on the front and exit glass plates, as schematically drawn in Fig. 7. The pixels are defined by the crossing points between the row and column electrodes and, in principle, can be addressed by applying a voltage between the corresponding stripes. However, arbitrary pixel patterns cannot be generated this way; a more complex driving scheme is needed to allow independent addressing of pixels and to avoid crosstalk problems. These driving schemes are known as time multiplexing and are discussed below. Since time multiplexing with crossed electrodes takes advantage of the inherent dynamic properties of liquid crystals, such displays are often called passive matrix displays.

Fig. 8: Time multiplexing

A simplified scheme of time multiplexing with passive matrix displays is shown in Fig. 8. The pixels are addressed by gated AC voltages (only the envelopes are shown) with a complex temporal structure. A short pulse is applied periodically to the rows as a strobe signal, whereas the columns carry the information signals. A pixel is selected only if a difference in potential, and hence an electric field, is present, i.e. only if row and column are not on a low or high level at the same time. More precisely, the pixel is selected if the RMS voltage is above the threshold for reorientation.

An important consequence of passive time multiplexing is that the selection ratio UON/UOFF approaches unity for large pixel numbers, as required e.g. with standard VGA computer displays or better. Therefore, liquid crystal modulators with rather steep electro-optical characteristics are required to achieve sufficient optical contrast with weak selection ratios. This was the reason for the development of TN modulators with twist angles larger than 90 deg, the so-called Super-Twisted Nematic or STN modulators.
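The statement that UON/UOFF approaches unity can be made quantitative with the Alt-Pleshko limit for N multiplexed rows; this formula is standard for passive matrix addressing, although the text above does not name it:

```python
import math

def selection_ratio(n_rows):
    """Best achievable Uon/Uoff for N-row passive multiplexing:
    sqrt((sqrt(N) + 1) / (sqrt(N) - 1))   (Alt-Pleshko limit)."""
    s = math.sqrt(n_rows)
    return math.sqrt((s + 1) / (s - 1))

r64 = selection_ratio(64)     # about 1.13
r480 = selection_ratio(480)   # about 1.05 -- close to unity for VGA-class panels
```

The shrinking margin between on and off RMS voltages at large N is exactly why the steep electro-optical characteristic of STN modulators was needed.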

In a passive dual-scan modulator the number of pixels can be doubled without loss of optical contrast by cutting the row stripes in the center of the display and supplying two strobe signals, one to each half.

Fig. 9: Comparison between TN- and STN-displays

The main differences between conventional TN and STN LCDs are summarized in Fig. 9. L1 and L2 indicate the orientation at the front and back plate, respectively; P1 and P2 are the corresponding polarizer axes. To support the strong twist of approximately 180 deg in an STN modulator, the nematic liquid is doped with a small amount of a cholesteric liquid crystal. The stronger twist results in a steeper electro-optical transmission characteristic, which is important for passive matrix displays as discussed above. On the other hand, the viewing angle and colour sensitivity of STN LCDs are more critical than with TN LCDs.

Fig. 10: Thin-Film-Transistor or active matrix displays

Another technique to obtain good optical contrast with time multiplexing of a matrix display is to provide each pixel with its own electronic switch. These are Thin-Film Transistor (TFT), or active matrix, displays, shown in Fig. 10. Rows and columns, cross-over insulators, thin-film transistors (amorphous silicon) and transparent pixel electrodes are placed on one of the glass plates by photolithographic techniques. The other plate contains the common counter-electrode. Since the electro-optically active part of the display is reduced by the TFTs to typically between 30 and 60%, additional masks are required to block light leaking through the non-active parts and hence to improve the contrast of the display. In total, 6 to 9 lithographic steps are necessary, which makes the fabrication process of these displays more expensive. On the other hand, conventional TN liquid crystals, with better viewing angle and colour performance, can be used instead of STN.

Fig. 11: additive colour mixing

For colour displays each pixel is divided into three or four subpixels, which are covered with colour filters to allow additive mixing of three basic colours. In four-subpixel arrangements an additional neutral gray-scale pixel is incorporated to improve brightness control.

A History of Electroluminescent Displays


Introduction

Electroluminescent displays (ELDs) have their origins in scientific discoveries in the first decade of the twentieth century, but they did not become commercially viable products until the 1980s. ELDs are particularly useful in applications where full color is not required but where ruggedness, speed, brightness, high contrast, and a wide viewing angle are needed. Color ELD technology has advanced significantly in recent years, especially for microdisplays. The two main firms that have developed and commercialized ELDs are Sharp in Japan and Planar Systems in the United States.

What Is Electroluminescence?

There are two main ways of producing light: incandescence and luminescence. In incandescence, electric current is passed through a conductor (filament) whose resistance to the passage of current produces heat. The greater the heat of the filament, the more light it produces. Luminescence, in contrast, is the name given to "all forms of visible radiant energy due to causes other than temperature."[1]

There are a number of different types of luminescence, including (among others) electroluminescence, chemiluminescence, cathodoluminescence, triboluminescence, and photoluminescence. Most "glow in the dark" toys take advantage of photoluminescence: light that is produced after exposing a photoluminescent material to intense light. Chemiluminescence is the name given to light that is produced as a result of chemical reactions, such as those that occur in the body of a firefly. Cathodoluminescence is the light given off by a material being bombarded by electrons (as in the phosphors on the faceplate of a cathode ray tube). Electroluminescence is the production of visible light by a substance exposed to an electric field, without thermal energy generation.[2]

An electroluminescent (EL) device is similar to a laser in that photons are produced by the return of an excited substance to its ground state, but unlike lasers EL devices require much less energy to operate and do not produce coherent light. EL devices include light-emitting diodes, which are discrete devices that produce light when a current is applied to a doped p-n junction of a semiconductor, as well as EL displays (ELDs), which are matrix-addressed devices that can be used to display text, graphics, and other computer images. EL is also used in lamps and backlights.

There are four steps necessary to produce electroluminescence in ELDs:

1. Electrons tunnel from electronic states at the insulator/phosphor interface;
2. Electrons are accelerated to ballistic energies by high fields in the phosphor;
3. The energetic electrons impact-ionize the luminescent center or create electron-hole pairs that lead to the activation of the luminescent center; and
4. The luminescent center relaxes toward the ground state and emits a photon.

All ELDs have the same basic structure. There are at least six layers to the device. The first layer is a baseplate (usually a rigid insulator like glass), the second is a conductor, the third is an insulator, the fourth is a layer of phosphors, the fifth is an insulator, and the sixth is another conductor.

Figure 1. Structure of an Electroluminescent Display

Source: Planar Systems.

ELDs are quite similar to capacitors except for the phosphor layer. You can think of an ELD as a "lossy capacitor" in that it becomes electrically charged and then loses its energy in the form of light.[3] The insulator layers are necessary to prevent arcing between the two conductive layers. An alternating current (AC) is generally used to drive an ELD, because the light generated by the current decays when a constant voltage is applied. There are, however, EL devices that are DC driven (see below).

Scientific Origins

Electroluminescence was first observed in silicon carbide (SiC) by Captain Henry Joseph Round in 1907.[4] Round reported that a yellow light was produced when a current was passed through a silicon carbide detector.[5] Round was an employee of the Marconi Company and a personal assistant to Guglielmo Marconi. He was an inventor in his own right, with 117 patents to his name by the end of his life.[6]

The second reported observation of electroluminescence did not occur until 1923, when O.V. Lossev of the Nijni-Novgorod Radio Laboratory in Russia again reported electroluminescence in silicon carbide crystals.[7]

B. Gudden and R.W. Pohl conducted experiments in Germany in the late 1920s with phosphors made from zinc sulfide doped with copper (ZnS:Cu). Gudden and Pohl were solid state physicists at the Physikalisches Institut at the University of Göttingen.[8] They reported that the application of an electrical field to the phosphors changed the rate of photoluminescent decay.[9]

The next recorded observation of electroluminescence was by Georges Destriau in 1936, who published a report on the emission of light from zinc sulfide (ZnS) powders after applying an electrical current.[10] Destriau worked in the laboratories of Madame Marie Curie in Paris. The Curies had been early pioneers in the field of luminescence because of their research on radium. According to Gooch, Destriau first coined the word "electroluminescence" to refer to the phenomenon he observed. Gooch also argues that one should keep in mind the differences between the "Lossev effect" and the "Destriau effect":

The Lossev effect should be distinguished from the Destriau effect. Destriau's work involved zinc sulphide phosphors, and he observed that those phosphors could emit light when excited by an electric field… [The Lossev effect, in contrast, involves electroluminescence] in p-n junctions.[11]

During World War II, a considerable amount of research was done on phosphors in connection with work on radar displays (which was later to benefit the television industry in the form of better cathode ray tubes). Wartime research also included work on the deposition of transparent conductive films for deicing the windshields of airplanes. That work was later to make possible a whole generation of new electronic devices. In the 1950s, GTE Sylvania fired various coatings, including EL phosphors, onto heavy steel plates to create ceramic EL lamps.[12] During this period, most research focused on powder EL phosphors to get bright lamps requiring minimal power and with a potentially long lifetime. Research funding was cut back when it was determined that product lifetimes were too short (approximately 500 hours).[13]

The first thin-film EL structures were fabricated in the late 1950s by Vlasenko and Popkov.[14] These two scientists observed that luminance increased markedly in EL devices when they used a thin film of zinc sulfide doped with manganese (ZnS:Mn). Luminance was much higher in thin-film EL (TFEL) devices than in those using powdered substances. Such devices, however, were still too unreliable for commercial use. Russ and Kennedy introduced the idea of depositing insulating layers under and above the phosphor layer of a TFEL device.[15] The implications for the reliability of TFEL devices were not appreciated at the time, however. Soxman and Ketchpel conducted research between 1964 and 1970 that demonstrated the possibility of matrix addressing a TFEL display with high luminance, but again unreliability of the devices remained a problem.[16]

In the mid-1960s, there was a revival of EL research in the United States focused on display applications. Sigmatron Corporation first demonstrated a thin-film EL (TFEL) dot-matrix display in 1965. Unfortunately, Sigmatron was unable to commercialize these displays successfully, and it folded in 1973.[17]


In 1968, Aron Vecht first demonstrated a direct current (DC) powered EL panel using powdered phosphors.[18] Research on powdered-phosphor DC-EL devices continued, especially for use in watch dials, nightlights and backlights, but most subsequent research on ELDs focused on thin-film AC-driven devices. An early example was the work of Peter Brody and his associates at Westinghouse Research Laboratories on EL and AM-EL devices between 1968 and 1974.[19]

In 1974, Toshio Inoguchi and his colleagues at Sharp Corporation introduced an alternating current (AC) TFEL approach to ELDs at the annual meeting of the Society for Information Display (SID). The Sharp device used zinc sulfide doped with manganese (ZnS:Mn) as the phosphor layer and yttrium oxide (Y2O3) for the sandwiching insulators. This was the first high-brightness, long-lifetime ELD ever made. Sharp introduced a monochrome ELD television in 1978. The paper Inoguchi published on his group's research helped to reinvigorate EL research in the rest of the world, including at Tektronix, a U.S. electronics firm based in Portland, Oregon.[20]

Tektronix's research on EL began in 1976. The management at Tektronix were familiar with the work reported by Inoguchi's team and decided to start a new program on ELDs at Tektronix Applied Research Laboratories. The work begun there was continued when the Tektronix researchers left to create a spinoff firm called Planar Systems. Several other large U.S. companies were also conducting research on ELDs in the 1970s, including IBM, GTE, Westinghouse, Aerojet General, and Rockwell. All these companies realized that ELDs had potential advantages over existing LCD technology in the following areas:

1. Contrast,
2. Multiplexing, and
3. Viewing angle.

The most important problem that had to be solved before mass production of ELDs could begin was increasing the reliability of the EL thin film stack. Since the devices operated at very high field levels -- about 1.5 MV/cm -- there was a high probability that they would break down, especially if there was insufficient uniformity in the stack. Sharp, Tektronix, and Lohja Corporation in Finland were able to solve this problem between 1976 and 1983 using slightly different approaches. The second major problem was to get access to high-voltage drivers for the displays. Sharp ended up developing their own; Tom Engibous developed drivers for EL displays at Texas Instruments by modifying the design his group had done for plasma displays.[21]

Planar used the TI drivers in its products until it could find additional suppliers. The introduction to the market in 1985 of Grid and Data General laptops with EL displays from Sharp and Planar respectively helped to build the foundations for the nascent laptop computer industry at a time when LCDs did not have sufficient brightness or contrast to be used in commercial products. Both Planar and Sharp monochrome ELDs used a phosphor layer made from zinc sulfide doped with manganese (ZnS:Mn).
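As a rough sanity check on those field levels, the drive voltage across the thin film stack is just field times thickness. This sketch is mine, and the film thickness used is illustrative rather than a figure from the text:

```python
def drive_voltage(field_mv_per_cm, thickness_um):
    """V = E * d, with the field in MV/cm and the thickness in microns."""
    volts_per_cm = field_mv_per_cm * 1e6   # MV/cm -> V/cm
    thickness_cm = thickness_um * 1e-4     # microns -> cm
    return volts_per_cm * thickness_cm

# A 1.5 MV/cm field across a 1-micron EL stack requires about 150 V,
# which is why high-voltage driver ICs were such a bottleneck.
v = drive_voltage(1.5, 1.0)
```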


These displays gave off an amber (orange-yellow) color that was bright but also pleasing to the eye. 

Figure 2. A Monochrome (Amber) EL Notebook Display

Source: Planar Systems.

Research on Color ELDs

One of the key disadvantages of ELDs relative to liquid crystal displays (LCDs) was that until 1981 ELDs were not capable of displaying more than one color. Even after 1981, color ELDs were restricted to a limited range of colors (red, green, and yellow) until 1993, when a blue phosphor was discovered. In 1981, Okamoto reported that a rare-earth-doped ZnS could be used in the phosphor layer of a TFEL device.[22]

In 1984, William Barrow of Planar and his colleagues announced that they were able to get blue-green emissions from strontium sulfide doped with cerium (SrS:Ce). In 1985, Shosaku Tanaka at Tottori University and his colleagues reported that they had duplicated the work done at Planar on SrS:Ce phosphors but added that they had gotten calcium sulfide (CaS) to emit a deep red color. In 1988, Tanaka's group announced that they had gotten white light from a TFEL display using a combination of SrS:Ce and SrS doped with europium (SrS:Eu). The idea here was to use the white light in connection with a color filter to produce a full-color display, analogously to the way it is done in liquid crystal displays. The advantage of doing this with ELDs was that such a display would not require a backlight. The main disadvantage was the added cost and difficulty of introducing a color filter. In 1994, Soininen and coworkers at Planar International in Finland announced that a SrS:Ce/ZnS:Mn white phosphor deposited by atomic layer epitaxy achieves sufficient luminance and stability for use in color EL display products.[23]

 


Further work on blue phosphors was done by Reiner Mach and his colleagues at the Heinrich Hertz Institute in Berlin. Additional work on SrS:Ce blue phosphors was done at Westaim Corporation. A SrS:Cu blue phosphor showing improved blue color and efficiency was reported by Sey-Shing Sun of Planar in 1997. Planar demonstrated true white color EL prototype displays using this blue phosphor in a SrS:Cu/ZnS:Mn multi-layer structure. The SrS:Cu phosphor will enable color EL displays to be produced with a wider color gamut.[24]

Barrow and his team at Planar announced a prototype of a multi-color EL display using ZnS:Mn and ZnS:Tb phosphor layers in 1986. By 1988, they had a prototype full-color display using a patterned phosphor structure. Commercial production of multicolor ELDs did not occur until 1993 at Planar, however, and full-color ELDs have been produced only in the form of microdisplays (see the section below on AMEL microdisplays).[25] These color AMEL microdisplays used the ALE SrS:Ce/ZnS:Mn white phosphor with either sequential or spatial color filtering.

Planar Systems

Planar Systems, Inc., was formed in 1983 as a spinoff from Tektronix. It was founded by three senior managers from Tektronix's Solid State Research and Development Group: John Laney, James Hurd, and Christopher King.[26] Hurd became the President and CEO, Laney worked on manufacturing issues, and King became the firm's chief technical officer. Tektronix gave Planar its rights to certain technologies in exchange for an equity stake (in 1994 its share was still 7.5 percent).[27] Planar remained privately held until it went public in 1993. In 1984, Planar opened its first manufacturing facility in Beaverton, Oregon. It shipped its first bulk order in 1985 to Nippon Data General for an early laptop computer with a CGA (640x200) EL panel. Once volume manufacturing of ELDs began, a number of additional problems had to be solved in order to improve prospects for sales in the competitive markets for flat panel displays:

1. Luminous efficiency had to be increased;
2. Better driving methods were needed; and
3. Gray scale capability of ELDs had to be enhanced.

The initial ELD prototypes had brightness levels of only about 20 foot-lamberts (fL). Commercial products in the 1990s were to have brightness levels of 100 fL. The initial drive scheme for ELDs at Planar was to apply a single-polarity voltage pulse to each line of the display and then an opposite-polarity pulse to the entire panel. This was called "the refresh method." In 1984/85, it was determined that this drive method led to "burn-in" -- some pixels would become unusable over time. A new drive scheme invented by Tim Flegal called symmetric drive replaced the refresh method. In symmetric drive, pulses of alternate polarities were applied to each line so that a net zero dc voltage was developed. This prevented "burn-in." Tim Flegal was also responsible for pioneering a variety of gray scale driving methods, including pulse width, analog voltage, and frame rate modulation. High-performance analog drivers at reasonable prices were difficult to obtain, and Planar had difficulty getting Texas Instruments to supply them because of the relatively low volumes involved (from TI's perspective), but eventually Planar found a new supplier for these circuits: Supertex.[28]

One of Planar's key markets after the decline in demand for monochrome displays for laptop computers was military displays. Planar provided EL displays to defense contractors like Norden Systems and Computing Devices Canada, Ltd. (CDC). These displays were monochrome with limited gray scaling. Planar diversified its sales out of military applications toward industrial and medical equipment. By the mid-1990s, over a third of Planar's sales were to medical equipment firms.

Figure 3. Sales of Planar Displays by Type of Application, c. 1998.

Because of Planar's willingness to work with customers to adapt products for specific applications, it was able to command a price premium over the products of its main competitor, Sharp. By the late 1980s, Planar controlled over 90 percent of the world market for ELDs. Planar purchased the Finlux Display Electronics unit of Lohja Oy (Finland) in December 1990. Finlux was renamed Planar International, Ltd. Its headquarters remained in Espoo, Finland. The main reason for the purchase of Finlux was to obtain a marketing and production base in Europe, but an important secondary reason was to get access to Finlux's atomic layer epitaxy (ALE) technology (see the section on Finlux below).[29]

 EL displays were not well suited to military applications by the early 1990s. By that time, the military wanted color displays that were bright enough to be seen in airplane cockpits and tanks under a variety of environmental lighting conditions. In August 1994,


Planar purchased the avionics display operations of Tektronix and formed a wholly owned subsidiary called Planar Advance to manage this business.[30] Planar Advance initially invested about $10 million in CRT-based displays for cockpits, but was blindsided by the DoD's policy of switching to ruggedized TFT LCDs. In response to this shift, Planar Advance purchased TFT LCD glass from dpiX and assembled it into "mil spec" units for the DoD. This move permitted Planar to diversify its display offerings out of ELDs, but it also necessitated a redefinition of the core competence of the firm. In 1992, Planar helped to organize a consortium to develop color ELDs called the American Display Consortium. This consortium was funded by the Department of Commerce under the Advanced Technology Program (ATP) created by the Clinton administration. The total funding for the consortium was to be $30 million, half funded by the government and half by the consortium's private firms. The National Institute of Standards and Technology (NIST) supervised the consortium on behalf of the Department of Commerce. Other members of this consortium were Candescent Technologies, dpiX, Electro Plasma, FED Corporation, Kent Display Systems, Lucent Technologies, OIS, Photonics Imaging, SI Diamond, Standish Industries, Three-Five Systems, and Versatile Information Products. In Spring 1995, Planar organized a consortium to develop the next generation of high-resolution and color TFEL displays. This consortium was funded by the Department of Defense under the DARPA-managed Technology Reinvestment Program (TRP). The total funding for the consortium was to be $30 million, half funded by the government and half by the consortium's private firms. Other members of the consortium were AlliedSignal Aerospace, Computing Devices of Canada, Ltd., Advanced Technology Materials, Boeing, CVC Products, Georgia Tech Research Institute, Hewlett Packard, Honeywell, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Oregon State University, Positive Technologies, and the University of Florida.[31]

In 1989, the Defense Advanced Research Projects Agency (DARPA) began to fund work on advanced displays as part of its High Definition Systems program. DARPA issued a Broad Agency Announcement in that year and in subsequent years asking for proposals. Planar won one of the first grants from DARPA in 1990 and used the funds to set up a laboratory to develop color ELDs. Planar participated in a variety of DARPA programs, but perhaps the most significant was its work with Kopin and the David Sarnoff Research Center on active matrix EL (AMEL) microdisplays beginning in 1993.

The AMEL device is processed on a silicon wafer substrate using the inverted EL structure with a transparent ITO top electrode. The lower EL electrode is the top metallization layer of the silicon IC. 


ALE was used to make the device because of its excellent "conformal coating" characteristics. ALE resulted in very few pinhole defects, a key requirement for reliable EL devices with top electrodes. The pixel size of the first generation of AMEL displays was 24 microns. The second generation of displays used pixels of 12 microns. Smaller pixels meant higher resolution, lower power consumption, and lower cost of production for a given display format.[32] In October 1995, Planar announced an arrangement to supply AMEL displays to Virtual I-O, a Seattle-based manufacturer of consumer head mounted displays for virtual reality entertainment systems.[33] Unfortunately, Virtual I-O went bankrupt in 1997 before any of these displays could be sold to the public.
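To get a feel for the scale, the active area of a microdisplay follows directly from the pixel format and the pixel pitch. This back-of-envelope sketch is mine, not a figure from the text:

```python
def active_area_mm(cols, rows, pitch_um):
    """Active-area width and height in mm for a given pixel format and pitch."""
    return cols * pitch_um / 1000.0, rows * pitch_um / 1000.0

# VGA (640x480) at the second-generation 12-micron pitch comes out to
# roughly 7.7 mm x 5.8 mm of active area.
w, h = active_area_mm(640, 480, 12)
```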

In March 1996, Planar was awarded a DoD contract to supply an AMEL-based head mounted display (HMD) for the military's Land Warrior program.[34] On May 16, 1996, Planar announced that it had developed an AMEL microdisplay that was one inch square, 3 mm thick, and weighed only 4 grams. In 1997, Planar announced that it had developed a 0.75-inch diagonal full-color VGA AMEL microdisplay using an LC sequential color shutter.[35] (R. Tuenge et al., SID 97 Digest (1997), p. 862.) Planar now has a brighter full-color microdisplay capable of displaying 32k colors that does not require the LC shutter.

   Planar has experienced a steady growth in revenues (see Figure 5 below).

Figure 5. Annual Revenues for Planar Systems in $Millions, 1984-98 

Figure 4. Picture of a Color AMEL Microdisplay

Source: Planar Systems.


Its profits also steadily increased, in both absolute terms and per share, but with a decline in 1998. Planar went public with an IPO in 1993.

A Brief History of Sharp's EL Operations

The head of research at Sharp, Sanai Mito, was convinced that ELDs could be used eventually to make flat TVs. Mito was formerly a professor at Osaka Municipal University. He and his team mounted a major effort in the mid-1970s to develop TFELs.[36]

The key research at Sharp was done by Toshio Inoguchi and his colleagues. The successful demonstration of a working TFEL display in September 1978 at the Consumer Electronics Show in Chicago was the "finest hour" of Inoguchi's group. This display was only a few inches in diagonal, but it was also only 3 cm thick. Sharp began mass production of ELDs in 1983. One of its earliest displays was used in the U.S. Space Shuttle's orbital navigation system in that same year.[37] Another early application of a Sharp ELD was in a Grid laptop computer. This display provided resolution equivalent to a quarter VGA (320x240). 1983 was also the year that Shinji Morozumi at Seiko announced that his group was able to build a TFT LCD television. That announcement took Sharp by surprise, and they redirected their efforts toward catching up with Seiko in LCDs. By 1987, Sharp was able to market their own TFT LCD television.[38] They were able to capitalize on their lead in mass production of STN LCDs for calculators to quickly develop production technologies for high-volume TFT manufacturing. After 1987, TFT LCD production was far more important to Sharp's corporate strategy than EL production. Nevertheless, the firm remained active in both research and production of ELDs, providing strong competition to Planar and Lohja. Sharp continues to market EL displays for niche markets.

A Brief History of the Finlux Display Division of Lohja Oy[39]

In 1975, a research group headed by Dr. Tuomo Suntola recognized that thin film electroluminescence would be an ideal flat panel display technology provided that


luminance stability and reliability problems could be overcome. To solve these problems, a new thin-film deposition method called atomic layer epitaxy (ALE) was developed (see Figure 6). The basic idea was to build thin films layer by layer using surface-controlled chemical exchange reactions. The result is a dense, pinhole-free film with very good step coverage properties. This research activity started in a small company called Instrumentarium that was acquired in 1977 by Lohja Oy, a Finnish conglomerate which was primarily a manufacturer of construction material. Lohja was the second largest Finnish electronics company after Nokia, and the new ELD technology was considered a good fit for its strategy of diversification into electronics.

Figure 6. ALE sequences for a binary compound (courtesy of Tuomo Suntola)

A. First precursor reacts with the surface. Chemi-sorption occurs through ligand exchange between the precursor molecules and the bonding sites.

B. When all bonding sites are filled the surface reaction is saturated. Bonding sites for the second precursor have been created.

C. Second precursor reacts with the surface created in steps A and B. Chemisorption occurs as long as bonding sites are available, until saturation …

D. and the formation of bonding sites for the first precursor begins again. The cycle of sequences A to D are repeated the necessary number of times for the desired layer thickness. 
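Because each A-to-D sequence adds a fixed increment of film, the cycle count for a desired thickness is a simple division. The growth-per-cycle figure below is illustrative only (real values depend on the chemistry), and the function is my own sketch:

```python
import math

def ale_cycles(target_nm, growth_per_cycle_nm=0.1):
    """Number of A-D cycles needed to reach the target film thickness."""
    return math.ceil(target_nm / growth_per_cycle_nm)

# A 500 nm phosphor layer at 0.1 nm per cycle takes 5000 cycles,
# which is why ALE trades deposition speed for uniformity and reliability.
n = ale_cycles(500)
```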

Excellent ELD results based on its proprietary ALE technology were presented for the first time at the annual meeting of the Society for Information Display (SID) in 1980, where they received a lot of attention. In 1983, three large information boards were delivered to the Helsinki Vantaa airport. Each of these comprised more than 700 character modules. They proved that ALE technology could meet the reliability requirements necessary for commercial use. That technology was licensed to Sintra Alcatel in France in 1983. However, the driver costs of the ELD character modules were too high to make them commercially viable, and as a result Finlux began development of a 9-inch 512x256 matrix display for computer and industrial applications. A large manufacturing plant was constructed in a new science park set up in Espoo, close to Helsinki. Core manufacturing technologies, including ALE deposition equipment, were developed in-house, which delayed the start of mass production until 1986. Half-page ELD matrix displays with resolutions of 640x200, 640x350 and 640x400 were subsequently manufactured at this plant.


The investments and development costs for ELDs were essentially funded internally by Lohja Oy because little public or customer-paid funding was available. This situation changed when color ELD development was started in 1988 as part of an EU-supported international consortium. The first color EL display based on an innovative device structure was brought to market in 1993. Lohja Corporation was never able to make the Finlux Display Division profitable because of a lack of experience in managing microelectronics businesses. The Finnish economy benefited from rapid economic growth from the late 1970s until the late 1980s. But when the Soviet Union broke apart in 1991, the Finnish economy suffered because of its dependence upon the Soviet Union as a customer for exports. In 1991, the Finlux Display Division was sold to Planar Systems and was renamed Planar International. The two ELD operations were of approximately the same size at the time of the merger. The merger permitted savings in marketing costs and materials purchases. Planar Systems succeeded in making Planar International profitable in just a few years by using more experienced management, but without changing manufacturing technology and with only minor changes in staffing. The ALE manufacturing technology still forms the basis for the production of high-volume ELDs at both Planar Systems and Planar International. Much of the color development work achieved in Finland was also of direct benefit to the work on color ELDs at Planar Systems in the United States, and in particular the AMEL microdisplays discussed above. In addition, in 1996, Planar Systems began to market a new generation of monochrome ELDs called ICEBrite displays. The ICEBrites combined ALE-grown phosphors and insulators with high contrast layers developed by Eric Dickey in the late 1980s.[40]

Figure 7. Picture of an ICEBrite Display



Source: Planar Systems.   

Organic Light Emitting Diodes (OLEDs)

In the late 1990s, several research laboratories announced that they had made breakthroughs in getting thin films of organic materials to emit light analogously to EL devices. Because organic materials offered a number of process advantages over inorganic phosphors, these announcements were taken very seriously by potential investors. This is not the place to go into the details of these developments. Suffice it to say that the emergence of OLEDs led to a relative decline in interest in further work on color ELDs. Planar Systems set up its own OLED program in collaboration with ___, as did several other display manufacturers. It is possible that an inability to solve the technological problems that stand in the way of manufacturing OLEDs in high volumes will result in a return to research on color ELDs and other alternatives to TFT LCDs. For the moment, however, the momentum is with the OLED research groups.

Conclusions

Electroluminescent displays (ELDs) have a venerable history starting with the experiments of Captain Henry J. Round in 1907, O.V. Lossev in the Soviet Union, and Georges Destriau in France. Electroluminescence was mostly a scientific curiosity until the invention of thin film deposition techniques and the discovery that a sandwich of conductors, insulators and phosphors could result in a very efficient and long-lasting form of emissive display. ELDs were very important in the early days of the laptop computer industry and remained important in niche markets for military, medical and industrial equipment where high brightness, speed, contrast, and ruggedness are necessary. The rise of the color TFT LCD display forced the ELD producers to engage in research on color ELDs, with the result that there are now multicolor ELDs on the market and full-color AMELs in development for microdisplays. The ELD industry is currently limited to two major players: Planar and Sharp. Planar acquired its only European competitor,


the Finlux Display Division of Lohja Oy, in 1990. Sharp remains committed to competing in ELDs but its main focus is on liquid crystal displays. Most of the important research on ELDs remains within the corporate laboratories of Planar and Sharp, but several publicly funded research laboratories and consortia have also made important contributions to ELD technology. 

What is an LED?

LEDs are special diodes that emit light when connected in a circuit. They are frequently used as "pilot" lights in electronic appliances to indicate whether the circuit is closed or not. A clear (or often colored) epoxy case encloses the heart of an LED, the semi-conductor chip.

[Figure: LED leads -- the lead on the flat side of the bulb is the negative lead]

The two wires extending below the LED epoxy enclosure, or "bulb," indicate how the LED should be connected into a circuit. The negative side of an LED lead is indicated in two ways: 1) by the flat side of the bulb, and 2) by the shorter of the two wires extending from the LED. The negative lead should be connected to the negative terminal of a battery. LEDs operate at relatively low voltages, between about 1 and 4 volts, and draw currents between about 10 and 40 milliamperes. Voltages and currents substantially above these values can melt an LED chip.
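Those operating ranges are why an LED is normally wired with a series resistor: the resistor drops the supply voltage down to the LED's forward voltage at the desired current. This Ohm's-law sketch is mine, not part of the original text:

```python
def series_resistor_ohms(v_supply, v_forward, current_a):
    """R = (Vs - Vf) / I: size the resistor to limit the LED current."""
    return (v_supply - v_forward) / current_a

# A red LED (about 2 V forward drop) on a 5 V supply at 20 mA
# calls for a 150 ohm series resistor.
r = series_resistor_ohms(5.0, 2.0, 0.020)
```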

The most important part of a light emitting diode (LED) is the semi-conductor chip located in the center of the bulb as shown at the right. The chip has two regions separated

Page 46: Computer Graphics Unit1

by a junction. The p region is dominated by positive electric charges, and the n region is dominated by negative electric charges. The junction acts as a barrier to the flow of electrons between the p and the n regions. Only when sufficient voltage is applied to the semi-conductor chip can current flow and electrons cross the junction into the p region.

In the absence of a large enough electric potential difference (voltage) across the LED leads, the junction presents an electric potential barrier to the flow of electrons.

What Causes the LED to Emit Light and What Determines the Color of the Light?

When sufficient voltage is applied to the chip across the leads of the LED, electrons can move easily in only one direction across the junction between the p and n regions. In the p region there are many more positive than negative charges. In the n region the electrons are more numerous than the positive electric charges. When a voltage is applied and the current starts to flow, electrons in the n region have sufficient energy to move across the junction into the p region. Once in the p region the electrons are immediately attracted to the positive charges due to the mutual Coulomb forces of attraction between opposite electric charges. When an electron moves sufficiently close to a positive charge in the p region, the two charges "re-combine".

Each time an electron recombines with a positive charge, electric potential energy is converted into electromagnetic energy. For each recombination of a negative and a positive charge, a quantum of electromagnetic energy is emitted in the form of a photon of light with a frequency characteristic of the semi-conductor material (usually a combination of the chemical elements gallium, arsenic and phosphorus). Only photons in a very narrow frequency range can be emitted by any material. LEDs that emit different colors are made of different semi-conductor materials, and require different energies to light them.
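The connection between recombination energy and color can be made concrete with E = h*f, or equivalently wavelength = h*c/E. The transition energy below is an illustrative value of my choosing, not a figure from the text:

```python
PLANCK = 6.626e-34       # Planck's constant, joule-seconds
LIGHT_SPEED = 2.998e8    # speed of light, m/s
JOULES_PER_EV = 1.602e-19

def wavelength_nm(energy_ev):
    """Wavelength of a photon carrying the given energy."""
    return PLANCK * LIGHT_SPEED / (energy_ev * JOULES_PER_EV) * 1e9

# A transition of about 1.9 eV corresponds to red light near 650 nm,
# in the range produced by gallium-arsenide-phosphide red LEDs.
lam = wavelength_nm(1.9)
```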

Digital Differential Analyzer for Lines  

Overview

I've always been interested in how things work, not just how they are applied. When I first heard about some special techniques used to draw rasterized lines and other mathematical shapes, I was curious. But it wasn't until some time later that I learned what one of these methods is called and still later before I was finally able to actually read about some of the details.

If you don't already know what a rasterized line is, it's just what happens when you try to draw a line on graph paper by only using completely filled squares of the grid. Instead of drawing the line with a ruler, you are forced to select and fill in entire squares in a way that best suggests the line you intended to illustrate. This is the same problem faced when drawing lines by setting pixels on an IBM PC graphics screen, for example.

My first contact with digital differential analysis, applied to lines, was frustrating. The information was sparse and didn't make sense to me. The writer seemed to skip around and jump to conclusions I just couldn't reach and, finally, left me without an algorithm I could use to test the ideas. When I was finally able to understand how to do these things on my own, I discovered that the information in that article was also wrong. I guess I shouldn't be surprised.

A goal here is to present the reasoning behind the use of digital differential analysis to draw lines and to develop an algorithm you can use to test the ideas. Another goal is to do this clearly and in a fashion that can appeal to folks with only two years of high school algebra and an interest in computer programming. Finally, I'd like to shine some light in the direction of applying this kind of analysis in drawing circles, ellipses and other mathematical shapes.

Let's start by looking at the mathematical line.

The Line Equation

If you've had to do any graphing in algebra, you've probably encountered the following general equation for a line:

1.  y = m * x + b

This equation shows an exact relationship between x and y values. The value of m is often called the "slope" of the line because larger values generate steeper lines. And the value of b is called the y-intercept because it is the place along the y-axis where the line intersects it (when x is zero). This style of expressing the line, when m and b are given specific values, helps to quickly picture what the line might look like.

But this form of a line is rarely used in computer graphics. When drawing line segments as rasters, you are usually given a starting point and an ending point, instead of a slope and y-intercept. Let's call the starting point (x1, y1), and the ending point (x2, y2).

We can calculate the slope and y-intercept from these points and modify equation [1] accordingly. To start, recall that the slope of a line is the rate of change-in-y versus change-in-x. Expressed this way:

2.  m = (y2 - y1) / (x2 - x1)

Now that we know how to compute m, we can replace m in equation [1] and reorganize it to solve for the value of b:

3.  b = y - ((y2 - y1) / (x2 - x1)) * x

The only problem with equation [3] is that we still have the general variables, x and y. We can fix that by replacing them with the values from the first or second point, our choice. (I'll use the first point):

4.  b = y1 - ((y2 - y1) / (x2 - x1)) * x1


Now, let's take equations [3] and [4] and apply them to equation [1], to get:

5.  y = y1 + (x - x1) * (y2 - y1) / (x2 - x1)

That last part isn't too bad. It simply says that y can be computed from x by taking the y-value of the starting point and adding a portion of the span between the y-values of the two points. The portion here is the ratio of how far x is from the x-value of the starting point and the span between the x-values of the two points. Makes sense?

Our version of the general line equation, specified by two points, is:

6.  y = y1 + ((x - x1) / (x2 - x1)) * (y2 - y1)
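This two-point form translates directly into a few lines of code. The function name here is my own, and the arithmetic is done in floating point:

```python
def line_y(x, x1, y1, x2, y2):
    """The y-value, on the line through (x1, y1) and (x2, y2), at a given x."""
    return y1 + (x - x1) * (y2 - y1) / (x2 - x1)

# On the line through (2, 20) and (5, 50), x = 3 gives y = 30.
y = line_y(3, 2, 20, 5, 50)
```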

That's the basics. Now I plan to tinker with this equation a bit to gain some more familiarity with lines. If this is already familiar to you, you might skip to the end of the next section.

More Fun with the Line Equation

There's a minor "difficulty" with equation [6]. What if the two points are arranged vertically, so that their x-values are equal? The zero in the divisor will cause some nasty problems. It turns out that there is another way to express a line, which handles this problem. Just multiply both sides by (x2 - x1):

7.  y * (x2 - x1) = y1 * (x2 - x1) + (x - x1) * (y2 - y1)

Looks worse? Well, let me move a piece of it to the other side and combine terms:

8.  (x2 - x1) * (y - y1) = (y2 - y1) * (x - x1)

Do you see the symmetry in this? Let's stop for a moment and imagine that these two products each represent some kind of area. Before we go on, it's time for a bit of geometry.


[Figure: the rectangle between the two points, divided by the line, with the two areas of equation [8] shaded in horizontal and vertical line patterns]

The above chart shows an example to illustrate the areas suggested by equation [8]. The left hand side of the equation is the area covered by the horizontal lines. The right side of the equation is the area covered by the vertical lines. These two areas overlap in the cross-hatched area.

The line shown divides the large rectangle into two triangles with equal areas. The lower left and upper right quadrant rectangles are similarly divided by the line, so we can safely remove their respective areas from the larger triangles and still know that the remaining smaller rectangles are still equal. If we now add the area in the lower left hand corner to both, we get back the two areas described in equation [8]. That's it for the geometry side-bar. It's just good to pause a moment and see how that works out.

Equation [8] removes the possibility of dividing by zero. But it does something else, too. With integer coordinates, we can now perform all of the calculations with integers. This means we are probably on the right track.

Equation [8] is still a bit quirky, in spite of its balance. If you know what x is, it's kind of hard to figure out y, and vice versa. But before we make any more adjustments to equation [8], we need to decide whether it's more important to calculate x from y or to calculate y from x.

Independent and Dependent Variables

When plotting rasterized lines between two points, you are faced with two choices. You can move x around independently between the x-values of your two points and compute the value of y from this value of x or you can move y around independently between the y-values of your two points and compute the value of x from that. Either way works, mathematically. As a practical matter though, you need to make a choice.

When the equation is set up so that you are free to adjust the value of x and where the value of y depends on this setting, such as in equation [1], then x is considered the independent variable and y is considered the dependent variable. Of course, you can switch this around in order to reverse the roles. (Notice that equation [8] doesn't show a preference for either variable as independent or dependent.)

It turns out that when you are plotting rasterized lines (and most anything else, too) it's important to make the right choice. The goal is to select the axis that forms the long side of your rectangle between the two points as your independent variable. In other words, you'd pick x as your independent variable when |x2 - x1| is larger than |y2 - y1|, and you'd pick y as your independent variable when the opposite holds. When they are equal, you can go either way. But why is this true?

Well, let's look at this simple equation and imagine plotting it using x as the independent variable, going from a point at (2,20) to a point at (5,50). Let's recall equation [6] and use it for this purpose:

9.  y = 20 + (x - 2) * (50 - 20) / (5 - 2) = 20 + 10 * (x - 2)

The points would be (2,20), (3,30), (4,40), and (5,50). But this is only four points and if you tried plotting them, your line would look very sparse. What we'd rather plot is (2,20), (2,21), (2,22), (2,23), (2,24), (3,25), (3,26), (3,27), and so on. This way, the line will look properly solid. In this case, we should have reorganized the equation so that y was the independent variable and x was the dependent one, like this:

10.

That will give us enough points to make a reasonable line. Of course, this argument depends on whether the range along one axis is more or less than it is on the other axis.

Keep this in mind as we continue. The algorithm to follow will have to worry about such things.
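This selection rule can be sketched in C (the enum and function names here are my own, not from the text):

```c
#include <stdlib.h>

/* Pick the independent axis for rasterizing the segment (x1,y1)-(x2,y2):
   step along x when the horizontal span dominates, else step along y. */
typedef enum { STEP_X, STEP_Y } axis_t;

axis_t pick_independent_axis(int x1, int y1, int x2, int y2)
{
    return (abs(x2 - x1) >= abs(y2 - y1)) ? STEP_X : STEP_Y;
}
```

For the example segment (2,20)-(5,50) above, the vertical span dominates, so y would be chosen as the independent variable.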

The Keystone Equation

We've covered the general equation for lines, given two points, in equation [6]. We've suggested an approach to help avoid dividing by zero in equation [8], although it may still be a mystery how to make much use of it. And we've discussed independent and dependent variables and how to choose which should be which, in and around equations [9] and [10].

I think you are about ready. It's time to develop something useful.

Let's rearrange equation [8] like this:

11. (y2 - y1)(x - x1) - (x2 - x1)(y - y1) = 0

I'm also going to create two new terms that are pretty easy to understand and will help us when it comes time to make an algorithm. These two terms are simply the height and width of the rectangle formed by the two points between which we are supposed to draw a line. But I'll use a mathematically inclined name, if you don't mind:

12. Δx = x2 - x1 and Δy = y2 - y1

Using these two, we can rewrite equation [11] into our golden rule of lines:

13. Δy(x - x1) - Δx(y - y1) = 0

This equation isn't just another way of looking at the same old line. It's the key to developing a fast algorithm for drawing lines.

Some Definitions

This is the point where we are going to convert the general equations above into specific and detailed rules to use in drawing lines. I've glossed over some of the details, but now that I'm ready to take a narrower focus I need to cover a few definitions I've neglected.

I'll be using the convention of x and y axes, where the x axis is horizontal, the positive x direction is to the right, the y axis is vertical, and the positive y direction is up. This isn't always the case in practice, since both computer screens and printers often have the positive y direction going down. But for the discussion to follow, I'm sticking with the usual mathematical convention and leaving it to you to apply it to particular circumstances.

Octant  Independent  Dependent
1       x  +1        y  +fraction
2       y  +1        x  +fraction
3       y  +1        x  -fraction
4       x  -1        y  +fraction
5       x  -1        y  -fraction
6       y  -1        x  -fraction
7       y  -1        x  +fraction
8       x  +1        y  -fraction

There are eight possible categories of lines, for drawing purposes. If you imagine that (x1, y1) is located in the center of the adjacent diagram and that the point we'll draw to, (x2, y2), is located somewhere in the surrounding vicinity, then this destination will fall into one of the octants shown.

The table above illustrates the eight different octants and the important criteria that differentiate them. The independent variable is always stepped by +1 or -1 as the line is plotted. The dependent variable will then vary by a fractional value for each of these independent variable steps.

Take a careful look at this table and make sure that you understand it. If you need to, refer back to the discussion on independent and dependent variables and how to choose between them.

Next, we're going to focus on the details of plotting lines where the destination is in octant 1. Once we've determined how to handle one of these, we can see what's different for the other cases and figure out how to adapt the algorithm to them.

Octant 1: Asking the Right Question

Let's return to our infamous equation, the seed corn we'll use for the algorithm to follow:

14. f(x,y) = Δy(x - x1) - Δx(y - y1), where Δx = x2 - x1 and Δy = y2 - y1.

Equation [14] is zero for any point that is exactly on the line. If we take any given x and y pair, we can use equation [14] to see where the point is. If the value is zero, the point is on the line; if the value is positive, the point is below the line; and if the value is negative, the point is above the line. (You should test yourself to see if you can spot the exact reason why this is true.) We're going to use these facts to help us plot our points.
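As a quick sanity check of these sign claims, equation [14] can be coded directly (the helper name is mine; long arithmetic guards against overflow):

```c
/* f(x,y) = Δy*(x - x1) - Δx*(y - y1) for the line (x1,y1)-(x2,y2):
   zero on the line, positive below it, negative above it (octant 1). */
long line_side(int x1, int y1, int x2, int y2, int x, int y)
{
    long dx = x2 - x1, dy = y2 - y1;
    return dy * (long)(x - x1) - dx * (long)(y - y1);
}
```

For the line from (0,0) to (4,2), the point (2,1) lies exactly on it, (2,0) lies below it (positive result), and (2,2) lies above it (negative result).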

Plotting the first point of our line is rather easy. We just plot the first point we were given, (x1, y1). It's the next point that makes us think.

For endpoints in octant 1, x is the independent variable with a +1 increment between adjacent points along the line that we'll plot. Since y is the dependent variable, it will have a positive fractional change between adjacent points along the line, so the next point we need to plot will be either at (x+1, y) or at (x+1, y+1), depending on whether that fractional amount is less or more than 0.5. And, no matter which one we choose, it will probably not fall exactly on the line we'd like to be drawing. So we'll need to keep track of our errors as we proceed, too.

So which of the next two points do we choose? What we'd really like is the answer to this:

Which next point does the line come closer to, (x+1, y) or (x+1, y+1)?

If we knew the answer to that, we'd know which one to plot. Before we could answer that, we'd need to measure how closely the line went by each of the two possible points and then we'd need to compare them to see which was the smaller distance.

But another way of asking the same question more directly would be something like this:

Does the line cross above or below the halfway point at (x+1, y+1/2)?

Notice that the point used in the above question is halfway between the two points we are considering. If the line crosses over this point, then it must be closer to (x+1, y+1) and that's the point we should use. If the line crosses below this point, then it must be closer to (x+1, y) and that's the point to use, instead. If you are having any difficulty seeing why this is a good choice, take out a piece of graph paper and check it out. I think you'll see how it works.

Octant 1: Digital Differential Analysis (DDA) Line Algorithm

Let's define a new equation. We'll use it to help us decide between the two possible choices, at each step in plotting a line in octant 1:

15. g(x,y) = f(x+1, y+1/2)

For lines plotted in octant 1, this equation answers the question, "What is the value of the line at the next x value just to the right of the current point we've plotted and halfway between the current value of y and the next possible value of y?" If the value is positive, then the line must be passing above our halfway point and this means we should plot (x+1, y+1) as the next point. If the value is negative, then the line must be passing under our halfway point and this means we should plot (x+1, y) as the next point. An exact zero would suggest that we could pick either one as our next point to plot.

Our remaining problem now is to find a way to efficiently calculate the successive values of g(x,y). It turns out that we can do this by setting its exact value aside for a moment and thinking about how it changes as we plot points in octant 1, instead.


The first value will be g(x1, y1) = f(x1+1, y1+1/2). The next value will depend on that value, because we use it to decide which of the two points we will plot next. This means we have two cases to consider:

16. g(x+1, y) if we plot (x+1, y), or g(x+1, y+1) if we plot (x+1, y+1)

You will have this same pair of cases to choose between after each plotted point. And the result, whichever one is picked, will determine the following point to plot. This will repeat over and over again until the line is drawn.

Repeating sequences like this, with an initial boundary condition and repeating logic, are called recurrences. Mathematically, [16] can be expressed more generally as the following recurrence:

17. g1 = f(x1+1, y1+1/2)
    gn+1 = g(xn+1, yn) if gn < 0; gn+1 = g(xn+1, yn+1) otherwise

We've just used something called mathematical deduction to describe the "local" effects of the function g(x,y). It loses sight of the big picture, but tells us a lot about what is going on near our point of interest. Recurrences are ripe for mathematical induction, which reverses this process. But that's for another day.

To compute successive values of g(x,y), let's expand on its recurrence in [17]:

18. gn+1 = f(xn+2, yn+1/2) if gn < 0; gn+1 = f(xn+2, yn+3/2) otherwise

Now we need to do some algebra to see if this can be simplified. This is detailed, but not hard to follow. So, let's start with the first case listed in [18]. Going back to equations [14] and [15] and substituting:

f(xn+2, yn+1/2) = Δy(xn+2-x1) - Δx(yn+1/2-y1)
                = Δy(xn+1-x1) - Δx(yn+1/2-y1) + Δy
                = f(xn+1, yn+1/2) + Δy
                = g(xn, yn) + Δy

Now, let's do the same thing for the second case:

f(xn+2, yn+3/2) = Δy(xn+2-x1) - Δx(yn+3/2-y1)
                = f(xn+1, yn+1/2) + Δy - Δx
                = g(xn, yn) + Δy - Δx

Now we can restate [18] in a more concrete form:

19. gn+1 = gn + Δy if gn < 0; gn+1 = gn + Δy - Δx otherwise

Take a quick breath for a moment, because this problem just got a lot easier! Notice that the change in our condition function (used to decide which point to plot, in case you've already forgotten) only depends on some very simple values that we can pre-compute before starting out. This is good news for drawing lines.

We've one more detail to take care of, the initial value:

g1 = f(x1+1, y1+1/2) = Δy(x1+1-x1) - Δx(y1+1/2-y1) = Δy - Δx/2

That fraction will cause us some slight trouble if we plan to use integers throughout. For that reason alone, we will just multiply everything by 2. That will clear up the problem completely.

Let's fix up recurrence [17] with this new information, writing G for the doubled function:

20. G1 = 2Δy - Δx
    Gn+1 = Gn + 2Δy if Gn < 0; Gn+1 = Gn + 2Δy - 2Δx otherwise

Heck. The code in any language will practically write itself from that!
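To see the doubled decision value G = 2Δy - Δx from the discussion above in action, here is a minimal octant-1 sketch (function and variable names are mine) that records the y chosen for each column:

```c
/* Plot the octant-1 segment (x1,y1)-(x2,y2) using the doubled recurrence:
   G starts at 2*dy - dx and is updated by 2*dy or by 2*dy - 2*dx.
   ys_out[i] receives the y chosen for column x1 + i. */
void octant1_line(int x1, int y1, int x2, int y2, int *ys_out)
{
    int dx = x2 - x1, dy = y2 - y1;
    int g = 2 * dy - dx;           /* initial G = 2*dy - dx */
    int y = y1;
    for (int x = x1; x <= x2; ++x) {
        ys_out[x - x1] = y;        /* the pixel chosen in this column */
        if (g < 0) {
            g += 2 * dy;           /* stayed at the same y */
        } else {
            g += 2 * dy - 2 * dx;  /* stepped up to y + 1 */
            ++y;
        }
    }
}
```

For the segment (0,0)-(6,3), this yields the columns y = 0, 1, 1, 2, 2, 3, 3, hugging the ideal line y = x/2.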

I'll bet you were thinking that there can't be any more, right? Hehe. Well, that just covered cases where we are plotting a line in octant 1. There are seven more of them, you know. I'm not the least bit tired, so shall we continue?

The Rest of the DDA Story

Let's take a look at all the octants for a moment. The following table shows each of the decision functions to use when selecting between two alternate points along the line. There are four with x as the independent variable and four with y. Of each of these four, two are positive directed and two are negative. Look it over and verify in your own mind that these make sense.

Octant  Decision Equation
1       g(x,y) = f(x+1, y+1/2)
2       g(x,y) = f(x+1/2, y+1)
3       g(x,y) = f(x-1/2, y+1)
4       g(x,y) = f(x-1, y+1/2)
5       g(x,y) = f(x-1, y-1/2)
6       g(x,y) = f(x-1/2, y-1)
7       g(x,y) = f(x+1/2, y-1)
8       g(x,y) = f(x+1, y-1/2)

Now it also turns out that the eight equations can be cut down to four by recognizing that all octant 3 lines can be converted to octant 7 lines by just swapping the starting and ending points. Similarly, octant 4 becomes octant 8, octant 5 becomes octant 1, and octant 6 becomes octant 2. You can take any octant drawing direction and convert it to the diametrically opposite octant this way, in fact. So you can convert eight situations into just four. That's a bit better.

Actually, it's lots better. Intuitively, you might notice that all these patterns amount to simple reflections of each other and that you probably need just one basic piece of logic.


It turns out that the pattern of selecting between the two possible points as the line is drawn varies based on the magnitudes |Δx| and |Δy|. It's the selection of the independent variable and the dependent variable and their respective directions that then tell the full story. Two possible independent variables, two directions for the independent variable, and two possible directions for the dependent variable: 2*2*2 = 8.

I won't tabulate the eight recurrences for you. You can do those as an exercise, if you like. Just for fun, here's the recurrence for octant 8 (where Δy = y2 - y1 is negative):

21. G1 = 2Δy + Δx
    Gn+1 = Gn + 2Δy if Gn > 0 (plot (x+1, y)); Gn+1 = Gn + 2Δy + 2Δx otherwise (plot (x+1, y-1))

The details start getting pretty tedious at this point, as if they haven't already been that way. It turns out that you can use a single routine to handle drawing all eight octants, if you think of the DDA routine as manipulating the variables as independent and dependent, rather than as x and y.

There are two basic orientations for drawing the lines. One when the dependent variable increases as the independent variable increases and one when the opposite occurs, when the dependent variable decreases as the independent variable increases. So the DDA routine will need the starting and ending points for the independent variable, the starting point for the dependent variable and its orientation to the independent variable, and the magnitude of the spans between the points. With that in place, you can use a single piece of DDA logic to get the whole job done.

Even though the DDA algorithm can handle vertical and horizontal lines quite well, it's often better to handle a few situations as special cases in your code for better efficiency. Horizontal and vertical lines are simple and don't need decision variables. In fact, you can cover both vertical and horizontal lines in a single routine, if you like. Or, to take advantage of special capabilities of the hardware, you may prefer to keep these separate. Similarly, lines lying exactly along the 45-degree diagonals can use special code and can all be handled in a single routine, too.

Now let's see some code.

Sample Line-Drawing Code

I'm using the C language to illustrate the routines that follow. The code does not deal with pixel colors or clipping. Although most practical routines have to deal with such things, they aren't always needed and they are beyond the scope here. Also, the SetPixel routine isn't shown. You'll need to provide it, if you plan to test these routines.
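For instance, a throwaway SetPixel that just records pixels into a small in-memory grid is enough for testing (the grid and its size are my own choices, not from the text):

```c
#define GRID_W 64
#define GRID_H 64

unsigned char grid[GRID_H][GRID_W];  /* 1 = pixel set, 0 = clear */

/* The SetPixel the routines below call: record the pixel if in range. */
void SetPixel(int x, int y)
{
    if (x >= 0 && x < GRID_W && y >= 0 && y < GRID_H)
        grid[y][x] = 1;
}
```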


Finally, I don't show the special case code I mentioned earlier, to handle vertical, horizontal, 45-degree lines, and isolated points. The code below copes with those special cases just fine without any special handling and I figure you will have little problem adding such code, if you want it. The point here is to showcase the DDA algorithm.

extern void SetPixel (int x, int y);

typedef void SetPixelFunc (int a, int b);

void xySetPixel (int x, int y) { SetPixel (x, y); }
void yxSetPixel (int y, int x) { SetPixel (x, y); }

/* Core DDA: a is the independent variable (a1..a2), b the dependent one.
   da and db are the independent and dependent spans; bc is the step
   direction (+1 or -1) for b. */
void PlotLineDDA (SetPixelFunc* pfPlot, int a1, int a2, int b,
                  int da, int db, int bc)
{
    int g1 = db + db;          /* 2*db                  */
    int g2 = g1 - da - da;     /* 2*db - 2*da           */
    int g  = g1 - da;          /* initial G = 2*db - da */
    int a;
    for (a = a1; a < a2; ++a) {
        pfPlot (a, b);
        if (g < 0)
            g = g + g1;
        else {
            g = g + g2;
            b = b + bc;
        }
    }
    pfPlot (a, b);             /* the final endpoint */
}

void PlotLine (int x1, int y1, int x2, int y2)
{
    int dxabs = abs (x2 - x1);
    int dyabs = abs (y2 - y1);
    if (dxabs >= dyabs) {
        if (x1 < x2)
            PlotLineDDA (xySetPixel, x1, x2, y1, dxabs, dyabs, y1 < y2 ? 1 : -1);
        else
            PlotLineDDA (xySetPixel, x2, x1, y2, dxabs, dyabs, y2 < y1 ? 1 : -1);
    } else {
        if (y1 < y2)
            PlotLineDDA (yxSetPixel, y1, y2, x1, dyabs, dxabs, x1 < x2 ? 1 : -1);
        else
            PlotLineDDA (yxSetPixel, y2, y1, x2, dyabs, dxabs, x2 < x1 ? 1 : -1);
    }
}


The Bresenham Line-Drawing Algorithm

The basic Bresenham algorithm

Consider drawing a line on a raster grid where we restrict the allowable slopes of the line to the range 0 <= m <= 1.

If we further restrict the line-drawing routine so that it always increments x as it plots, it becomes clear that, having plotted a point at (x,y), the routine has a severely limited range of options as to where it may put the next point on the line:

It may plot the point (x+1,y), or it may plot the point (x+1,y+1).

So, working in the first positive octant of the plane, line drawing becomes a matter of deciding between two possibilities at each step.

We can draw a diagram of the situation which the plotting program finds itself in having plotted (x,y).

In plotting (x,y) the line drawing routine will, in general, be making a compromise between what it would like to draw and what the resolution of the screen actually allows it to draw. Usually the plotted point (x,y) will be in error; the actual, mathematical point on the line will not be addressable on the pixel grid. So we associate an error, e, with each y ordinate: the real value of y should be y + e. This error will range from -0.5 to just under +0.5.

In moving from x to x+1 we increase the value of the true (mathematical) y-ordinate by an amount equal to the slope of the line, m. We will choose to plot (x+1,y) if the difference between this new value and y is less than 0.5.


Otherwise we will plot (x+1,y+1). It should be clear that by so doing we minimise the total error between the mathematical line segment and what actually gets drawn on the display.

The error resulting from this new point can now be written back into e; this will allow us to repeat the whole process for the next point along the line, at x+2.

The new value of error can adopt one of two possible values, depending on what new point is plotted. If (x+1,y) is chosen, the new value of error is given by:

e := e + m

Otherwise it is:

e := e + m - 1

This gives an algorithm for a DDA which avoids rounding operations, instead using the error variable e to control plotting:

    e := 0, y := y1
    for x from x1 to x2
        plot(x, y)
        if e + m < 0.5
            e := e + m
        else
            y := y + 1
            e := e + m - 1

This still employs floating point values. Consider, however, what happens if we multiply across both sides of the plotting test by Δx and then by 2:

2(eΔx + Δy) < Δx

All quantities in this inequality are now integral.

Substitute e' for eΔx. The test becomes:

2(e' + Δy) < Δx

This gives an integer-only test for deciding which point to plot.

The update rules for the error on each step may also be cast into integer form. Consider the floating-point versions of the update rules:

e := e + m
e := e + m - 1

Multiplying through by Δx yields:

e' := e' + Δy
e' := e' + Δy - Δx

which is in integer form.

Using this new ``error'' value, e' = eΔx, with the new test and update equations gives Bresenham's integer-only line drawing algorithm:

    e' := 0, y := y1
    for x from x1 to x2
        plot(x, y)
        if 2(e' + Δy) < Δx
            e' := e' + Δy
        else
            y := y + 1
            e' := e' + Δy - Δx


Integer only - hence efficient (fast). The multiplication by 2 can be implemented by a left-shift.

This version is limited to slopes in the first octant, 0 <= m <= 1.

Here is a C++ implementation of the Bresenham algorithm for line segments in the first octant.

void linev6(Screen &s,
            unsigned x1, unsigned y1,
            unsigned x2, unsigned y2,
            unsigned char colour )
{
    int dx  = x2 - x1,
        dy  = y2 - y1,
        y   = y1,
        eps = 0;

    for ( int x = x1; x <= x2; x++ ) {
        s.Plot(x,y,colour);
        eps += dy;
        if ( (eps << 1) >= dx ) {
            y++;
            eps -= dx;
        }
    }
}

This is an all-integer function; it employs a left shift for multiplication and eliminates redundant operations by tricky use of the eps variable.

This implementation of Bresenham's algorithm is incomplete: it does not check the validity of its arguments. A real implementation should do this. In fact, a real implementation of Bresenham's algorithm should do more than simply reject lines with slopes lying outside the first octant; it should handle lines of arbitrary slope.


Handling multiple slopes

If we try out the C++ implementation of the Bresenham algorithm, we find it has some peculiar properties.

As expected, it fails to plot lines with negative slopes (try it and see what happens). It also fails to plot lines of positive slope greater than 1 (this is an interesting case, try it also and see if you can explain what is happening).

More unusually, we find that the order in which the endpoints are supplied to this routine is significant: it will only work as long as x1 is smaller than x2.

In fact, if we have two line segments with the same endpoints and the same slope, this routine may draw one of them successfully but fail to draw the other one.

Of course, this is not surprising really, when we consider that the function works by incrementing x. It does emphasise, however, that the routine is plotting vectors; direction is significant. Considering all the vectors from (x1,y1) to (x2,y2) we find that there are eight regions (the ``octants'') and the basic Bresenham algorithm works in only one of them.


A full implementation of the Bresenham algorithm must, of course, be able to handle all combinations of slope and endpoint order.

Some of the regions in the plane, those for which x2 is smaller than x1, can be handled by exchanging the endpoints of the line segment.

It is also clear that we will need a piece of code to handle large slopes by stepping over y instead of x values.

However, careful consideration of the diagram will reveal that there is one case which cannot be reduced to a version of the algorithm we have already looked at. If we want to draw a line having a small negative slope, we will have to consider a modification of the basic Bresenham algorithm to do this. (The same point applies to lines of large negative slope as well, but the code for small negative slopes may be adapted to this case by stepping over y instead of x).


Bresenham for negative slopes

Consider a line with a negative slope between -1 and 0 (i.e., a small negative slope). Given that a line-drawing algorithm has plotted a point at (x,y), its choice about where to plot the next point is between (x+1,y-1) and (x+1,y).

As usual there will be an error, e, associated with y. Choice of the next point to plot will be based on an attempt to minimise error, so plot (x+1,y) if:

e + m > -0.5

Otherwise plot (x+1,y-1). Notice that the error generated by the above is negative. A little manipulation gives a decision inequality:

2(e' + Δy) > -Δx

It is worth comparing this with the decision inequality for the case of positive slope.

The error update rules are also subtly different for this case of negative slope.

If plotting (x+1,y) the new value of error is given by:

e := e + m

Otherwise, plotting (x+1,y-1) gives new error:

e := e + m + 1


A pseudocode algorithm for this routine may be written as:

    e := 0, y := y1
    for x from x1 to x2
        plot(x, y)
        if e + m > -0.5
            e := e + m
        else
            y := y - 1
            e := e + m + 1

This is cast in terms of floating-point values. It is, however, a trivial matter to convert the algorithm into an integer-only form.

Bresenham's line algorithm

Bresenham's line algorithm is an algorithm that determines which points on a 2-dimensional raster should be plotted in order to form a close approximation to a straight line between two given points. It is commonly used to draw lines on a computer screen, as it uses only integer math and simple arithmetic operations, which makes it fast on digital computers. It is one of the earliest algorithms discovered in the field of computer graphics.


The algorithm

Illustration of the result of Bresenham's line algorithm.

The line is drawn between two points (x0, y0) and (x1, y1), where these pairs indicate column and row, respectively, increasing in the down and right directions. We will initially assume that our line goes down and to the right, and that the horizontal distance x1-x0 exceeds the vertical distance y1-y0 (that is, the line has a slope less than 1.) Our goal is, for each column x between x0 and x1, to identify the row y in that column which is closest to the line and plot a pixel at (x,y).

Now, how do we figure out which pixel is closest to the line for a given column? The general formula for the line between the two points is given by:

y = y0 + (x - x0) * (y1 - y0) / (x1 - x0)

Since we know the column, x, the pixel's row, y, is given by rounding this quantity to the nearest integer:

y = round(y0 + (x - x0) * (y1 - y0) / (x1 - x0))

However, explicitly calculating this value for each column, x, is silly; we need only note that y starts at y0, and each time we add 1 to x, we add the fixed value (y1-y0)/(x1-x0), which we can precalculate, to the exact y. Moreover, since this is the slope of the line, by assumption it is between 0 and 1; in other words, after rounding, in each column we either use the same y as in the previous column, or we add one to it.

We can decide which of these to do by keeping track of an error value which denotes the vertical distance between the current y value and the exact y value of the line for the current x. Each time we increment x, we increase the error by the slope value above. Each time the error surpasses 0.5, the line has become closer to the next y value, so we add 1 to y, simultaneously decreasing the error by 1. The procedure looks like this, assuming plot(x,y) plots a point and abs takes the absolute value.


Expressed in pseudocode, the algorithm is:

    function line(x0, x1, y0, y1)
        int deltax := abs(x1 - x0)
        int deltay := abs(y1 - y0)
        real error := 0
        real deltaerr := deltay ÷ deltax
        int y := y0
        for x from x0 to x1
            plot(x,y)
            error := error + deltaerr
            if error ≥ 0.5
                y := y + 1
                error := error - 1.0

Generalizing

This first version only handles lines that descend to the right. We would of course like to be able to draw all lines. The first case is allowing us to draw lines that still slope downwards but head in the opposite direction. This is a simple matter of swapping the initial points if x0 > x1. Trickier is determining how to draw lines that go up. To do this, we check if y0 >= y1; if so, we step y by -1 instead of 1. Lastly, we still need to generalize the algorithm to drawing lines in all directions. Up until now we have only been able to draw lines with a slope less than one. To be able to draw lines with a steeper slope, we take advantage of the fact that a steep line can be reflected across the line y=x to obtain a line with a small slope. The effect is to switch the x and y variables throughout, including switching the parameters to plot. The code looks like this:

    function line(x0, x1, y0, y1)
        boolean steep := abs(y1 - y0) > abs(x1 - x0)
        if steep then
            swap(x0, y0)
            swap(x1, y1)
        if x0 > x1 then
            swap(x0, x1)
            swap(y0, y1)
        int deltax := x1 - x0
        int deltay := abs(y1 - y0)
        real error := 0
        real deltaerr := deltay ÷ deltax
        int y := y0
        if y0 < y1 then ystep := 1 else ystep := -1
        for x from x0 to x1
            if steep then plot(y,x) else plot(x,y)
            error := error + deltaerr
            if error ≥ 0.5
                y := y + ystep
                error := error - 1.0

The function now handles all lines and implements the complete Bresenham's algorithm.


Optimization

The problem with this approach is that computers operate relatively slowly on fractional numbers like error and deltaerr; moreover, error can accumulate over many floating-point additions. Working with integers will be both more accurate and faster. The trick we use is to multiply all the fractional numbers above by deltax, which enables us to express them as integers. The only problem is the constant 0.5—to deal with this, we multiply both sides of the inequality by 2. The resulting multiplication by two can be implemented with a bit shift instead of a relatively expensive multiplication, further speeding up the algorithm. The new program looks like this:

    function line(x0, x1, y0, y1)
        boolean steep := abs(y1 - y0) > abs(x1 - x0)
        if steep then
            swap(x0, y0)
            swap(x1, y1)
        if x0 > x1 then
            swap(x0, x1)
            swap(y0, y1)
        int deltax := x1 - x0
        int deltay := abs(y1 - y0)
        int error := 0
        int deltaerr := deltay
        int y := y0
        if y0 < y1 then ystep := 1 else ystep := -1
        for x from x0 to x1
            if steep then plot(y,x) else plot(x,y)
            error := error + deltaerr
            if 2×error ≥ deltax
                y := y + ystep
                error := error - deltax
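As a sketch, this final pseudocode translates into C like the following (the function name, the xs/ys output arrays, and the return convention are mine, not from the text):

```c
#include <stdlib.h>

/* Full-octant integer Bresenham, following the pseudocode above.
   Pixels are written into xs/ys in drawing order; returns the count. */
int bresenham_line(int x0, int y0, int x1, int y1, int *xs, int *ys)
{
    int steep = abs(y1 - y0) > abs(x1 - x0);
    int t;
    if (steep)   { t = x0; x0 = y0; y0 = t;  t = x1; x1 = y1; y1 = t; }
    if (x0 > x1) { t = x0; x0 = x1; x1 = t;  t = y0; y0 = y1; y1 = t; }

    int deltax = x1 - x0;
    int deltay = abs(y1 - y0);
    int error = 0;
    int ystep = (y0 < y1) ? 1 : -1;
    int y = y0;
    int n = 0;

    for (int x = x0; x <= x1; ++x) {
        /* Undo the reflection across y = x when emitting the pixel. */
        if (steep) { xs[n] = y; ys[n] = x; }
        else       { xs[n] = x; ys[n] = y; }
        ++n;
        error += deltay;
        if (2 * error >= deltax) { y += ystep; error -= deltax; }
    }
    return n;
}
```

Unlike linev6 earlier, this version accepts endpoints in either order and any slope, since it normalizes with the swaps before entering the loop.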

History

The algorithm was developed by Jack E. Bresenham in 1962 at IBM. In 2001 Bresenham wrote:

"I was working in the computation lab at IBM's San Jose development lab. A Calcomp plotter had been attached to an IBM 1401 via the 1407 typewriter console. [The algorithm] was in production use by summer 1962, possibly a month or so earlier. Programs in those days were freely exchanged among corporations so Calcomp (Jim Newland and Calvin Hefte) had copies. When I returned to Stanford in Fall 1962, I put a copy in the Stanford comp center library. A description of the line drawing routine was accepted for presentation at the 1963 ACM national convention in Denver, Colorado. It was a year in which no proceedings were published, only the agenda of speakers and topics in an issue of Communications of the ACM. A person from the IBM Systems Journal asked me


after I made my presentation if they could publish the paper. I happily agreed, and they printed it in 1965."

Bresenham's Algorithm

Circle Generators

Many incremental methods exist to plot circles and arcs.

1) If the hardware exists for line generation, then circles are generated using short line segments.
2) If the line generation hardware does not exist, then circles are generated using closely spaced dots.

Central Symmetry

A usable fact is that if the center of the circle is at location (0,0) and a point (x,y) lies on the circumference of the circle, then we know 7 more points on the circle.

We can set eight symmetric pixels that correspond to a point (x,y) on the circle using the following procedure:

procedure set_eight(x,y,xcenter,ycenter:integer);
begin
  plot_pixel(x+xcenter,y+ycenter);
  plot_pixel(-x+xcenter,y+ycenter);
  plot_pixel(x+xcenter,-y+ycenter);
  plot_pixel(-x+xcenter,-y+ycenter);
  plot_pixel(y+xcenter,x+ycenter);
  plot_pixel(-y+xcenter,x+ycenter);
  plot_pixel(y+xcenter,-x+ycenter);
  plot_pixel(-y+xcenter,-x+ycenter);
end;

The book discusses the pixel ratio problem. Essentially, the pixel ratio can be defined as the distance between the centers of two horizontal pixels divided by the distance between the centers of two vertical pixels. In the past, the distance between the horizontal adjacent pixels has been less than the vertical. Presently, the pixel ratio of most monitors is 1 and so you can skip the section "Correction for pixel ratio".


Circle Generating Algorithms

Non-DDA Methods

1) Parameter Method - this method consists of calculating values (x,y) on the circle using the following formulas:

x = rcos(u)
y = rsin(u)

where u increases incrementally by small steps over the interval 0 to 2(pi). We can use symmetry to improve the execution time so that u is calculated over the interval 0 to pi/4. We also note that if the incremental value of u is <= 1/r, where r is the radius expressed in pixels, the circle generator advances by approximately one pixel each iteration. This algorithm is slow because of the constant recalculation of sin(u) and cos(u).

As an example, let's assume that we are generating a circle from the origin with radius 10. This would imply the following table of values:

u       X=rcos(u)  Y=rsin(u)
0       10         0
.1      9.95       .999
.2      9.8        1.99
.3      9.55       2.96
.4      9.21       3.89
.5      8.78       4.79
.6      8.25       5.65
.7      7.65       6.44
.8      6.97       7.17   (Note: pi/4 = .7854)
...
1.5708  0          10     (Note: pi/2 = 1.5708)

2) Rotation Method - this method consists of using the point (x,y) on the circle and rotating it about the origin. The following equations will produce the desired result:

a) Clockwise

Xrot = Xcos(u) + Ysin(u)
Yrot = -Xsin(u) + Ycos(u)

b) Counterclockwise

Xrot = Xcos(u) - Ysin(u)
Yrot = Xsin(u) + Ycos(u)

Note: The book is in error!!

To use the rotation method, we assign (X0,Y0) equal to (r,0). We can then use the following equations to generate successive points:


Xn+1 = Xncos(u) - Ynsin(u)
Yn+1 = Xnsin(u) + Yncos(u)

Discussion Questions:
1) Compare and contrast the Parameter Method and the Rotation Method. Which method would you use and why?
2) What is the terminating condition for the Rotation Method?
3) Can you think of a way to speed the Rotation Method up?

DDA Method

Bresenham's Circle Generation

This circle generation algorithm uses only integer arithmetic and is based on the differential equation of a circle. I will skip this explanation for now. The algorithm, as we will discuss it, generates the portion of the circle from 90 degrees to 45 degrees. The circle is centered at the origin with radius r, and the algorithm generates one new pixel per iteration.

Consider an arbitrary point on the circle (Xn,Yn). The next point on the circle is either (Xn + 1,Yn) or (Xn + 1,Yn -1). Pictorially, this looks like:

How is the next pixel chosen? In general, we know that the equation of a circle is:

(X-h)^2 + (Y-k)^2 = r^2

What can you deduce from this general form?

We can somewhat restrict the general equation of a circle to the equation about the origin as follows:

X^2 + Y^2 = r^2

It turns out that we will select the next pixel based on a test using the function:

f(X,Y) = X^2 + Y^2 - r^2

as follows:

1) all points (x,y) that satisfy f(x,y) = 0 lie exactly on the circle.
2) all points (x,y) that satisfy f(x,y) > 0 lie outside the circle.
3) all points (x,y) that satisfy f(x,y) < 0 lie inside the circle.


Five possibilities exist with the points Pn, A, and B as follows:

The fact is that we plotted the point Pn and we are looking to plot the next point. The next point will be either A or B and we must use the available information to make that decision. We remember that the point A is (Xn + 1, Yn) and the point B is (Xn + 1, Yn -1), so looking at the sum = f(A) + f(B), we can determine the next point that we wish to plot.

Case    1     2     3     4     5
f(A)    neg   zero  pos   pos   pos
f(B)    neg   neg   neg   zero  pos
sum     neg   neg   ?     pos   pos

This implies that to make the correct choice, we need to look at the sign of the sum of the two functions, f(A) & f(B).

Consider the following:


If the starting pixel is P0=(X0,Y0)=(0,r), we need to look at f(A)+f(B) to make a decision about whether to choose A or B. So what is f(A)+f(B) in general if P0=(0,r)?

Remember, the following equations determine how pixels A and B lie with respect to the circle:

f(A) = (Xn + 1)^2 + Yn^2 - r^2
f(B) = (Xn + 1)^2 + (Yn - 1)^2 - r^2

Problem: Given a circle whose center is (0,0) with a radius of 10, determine the next point on the circle to be plotted and explain why.

Once we have determined which point A or B is to be plotted, we need to calculate the next sum (f(A)+f(B)) from that point. In particular, we would like to find the next sum needed to go from P1 to P2 such that it can be expressed in terms of the old sum. In general, we would like to show the case Pn with coordinates (Xn,Yn) and consider how the next two pixels are chosen. If this seems confusing, it is. Before deriving this, let's take a look at Bresenham's Circle Generating Algorithm and work through some steps.

procedure brescircle(radius, xcenter, ycenter: integer);
var
  xstart, ystart, sum: integer;
begin
  xstart := 0;
  ystart := radius;
  sum := 3 - 2*radius;
  while (xstart <= ystart) do
  begin
    set_eight(xstart, ystart, xcenter, ycenter);
    if (sum <= 0) then
      sum := sum + 4*xstart + 6
    else
    begin
      sum := sum + 4*(xstart - ystart) + 10;
      ystart := ystart - 1;
    end;
    xstart := xstart + 1;
  end;
end;


Problem: Show the first four points generated from the starting point (0,10) using Bresenham's algorithm.

Question: Where do the following two statements come from?
1) sum := sum + 4*xstart + 6
2) sum := sum + 4*(xstart-ystart) + 10

Consider the following diagram:

Pn is the point just plotted. From the picture we can see that if the sum < 0, then we continue on with pixel Pn+1. Notice that the coordinates for Pn+1 are (Xn +1,Yn). The case of sum=0 and sum>0 will be left for your enjoyment.

The expression for the sum from a point (Xn, Yn) to points A and B is:

sum = (Xn + 1)^2 + Yn^2 - r^2        {this is f(A)}
    + (Xn + 1)^2 + (Yn - 1)^2 - r^2  {this is f(B)}

sum = 2Xn^2 + 2Yn^2 + 4Xn - 2Yn - 2r^2 + 3

If (sum < 0) then we continue with Pn+1 = (Xn + 1, Yn)
else we continue with Pn+1 = (Xn + 1, Yn - 1)

Since we are assuming that the sum is less than zero, we continue on with the point (Xn + 1,Yn).

In order to go from here to Pn+2 which can be A or B, once again the value of the sum is computed as follows:

sum_new = (Xn + 2)^2 + Yn^2 - r^2        {f(A)}
        + (Xn + 2)^2 + (Yn - 1)^2 - r^2  {f(B)}


sum_new = 2Xn^2 + 2Yn^2 + 8Xn - 2Yn - 2r^2 + 9

What's interesting is that a closer look at sum_new shows that it can be written in terms of sum. That is:

sum_new = sum + 4Xn + 6

We can similarly define sum_new for the case when sum >= 0 as:

sum_new = sum + 4*(Xn - Yn) + 10

What is the expense of both of these operations?
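Both updates cost only a few additions and shifts per pixel. As a sanity check (my own sketch, not part of the notes), the incremental forms can be compared against the closed-form decision sum f(A) + f(B):

```c
/* Closed-form decision sum from pixel (x, y):
   sum = f(A) + f(B), with A = (x+1, y) and B = (x+1, y-1). */
int sum_from(int x, int y, int r)
{
    int fa = (x + 1)*(x + 1) + y*y - r*r;
    int fb = (x + 1)*(x + 1) + (y - 1)*(y - 1) - r*r;
    return fa + fb;
}
```

For any (x, y), sum_from(x+1, y, r) equals sum_from(x, y, r) + 4x + 6 (the sum < 0 case), and sum_from(x+1, y-1, r) equals sum_from(x, y, r) + 4(x - y) + 10 (the sum >= 0 case).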

(Midpoint Circle Algorithm)

Below is the circle routine that appears on page 102 of Hearn and Baker (modified slightly to work on my Mac).

void circleMidpoint(int xCenter, int yCenter, int radius)
{
    int x = 0;
    int y = radius;
    int p = 1 - radius;
    void circlePlotPoints(int xCenter, int yCenter, int x, int y);

    /* Plot first set of points */
    circlePlotPoints(xCenter, yCenter, x, y);

    while (x < y) {
        x++;
        if (p < 0)
            p += 2*x + 1;
        else {
            y--;
            p += 2 * (x - y) + 1;
        }
        circlePlotPoints(xCenter, yCenter, x, y);
    }
}

void circlePlotPoints(int xCenter, int yCenter, int x, int y)
{
    /* SetColorPixel(x, y, r, g, b) sets the pixel (x, y) to rgb (r, g, b) */
    SetColorPixel(xCenter + x, yCenter + y, 0, 0, 0);
    SetColorPixel(xCenter - x, yCenter + y, 0, 0, 0);
    SetColorPixel(xCenter + x, yCenter - y, 0, 0, 0);
    SetColorPixel(xCenter - x, yCenter - y, 0, 0, 0);
    SetColorPixel(xCenter + y, yCenter + x, 0, 0, 0);
    SetColorPixel(xCenter - y, yCenter + x, 0, 0, 0);
    SetColorPixel(xCenter + y, yCenter - x, 0, 0, 0);
    SetColorPixel(xCenter - y, yCenter - x, 0, 0, 0);
}

Assume that you have a function: void HLine(int x1, int x2, int y);
that efficiently draws the horizontal line from (x1, y) to (x2, y). For each of the shapes below, show the changes you would make to the circle code to generate the shape shown. USE HLine.
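As one possible direction (my sketch, not a given answer): a filled circle falls out of the midpoint loop by replacing the eight SetColorPixel calls with horizontal spans. HLine is assumed by the exercise, so it is stubbed here to count pixels, keeping the sketch self-contained:

```c
static long pixelCount = 0;

/* Stub standing in for the assumed HLine(x1, x2, y): here it just
   counts the pixels the span from (x1, y) to (x2, y) would set. */
void HLine(int x1, int x2, int y)
{
    (void)y;
    pixelCount += (x2 - x1 + 1);
}

/* Filled circle: the same midpoint loop, but each symmetric point
   pair becomes a horizontal span (spans may overlap; for a solid
   fill that overdraw is harmless). */
void filledCircleMidpoint(int xCenter, int yCenter, int radius)
{
    int x = 0, y = radius;
    int p = 1 - radius;
    HLine(xCenter - y, xCenter + y, yCenter);
    while (x < y) {
        x++;
        if (p < 0)
            p += 2*x + 1;
        else {
            y--;
            p += 2*(x - y) + 1;
        }
        HLine(xCenter - x, xCenter + x, yCenter + y);
        HLine(xCenter - x, xCenter + x, yCenter - y);
        HLine(xCenter - y, xCenter + y, yCenter + x);
        HLine(xCenter - y, xCenter + y, yCenter - x);
    }
}
```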

Aliasing and Antialiasing

(Terms of computer graphics)

When a line in a digital image runs at an angle, it will often appear with jagged edges. This effect is caused by the regular pixel grid of the image, and is called aliasing.


Example of aliased and antialiased line.

To avoid this effect, the process of antialiasing paints some nearby pixels in an intermediate color or brightness. That way, the visual appearance of the line (or an edge) is smoothed out.

The problem of aliasing is prevalent in synthetic images created by methods such as raytracing. Some programs try to circumvent it by sending more than one ray per pixel (oversampling), and interpolating their result. With the Radiance software, the same result is reached by rendering an image that is two or three times as big as the target size with rpict(1), and then filtering this intermediate image down to scale with pfilt(1). To get even smoother results, the image can also be rendered with jittered sampling.
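The filtering step amounts to averaging blocks of samples. A minimal C sketch of 2x downsampling (my own illustration of the render-large-then-filter-down idea, not Radiance code):

```c
/* Downsample a w x h greyscale image by 2 in each direction:
   each output pixel is the average of a 2x2 block of input samples. */
void downsample2x(const unsigned char *src, int w, int h, unsigned char *dst)
{
    for (int y = 0; y < h / 2; y++)
        for (int x = 0; x < w / 2; x++) {
            int s = src[2*y*w + 2*x]       + src[2*y*w + 2*x + 1]
                  + src[(2*y + 1)*w + 2*x] + src[(2*y + 1)*w + 2*x + 1];
            dst[y*(w/2) + x] = (unsigned char)(s / 4);
        }
}
```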

Original scene.

The bowl of fruit was modeled using constructive solid geometry (CSG). One scanline in the fruit bowl is highlighted. The graph shows the luminosity (brightness) function of the highlighted scanline.


Sampling the scene.

The rectangular grid superimposed over the fruit bowl shows the size of the pixels. The dot in the middle of each square shows the position of a sample. The color at the sample point will be the color of the pixel in the rendered image. The graph shows the corresponding sampled luminosity function of the highlighted scanline. Notice that a lot of information has been lost.


Rendered image

The rendered image differs greatly from the original scene, as does the luminosity signal. Notice that the green leaf has moved to the right in the rendered image.


Effects caused by aliasing.

The errors caused by aliasing are called artefacts. Common aliasing artefacts include jagged profiles, disappearing or improperly rendered fine detail, and disintegrating textures.

Jagged profiles.

The picture on the left shows the sampling grid superimposed on the original scene. The picture on the right is the rendered image. A jagged profile is quite evident in the rendered image. Also known as "jaggies", jagged silhouettes are probably the most familiar effect caused by aliasing. Jaggies are especially noticeable where there is a high contrast between the interior and the exterior of the silhouette.


Improperly rendered detail.

The original scene on the left shows a group of small polygons. In the rendered scene, one of the two red rectangles disappears entirely, and the other doubles in width. Two of the orange triangles disappear. Although the two yellow triangles are identical in size, one is larger than the other in the rendered image.


Disintegrating textures.

This is a checkered texture on a plane. The checkers should become smaller as the distance from the viewer increases. However, the checkers become larger or irregularly shaped when their distance from the viewer becomes too great. Simply increasing the resolution will not remove this artefact. Increasing the resolution will only move the artefact closer to the horizon.


Anti-aliasing


One of the most important techniques in making graphics and text easy to read and pleasing to the eye on-screen is anti-aliasing. Anti-aliasing is a cheaty way of getting round the low 72dpi resolution of the computer monitor and making objects appear as smooth as if they'd just stepped out of a 1200dpi printer (nearly).

Take a look at these images. The letter a on the left is un-anti-aliased and looks coarse compared with the letter on the right.

If we zoom in we can see better what's happening. Look at how the un-anti-aliased example below left breaks up curves into steps and jagged outcrops. This is what gives the letter its coarse appearance. The example on the right is the same letter, same point size and everything, but with anti-aliasing turned on in Photoshop's text tool. Notice how the program has substituted shades of grey around the lines which would otherwise be broken across a pixel.


But anti-aliasing is more than just making something slightly fuzzy so that you can't see the jagged edges: it's a way of fooling the eye into seeing straight lines and smooth curves where there are none. To see how anti-aliasing works, let's take a diagonal line drawn across a set of pixels. In the example left the pixels are marked by the grid: real pixels don't look like that of course, but the principle is the same.

Pixels around an un-anti-aliased line can only be part of the line or not part of it: so the computer draws the line as a jagged set of pixels roughly approximating the course of our original nice smooth line. (Trivia fact: anti-aliasing was invented at MIT's Media Lab. So glad they do do something useful there....)

When the computer anti-aliases the line it works out how much of each in-between pixel would be covered by the diagonal line and draws that pixel as an intermediate shade between background and foreground. In our simple-minded example here this is shades of grey. This close up the anti-aliasing is obvious and actually looks worse than the un-anti-aliased version, but try taking your glasses off, stepping a few yards back from the screen and screwing up your eyes a bit to emulate the effect of seeing the line on a VGA monitor covered in crud at its right size. Suddenly a nice, smooth line pops into view.


So how does one go about anti-aliasing an image? Just be grateful you don't have to do it by hand. Most screen design programs, including Photoshop and Paint Shop Pro, include anti-alias options for things like text and line tools. The important thing is simply to remember to do it, and to do it at the appropriate time.

There are far too many graphics out on the Web that are perfectly well-designed, attractive and fitted to their purpose but end up looking amateurish because they haven't been anti-aliased. Equally, there are plenty of graphics that have turned to visual mush because they've been overworked with the anti-alias tool.

Generally, I guess, the rules are these:

Always anti-alias text except when the text is very small. This is to taste but I reckon on switching off anti-aliasing in Photoshop below about 12 points. If you're doing a lot with text this size, you really oughtn't be putting it in a graphic but formatting ASCII instead.

Always anti-alias rasterised EPSs (see the accompanying page for details). Except when you don't want to, of course.

If attempting to anti-alias something manually, or semi-manually, such as by putting a grey halo round a block black graphic, then only apply the effect at the last possible stage. And always, always, always bear in mind the target background colour. It's a fat lot of good anti-aliasing a piece of blue text on a white background, if the target page is orange, because the anti-aliased halo is going to be shades of white-orange. I spent two hours re-colouring a logo after doing exactly that. Doh!

Never confuse blur and anti-aliasing. The former is a great help in making things appear nice and smooth if applied to specific parts of an image, but it'll make your image just look runny if used all over.

That's about it. Anti-aliasing is of immense importance, especially in turning EPSs into something pleasant to look at onscreen, as I explain in the next couple of pages.