
CHAPTER 1

INTRODUCTION

1. INTRODUCTION

In recent years, it has become much easier than before to alter the content of digital images using popular image editing software. Some altered images are irrational with respect to their content, making their fakery obvious to humans. However, many altered images cannot easily be determined to be real or fake. Fake images are loosely defined as images that do not express the original content. In other words, what humans see is not necessarily believable. Like it or not, fake images are everywhere, such as in movies, advertisements, etc. There is no general model to detect them, since the production of fake images leaves no visual clues of fakery. Fig. 1.1 gives some examples of fake images, which show that fake images are consistent in their signal characteristics with real images; only their contents are exaggerated. Fake images produced for the sake of cheating, especially for political advantage, can barely be distinguished based on their content alone.

Several concepts similar to image fakery have been introduced in the past few years, such as digital forgery and image splicing. Digital forgery refers to the manipulation and alteration of digital images, and a method that detects traces of resampling has been proposed to expose digital forgeries. Other statistical tools for detecting digital forgeries have also been proposed and analyzed, including techniques that exploit inconsistencies in digital camera imaging, digital image sampling, the direction of point light sources, principal component analysis and higher-order wavelet statistics. An image splicing model has been proposed to combine objects from different images into a new image. Aiming at detecting spliced images, a statistical model using bicoherence features, originally designed for analyzing human speech signals, has been proposed.

Practically, it is hard to say whether an image is real or fake from viewing its content alone, because the purpose of creating a fake image is to alter the content by adding, removing or replacing objects so that the altered image looks real. For Fig. 1.1(a), it is illogical for a flying hawk to carry a bomb, so this image can easily be concluded to be fake. The same conclusion applies to Fig. 1.1(b). Unfortunately, most fake images cannot be distinguished visually, and appropriate methods should be developed to detect them.

Fig 1.1: Fake Images

CHAPTER 2

AIM AND SCOPE

2. AIM AND SCOPE

It is much easier than before to alter the content of digital images using popular image editing software. Some of these images are irrational with respect to their content, making their fakery obvious to humans; however, many altered images cannot easily be determined to be real or fake. Fake images are loosely defined as images that do not express the original content, and there is no general model to detect them, since the production of fake images leaves no visual clues of fakery. Fake images are consistent in their signal characteristics with real images; only their contents are exaggerated, so fake images produced for the sake of cheating, especially for political advantage, can barely be distinguished based on their content alone. With the great convenience of computer graphics and digital imaging, it has become much easier to alter the content of images without leaving any visual traces of these manipulations, i.e., many fake images are produced whose content is feigned. Thus images cannot be judged real or fake by visual inspection. The support vector machine (SVM) is a technique for universal data classification. In recent years, SVMs have been used in many applications, such as pattern recognition, industrial engineering, digital watermarking and so on. Generally, SVMs are deemed easier and better to use than traditional neural network models.

CHAPTER 3

DIGITAL IMAGE PROCESSING

3. DIGITAL IMAGE PROCESSING

3.1 BACKGROUND

Digital image processing is an area characterized by the need for extensive experimental work to establish the viability of proposed solutions to a given problem. An important characteristic underlying the design of image processing systems is the significant level of testing and experimentation that normally is required before arriving at an acceptable solution. This characteristic implies that the ability to formulate approaches and quickly prototype candidate solutions generally plays a major role in reducing the cost and time required to arrive at a viable system implementation.

3.2 WHAT IS DIP?

An image may be defined as a two-dimensional function f(x, y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of DIP refers to processing digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are called pixels.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images.

There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. This is a limiting and somewhat artificial boundary. The area of image analysis (image understanding) lies in between image processing and computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end to complete vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid- and high-level processes. Low-level processes involve primitive operations such as noise reduction, contrast enhancement and image sharpening; a low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processes involve tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification of individual objects; a mid-level process is characterized by the fact that its inputs generally are images but its outputs are attributes extracted from those images. Finally, high-level processing involves making sense of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with human vision.

Digital image processing, as already defined, is used successfully in a broad range of areas of exceptional social and economic value.

3.3 WHAT IS AN IMAGE?

An image is represented as a two-dimensional function f(x, y), where x and y are spatial coordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity of the image at that point.

3.3.1 GRAY SCALE IMAGE

A gray scale image is a function I(x, y) of the two spatial coordinates of the image plane, where I(x, y) is the intensity of the image at the point (x, y). I(x, y) takes non-negative values; assuming the image is bounded by a rectangle [0, a] × [0, b], we have I: [0, a] × [0, b] → [0, ∞).

3.3.2 COLOR IMAGE

A color image can be represented by three functions: R(x, y) for red, G(x, y) for green and B(x, y) for blue.

An image may be continuous with respect to the x and y coordinates and also in amplitude. Converting such an image to digital form requires that both the coordinates and the amplitude be digitized. Digitizing the coordinate values is called sampling; digitizing the amplitude values is called quantization.
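The two steps above can be illustrated in a few lines. The report's own code is MATLAB; the following is only a NumPy sketch on a synthetic 1-D intensity profile, with all parameter values (N, L) chosen for illustration.

```python
import numpy as np

# A synthetic 1-D intensity profile, standing in for one scan line of a
# continuous image. All names and parameter values are illustrative.
x = np.linspace(0.0, 1.0, 1000)            # "continuous" spatial coordinate
f = 0.5 + 0.5 * np.sin(2 * np.pi * x)      # continuous-valued intensity in [0, 1]

# Sampling: keep only N discrete spatial positions.
N = 8
samples = f[:: len(x) // N][:N]

# Quantization: map each sampled amplitude to one of L discrete gray levels.
L = 4
levels = np.round(samples * (L - 1)).astype(int)   # integers 0 .. L-1
quantized = levels / (L - 1)                       # back onto a [0, 1] grid
```

The quantization error per sample is at most half a gray-level step, i.e. 0.5 / (L - 1).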

3.4 COORDINATE CONVENTION

The result of sampling and quantization is a matrix of real numbers. We use two principal ways to represent digital images. Assume that an image f(x, y) is sampled so that the resulting image has M rows and N columns; we say that the image is of size M × N. The values of the coordinates (x, y) are discrete quantities, and for notational clarity and convenience we use integer values for them. In many image processing books, the image origin is defined to be at (x, y) = (0, 0), and the next coordinate values along the first row of the image are (x, y) = (0, 1). It is important to keep in mind that the notation (0, 1) signifies the second sample along the first row; it does not mean that these are the actual physical coordinates when the image was sampled. The following figure shows the coordinate convention: x ranges from 0 to M-1 and y from 0 to N-1 in integer increments.

The coordinate convention used in the toolbox to denote arrays differs from the preceding paragraph in two minor ways. First, instead of (x, y), the toolbox uses the notation (r, c) to indicate rows and columns. The order of coordinates is the same as in the previous paragraph, in the sense that the first element of a coordinate tuple, (a, b), refers to a row and the second to a column. The other difference is that the origin of the coordinate system is at (r, c) = (1, 1); thus, r ranges from 1 to M and c from 1 to N in integer increments. IPT documentation refers to these as pixel coordinates. Less frequently, the toolbox also employs another convention called spatial coordinates, which uses x to refer to columns and y to refer to rows, the opposite of our use of the variables x and y.

3.5 IMAGE AS MATRICES

The preceding discussion leads to the following representation for a digitized image function:

f(x, y) = [ f(0,0)     f(0,1)     ...  f(0,N-1)
            f(1,0)     f(1,1)     ...  f(1,N-1)
            ...        ...             ...
            f(M-1,0)   f(M-1,1)   ...  f(M-1,N-1) ]

The right side of this equation is a digital image by definition. Each element of this array is called an image element, picture element, pixel or pel. The terms image and pixel are used throughout the rest of our discussions to denote a digital image and its elements.

A digital image can be represented naturally as a MATLAB matrix:

f = [ f(1,1)   f(1,2)   ...  f(1,N)
      f(2,1)   f(2,2)   ...  f(2,N)
      ...      ...           ...
      f(M,1)   f(M,2)   ...  f(M,N) ]

where f(1,1) = f(0,0) (note the use of a monospace font to denote MATLAB quantities). Clearly the two representations are identical, except for the shift in origin. The notation f(p, q) denotes the element located in row p and column q; for example, f(6, 2) is the element in the sixth row and second column of the matrix f. Typically we use the letters M and N to denote the number of rows and columns in a matrix, respectively. A 1 × N matrix is called a row vector, whereas an M × 1 matrix is called a column vector; a 1 × 1 matrix is a scalar.
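The image-as-matrix view above can be sketched outside MATLAB as well. The following NumPy fragment (0-based indexing, so f[0, 0] corresponds to MATLAB's f(1, 1)) uses a small made-up 3 × 4 image.

```python
import numpy as np

# A small digital image as a matrix with M = 3 rows and N = 4 columns.
# The pixel values are arbitrary illustrative numbers.
f = np.array([[10,  20,  30,  40],
              [50,  60,  70,  80],
              [90, 100, 110, 120]])

M, N = f.shape             # number of rows and columns

row_vector = f[1:2, :]     # a 1 x N matrix (row vector)
col_vector = f[:, 2:3]     # an M x 1 matrix (column vector)
scalar = f[0:1, 0:1]       # a 1 x 1 matrix (a scalar)
```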

Matrices in MATLAB are stored in variables with names such as A, a, RGB, real_array and so on. Variables must begin with a letter and contain only letters, numerals and underscores. As noted in the previous paragraph, all MATLAB quantities are written using monospace characters. We use conventional Roman italic notation, such as f(x, y), for mathematical expressions.

3.6 READING IMAGES

Images are read into the MATLAB environment using the function imread, whose syntax is

imread('filename')

Format name   Description                        Recognized extensions
TIFF          Tagged Image File Format           .tif, .tiff
JPEG          Joint Photographic Experts Group   .jpg, .jpeg
GIF           Graphics Interchange Format        .gif
BMP           Windows Bitmap                     .bmp
PNG           Portable Network Graphics          .png
XWD           X Window Dump                      .xwd

Table 1.1: Different image formats

Here filename is a string containing the complete name of the image file (including any applicable extension). For example, the command line

>> f = imread('chestxray.jpg');

reads the JPEG image chestxray (see the table above) into image array f. Note the use of single quotes (') to delimit the string filename. The semicolon at the end of a command line is used by MATLAB to suppress output; if a semicolon is not included, MATLAB displays the results of the operation(s) specified in that line. The prompt symbol (>>) designates the beginning of a command line, as it appears in the MATLAB command window.

3.7 DATA CLASSES

Although we work with integer coordinates, the values of pixels themselves are not restricted to be integers in MATLAB. The table below lists the various data classes supported by MATLAB and IPT for representing pixel values. The first eight entries in the table are referred to as numeric data classes, the ninth entry is the char class and, as shown, the last entry is referred to as the logical data class.

All numeric computations in MATLAB are done in double quantities, so double is also a frequent data class encountered in image processing applications. Class uint8 is also encountered frequently, especially when reading data from storage devices, as 8-bit images are the most common representation found in practice. These two data classes, class logical, and, to a lesser degree, class uint16 constitute the primary data classes on which we focus; many IPT functions, however, support all the data classes listed in the table. Data class double requires 8 bytes to represent a number; uint8 and int8 require one byte each; uint16 and int16 require 2 bytes; and uint32, int32 and single require 4 bytes each. The char data class holds characters in Unicode representation; a character string is merely a 1 × n array of characters. A logical array contains only the values 0 and 1, with each element stored in one byte of memory; logical arrays are created using the function logical or by using relational operators.
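The per-element storage sizes quoted above can be checked directly. This NumPy sketch uses the NumPy equivalents of MATLAB's classes (double → float64, single → float32, the integer classes by the same names):

```python
import numpy as np

# Bytes per element for the numeric data classes discussed above.
sizes = {
    "double": np.dtype(np.float64).itemsize,   # 8 bytes
    "uint8":  np.dtype(np.uint8).itemsize,     # 1 byte
    "int8":   np.dtype(np.int8).itemsize,      # 1 byte
    "uint16": np.dtype(np.uint16).itemsize,    # 2 bytes
    "int16":  np.dtype(np.int16).itemsize,     # 2 bytes
    "uint32": np.dtype(np.uint32).itemsize,    # 4 bytes
    "int32":  np.dtype(np.int32).itemsize,     # 4 bytes
    "single": np.dtype(np.float32).itemsize,   # 4 bytes
}
```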

Name      Description
double    Double-precision floating-point numbers (8 bytes per element)
uint8     Unsigned 8-bit integers in the range [0, 255] (1 byte per element)
uint16    Unsigned 16-bit integers in the range [0, 65535] (2 bytes per element)
uint32    Unsigned 32-bit integers in the range [0, 4294967295] (4 bytes per element)
int8      Signed 8-bit integers in the range [-128, 127] (1 byte per element)
int16     Signed 16-bit integers in the range [-32768, 32767] (2 bytes per element)
int32     Signed 32-bit integers in the range [-2147483648, 2147483647] (4 bytes per element)
single    Single-precision floating-point numbers (4 bytes per element)
char      Characters (2 bytes per element)
logical   Values are 0 or 1 (1 byte per element)

Table 1.2: Different Data types

3.8 IMAGE TYPES

The toolbox supports four types of images:

1. Intensity images;
2. Binary images;
3. Indexed images;
4. RGB images.

Most monochrome image processing operations are carried out using binary or intensity images, so our initial focus is on these two image types; indexed and RGB colour images are discussed afterwards.

3.8.1 INTENSITY IMAGES

An intensity image is a data matrix whose values have been scaled to represent intensities. When the elements of an intensity image are of class uint8 or class uint16, they have integer values in the range [0, 255] and [0, 65535], respectively. If the image is of class double, the values are floating-point numbers; values of scaled double intensity images are in the range [0, 1] by convention.
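The scaling convention above can be shown in a short NumPy sketch (a stand-in for MATLAB; the pixel values are illustrative):

```python
import numpy as np

# A uint8 intensity image with values in [0, 255].
img_u8 = np.array([[0,  64],
                   [128, 255]], dtype=np.uint8)

# Converting to a scaled double image maps intensities into [0, 1],
# the conventional range for floating-point intensity images.
img_double = img_u8.astype(np.float64) / 255.0
```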

Fig 3.8.1: Intensity Image

3.8.2 BINARY IMAGES

Binary images have a very specific meaning in MATLAB: a binary image is a logical array of 0s and 1s. Thus, an array of 0s and 1s whose values are of a numeric data class, say uint8, is not considered a binary image in MATLAB. A numeric array is converted to binary using the function logical. Thus, if A is a numeric array consisting of 0s and 1s, we create a logical array B using the statement

B = logical(A)

If A contains elements other than 0s and 1s, the logical function converts all nonzero quantities to logical 1s and all entries with value 0 to logical 0s. Using relational and logical operators also creates logical arrays. To test whether an array is logical we use the islogical function: islogical(C) returns 1 if C is a logical array and 0 otherwise. Logical arrays can be converted to numeric arrays using the data class conversion functions.
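The logical conversion rules above can be mirrored with NumPy booleans (a sketch, not MATLAB itself; the array values are illustrative):

```python
import numpy as np

# A numeric array of 0s and nonzero values is NOT a binary (logical) image
# by itself; it must be converted, mirroring MATLAB's logical() function.
A = np.array([[0, 2],
              [1, 0]], dtype=np.uint8)

B = A.astype(bool)          # logical(A): nonzero entries -> True, zeros -> False
C = A > 0                   # relational operators also yield logical arrays
back = B.astype(np.uint8)   # conversion back to a numeric class
```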

Figure 3.8.2: Binary Image

3.8.3 INDEXED IMAGES

An indexed image has two components:

An integer data matrix, x

A color map matrix, map

The matrix map is an m × 3 array of class double containing floating-point values in the range [0, 1]. The length m of the map is equal to the number of colors it defines, and each row of map specifies the red, green and blue components of a single color. An indexed image uses direct mapping of pixel intensity values to colormap values: the color of each pixel is determined by using the corresponding value of the integer matrix x as a pointer into map. If x is of class double, then all of its components with values less than or equal to 1 point to the first row in map, all components with value 2 point to the second row, and so on. If x is of class uint8 or uint16, then all components with value 0 point to the first row in map, all components with value 1 point to the second, and so on.
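The direct mapping described above is a single array lookup. This NumPy sketch uses 0-based indices, matching the uint8/uint16 convention (a made-up 2 × 2 index matrix and a 3-color map):

```python
import numpy as np

# An indexed image: an integer matrix x plus an m x 3 colormap of doubles
# in [0, 1]. The colors chosen here are illustrative.
x = np.array([[0, 1],
              [2, 1]])                  # pixel values are pointers into map
cmap = np.array([[0.0, 0.0, 0.0],      # row 0: black
                 [1.0, 0.0, 0.0],      # row 1: red
                 [1.0, 1.0, 1.0]])     # row 2: white

rgb = cmap[x]                           # direct mapping: (2, 2) -> (2, 2, 3)
```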

Figure 3.8.3: Indexed Image

3.8.4 RGB IMAGE

An RGB color image is an M × N × 3 array of color pixels, where each color pixel is a triplet corresponding to the red, green and blue components of an RGB image at a specific spatial location. An RGB image may be viewed as a stack of three gray scale images that, when fed into the red, green and blue inputs of a color monitor, produce a color image on the screen. By convention, the three images forming an RGB color image are referred to as the red, green and blue component images. The data class of the component images determines their range of values: if an RGB image is of class double, the range of values is [0, 1].

Similarly, the range of values is [0, 255] or [0, 65535] for RGB images of class uint8 or uint16, respectively. The number of bits used to represent the pixel values of the component images determines the bit depth of an RGB image. For example, if each component image is an 8-bit image, the corresponding RGB image is said to be 24 bits deep.

Generally, the number of bits in all component images is the same. In this case the number of possible colors in an RGB image is (2^b)^3, where b is the number of bits in each component image. For the 8-bit case the number is 16,777,216 colors.
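The color count above is a one-line formula:

```python
# Number of representable colors in an RGB image with b bits per
# component image, as stated above: (2 ** b) ** 3.
def rgb_colors(bits_per_component: int) -> int:
    return (2 ** bits_per_component) ** 3
```

For b = 8 this gives the 16,777,216 colors of "true color" 24-bit RGB.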

Figure 3.8.4: RGB Image

CHAPTER 4

WATERMARKING

4.1 ELEMENTS OF A WATERMARKING SYSTEM

A watermarking system can be viewed as a communication system consisting of three main elements: an embedder, a communication channel and a detector. Watermark information is embedded into the signal itself, instead of being placed in the header of a file or protected by encryption as in other security techniques, in such a way that it is extractable by the detector. To be more specific, the watermark information is embedded within the host signal before the watermarked signal is transmitted over the communication channel, so that the watermark can be detected at the receiving end, that is, at the detector.

A general watermarking system is illustrated in Fig. 4.1. The dotted lines represent optional components, which may or may not be required according to the application. First of all, a watermark Wo is generated by the watermark generator, possibly with a secret watermark generation key Kg. The watermark Wo can be a logo, or a pseudo-random signal.

Figure 4.1: A general watermarking system

Instead of directly embedding it into the host signal, the watermark Wo can be pre-coded to optimize the embedding process, i.e., to increase robustness against possible signal processing operations or the imperceptibility of the watermark. This is done by an information coder, which may require the original signal So.

The outcome of the information coding component is denoted by the symbol W, which, together with the original signal So and possibly a secret key K, is taken as input by the embedder. The secret key K is intended to differentiate between authorized and unauthorized users at the detector in the absence of Kg. The embedder takes in W, So and K, so as to hide W within So as imperceptibly as possible with the help of K, and produces the watermarked signal Sw. Afterwards, Sw enters the communication channel, where a series of unknown signal processing operations and attacks may take place; the outcome of the communication channel is denoted by the symbol Sw'. At the receiving end, the detector works in an inversely similar way to the embedder, and it may require the secret keys Kg and K, and the original signal So. The detector then reads Sw' and decides whether the received signal carries the legal watermark.

4.2 TYPES OF WATERMARKS

Watermarks and watermarking techniques can be divided into various categories in various ways. Watermarking techniques can be divided into four categories according to the type of document to be watermarked:

1. Text watermarking
2. Image watermarking
3. Audio watermarking
4. Video watermarking

Alternatively, digital watermarks can be divided into three different types:

1. Visible watermark
2. Invisible-robust watermark
3. Invisible-fragile watermark

A visible watermark is a secondary translucent image overlaid onto the primary image; the watermark is apparent to a casual viewer on careful inspection. An invisible-robust watermark is embedded in such a way that the alterations made to the pixel values are perceptually unnoticeable, and it can be recovered only with an appropriate decoding mechanism. An invisible-fragile watermark is embedded in such a way that any manipulation or modification of the image would alter or destroy the watermark.

Also, the digital watermarks can be divided into two different types according to the necessary data for extraction:

Informed (or private) watermarking: the original unwatermarked cover is required to perform the extraction process.

Blind (or public) watermarking: the original unwatermarked cover is not required to perform the extraction process.

4.3 WATERMARK REQUIREMENTS

In this section, we study a number of watermarking system requirements as well as the tradeoffs among them.

Security: The security requirement of a watermarking system can differ slightly depending on the application. Watermarking security implies that the watermark should be difficult to remove or alter without damaging the host signal. As all watermarking systems seek to protect watermark information, without loss of generality, watermarking security can be regarded as the ability to ensure the secrecy and integrity of the watermark information and to resist malicious attacks.

Imperceptibility: Imperceptibility refers to the perceptual transparency of the watermark. Ideally, no perceptible difference between the watermarked and original signal should exist [4, 5]. A straightforward way to reduce distortion during the watermarking process is to embed the watermark into the perceptually insignificant portion of the host signal [5]. However, this makes it easy for an attacker to alter the watermark information without being noticed.

Capacity: Watermarking capacity normally refers to the amount of information that can be embedded into a host signal. Generally speaking, the capacity requirement always struggles against two other important requirements, namely imperceptibility and robustness (Fig. 4.3). A higher capacity is usually obtained at the expense of robustness strength, imperceptibility, or both.

Figure 4.3: The tradeoffs among imperceptibility, robustness and capacity

Robustness: Watermark robustness accounts for the capability of the watermark to survive signal manipulations. Apart from malicious attacks, common signal processing operations can pose a threat to the detection of the watermark, making it desirable to design a watermark that can survive those operations. For example, a good strategy for robustly embedding a watermark into an image is to insert it into perceptually significant parts of the image. Robustness is then helped in the case of lossy compression, which usually discards perceptually insignificant data: data hidden in perceptually significant portions is likely to survive lossy compression. However, as this portion of the host signal is more sensitive to alterations, watermarking may produce visible distortions in the host signal. The exact level of robustness an algorithm must possess cannot be specified without considering the application scenario [6]. Not all watermarking applications require a watermark robust enough to survive all attacks and signal processing operations; indeed, a watermark needs only to survive the attacks and signal processing operations that are likely to occur while the watermarked signal is in the communication channel. In an extreme case, robustness may be completely irrelevant where fragility is desirable.

4.4 WATERMARKING TECHNIQUES

Several different methods enable watermarking in the spatial domain. The simplest (too simple for many applications) is just to flip the lowest-order bit of chosen pixels. This works well only if the image is not subject to any modification. A more robust watermark can be embedded by superimposing a symbol over an area of the picture; the resulting mark may be visible or not, depending upon the intensity value. Picture cropping (a common operation of image editors) can, however, be used to eliminate such a watermark.

Spatial watermarking can also be applied using color separation, so that the watermark appears in only one of the color bands. This renders the watermark visibly subtle, making it difficult to detect under regular viewing. However, the mark appears immediately when the colors are separated for printing, rendering the document useless to the printer unless the watermark can be removed from the color band. This approach is used commercially to let journalists inspect digital pictures from a photo-stock house before buying unmarked versions.
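The simplest spatial-domain technique, writing a watermark into the lowest-order bit of each pixel, can be sketched in a few lines of NumPy (illustrative only; real schemes choose pixels and bits more carefully):

```python
import numpy as np

# LSB watermarking sketch: one watermark bit per pixel, written into the
# lowest-order bit. Image size and random seed are illustrative.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)   # host image
mark = rng.integers(0, 2, size=(4, 4), dtype=np.uint8)      # one bit per pixel

watermarked = (image & 0xFE) | mark    # clear each LSB, then set the mark bit
extracted = watermarked & 1            # reading the LSB plane recovers the mark
```

Each pixel changes by at most one gray level, which is why the mark is imperceptible, and why any modification of the image destroys it.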

Watermarking can be applied in the frequency domain (and other transform domains) by first applying a transform such as the Fast Fourier Transform (FFT). In a similar manner to spatial-domain watermarking, the values of chosen frequencies can be altered from the original. Since high frequencies will be lost by compression or scaling, the watermark signal is applied to lower frequencies, or better yet, applied adaptively to frequencies that contain important information of the original picture. Since watermarks applied in the frequency domain are dispersed over the entire spatial image upon inverse transformation, this method is not as susceptible to defeat by cropping as the spatial technique. However, there is more of a tradeoff between invisibility and decodability, since the watermark is in effect applied indiscriminately across the spatial image. Table 4.4 shows a small comparison between the two techniques.
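A minimal sketch of the frequency-domain idea, assuming a simple additive mark on one low-frequency FFT coefficient (the coefficient position and strength alpha are illustrative choices, not a standard scheme):

```python
import numpy as np

# Frequency-domain watermarking sketch on a synthetic grayscale image.
rng = np.random.default_rng(1)
image = rng.random((32, 32))           # intensities in [0, 1]

F = np.fft.fft2(image)

alpha = 5.0                            # embedding strength (illustrative)
F[1, 2] += alpha                       # mark a low-frequency coefficient...
F[-1, -2] += alpha                     # ...and its conjugate-symmetric partner,
                                       # so the inverse transform stays real

marked = np.real(np.fft.ifft2(F))

# Detection with access to the original: the marked coefficient stands out.
diff = np.fft.fft2(marked) - np.fft.fft2(image)
```

Because the perturbation is spread by the inverse transform over all 1024 pixels, the per-pixel change is tiny even though the mark is easy to detect in the frequency domain.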

                          Spatial Domain          Frequency Domain
Computation Cost          Low                     High
Robustness                Fragile                 More Robust
Perceptual Quality        High Control            Low Control
Capacity                  High (depends on the    Low
                          size of the image)
Example of Applications   Mainly Authentication   Copyrights

Table 4.4: Comparison between Watermarking Techniques

CHAPTER 5

FRAGILE WATERMARKING

5. FRAGILE WATERMARKING TECHNIQUE

5.1 FRAGILE WATERMARKING

A watermark is said to be fragile if the watermark hidden within the host signal is destroyed as soon as the watermarked signal undergoes any manipulation. When a fragile watermark is present in a signal, we can infer with high probability that the signal has not been altered.

Fragile watermarking authentication has an interesting variety of functionalities, including tampering localization and discrimination between malicious and non-malicious manipulations. Tampering localization is critical because knowledge of where the image has been altered can be used to indicate the valid region of the image and to infer the motive and the possible adversaries. Moreover, the type of alteration may be determined from the tampering localization.

As for fragile watermarks used for authentication and proof of integrity, the attacker is no longer interested in making the watermarks unreadable; indeed, disturbing this type of watermark is easy because of its fragility. The goal of the attacker is, conversely, to produce a fake but legally watermarked signal. Such host media forgery can be achieved either by making undetectable modifications to the watermarked signal or by inserting a fake watermark into a desired signal.

Now, it is necessary to formulate the unique features of fragile watermarking systems in order to demonstrate which features are well sought after. These features can also serve in theoretical analysis for making comparisons among algorithms:

1) High-resolution tampering localization. This is an important merit of fragile watermarking systems, as it is one of the features that makes watermarking outweigh cryptography in some applications. The outcome of a detector can be as simple as authentic/tampered, but a result indicating which portions of an image are tampered is more desirable.

2) Tampering detection with low false positives. A good fragile watermarking system should give a sound tamper indication, stating both the statistical tampering probability and the tampering localization, with a low false positive rate. If the tampering localization is required to be accurate, the false positive rate must be kept low, or localization resolution is compromised; for example, the boundary of a tampered region may be obscured when the false positive rate is not low enough, even if the block size is small.

3) Geometric manipulation detectability. The watermark should be correctly read by the detector in the intact portions after geometric manipulations such as image cropping. Further, the ability of the detector to indicate where the cropping took place is of crucial importance in some applications.

4) Attack identification. With proper settings, the detector is also able to estimate what kind of modification occurred to an attacked image. This includes the ability to differentiate geometric attacks from other attacks; it implies that cropping a part of the image will not disturb the whole watermark.

5) Proper embedding sequence. This implies that the selection of dependency is limited to the previously watermarked portion of the image. If localization is required, the dependency information for the to-be-watermarked pixel can only be chosen from its neighboring, already-watermarked pixels, because the to-be-watermarked pixel can only depend on content information that will not be changed later; otherwise the watermark will not be recognized by the detector. Raster-scan and zig-zag scan orders are both widely used [3].

6) Blind detection. For practicality, watermark detectors should not require an original copy; otherwise there would be no need for watermarking, as verification could be performed by simply comparing the received image with the original. The watermark extraction should therefore be blind.

In the following section, we introduce a fragile watermarking scheme which is immune to all forgery attacks and still retains high performance in every aspect of the system.

5.2 FRAGILE WATERMARKING SCHEME

The combination of contextual and non-deterministic information has proved to be one of the most effective means of improving the security of a watermarking system. The discussed scheme depends on both contextual information and non-deterministic information to perform watermarking. Contextual information, or dependency information, refers to information from other portions of the image. Non-deterministic information usually relies on randomly chosen parameters, so that it can produce a unique signature; in this way, the non-deterministic signatures of two identical blocks at the same position in two images are different, even when they have the same neighborhood. As is shown later, dependency information is an effective means of countering basic forgery attacks, while the addition of non-deterministic information can thwart more advanced ones.

5.3 FEATURE CONSTRUCTION

In order to carry out fake image classification using SVM, we first need to construct the set of features xi, which is described in detail in this section. Firstly, a fragile digital watermarking scheme is introduced as an assistant, which is used to find the alterations of an image. Then, the feature vector is constructed by convolving two vectors obtained from the detected alteration matrix.

In our fake image detection scheme, the altered area in an image is detected first. Here, we make use of a least significant bit (LSB) based fragile watermarking scheme to watermark the original image, and the watermarked image is then made public. Digital watermarking has been developed for over 10 years. The main contributions are in robust watermarking, which emphasizes the existence of the watermark. Fragile watermarking, on the other hand, has received less attention; it is concerned with achieving sensitivity to alterations of the image. With the assistance of fragile watermarking, even a slight alteration to the test image can be detected. In the fragile watermarking scheme adopted here, a watermark W with the same size as the original image, i.e., m×n, is constructed and embedded into the LSB plane of the original image I. Each watermark element is inserted into the LSB of the corresponding image pixel, giving a watermarked image IW. When the watermark is extracted from the test image I, there will be a difference between the extracted watermark We and the original watermark W if the watermarked image has been altered.

Based on the difference, a matrix can be obtained as follows:

A = XOR(W, We)

where A is a matrix composed of 0s and 1s with size m×n. It can easily be concluded that when A(s,t) = 1, the pixel I(s,t) in the test image has been altered; when A(s,t) = 0, it has not. Here (s,t) is the pixel index, with s = 1, 2, ..., m and t = 1, 2, ..., n. A sample detection process is shown in Figure 5.3, where the white pixels in Figure 5.3(b) denote the altered pixels.
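The detection step can be sketched in a few lines (an illustrative pure-Python analogue of the MATLAB scheme, not the original implementation; the 3-by-3 image and watermark values are invented for the example):

```python
def embed_lsb(image, watermark):
    """Embed each watermark bit into the LSB of the matching pixel."""
    return [[(p & ~1) | w for p, w in zip(img_row, w_row)]
            for img_row, w_row in zip(image, watermark)]

def extract_lsb(image):
    """Read back the LSB plane of a (possibly altered) image."""
    return [[p & 1 for p in row] for row in image]

def alteration_map(w, w_e):
    """A = XOR(W, We): 1 marks an altered pixel, 0 an untouched one."""
    return [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(w, w_e)]

# Hypothetical 3x3 grayscale image and binary watermark
I = [[120, 130, 140], [150, 160, 170], [180, 190, 200]]
W = [[1, 0, 1], [0, 1, 0], [1, 1, 0]]

IW = embed_lsb(I, W)   # watermarked image, released to the public
IW[1][2] = 99          # an attacker alters one pixel
A = alteration_map(W, extract_lsb(IW))
# A(s,t) = 1 only where the LSB no longer matches the watermark
```

Note that an alteration is flagged only when it changes the LSB of a pixel; this is an inherent limitation of LSB-based fragile watermarking.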

Figure 5.3 (a) Fake image of Lena (b) Area detected with alteration

5.4 CONSTRUCTION OF FEATURE VECTORS

Once we get the matrix A, two column vectors U = {u1, u2, ..., um}T and V = {v1, v2, ..., vn}T can be obtained as follows:

which give the number of altered pixels along the rows and the columns of A, respectively. It is obvious that U and V reveal the altered area in the test image from two directions. In the proposed scheme, we use two methods to construct the feature vector. The first is to join the two vectors U and V as follows:

where + denotes the joint (concatenation) operator, and the length l of the feature vector is m + n. The other way to construct the feature vector is to convolve U and V, so that a feature vector xi with length l = m + n - 1 is obtained as follows:

where * denotes the convolution operator, and the k-th element of the feature vector xi can be obtained from the convolution equation as follows:

Figure 5.4 shows a sample of the U, V and U * V curves. Once the feature vector is constructed, the training set can be obtained as T = {xi, yi}, where yi is the fakery indicator of the i-th image, i.e., either +1 or -1.
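The two constructions can be sketched as follows (a pure-Python illustration with a small invented alteration matrix; the joint operator is plain concatenation, and the convolution is computed element by element):

```python
def row_col_profiles(A):
    """U counts altered pixels per row of A, V per column."""
    U = [sum(row) for row in A]
    V = [sum(col) for col in zip(*A)]
    return U, V

def joint_feature(U, V):
    """First construction: concatenate U and V (length m + n)."""
    return U + V

def conv_feature(U, V):
    """Second construction: linear convolution of U and V
    (length m + n - 1): x[k] = sum_j U[j] * V[k - j]."""
    m, n = len(U), len(V)
    x = [0] * (m + n - 1)
    for j, u in enumerate(U):
        for i, v in enumerate(V):
            x[j + i] += u * v
    return x

# Hypothetical 3x4 alteration matrix (1 = altered pixel)
A = [[0, 1, 1, 0],
     [0, 1, 1, 0],
     [0, 0, 0, 0]]
U, V = row_col_profiles(A)   # U = [2, 2, 0], V = [0, 2, 2, 0]
x1 = joint_feature(U, V)     # length 3 + 4 = 7
x2 = conv_feature(U, V)      # length 3 + 4 - 1 = 6
```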

Figure 5.4 The curves of U, V and their convolution U * V are shown in (a), (b) and (c) respectively.

CHAPTER 6

CONVOLUTION

6.1 CONVOLUTION

Convolution is a mathematical way of combining two signals to form a third signal. It is the single most important technique in digital signal processing. Using the strategy of impulse decomposition, systems are described by a signal called the impulse response. Convolution is important because it relates the three signals of interest: the input signal, the output signal, and the impulse response. This chapter presents convolution from two different viewpoints, called the input side algorithm and the output side algorithm. Convolution provides the mathematical framework for DSP.

6.2 THE DELTA FUNCTION & IMPULSE RESPONSE

An impulse is a signal composed of all zeros, except a single nonzero point. In effect, impulse decomposition provides a way to analyze signals one sample at a time. The input signal is decomposed into simple additive components, each of these components is passed through a linear system, and the resulting output components are synthesized (added). The signal resulting from this divide-and-conquer procedure is identical to that obtained by directly passing the original signal through the system. While many different decompositions are possible, two form the backbone of signal processing: impulse decomposition and Fourier decomposition. When impulse decomposition is used, the procedure can be described by a mathematical operation called convolution. Convolution also applies to continuous signals, but the mathematics is more complicated.

Convolution is a formal mathematical operation, just as multiplication, addition, and integration are. Addition takes two numbers and produces a third number, while convolution takes two signals and produces a third signal. Convolution is used in the mathematics of many fields, such as probability and statistics. In linear systems, convolution is used to describe the relationship between three signals of interest: the input signal, the impulse response, and the output signal.

6.3 CONVOLUTION PROPERTIES

6.3.1 IDENTITY

The result of a convolution with a delta function is the signal itself:

x(i) = δ(i) * x(i)

Each output sample is the sum of the corresponding input sample (weighted by 1) and all other samples weighted by 0, i.e., the sample value itself.

6.3.2 COMMUTATIVE

Changing the order of the operands does not change the result of the convolution operation. This means that the distinction between impulse response and signal is of no mathematical consequence in the context of convolution:

h(i) * x(i) = x(i) * h(i)

6.3.3 ASSOCIATIVE

The associative property of convolution means that changing the order of subsequent convolution operations does not change the overall result. When applying two or more filters to a signal, the output will be identical for every ordering of the filters. This means that

(g(i) * h(i)) * x(i) = g(i) * (h(i) * x(i))

6.3.4 DISTRIBUTIVE

The order of different linear operations is irrelevant due to the distributive property of the convolution, for example:

g(i) * (h(i) + x(i)) = g(i) * h(i) + g(i) * x(i)
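These identities are easy to check numerically. The sketch below (pure Python, with arbitrary example signals) implements a naive linear convolution and asserts the identity, commutative, associative and distributive properties:

```python
def conv(a, b):
    """Naive linear convolution: y[k] = sum_j a[j] * b[k - j]."""
    y = [0] * (len(a) + len(b) - 1)
    for j, aj in enumerate(a):
        for i, bi in enumerate(b):
            y[j + i] += aj * bi
    return y

def add(a, b):
    """Element-wise sum, zero-padding the shorter signal."""
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

g, h, x = [1, 2], [3, -1, 4], [2, 0, 1, 5]
delta = [1]  # unit impulse

assert conv(delta, x) == x                                 # identity
assert conv(h, x) == conv(x, h)                            # commutative
assert conv(conv(g, h), x) == conv(g, conv(h, x))          # associative
assert conv(g, add(h, x)) == add(conv(g, h), conv(g, x))   # distributive
```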

The distributive property means that two signals, one computed by applying a filter to two different signals and summing the results, the other computed by applying the filter to the sum of the signals, are identical.

6.3.5 CIRCULARITY

Convolution with a periodic signal results in a periodic output signal. The periodic signal x(i) is the sum of its shifted (fundamental) periods of length N.

The multiplication of two spectra computed with the DFT results in a circular convolution: the result is the convolution of the two periodically continued sample blocks.

CHAPTER 7

MATLAB FUNCTIONS

7.1 IMREAD

Read image from graphics files.

7.1.1 SYNTAX

A = imread(filename,fmt)

[X,map] = imread(filename,fmt)

[...] = imread(URL,...)
[...] = imread(...,idx) (CUR, ICO, and TIFF only)
[...] = imread(...,'frames',idx) (GIF only)
[...] = imread(...,ref) (HDF only)
[...] = imread(...,'BackgroundColor',BG) (PNG only)
[A,map,alpha] = imread(...) (ICO, CUR, and PNG only)

7.1.2 DESCRIPTION

The imread function supports four general syntaxes, described below. The imread function also supports several other format-specific syntaxes. See Special Case Syntax for information about these syntaxes.

A = imread(filename,fmt) reads a grayscale or truecolor image named filename into A. If the file contains a grayscale intensity image, A is a two-dimensional array. If the file contains a truecolor (RGB) image, A is a three-dimensional (m-by-n-by-3) array.

filename is a string that specifies the name of the graphics file, and fmt is a string that specifies the format of the file. If the file is not in the current directory or in a directory on the MATLAB path, specify the full pathname of the location on your system. If imread cannot find a file named filename, it looks for a file named filename.fmt. See Formats for a list of all the possible values for fmt.

[X,map] = imread(filename,fmt) reads the indexed image in filename into X and its associated colormap into map. The colormap values are rescaled to the range [0,1].

7.2 IMSHOW

Display image.

7.2.1Syntax

imshow(I)
imshow(I,[low,high])
imshow(RGB)
imshow(BW)
imshow(X,map)
imshow(filename)
himage = imshow(...)
imshow(..., param1, val1, param2, val2,...)

7.2.2 Description

imshow(I) displays the grayscale image I.

imshow(I,[low high]) displays the grayscale image I, specifying the display range for I in [low high]. The value low (and any value less than low) displays as black; the value high (and any value greater than high) displays as white. Values in between are displayed as intermediate shades of gray, using the default number of gray levels. If you use an empty matrix ([]) for [low high], imshow uses [min(I(:)) max(I(:))]; that is, the minimum value in I is displayed as black, and the maximum value is displayed as white.
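Conceptually, the display range acts as a linear rescale with clipping. A rough pure-Python analogue (illustrative only, not the actual imshow implementation; the function name display_scale is invented):

```python
def display_scale(I, low=None, high=None, levels=256):
    """Map intensities so that low -> black (0) and high -> white
    (levels-1), clipping values outside [low, high]; analogous to
    imshow(I, [low high])."""
    flat = [v for row in I for v in row]
    if low is None or high is None:        # like imshow(I, [])
        low, high = min(flat), max(flat)
    out = []
    for row in I:
        new_row = []
        for v in row:
            v = min(max(v, low), high)     # clip to the display range
            new_row.append(round((v - low) / (high - low) * (levels - 1)))
        out.append(new_row)
    return out

I = [[0, 40], [80, 120]]
scaled = display_scale(I, 0, 80)   # 120 is clipped to white
auto = display_scale(I)            # [] behaviour: [min(I(:)) max(I(:))]
```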

imshow(RGB) displays the truecolor image RGB.

imshow(BW) displays the binary image BW. imshow displays pixels with the value 0 (zero) as black and pixels with the value 1 as white.

imshow(X,map) displays the indexed image X with the colormap map. A color map matrix may have any number of rows, but it must have exactly 3 columns. Each row is interpreted as a color, with the first element specifying the intensity of red light, the second green, and the third blue. Color intensity can be specified on the interval 0.0 to 1.0.

imshow(filename) displays the image stored in the graphics file filename. The file must contain an image that can be read by imread or dicomread. imshow calls imread or dicomread to read the image from the file, but does not store the image data in the MATLAB workspace. If the file contains multiple images, imshow displays the first image in the file. The file must be in the current directory or on the MATLAB path.

himage = imshow(...) returns the handle to the image object created by imshow.

imshow(..., param1, val1, param2, val2,...) displays the image, specifying parameters and corresponding values that control various aspects of the image display. The following table lists all imshow parameters in alphabetical order. Parameter names can be abbreviated, and case does not matter.

The parameters and their values are listed below.

'Border': Text string that controls whether imshow includes a border around the image displayed in the figure window. Valid strings are 'tight' and 'loose'.

Note: There can still be a border if the image is very small, or if there are other objects besides the image and its axes in the figure.

By default, the border is set to the value returned by iptgetpref('ImshowBorder').

'Colormap': 2-D, real, m-by-3 matrix specifying a colormap. imshow uses this to set the figure's colormap property. Use this parameter to view grayscale images in false color. If you specify an empty colormap ([]), imshow ignores this parameter.

'DisplayRange': Two-element vector [LOW HIGH] that controls the display range of a grayscale image. See the imshow(I,[low high]) syntax for more details about how to set this parameter.

Note: Including the parameter name is optional, except when the image is specified by a filename. The syntax imshow(I,[LOW HIGH]) is equivalent to imshow(I,'DisplayRange',[LOW HIGH]). However, the 'DisplayRange' parameter must be specified when calling imshow with a filename, for example imshow(filename,'DisplayRange',[LOW HIGH]).

'InitialMagnification': A numeric scalar value, or the text string 'fit', that specifies the initial magnification used to display the image. When set to 100, imshow displays the image at 100% magnification (one screen pixel for each image pixel). When set to 'fit', imshow scales the entire image to fit in the window.

On initial display, imshow always displays the entire image. If the magnification value is large enough that the image would be too big to display on the screen, imshow warns and displays the image at the largest magnification that fits on the screen.

By default, the initial magnification parameter is set to the value returned by iptgetpref('ImshowInitialMagnification').

If the image is displayed in a figure with its 'WindowStyle' property set to 'docked', imshow warns and displays the image at the largest magnification that fits in the figure.

Note: If you specify the axes position (using subplot or axes), imshow ignores any initial magnification you might have specified and defaults to the 'fit' behavior.

When used with the 'Reduce' parameter, only 'fit' is allowed as an initial magnification.

'Parent': Handle of an axes that specifies the parent of the image object that will be created by imshow.

'Reduce': Logical value that specifies whether imshow subsamples the image in filename. The 'Reduce' parameter is only valid for TIFF images, and you must specify a file name. Use this parameter to display overviews of very large images.

'XData': Two-element vector that establishes a nondefault spatial coordinate system by specifying the image XData. The value can have more than two elements, but only the first and last elements are actually used.

'YData': Two-element vector that establishes a nondefault spatial coordinate system by specifying the image YData. The value can have more than two elements, but only the first and last elements are actually used.

7.2.3 CLASS SUPPORT

A truecolor image can be uint8, uint16, single, or double. An indexed image can be logical, uint8, single, or double. A grayscale image can be logical, uint8, int16, uint16, single, or double. A binary image must be of class logical.

For grayscale images of class single or double, the default display range is [0 1]. If your image's data range is much larger or smaller than the default display range, you might need to experiment with setting the display range to see features in the image that would not be visible using the default display range. For all grayscale images having integer types, the default display range is [intmin(class(I)) intmax(class(I))].

Examples

Display an image from a file:

imshow('board.tif')

Display an indexed image:

[X,map] = imread('trees.tif');

imshow(X,map)

Display a grayscale image.

I = imread('cameraman.tif');

imshow(I)

Display the same grayscale image, adjusting the display range.

h = imshow(I,[0 80]);

7.3 BITAND

7.3.1 SYNTAX

C = bitand(A, B)

7.3.2 DESCRIPTION

C = bitand(A, B) returns the bitwise AND of arguments A and B, where A and B are unsigned integers or arrays of unsigned integers.

Examples

Example 1

The five-bit binary representations of the integers 13 and 27 are 01101 and 11011, respectively. Performing a bitwise AND on these numbers yields 01001, or 9:

C = bitand(uint8(13), uint8(27))

C =

     9

Example 2

Create a truth table for a logical AND operation:

A = uint8([0 1; 0 1]);

B = uint8([0 0; 1 1]);

TT = bitand(A, B)

TT =

0 0

0 1

7.4 BITOR

7.4.1 SYNTAX

C = bitor(A, B)

7.4.2 DESCRIPTION

C = bitor(A, B) returns the bitwise OR of arguments A and B, where A and B are unsigned integers or arrays of unsigned integers.

Examples

Example 1

The five-bit binary representations of the integers 13 and 27 are 01101 and 11011, respectively. Performing a bitwise OR on these numbers yields 11111, or 31.

C = bitor(uint8(13), uint8(27))

C =

    31

Example 2

Create a truth table for a logical OR operation:

A = uint8([0 1; 0 1]);

B = uint8([0 0; 1 1]);

TT = bitor(A, B)

TT =

0 1

1 1
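These two bit operations are exactly what the LSB watermarking of Chapter 5 needs: a bitwise AND clears the LSB plane, and a bitwise OR writes the watermark bit into it. An illustrative pure-Python analogue (the pixel values are invented):

```python
def set_lsb(pixel, bit):
    """Clear the LSB with AND, then write the watermark bit with OR;
    in MATLAB terms: bitor(bitand(pixel, 254), bit)."""
    return (pixel & 0b11111110) | bit

def get_lsb(pixel):
    """bitand(pixel, 1) recovers the embedded bit."""
    return pixel & 1

pixels = [120, 121, 200, 201]
bits = [1, 0, 1, 0]
marked = [set_lsb(p, b) for p, b in zip(pixels, bits)]
# each marked pixel differs from the original by at most 1 gray level
recovered = [get_lsb(p) for p in marked]
```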

7.5 ABS

7.5.1 SYNTAX

abs(X)

7.5.2 DESCRIPTION

abs(X) returns an array Y such that each element of Y is the absolute value of the corresponding element of X.

If X is complex, abs(X) returns the complex modulus (magnitude), which is the same as

sqrt(real(X).^2 + imag(X).^2)

Examples

abs(-5)

ans =

     5

abs(3+4i)

ans =

     5

7.6 IMNOISE

Add noise to image.

7.6.1 SYNTAX

J = imnoise(I,type)
J = imnoise(I,type,parameters)
J = imnoise(I,'gaussian',m,v)
J = imnoise(I,'localvar',V)
J = imnoise(I,'localvar',image_intensity,var)
J = imnoise(I,'poisson')
J = imnoise(I,'salt & pepper',d)
J = imnoise(I,'speckle',v)

7.6.2 DESCRIPTION

J = imnoise(I,type) adds noise of a given type to the intensity image I. type is a string that can have one of these values.

Value            Description
'gaussian'       Gaussian white noise with constant mean and variance
'localvar'       Zero-mean Gaussian white noise with an intensity-dependent variance
'poisson'        Poisson noise
'salt & pepper'  On and off pixels
'speckle'        Multiplicative noise

J = imnoise(I,type,parameters): depending on type, you can specify additional parameters to imnoise. All numerical parameters are normalized; they correspond to operations on images with intensities ranging from 0 to 1.

J = imnoise(I,'gaussian',m,v) adds Gaussian white noise of mean m and variance v to the image I. The default is zero mean noise with 0.01 variance.

J = imnoise(I,'localvar',V) adds zero-mean, Gaussian white noise of local variance V to the image I. V is an array of the same size as I.

J = imnoise(I,'localvar',image_intensity,var) adds zero-mean, Gaussian noise to an image I, where the local variance of the noise, var, is a function of the image intensity values in I. The image_intensity and var arguments are vectors of the same size, and plot(image_intensity,var) plots the functional relationship between noise variance and image intensity. The image_intensity vector must contain normalized intensity values ranging from 0 to 1.

J = imnoise(I,'poisson') generates Poisson noise from the data instead of adding artificial noise to the data. If I is double precision, then input pixel values are interpreted as means of Poisson distributions scaled up by 1e12. For example, if an input pixel has the value 5.5e-12, then the corresponding output pixel will be generated from a Poisson distribution with mean of 5.5 and then scaled back down by 1e12. If I is single precision, the scale factor used is 1e6. If I is uint8 or uint16, then input pixel values are used directly without scaling. For example, if a pixel in a uint8 input has the value 10, then the corresponding output pixel will be generated from a Poisson distribution with mean 10.

J = imnoise(I,'salt & pepper',d) adds salt and pepper noise to the image I, where d is the noise density. This affects approximately d*numel(I) pixels. The default for d is 0.05.

J = imnoise(I,'speckle',v) adds multiplicative noise to the image I, using the equation J = I+n*I, where n is uniformly distributed random noise with mean 0 and variance v. The default for v is 0.04.

Note The mean and variance parameters for 'gaussian', 'localvar', and 'speckle' noise types are always specified as if the image were of class double in the range [0, 1]. If the input image is of class uint8 or uint16, the imnoise function converts the image to double, adds noise according to the specified type and parameters, and then converts the noisy image back to the same class as the input.
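As a rough sketch of two of these noise models (pure Python on a small normalized image; illustrative only, imnoise itself does additional bookkeeping and class conversion):

```python
import random

random.seed(0)  # reproducible noise for the demo

def salt_and_pepper(image, d):
    """Flip roughly d * numel(I) pixels to 0.0 (pepper) or 1.0 (salt)."""
    return [[random.choice([0.0, 1.0]) if random.random() < d else v
             for v in row] for row in image]

def speckle(image, v):
    """J = I + n*I, with n uniform, zero-mean, variance v.
    A uniform variable on [-a, a] has variance a^2/3, so a = sqrt(3*v)."""
    a = (3 * v) ** 0.5
    return [[pix + random.uniform(-a, a) * pix for pix in row]
            for row in image]

I = [[0.2, 0.4], [0.6, 0.8]]
noisy_sp = salt_and_pepper(I, 0.05)  # ~5% of pixels become 0 or 1
noisy_sk = speckle(I, 0.04)          # 0.04 is the default speckle variance
```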

7.6.3 CLASS SUPPORT

For most noise types, I can be of class uint8, uint16, int16, single, or double. For Poisson noise, int16 is not allowed. The output image J is of the same class as I. If I has more than two dimensions, it is treated as a multidimensional intensity image and not as an RGB image.

Figure 7.1: Image with speckle noise

Figure 7.2: Image with Poisson noise

Figure 7.3: Image with salt & pepper noise

CHAPTER 8

RESULTS & ANALYSIS

Figure 8.1: Base image

Figure 8.2: Marked image

Figure 8.3: Watermark image

Figure 8.4: Marked image after watermark

Figure 8.5: Noise image

BIBLIOGRAPHY

[1] R. C. Gonzalez, R. E. Woods, Digital Image Processing.

[2] J. G. Proakis, Digital Signal Processing.

[3] Wikipedia, www.wikipedia.org.

[4] R. D. Fiete, Photofakery, URL: http://oemagazine.com/fromTheMagazine/jan05/photofakery%.html.

[5] A. C. Popescu, H. Farid, Exposing digital forgeries by detecting traces of resampling, IEEE Transactions on Signal Processing 53 (2) (2005) 758-767.

[6] T.-T. Ng, S.-F. Chang, A model for image splicing, IEEE Int. Conf. on Image Processing (ICIP), vol. 2, 2004, pp. 1169-1172.

[7] A. Popescu, H. Farid, Statistical tools for digital forensics, 6th International Workshop on Information Hiding, vol. 3200, Toronto, Canada, 2004, pp. 128-147.

[8] A. Popescu, H. Farid, Exposing digital forgeries in color filter array interpolated images, IEEE Transactions on Signal Processing 53 (10) (2005) 3948-3959.

[9] M. Johnson, H. Farid, Exposing digital forgeries by detecting inconsistencies in lighting, ACM Multimedia and Security Workshop, New York, 2006, pp. 1-9.

[10] A. Popescu, H. Farid, Exposing Digital Forgeries by Detecting Duplicated Image Regions, Tech. Rep. TR2004-515, Department of Computer Science, Dartmouth College, 2004.

[11] H. Farid, S. Lyu, Higher-order wavelet statistics and their application to digital forensics, IEEE Workshop on Statistical Analysis in Computer Vision (in conjunction with CVPR), Madison, Wisconsin, 2003, pp. 16-22.

[12] T.-T. Ng, S.-F. Chang, Q. Sun, Blind detection of photomontage using higher order statistics, IEEE Int. Symposium on Circuits and Systems (ISCAS), vol. 5, 2004, pp. 688-691.

[13] H. Lu, R. Shen, F.-L. Chung, Fragile watermarking scheme for image authentication, Electronics Letters 39 (12) (2003) 898-900.

APPENDIX

ABOUT MATLAB

INTRODUCTION TO MATLAB

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation.

MATLAB is a package that has been purpose-designed to make computations easy, fast and reliable. It is installed on machines run by Bath University Computing Services (BUCS), which can be accessed in the BUCS PC Labs such as those in 1 East 3.9, 1 West 2.25 or 3 East 3.1, as well as from any of the PCs in the Library. The machines which you will use for running MATLAB are SUN computers that run on the UNIX operating system. However, you will need only a small number of UNIX commands when you are working with MATLAB. (There is a glossary of common UNIX commands in Appendix B.)

MATLAB started life in the 1970s as a user-friendly interface to certain clever but complicated programs for solving large systems of equations. The idea behind MATLAB was to provide a simple way of using these programs that hid many of the complications. The idea was appealing to scientists who needed to use high performance software but had neither the time nor the inclination (nor in some cases the ability) to write it from scratch. Since its introduction, MATLAB has expanded to cover a very wide range of applications and can now be used as a very simple and transparent programming language where each line of code looks

very much like the mathematical statement it is designed to implement.

Basic MATLAB is good for the following:

Computations, including linear algebra, data analysis, signal processing, polynomials and interpolation, numerical integration (quadrature), and numerical solution of differential equations.

Graphics, in 2-D and 3-D, including colour, lighting, and animation. It also has collections of specialised functions, called toolboxes, that extend its functionality. In particular, it can do symbolic algebra, e.g. it can tell you that (x+y)^2 is equal to x^2+2*x*y+y^2.

It is important not to confuse the type of programming that we shall do in this course with fundamental programming in an established high-level language like C, JAVA or FORTRAN. In this course we will take advantage of many of the built-in features of MATLAB to do quite complicated tasks but, in contrast to programming in a conventional high-level language, we shall have relatively little control over exactly how the instructions which we write are carried out on the machine. As a result, MATLAB programs for complicated tasks may be somewhat slower to run than programs written in languages such as C. However, MATLAB programs are very easy to write, a fact which we shall emphasize here. We shall use MATLAB as a vehicle for learning elementary programming skills and applications. These are skills which will be useful independently of the language you choose to work in. Some students in First Year will already know some of these skills, but we shall not assume any prior knowledge.

Typical uses of MATLAB

1. Math and computation

2. Algorithm development

3. Data acquisition

4. Modeling, simulation, and prototyping

5. Data analysis, exploration, and visualization

6. Scientific and engineering graphics

7. Application development, including graphical user interface building.

The main features of MATLAB

1) Advanced algorithms for high-performance numerical computation, especially in the field of matrix algebra.

2) A large collection of predefined mathematical functions and the ability to define one's own functions.

3) Two-and three dimensional graphics for plotting and displaying data.

4) A complete online help system.

5) A powerful, matrix- or vector-oriented, high-level programming language for individual applications.

6) Toolboxes available for solving advanced problems in several application areas.

Figure: Features of MATLAB

Capabilities of MATLAB

MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar non-interactive language such as C or FORTRAN.

The name MATLAB stands for matrix laboratory. MATLAB was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects. Today, MATLAB engines incorporate the LAPACK and BLAS libraries, embedding the state of the art in software for matrix computation.

MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis.

MATLAB features a family of add-on application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.

The MATLAB System

The MATLAB system consists of five main parts:

Development Environment

This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path. This consists of

the MATLAB command window, created when you start a MATLAB session, possibly supplemented with

a figure window which appears when you do any plotting,

an editor/debugger window, used for developing and checking MATLAB

programs, and

a help browser.

The command window is the one with the >> prompt. In version 7 it has three sub windows,

the current directory window, listing all the files in your current directory,

the workspace window (click on the tab), listing all the variables in your workspace, and

the command history window, containing all your commands going back over multiple sessions.

You can do things like running M-files from your directory, plotting variables from your workspace, and running commands from your command history or transferring them to a new M-file, by clicking (or right-clicking or double-clicking) on them.

To see the figure window in action, type

Z = peaks; surf(Z)

and then hit RETURN. (Note that it is crucial to get the punctuation in the above command correct.) You should get a 3D plot of a surface in the Figure window called Figure 1. (Note that peaks is a built-in function that produces the data for this plot, and surf is a built-in function that does the plotting. When you come to do your own plots you will be creating your own data, of course.) You can play with the buttons at the top of the Figure window (for example to rotate the

figure and see it from different perspectives).

To obtain the editor/debugger, type edit. You can then create files by typing them into this window, and save them by clicking on File --> Save As. You must call the file something like file1.m, i.e. save it with a .m extension to its name, so that MATLAB recognizes it as an M-file. If the file is a script, you can run it by typing its name (without the .m). Create a script file containing the single line Z = peaks; surf(Z), save it as file1.m, and run it.

To obtain help you can type helpwin, which produces a simple browser containing the MATLAB manual. A much more sophisticated web-based help facility is obtained by typing helpdesk, but at peak times this may be slow in starting up. If you have MATLAB on your home PC you may well prefer the helpdesk.

Finally, if you know the name of the command you are interested in, you can simply ask for help on it at the MATLAB prompt. For example to find out about the for command, type help for or doc for. Do each of these now: for is one of the loop-constructing commands that will be used in the programs you write in this course.

Use any help facility to find the manual page for the command plot, which resides in the directory graph2d. Read the manual page for plot.

The MATLAB Mathematical Function Library

This is a vast collection of computational algorithms ranging from elementary functions like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

The MATLAB Language

This is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both "programming in the small" to rapidly create quick and dirty throw-away programs, and "programming in the large" to create complete large and complex application programs.

Graphics

MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level functions that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces on your MATLAB applications.

The MATLAB Application Program Interface (API)

This is a library that allows you to write C and Fortran programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and for reading and writing MAT-files.

M-Files

Script M-Files

The heart of MATLAB lies in its use of M-files. We will begin with a script M-file, which is simply a text file that contains a list of valid MATLAB commands. To create an M-file, click on File at the upper left corner of your MATLAB window, then select New, followed by M-file. A window will appear in the upper left corner of your screen with MATLAB's default editor. (You are free to use an editor of your own choice, but for the brief demonstration here, let's stick with MATLAB's. It's not the most powerful thing you'll ever come across, but it's not a complete slouch either.) In this window, type the following lines (MATLAB reads everything following a % sign as a comment to be ignored):

%JUNK: A script file that computes sin(4),
%where 4 is measured in degrees.
t=4;               %Define a variable for no particularly good reason
radiant=pi*t/180;  %Converts 4 degrees to radians
s=sin(radiant)     %No semicolon means output is printed to screen

Save this file by choosing File, Save As from the main menu. In this case, save the file as junk.m, and then close or minimize your editor window. Back at the command line, type simply help junk, and notice that the description you typed in as the header of your script file appears on the screen. Now, type junk at the prompt, and MATLAB will report that s=.0698. It has simply gone through your file line by line and executed each command as it came to it. One more useful command along these lines is type. Try the following:

>>type junk

The entire text of your file junk.m should appear on your screen. Since you've just finished typing this stuff in, this isn't so exciting, but try typing, for example, type mean. MATLAB will display its internal M-file mean.m. Perusing MATLAB's internal M-files like this is a great way to learn how to write them yourself.

In fact, you can often tweak MATLAB's M-files into serving your own purposes. Simply use type followed by the file name in the Command Window, then select and copy the M-file text using the Edit option. Finally, paste it into your own M-file and edit it. (Though keep in mind that if you ever publish a routine you've worked out this way, you need to acknowledge the source.)

Working in Script M-files

Even if you're not writing a program, you will often find that the best way to work in MATLAB is through script M-files.

Certainly, this is a straightforward calculation that we could carry out step by step in the Command Window, but instead, we will work entirely in an M-file (see template.m below). We begin our M-file with the commands clear, which clears the workspace of all variables; clc, which clears the Command Window of all previously issued commands; and clf, which clears the figure window. In addition, we delete the (assumed) diary file junk.out and restart it with the command diary junk.out. Finally, we set echo on so that the commands we type here will appear in the command window. We now type in all commands required for plotting and maximizing the function f(x).

%TEMPLATE: MATLAB M-file containing a convenient
%workplace template.
clear; clc; clf;
delete junk.out
diary junk.out
echo on
%
syms x   %declare x symbolic, as required by diff and solve below
fprime=diff(x*exp(-x^4/(1+x^2)))
pretty(fprime)
r = eval(solve(fprime))
%
diary off
echo off

The only new command in template.m is pretty(), which simply instructs MATLAB to present the expression fprime in a more readable format. Once template.m has been run, the diary file junk.out will be created. Observe that the entire session is recorded, both inputs and outputs. It's also worth noting that while pretty() makes expressions look better in the MATLAB Command Window, its output in diary files is regrettable.

Function M-files

The second type of M-file is called a function M-file, and typically (though not inevitably) these will involve some variable or variables sent to the M-file and processed. That said, function M-files need neither accept input nor return output. Every function M-file begins with the command function. Function M-files can have subfunctions (script M-files cannot have subfunctions).
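As an illustrative sketch (the file and function names here are hypothetical), a function M-file with a subfunction might look like this, saved as circarea.m:

```matlab
function a = circarea(r)
%CIRCAREA Area of a circle of radius r (hypothetical example).
%   Accepts one input, r, and returns one output, a.
a = pi*square(r);

function y = square(x)
%SQUARE Subfunction: visible only within this M-file.
y = x^2;
```

Typing circarea(2) at the prompt then returns 4*pi, and help circarea displays the header comments, just as with the script M-file above.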

Debugging M-files

Since MATLAB views M-files as computer programs, it offers a handful of tools for debugging. First, from the M-file edit window, an M-file can be saved and run by clicking on the icon with the white sheet and downward-directed blue arrow (alternatively, choose Debug, Run, or simply type F5). By setting your cursor on a line and clicking on the icon with the white sheet and the red dot, you can set a marker at which MATLAB's execution will stop. A green arrow will appear, marking the point where MATLAB's execution has paused. At this point, you can step through the rest of your M-file one line at a time by choosing the Step icon (alternatively Debug, Step, or F6). Unless you're a phenomenal programmer, you will occasionally write a MATLAB program (M-file) that has no intention of stopping any time in the near future. You can always abort your program by typing Control-C, though you must be in the MATLAB Command Window for MATLAB to pay any attention to this.
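The same breakpoints can also be set from the command line with MATLAB's db family of debugging commands; a quick sketch, assuming the script junk.m from earlier exists:

```matlab
dbstop in junk at 3   % set a breakpoint at line 3 of junk.m (the red dot)
junk                  % run the script; execution pauses at the breakpoint
dbstep                % advance one line (like the Step icon or F6)
dbcont                % resume execution to the end
dbclear all           % remove all breakpoints
```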

File Management from MATLAB

There are certain commands in MATLAB that will manipulate files in its primary directory. For example, if you happen to have the file junk.m in your working MATLAB directory, you can delete it simply by typing delete junk.m at the MATLAB command prompt. Much more generally, if you precede a command with an exclamation point, MATLAB will read it as a Unix shell command (see the discussion of Unix shell commands elsewhere in these notes). So, for example, the three commands !ls, !cp junk.m morejunk.m, and !ls serve to list the contents of the directory you happen to be in, copy the file junk.m to the file morejunk.m, and list the directory again to make sure it's there. Try it.
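A short sketch of the shell-escape mechanism (Unix commands shown; on Windows you would substitute the likes of dir and copy):

```matlab
!ls                     % shell command: list the current directory
!cp junk.m morejunk.m   % shell command: copy junk.m to morejunk.m
delete morejunk.m       % MATLAB's own command for deleting a file
```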

The Command Window

Occasionally, the Command Window will become too cluttered, and you will essentially want to start over. You can clear it by choosing Edit, Clear Command Window. Before doing this, you might want to save the variables in your workspace. This can be accomplished with the menu option File, Save Workspace As, which will allow you to save your workspace as a .mat file. Later, you can open this file simply by choosing File, Open, and selecting it. A word of warning, though: This does not save every command you have typed into your workspace; it only saves your variable assignments.
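Workspace variables can also be saved and restored from the command line with save and load; a minimal sketch (the file name mydata.mat is arbitrary):

```matlab
a = 1:5; b = 'hello';
save mydata.mat      % write all workspace variables to mydata.mat
clear                % wipe the workspace
load mydata.mat      % the variables a and b are restored
```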

The Command History

The Command History window will open with each MATLAB session, displaying a list of recent commands issued at the prompt. Often, you will want to incorporate some of these old commands into a new session. A method slightly less gauche than simply cutting and pasting is to right-click on a command in the Command History window, and while holding the right mouse button down, to choose Evaluate Selection. This is exactly equivalent to typing your selection into the Command Window.

The MATLAB Workspace

As we've seen, MATLAB uses several types of data, and sometimes it can be difficult to remember what type each variable in your session is. Fortunately, this information is all listed for you in the MATLAB Workspace. Look in the upper left corner of your MATLAB window and see if your Workspace is already open. If not, choose View, Workspace from the main MATLAB menu and it should appear. Each variable you define during your session will be listed in the Workspace, along with its size and type.

Miscellaneous Useful Commands

In this section I will give a list of some of the more obscure MATLAB commands that I find particularly useful. As always, you can get more information on each of these commands by using MATLAB's help command.

strcmp(str1,str2) (string compare) Compares the strings str1 and str2 and returns logical true (1) if the two are identical and logical false (0) otherwise.

char(input) Converts just about any type of variable input into a string (character array).

num2str(num) Converts the numeric variable num into a string.

str2num(str) Converts the string str into a numeric variable. (See also str2double().)

strcat(str1,str2,...) Horizontally concatenates the strings str1, str2, etc.
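A quick sketch of these commands in action at the prompt:

```matlab
s = num2str(42);              % s is the string '42'
n = str2num('3.14');          % n is the number 3.14
same = strcmp('abc','abc');   % same is logical true (1)
word = strcat('MAT','LAB');   % word is the string 'MATLAB'
c = char(65);                 % c is the character 'A'
```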

Graphical User Interface

Ever since 1984, when Apple's Macintosh computer popularized Douglas Engelbart's mouse-driven graphical computer interface, users have wanted something fancier than a simple command line. Unfortunately, actually coding this kind of thing yourself is a full-time job. This is where MATLAB's add-on GUIDE comes in. Much like Visual C, GUIDE is a package for helping you develop things like pop-up windows and menu-controlled files. To get started with GUIDE, simply choose File, New, GUI from MATLAB's main menu.

SIMULINK

SIMULINK is a MATLAB add-on tailored for visually modeling dynamical systems. To get started with SIMULINK, choose File, New, Model.

M-book

M-book is a MATLAB interface that passes through Microsoft Word, apparently allowing for nice presentations. Unfortunately, my boycott of anything Microsoft precludes the possibility of my knowing anything about it.

Further information on MATLAB

Look in the library catalogue for titles containing the keyword MATLAB. In particular, the following are useful.

Getting started with MATLAB 6: a quick introduction for scientists and engineers, by Rudra Pratap.

This is a user-friendly book that you might wish to buy.

Essential MATLAB for scientists and engineers, by Brian D. Hahn

The MATLAB 5 handbook, by Eva Part-Enander and Anders Sjoberg

The MATLAB handbook, by Darren Redfern and Colin Campbell

MATLAB Guide, by D.J. Higham and N.J. Higham

The MATLAB web-site http://www.mathworks.com is maintained by The MathWorks, the company which created and markets MATLAB.
