
A Blind Watermarking Algorithm Based on Neural Network

Xinhong Zhang and Fan Zhang

College of Computer and Information Engineering
Henan University, Kaifeng 475001, P. R. China
E-mail: [email protected]

Abstract-A blind digital watermarking algorithm based on a Hopfield neural network is proposed. The host image and the original watermark are stored by the Hopfield neural network. The noise visibility function (NVF) is used for adaptive watermark embedding. In watermark extraction, the host image and the original watermark are retrieved by the neural network. The experimental results show that this watermarking algorithm performs well.

I. INTRODUCTION

One of the biggest technological events of the last two decades has been the spread of digital media into an entire range of aspects of everyday life. As digital audio, video, images, and multimedia documents reach an ever-expanding consumer base, their domination of entertainment, arts, education, etc. is just a matter of time. Digital data can be stored efficiently with very high quality and manipulated easily using computers. Furthermore, digital data can be transmitted quickly and inexpensively through data communication networks [1], [2].

Digital watermarking is based on the science of steganography, or data hiding. Steganography comes from the Greek for 'covered writing'; it is the study of communicating in a hidden manner. Steganography and watermarking rely on imperfections of the human senses. The eyes and ears are not perfect detectors: they cannot perceive minor changes and can therefore be tricked into treating two images or sounds as identical when they actually differ, for example in luminance or in a frequency shift. The human eye has a limited dynamic range, so low-quality images can be hidden within other high-quality images [3]. Blind watermarking means that the watermarks are detected or extracted without the original image, whereas non-blind watermarking requires the original image during watermark detection or extraction.

Attractor networks such as Hopfield networks are used as auto-associative content-addressable memories. The aim of such networks is to retrieve a previously learnt pattern from an example that is similar to, or a noisy version of, one of the previously presented patterns. To do this, the network associates each element of a pattern with a binary neuron. These neurons are fully connected and are updated asynchronously and in parallel. They are initialized with an input pattern, and the network activations converge to the closest learnt pattern.

A Hopfield neural network can function as an associative memory: the patterns are stored as dynamical attractors, and the network has an error-correcting capability. The basin of attraction of a pattern is the set of points in the state space that flow to it. The radius of an attraction basin is defined as the largest Hamming distance within which almost all states flow to the pattern. For a trained network, the average attraction radius of the stored patterns gives a measure of the network's completion capability [4].

Patterns must not only be stored but also be recallable. The recall process involves moving from the network's start state into some fixed state. The basin of attraction of a fixed point is defined to be the set of states that are attracted to that fixed point. Different learning rules produce different basins of attraction, and clearly capacity and attraction basins are related. Here we consider what extra information about the performance of a learning rule can be gained from the study of attraction basins [5], [6]. Large attraction basins are desirable because more states are then attracted to stored patterns rather than to spurious fixed points. Round attraction basins make the network more predictable: whether or not a state is attracted to a stored pattern depends reliably on its Hamming distance from that pattern. An even distribution of basin sizes also increases predictability and ensures that there is no large bias against any particular pattern.
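To make the associative-recall behaviour concrete, the following minimal Python sketch (our own illustration, not the authors' implementation; the Hebbian storage rule and the asynchronous update schedule are assumptions consistent with the description above) stores binary {+1, -1} patterns and recalls the nearest stored pattern from a noisy probe:

import numpy as np

def train_hopfield(patterns):
    # Hebbian storage: patterns has shape (num_patterns, n), entries are +1/-1.
    n = patterns.shape[1]
    w = patterns.T @ patterns / float(n)
    np.fill_diagonal(w, 0.0)                 # no self-connections
    return w

def recall(w, probe, max_sweeps=20, seed=0):
    # Asynchronous recall: update one neuron at a time until the state is stable.
    rng = np.random.default_rng(seed)
    s = probe.astype(np.float64).copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(s)):
            new_state = 1.0 if w[i] @ s >= 0.0 else -1.0
            if new_state != s[i]:
                s[i] = new_state
                changed = True
        if not changed:                      # converged to an attractor
            break
    return s

A probe lying inside the attraction basin of a stored pattern, i.e., within its Hamming radius, is pulled back to that pattern; this error-correcting behaviour is what the watermarking scheme below exploits.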

This paper presents a blind digital watermarking algorithm based on a Hopfield neural network. The host image and the original watermark are stored by the Hopfield neural network. The noise visibility function is used for adaptive watermark embedding. In watermark extraction, the host image and the original watermark are retrieved by the neural network.

The rest of this paper is organized as follows. In Section II, we introduce a method to determine the maximum allowable watermark amplitude of each wavelet coefficient while keeping the watermark invisible, and we present a blind watermarking algorithm using a Hopfield neural network. The experimental results are shown in Section III. The conclusions of this paper are drawn in Section IV.

II. ADAPTIVE WATERMARKING

Human Vision System (HVS) models have been studied for many years. These works describe human vision mechanisms such as spatial frequency orientation, sensitivity to local contrast, adaptation, and masking. The Noise Visibility Function (NVF) is a function that characterizes local image properties; it identifies the texture and edge regions of an image, where the watermark should be embedded more strongly [7], [8]. It can be used in either the spatial domain or the wavelet domain. The NVF is derived from a statistical model of images. Compared with other perceptual models based on human experience, the NVF is better suited to theoretical analysis, such as the analysis of watermarking capacity. We think that the NVF is the best bridge connecting the watermarking capacity to the content of images.

There are two kinds of NVF, based on either a non-stationary Gaussian model of the image or a stationary generalized Gaussian model [9]. Because the watermarking problem is closely related to the local properties of the image, we think that the NVF based on the non-stationary Gaussian model is better suited to watermarking. Assuming that the host image is a Gaussian process, the NVF can be written as

NVF(i, j) = w(i, j) σ_n² / (w(i, j) σ_n² + σ_x²(i, j)),    (1)

where σ_n² is the noise variance, σ_x²(i, j) is the local variance of the image in a window centered on the pixel with coordinates (i, j), 1 ≤ i, j ≤ N, and w(i, j) is a weighting function that depends on the shape parameter γ. w(i, j) can be written as

w(i, j) = γ [η(γ)]^γ / ||r(i, j)||^(2-γ),    (2)

where r(i, j) = (x(i, j) - x̄(i, j)) / σ_x(i, j), η(γ) = sqrt(Γ(3/γ)/Γ(1/γ)), and Γ(t) = ∫_0^∞ e^(-u) u^(t-1) du is the gamma function. The parameter γ is called the shape parameter, and x̄(i, j) is the local mean of the image.

If we assume that the original image obeys the generalized Gaussian distribution, the NVF of each pixel can be expressed as

NVF(i, j) = 1 / (1 + σ_x²(i, j) / w(i, j)).    (3)

The maximum allowable distortion of each pixel can then be calculated as

A(i, j) = (1 - NVF(i, j)) · S0 + NVF(i, j) · S1,    (4)

where S0 and S1 are the maximum allowable distortions in the texture regions and the flat regions of the image, respectively. Typically S0 is as high as 30, while S1 is usually about 3 according to experience. In the flat regions of the image the NVF tends to 1, so the first term of Eq. 4 tends to 0, and consequently the allowable distortion is determined by S1. Intuitively this makes sense, since distortion is more visible in the flat regions and less visible in the texture regions. According to Eq. 4, the watermarks embedded in the texture and edge regions are stronger than those in the flat regions (a numerical sketch of Eqs. 1-4 is given after Fig. 1).

A Hopfield neural network uses a computational energy function to evaluate its stability: the energy function always decreases toward a state of lowest energy, so starting from any point of the state space the system always evolves to a stable state, an attractor.

The neuron states of a Hopfield neural network are usually binary {+1, -1}. So that the neural network can store a standard grayscale test image, we separate the image into eight bit planes. In watermark extraction, the network can associatively recall the host image (original image) from a stego image (watermarked image) or a noisy image, so the host image itself is not needed. A block diagram illustrating the watermark embedding is shown in Fig. 1.

Fig. 1. Illustration of the watermark embedding
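As a numerical illustration of the adaptive embedding rule, the Python sketch below computes a per-pixel NVF from the local variance and the allowable amplitude A(i, j) of Eq. 4. It is our own simplification: the generalized Gaussian weighting w(i, j) is replaced by a constant (so the NVF reduces to 1/(1 + σ_x²(i, j))), the 5 × 5 window and the use of scipy.ndimage.uniform_filter are implementation choices, and S0 = 30, S1 = 3 are the typical values quoted above:

import numpy as np
from scipy.ndimage import uniform_filter

def nvf_and_amplitude(image, window=5, s0=30.0, s1=3.0):
    # Local mean and local variance in a window centered on each pixel.
    x = image.astype(np.float64)
    local_mean = uniform_filter(x, size=window)
    local_var = uniform_filter(x * x, size=window) - local_mean ** 2
    local_var = np.maximum(local_var, 0.0)    # guard against round-off
    nvf = 1.0 / (1.0 + local_var)             # simplified NVF: near 1 in flat regions, near 0 in texture
    amplitude = (1.0 - nvf) * s0 + nvf * s1   # Eq. 4: stronger watermark in texture and edge regions
    return nvf, amplitude

In flat regions the local variance is small, the NVF approaches 1, and the amplitude is limited to about S1; in textured regions the NVF approaches 0 and the amplitude approaches S0, matching the discussion of Eq. 4 above.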

The above method is semi-blind watermarking: in watermark extraction the host image is not needed, but the original watermark is necessary. We improve this method to blind watermarking with two projects:

Project one. A random function is used to decide whether a pixel should be modified: mark 0 if no change takes place and 1 if a change takes place. We thus obtain a matrix composed of {0, 1}, which we call the Watermark Bit Plane (WBP); it marks the points of watermark embedding, while Eq. 4 decides the watermark amplitude. During the learning of the neural network, not only the image bit planes but also the Watermark Bit Plane is stored. In watermark extraction, the network associatively recalls the host image and the WBP; we then compare the recalled host image with the stego image and extract the watermark according to a threshold. Finally, we compare the recalled WBP with the extracted watermark and judge whether a watermark exists in the image using a correlation test.
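A minimal sketch of Project one is given below (our own illustration; the selection probability, the sign of the modification, the extraction threshold, and the use of a normalized correlation coefficient are assumptions, since the paper specifies only the overall steps):

import numpy as np

def embed_wbp(host, amplitude, prob=0.5, seed=1):
    # Randomly select pixels to modify (the Watermark Bit Plane) and change them by A(i, j).
    rng = np.random.default_rng(seed)
    wbp = (rng.random(host.shape) < prob).astype(np.uint8)      # 1 = modified, 0 = untouched
    sign = np.where(rng.random(host.shape) < 0.5, 1.0, -1.0)    # assumed +/- modification
    stego = host.astype(np.float64) + wbp * sign * amplitude
    return np.clip(stego, 0, 255), wbp

def detect_wbp(stego, recalled_host, recalled_wbp, threshold=2.0):
    # Extract the watermark by thresholding the difference, then correlate it with the recalled WBP.
    diff = np.abs(stego.astype(np.float64) - recalled_host.astype(np.float64))
    extracted = (diff > threshold).astype(np.float64)
    corr = np.corrcoef(extracted.ravel(), recalled_wbp.astype(np.float64).ravel())[0, 1]
    return corr    # declare the watermark present if corr exceeds an empirically chosen level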

Project two. Eight bit planes are stored. We then extract a message digest from the weight matrix of the neural network using a hash function; in our experiments, the hash algorithm is the Secure Hash Algorithm (SHA), which produces a 160-bit message digest. Next, we construct a pseudo-random sequence using the hash digest as the seed, with the length of the sequence equal to the number of image pixels. According to the value of the pseudo-random sequence, we decide whether a pixel is modified: 0 denotes that the pixel is not modified and 1 denotes that it is modified, and Eq. 4 decides the amplitude. In watermark extraction, the neural network associatively recalls the host image and then reconstructs the watermark from the network weight matrix using the hash function and the pseudo-random sequence generator. We compare the recalled host image with the stego image and extract the watermark. Finally, we judge whether a watermark exists in the image using a correlation test.
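Project two can be sketched as follows (our own illustration; SHA-1 from Python's hashlib and NumPy's generator are our choices, the paper specifying only that SHA produces the 160-bit digest used as the seed):

import hashlib
import numpy as np

def selection_sequence(weight_matrix, num_pixels):
    # Derive the pixel-selection sequence from the network weight matrix via a 160-bit SHA-1 digest.
    digest = hashlib.sha1(weight_matrix.astype(np.float64).tobytes()).digest()
    seed = int.from_bytes(digest, "big")
    rng = np.random.default_rng(seed)
    return rng.integers(0, 2, size=num_pixels, dtype=np.uint8)  # 1 = modify the pixel, 0 = leave it

Because the same sequence can be regenerated from the recalled weight matrix at the detector, no original watermark has to be stored or transmitted, which is what makes this variant blind.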

Fig. 2. Original Lena image (a) and its stego image (b); the PSNR is 31.87 dB

Fig. 3. The watermarked Baboon image (a) and the reconstruction image (b)

III. EXPERIMENTAL RESULTS

In the experiments, three 256 × 256 standard test images, Baboon, Peppers, and Lena, are used. The noise is assumed to be white Gaussian noise with variance equal to 4. A discrete Hopfield neural network is used in our experiments. If the number of neurons equaled the number of pixels, the computation would be very complex, so we reduce the dimension of the neural network to reduce the computational complexity. The NVF is calculated in a local image region, a window centered on a pixel. We divide the image into many non-overlapping regions according to the window size of the NVF, and each region corresponds to a neuron. If the NVF calculation region is a window of size 5 × 5, then a 256 × 256 image can be divided into 51 × 51 regions, and the dimension of the neural network is 51 × 51. According to Eq. 4, we can calculate the maximum allowable watermark amplitude of each wavelet coefficient while keeping the watermark invisible.
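The dimension reduction can be illustrated as follows (how a 5 × 5 region is condensed into a single ±1 neuron state is not specified in the paper; cropping the leftover border and binarizing the block mean against the global mean are purely our assumptions):

import numpy as np

def image_to_network_state(image, block=5):
    # Map a 256 x 256 image to a 51 x 51 grid of +/-1 states, one neuron per block x block region.
    h = (image.shape[0] // block) * block      # 255 for a 256 x 256 image; the remainder is dropped
    w = (image.shape[1] // block) * block
    cropped = image[:h, :w].astype(np.float64)
    blocks = cropped.reshape(h // block, block, w // block, block)
    block_means = blocks.mean(axis=(1, 3))     # one representative value per region -> 51 x 51
    return np.where(block_means >= block_means.mean(), 1, -1)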

In watermark extraction, we assume that the probe patterns are the stego image, a noisy stego image, the host image, and a noisy host image, respectively. We ran 1200 experiments: one hundred trials under each of the above conditions for each test image. Table 1 shows the results of these experiments, i.e., the number of positive detections (presence of the watermark) out of one hundred trials per test image under the different conditions. The experimental results show that this watermarking algorithm performs well. Fig. 2 shows the original Lena image and its stego image; the peak signal-to-noise ratio (PSNR) is 31.87 dB. The watermarked Baboon image and its reconstruction are shown in Fig. 3.

IV. CONCLUSION

This paper presents a blind digital watermarking algorithm based on a Hopfield neural network. The host image and the original watermark are stored by the Hopfield neural network. The noise visibility function is used for adaptive watermark embedding. In watermark extraction, the host image and the original watermark are retrieved by the neural network.

REFERENCES

[1] Voyatzis, G., Nikolaidis, N., Pitas, I.: Digital Watermarking: An Overview. IX European Signal Processing Conference (EUSIPCO-98), 1 (1998) 9-12.

[2] Cox, I., Linnartz, J.: Some General Methods for Tampering with Watermarks. IEEE Journal of Selected Areas in Communications, 16 (4) (1998) 587-593.

[3] Zhu, B., Tewfik, A., Gerek, O.: Low Bitrate Near-Transparent Image Coding. SPIE Conf. on Wavelet Applications II, 2491 (1995) 173-184.

[4] Li, J., Michel, A.: Analysis and Synthesis of a Class of Neural Networks: Variable Structure Systems with Infinite Gain. IEEE Transactions on Circuits and Systems, 36 (5) (1989) 713-731.

[5] Davey, N., Hunt, S.: The Capacity and Attractor Basins of Associative Memory Models. In: Proc. 5th International Work-Conference on Artificial and Natural Neural Networks (1999) 340-357.

[6] McEliece, R., Posner, E., Rodemich, E., et al.: The Capacity of the Hopfield Associative Memory. IEEE Transactions on Information Theory, 33 (4) (1987) 461-482.

[7] Voloshynovskiy, S., Pereira, S., Iquise, V., Pun, T.: Attack Modeling: Towards a Second-Generation Benchmark. Signal Processing, 81 (6) (2001) 1177-1214.

[8] Pereira, S., Voloshynovskiy, S., Pun, T.: Optimal Transform Domain Watermark Embedding via Linear Programming. Signal Processing, 81 (6) (2001) 1251-1260.

[9] Voloshynovskiy, S., Deguillaume, F., Pun, T.: A Stochastic Approach to Content Adaptive Digital Image Watermarking. International Workshop on Information Hiding, Lecture Notes in Computer Science, 1768 (1999) 212-236.
