2005 International Conference on Neural Networks and Brain, Beijing, China, 13-15 Oct. 2005 - A Blind Watermarking Algorithm Based on Neural Network



A Blind Watermarking Algorithm Based on Neural Network

Xinhong Zhang and Fan Zhang
College of Computer and Information Engineering
Henan University, Kaifeng 475001, P. R. China
E-mail: hnkfzxh@163.com

Abstract-A blind digital watermarking algorithm based on a Hopfield neural network is proposed. The host image and the original watermark are stored by the Hopfield neural network. The noise visibility function (NVF) is used for adaptive watermark embedding. In watermark extraction, the host image and the original watermark are retrieved by the neural network. The experimental results show that this watermarking algorithm performs well.


One of the biggest technological events of the last two decades has been the invasion of digital media into almost every aspect of everyday life. As digital audio, video, images, and multimedia documents reach an ever-expanding consumer base, their domination of entertainment, the arts, education, etc. is just a matter of time. Digital data can be stored efficiently and with very high quality, can be manipulated easily using computers, and can be transmitted in a fast and inexpensive way through data communication networks [1], [2].

Digital watermarking is based on the science of steganography, or data hiding. Steganography comes from the Greek for 'covered writing'; it is the study of communicating in a hidden manner. Steganography and watermarking rely on the imperfections of human senses. The eyes and ears are not perfect detectors and cannot detect minor changes, so they can be tricked into thinking two images or sounds are identical when they actually differ, for example in luminance or by a frequency shift. The human eye has a limited dynamic range, so low-quality images can be hidden within other high-quality images [3]. In blind watermarking, the watermark is detected or extracted without the original image; in non-blind watermarking, the original image is needed during watermark detection or extraction.

Attractor networks such as Hopfield networks are used as auto-associative, content-addressable memories. The aim of such networks is to retrieve a previously learnt pattern from an example that is similar to, or a noisy version of, one of the previously presented patterns. To do this, the network associates each element of a pattern with a binary neuron. These neurons are fully connected and are updated asynchronously and in parallel. They are initialized with an input pattern, and the network activations converge to the closest learnt pattern.

A Hopfield neural network can function as an associative memory: the patterns are stored as dynamical attractors, and the network has error-correcting capability. The basin of attraction of a pattern is the set of points in the state space of the system that flow to it. The radius of the attraction basin is defined as the largest Hamming distance within which almost all states flow to the pattern. For a trained network, the average attraction radius of the stored patterns gives a measure of the network's completion capability [4].
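The storage and recall behaviour described above can be sketched with a minimal discrete Hopfield network. This is a generic illustration (Hebbian storage and synchronous sign updates, whereas the paper's network is updated asynchronously), not the paper's exact implementation:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: store bipolar (+1/-1) patterns in a weight matrix."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, state, max_steps=10):
    """Iterate sign updates until the state settles on a stored attractor."""
    for _ in range(max_steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

# Store one 16-element pattern, then recall it from a noisy probe.
rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=16)
W = train_hopfield(pattern[None, :])
probe = pattern.copy()
probe[:2] *= -1  # flip 2 of 16 bits to simulate noise
restored = recall(W, probe.astype(float))
```

Because only a fraction of the bits are corrupted, the probe lies inside the stored pattern's basin of attraction and the network restores the original pattern.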

Patterns must not only be stored but must also be recallable. The recall process involves moving from the network's start state to some fixed state. The basin of attraction of a fixed point is defined to be the set of states that are attracted to that fixed point. Different learning rules produce different basins of attraction. Clearly, capacity and attraction basins are related. Here we consider what extra information about the performance of a learning rule can be gained from the study of attraction basins [5], [6].

Large attraction basins are desirable because more states are then attracted to stored patterns rather than to spurious fixed points. Round attraction basins make the network more predictable: whether or not a state is attracted to a stored pattern depends reliably on its Hamming distance from that pattern. An even distribution of basin sizes also increases predictability and ensures there is no large bias against any particular pattern.

This paper presents a blind digital watermarking algorithm based on a Hopfield neural network. The host image and the original watermark are stored by the Hopfield neural network. The noise visibility function is used for adaptive watermark embedding. In watermark extraction, the host image and the original watermark are retrieved by the neural network.

The rest of this paper is organized as follows. In section II, we introduce a method to compute the maximum allowable watermark amplitude of each wavelet coefficient while keeping the watermark invisible, and we present a blind watermarking algorithm using a Hopfield neural network. The experimental results are shown in section III. The conclusions of this paper are drawn in section IV.

0-7803-9422-4/05/$20.00 2005 IEEE


Human Vision System (HVS) models have been studied for many years. These works describe human vision mechanisms such as spatial frequency orientation, sensitivity to local contrast, adaptation, and masking. The Noise Visibility Function (NVF) is a function that characterizes local image properties; it identifies the texture and edge regions of an image, where the watermark should be more strongly embedded [7], [8]. It can be used in either the spatial domain or the wavelet domain.

The Noise Visibility Function is derived from a statistical model of images. Compared with other perceptual models based on human experience, the NVF is more amenable to theoretical analysis, such as the analysis of watermarking capacity. We think the NVF is the best bridge connecting watermarking capacity to the content of images.

There are two kinds of NVF, based on either a non-stationary Gaussian model of the image or a stationary Generalized Gaussian model [9]. Because the watermarking problem is closely related to the local properties of the image, we believe the NVF based on the non-stationary Gaussian model is better suited to it. Assuming the host image is a Gaussian process, the NVF can be written as:

NVF(i, j) = w(i, j)σ_n² / (w(i, j)σ_n² + σ_x²(i, j))    (1)

where σ_n² is the noise variance, σ_x²(i, j) is the local variance of the image in a window centered on the pixel with coordinates (i, j), and w(i, j) is a weighting function that equals 1 under the non-stationary Gaussian model.
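A minimal sketch of computing the NVF of Eq. (1), assuming w(i, j) = 1 (the non-stationary Gaussian case), a 5 × 5 local-variance window, and noise variance 4 as used later in the experiments; the padding mode is an illustrative choice:

```python
import numpy as np

def nvf(image, window=5, noise_var=4.0):
    """Noise visibility function of Eq. (1) with w(i, j) = 1:
    close to 1 in flat regions, small in textured/edge regions."""
    img = image.astype(float)
    pad = window // 2
    padded = np.pad(img, pad, mode="reflect")
    local_var = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            local_var[i, j] = padded[i:i + window, j:j + window].var()
    return noise_var / (noise_var + local_var)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16)).astype(float)
visibility = nvf(img)  # values in (0, 1]; embed more strongly where NVF is small
```

A small NVF value marks a textured or edge region, where a larger watermark amplitude can be hidden; flat regions give values near 1 and tolerate only a weak watermark.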

host image with the stego image, and extract the watermark according to a threshold. Finally, we compare the associative WBP with the extracted watermark, and judge whether the watermark exists in the image using a correlation test.

Project two. Eight bit planes are stored. Then we extract a message digest from the weight matrix of the neural network using a Hash function. In our experiments, the Hash algorithm is the Secure Hash Algorithm (SHA), which extracts a 160-bit message digest. Next, we construct a pseudo-random sequence using the Hash message digest as the seed; the length of the pseudo-random sequence equals the number of image pixels. According to the value of the pseudo-random sequence, we decide whether a pixel is modified: 0 means the pixel is not modified and 1 means it is modified. Eq. 4 decides the amplitude. In watermark extraction, the neural network associatively recalls the host image, then calculates the watermark from the network weight matrix using the Hash function and the pseudo-random sequence generator. We compare the associative host image with the stego image and extract the watermark. Finally, we judge whether the watermark exists in the image using a correlation test.
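The digest-to-mask pipeline described above can be sketched as follows. SHA-1 is used here because the paper specifies a 160-bit SHA digest; seeding Python's PRNG directly with the digest bytes is an illustrative choice, not the paper's exact generator:

```python
import hashlib
import random
import numpy as np

def selection_mask(weights, num_pixels):
    """SHA-1 digest of the weight matrix -> PRNG seed -> 0/1 mask:
    1 means the corresponding pixel may be modified, 0 means it is left alone."""
    digest = hashlib.sha1(weights.tobytes()).digest()  # 160-bit digest
    prng = random.Random(digest)                       # digest bytes as seed
    return np.array([prng.randint(0, 1) for _ in range(num_pixels)])

W = np.arange(9.0).reshape(3, 3)   # stand-in for the trained weight matrix
mask = selection_mask(W, 16)
same = selection_mask(W, 16)       # reproducible: same weights give the same mask
```

Because the mask is derived deterministically from the weight matrix, the extractor can regenerate the same pseudo-random sequence from the recalled network without any side channel.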

Fig. 2. Original Lena image (a) and its stego image (b); PSNR is 31.87 dB.

Fig. 3. The watermarked Baboon image (a) and the reconstruction image (b).


In the experiments, three 256 × 256 standard test images, Baboon, Peppers, and Lena, are used. The noise is assumed to be white Gaussian noise with variance equal to 4. A discrete Hopfield neural network is used in our experiments. If the number of neurons equals the number of pixels, the computation becomes very complex, so we reduce the dimension of the neural network to reduce the computational complexity. The NVF is calculated in a local image region, a window centered on a pixel. We divide the image into many non-overlapping regions according to the window size of the NVF, and each region corresponds to one neuron. If the NVF calculation region is a 5 × 5 window, then a 256 × 256 image can be divided into 51 × 51 regions, and the dimension of the neural network is 51 × 51. According to Eq. 4, we can calculate the maximum allowable watermark amplitude of each wavelet coefficient while keeping the watermark invisible.
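The partition into non-overlapping NVF windows can be sketched as follows. Dropping the trailing pixels beyond a whole number of windows reproduces the 51 × 51 figure for a 256 × 256 image and a 5 × 5 window; how the paper handles the remaining border pixels is not stated:

```python
import numpy as np

def image_to_regions(image, window=5):
    """Partition the image into non-overlapping window x window regions;
    each region maps to one neuron of the reduced network."""
    h, w = image.shape
    rows, cols = h // window, w // window
    cropped = image[:rows * window, :cols * window]
    return cropped.reshape(rows, window, cols, window).swapaxes(1, 2)

img = np.zeros((256, 256))
regions = image_to_regions(img)  # shape (51, 51, 5, 5): one region per neuron
```

This reduces the network from 256 × 256 = 65,536 neurons to 51 × 51 = 2,601, which is what makes the Hopfield storage computationally tractable.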

In watermark extraction, we use as probe patterns the stego image, the noised stego image, the host image, and the noised host image, respectively. We ran 1200 experiments: one hundred trials for each test image under each of the above conditions. Table 1 shows the results of our experiments: the number of positive detections (presence of the watermark) out of one hundred trials per test image under the different conditions. The experimental results show that this watermarking algorithm performs well. Fig. 2 shows the original Lena image and its stego image; the peak signal-to-noise ratio (PSNR) is 31.87 dB. The watermarked Baboon image and its reconstruction are shown in Fig. 3.
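The correlation test used for the presence decision can be sketched as a normalized correlation compared against a threshold; the threshold value 0.5 here is an assumed illustration, since the paper does not state its decision threshold:

```python
import numpy as np

def detect(extracted, original, threshold=0.5):
    """Normalized correlation between the extracted and original watermark;
    the watermark is declared present when it exceeds the threshold."""
    e = extracted.astype(float).ravel()
    o = original.astype(float).ravel()
    corr = float(np.dot(e, o) / (np.linalg.norm(e) * np.linalg.norm(o)))
    return corr, corr > threshold

wm = np.sign(np.random.default_rng(1).standard_normal(64))
corr, present = detect(wm, wm)  # identical watermark: corr == 1.0
```

For a bipolar watermark, an unrelated probe gives a correlation near 0, so the threshold separates genuine detections from chance matches.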


This paper presents a blind digital watermarking algorithm based on a Hopfield neural network. The host image and the original watermark are stored by the Hopfield neural network. The noise visibility function is used for adaptive watermark embedding. In watermark extraction, the host image and the original watermark are retrieved by the neural network.


[1] Voyatzis, G., Nikolaidis, N., Pitas, I., Digital Watermarking: An Overview. IX European Signal Processing Conference (EUSIPCO 98), 1 (1998) 9-12.

[2] Cox, I., Linnartz, J., Some General Methods for Tampering with Watermarks. IEEE Journal of Selected Areas in Communications, 16 (4) (1998) 587-593.

[3] Zhu, B., Tewfik, A., Gerek, O., Low Bitrate Near-Transparent Image Coding. SPIE Conf. on Wavelet Applications II, 2491 (1995) 173-184.

[4] Li, J., Michel, A., Analysis and synthesis of a class of NN: variable structure systems with infinite gain. IEEE Transactions on Circuits and Systems, 36 (5) (1989) 713-731.

[5] Davey, N., Hunt, S., The Capacity and Attractor Basins of Associative Memory Models. In: Proc. 5th International Work-Conference on Artificial and Natural Neural Networks, (1999) 340-357.

[6] McEliece, R., Posner, C., Rodemich, R., et al., The capacity of the Hopfield associative memory. IEEE Transactions on Information Theory, 33 (4) (1987) 461-482.

[7] Voloshynovskiy, S., Pereira, S., Iquise, V., Pun, T., Attack modeling: Towards a second-generation benchmark. Signal Processing, 81 (6) (2001) 1177-1214.

[8] Pereira, S., Voloshynovskiy, S., Pun, T., Optimal transform domain watermark embedding via linear programming. Signal Processing, 81 (6) (2001) 1251-1260.

[9] Voloshynovskiy, S., Deguillaume, F., Pun, T., A stochastic approach to content adaptive digital image watermarking. International Workshop on Information Hiding, Lecture Notes in Computer Science, 1768 (1999) 212-236.


