NEXT GENERATION TECHNIQUES FOR ROBUST AND IMPERCEPTIBLE AUDIO DATA HIDING

Jim Chou, Kannan Ramchandran, and Antonio Ortega

University of California, Berkeley, Department of EECS, Berkeley, CA 94708
USC, Department of EE, Los Angeles, CA

ABSTRACT

In this work, we combine recent theoretical and algorithmic advances in the area of information hiding with the current mature knowledge base in the human audio perception system to propose a novel audio data-hiding technique that

significantly pushes the state of the art in the field. Our work is based on a combination of advances in two disjoint fields: information hiding and human auditory masking. The field of information hiding has recently seen a resurgence due to advances in the understanding of fundamental bounds from information theory. By integrating this with the human perceptual system knowledge that has been successfully exploited for several years in the audio compression community, we derive a new and improved audio data-hiding technique that finds application in a number of exciting scenarios such as music enhancement and digital communications over analog data channels. Our preliminary results show that we can embed data at an order of magnitude higher rate than existing audio data hiding systems, while being robust to channel noise.

Fig. 1. The data hiding problem.

1. INTRODUCTION

It is a well-known fact in the audio compression community that only a few bits per sample are needed to represent compact disc (CD) quality music. In fact, [1] pointed out that two to three bits per sample are usually sufficient for representing most genres of music. This implies that for uncompressed music, noise can be injected into the signal without it being perceptible to the end user. We utilize this fact, not for compression, but instead for hiding data in music. In particular, we will leverage recent promising work [2] in the field of data hiding to show how large amounts of data can be hidden in uncompressed audio signals. The method of data hiding is a constructive attempt at bridging the gap between what is currently available in data hiding technology and what is theoretically possible [3].

In the work of [3], bounds were given on the amount of data that can be hidden in a signal when the signal is i.i.d. Gaussian and the noise that the data is subjected to is a concatenation of the known signal and unknown i.i.d. Gaussian noise. In this work, we will formulate a method in which the audio signal can be modeled as a set of parallel Gaussian channels and show how data can be hidden in the audio signal in an imperceptible and robust fashion. We will then show how our method of audio data hiding can be extended to exciting applications ranging from embedding extra information onto CDs to increasing the throughput of existing analog communication channels. Current methods of audio data hiding [?] can embed a significant amount of information into audio signals but are typically not robust to channel noise, and are hence not applicable to the above applications. From our simulations, we will show that our method of audio data hiding achieves an embedding rate an order of magnitude above existing audio data hiding techniques and is also robust to channel noise.

    *Phillips and Microsoft Research.

2. GENERAL DATA HIDING

In general, the data hiding problem is formulated as follows (see [2]). The encoder has access to two signals: the information (an index set), M, to be embedded, and the signal that the information is to be embedded in. The output of the encoder, W, will then be subjected to random noise. The decoder will receive the corrupted encoded signal and will attempt to recover the embedded data. The goal, then, is to embed as much data as possible into the signal without altering the fidelity of the original signal. The fidelity constraint can be posed as a distortion constraint between the original signal and the encoded signal, where the distortion


Fig. 2. Audio data hiding system diagram.

measure can range from a Euclidean-based measure to some perceptual measure. Mathematically, the goal is to solve the following constrained minimization problem:

$$\min_{W} \|W - S\|^2$$
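The printed equation gives only the objective. A fuller statement consistent with the subcodebook construction of Section 3.2 might read as below; this is our interpretation, and the symbols $\mathcal{C}_M$ (the subcodebook indexed by the hidden message $M$) and $D$ (the distortion budget set by the perceptual mask) do not appear in the original text.

```latex
% Hedged restatement of the embedding rule: pick, within the subcodebook
% indexed by the message M, the codeword closest to the host signal S,
% while keeping the embedding distortion inside the perceptual budget D.
% (C_M and D are our notation, not the paper's.)
\begin{equation*}
  W^{\star} \;=\; \arg\min_{W \in \mathcal{C}_M} \|W - S\|^2
  \qquad \text{subject to} \qquad \|W - S\|^2 \le D .
\end{equation*}
```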


Fig. 3. Codebook represented as a lattice partition: the root codebook specifies the channel code. The leaves of the lattice partition specify the source code. The bits specifying a path to a leaf are the bits to be hidden in the data.

where the sum ranges over all groups of coefficients and the Gaussian noise that corrupts each group of coefficients has variance $N_i$.

3.2. Code Constructions

One method of encoding for attempting to achieve capacity (see (4)) entails generating a codebook and partitioning the codebook into subcodebooks. The data to be embedded will index a particular subcodebook, and this subcodebook will be used to encode a group of coefficients. In [9], it was shown that one could design a codebook to achieve capacity by distributing the codewords on a hyper-sphere of radius specified by two parameters: the variance of the Gaussian random variable in which the data is to be hidden, $\sigma_i^2$, and the power constraint $P_i$. In practice, the encoding complexity using such a codebook would be exponential and hence impractical. A more practical construction entails taking some code, C0, and partitioning it into subcodes using another code, C1. The union of C1 with its cosets will constitute the subcodes. The message should then have a one-to-one correspondence with the quotient group C0/C1. It was shown by Forney [11] that the partition can be done in accordance with error correction codes. In this case, the codewords of the error correction code will correspond to the subcode C1. The cosets of C1 will then correspond to the cosets of the codewords of the error correction code. As a result, we can represent the messages that are to be embedded in the data as the syndromes of the error correction code. There are computationally efficient ways to calculate the syndrome; hence our method of embedding data will be easy and cost effective to implement.
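As a concrete illustration of indexing messages by syndromes, the following is a minimal binary sketch of our own; it uses the parity-check matrix of a (3,1) repetition code and flips host bits, which is a simplification of (not the same as) the paper's real-valued lattice construction. The host bits are perturbed as little as possible so that their syndrome equals the message, and the decoder recovers the message simply by recomputing the syndrome.

```python
import numpy as np

# Toy parity-check matrix of a (3,1) repetition code: each group of
# 3 host bits carries a 2-bit message as its syndrome H @ x (mod 2).
H = np.array([[1, 1, 0],
              [1, 0, 1]], dtype=np.uint8)

def coset_leaders(H):
    """For every possible syndrome, find the minimum-weight pattern e with H e = syndrome."""
    n = H.shape[1]
    leaders = {}
    for bits in range(2 ** n):
        e = np.array([(bits >> i) & 1 for i in range(n)], dtype=np.uint8)
        s = tuple(H @ e % 2)
        if s not in leaders or e.sum() < leaders[s].sum():
            leaders[s] = e
    return leaders

LEADERS = coset_leaders(H)

def embed(host_bits, message_bits):
    """Flip as few host bits as possible so the syndrome equals the message."""
    x = np.asarray(host_bits, dtype=np.uint8)
    m = np.asarray(message_bits, dtype=np.uint8)
    change = (H @ x + m) % 2          # difference between current and target syndrome
    return (x + LEADERS[tuple(change)]) % 2

def extract(received_bits):
    """Decoder: the hidden message is just the syndrome of what it receives."""
    return H @ np.asarray(received_bits, dtype=np.uint8) % 2

host = np.array([1, 0, 1], dtype=np.uint8)   # e.g. sign bits of 3 coefficients (assumed host)
msg = np.array([0, 1], dtype=np.uint8)
stego = embed(host, msg)
assert np.array_equal(extract(stego), msg)   # message recovered when the channel is clean
print("host", host, "-> stego", stego, "carries", extract(stego))
```

In the paper's scheme the same coset-indexing idea operates on real-valued lattice points rather than bit flips, which is what allows the perturbation to be kept within the perceptual mask.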

In the above example, we considered a single partition of C0 using C1. This is easily generalized to multiple partitions by further partitioning C1 using another code C2, and partitioning C2 using C3. This process can continue indefinitely. The message to be embedded will then correspond to the syndrome associated with the group C0/C1/C2/C3/... The advantage of having multiple partitions is that a variable number of bits can easily be embedded into the data by using this method. In terms of choosing C0 and C1, C2, ..., we would like C0 to have a large shaping gain and a large coding gain, and C1, C2, ... to have a large granular gain and a large boundary gain [7]. For the interested reader, we provide general codebook constructions in [7].

As an example, we consider the case where C0 = Z, C1 = 2Z, C2 = 4Z, and so on, where Z is the integer lattice. The n-dimensional code will then simply be the product space of the above one-dimensional codes. A representation of this codebook for each dimension is given in Fig. 3. As can be seen from the figure, the codebook consists of lattice partitions which form a binary tree. The leaves of the tree represent the subcodebooks that are used for encoding the audio signal. The bits specifying the path to a particular leaf specify the bits to be hidden in the audio signal. And the root of the tree represents the composite channel codebook that is used at the decoder to decode the hidden bits. As an example, consider the case of a user wanting to hide the two bits (1, 1) in an audio coefficient; the user would then use the right-most codebook (see Fig. 3) to encode the audio coefficient, and transmit the encoded audio coefficient across the channel. The decoder would receive the encoded audio coefficient in addition to some noise and decode the received coefficient to the closest codeword (relative to some distortion metric) in the root codebook. The subcodebook containing the decoded codeword is then found, and the bits that specify the path leading to that subcodebook are declared to be the decoded data bits. One can observe from Fig. 3 that δ, the distance between codewords in the root codebook, governs the amount of noise that the decoder can tolerate from the channel and still recover the hidden bits successfully.
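A minimal numerical sketch of this scalar example (our own illustration, not the paper's implementation: the step size DELTA, the two-level tree depth, and the bit-to-coset labeling are all assumptions): two bits select one of the four cosets of 4Z in Z, the coefficient is quantized to the nearest point of that coset, and the decoder reads the bits back from the residue modulo 4 of the nearest root-codebook point.

```python
import numpy as np

# Assumed parameters for illustration: spacing DELTA between root-codebook
# points, and a two-level tree (Z ⊃ 2Z ⊃ 4Z), i.e. 2 hidden bits per coefficient.
DELTA = 0.5
LEVELS = 2              # tree depth -> bits hidden per coefficient
M = 2 ** LEVELS         # number of leaf subcodebooks (cosets of 4Z)

def embed(coeff, bits):
    """Quantize the host coefficient to the nearest point of the coset
    (M*Z + index) * DELTA, where index is given by the hidden bits."""
    index = int("".join(str(b) for b in bits), 2)
    k = np.round((coeff / DELTA - index) / M)      # nearest point of the selected coset
    return (k * M + index) * DELTA

def extract(received):
    """Decode to the nearest root-codebook point (spacing DELTA), then read
    off the coset index, i.e. the path to the leaf, as the hidden bits."""
    index = int(np.round(received / DELTA)) % M
    return [int(b) for b in format(index, f"0{LEVELS}b")]

coeff = 3.37                    # a host (e.g. wavelet) coefficient, value assumed
stego = embed(coeff, [1, 1])    # hide the bits (1, 1)
noisy = stego + 0.1             # channel noise smaller than DELTA / 2
print(stego, extract(noisy))    # -> bits (1, 1) recovered
```

As in the text, shrinking DELTA lets more levels fit under the same perceptual distortion budget, at the cost of less tolerance to channel noise.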

For AWGN channels, the probability of bit error can be found as

$$P = 2\,Q\!\left(\frac{\delta}{2\sqrt{N}}\right) \qquad (6)$$

where N represents the variance of the noise from the channel. For n samples, the probability of decoding error becomes:

$$P_n = 1 - (1 - P)^n \qquad (7)$$

Furthermore, one can deduce an estimate of the distortion that is introduced by calculating the expected distortion using the probability distribution of the quantization noise. One should note that if the channel does not introduce any noise, then δ can be arbitrarily small and the number of levels in the root codebook can be arbitrarily large. In this case, the lattice tree can be made infinite, and hence an infinite number of bits can be hidden within an audio coefficient while meeting the distortion constraint that is imposed by the perceptual mask! We are now equipped with a general method for hiding data in audio.
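For a feel of the numbers, a short computation under the reconstructed form of (6); the Q-function expression and the sample values of δ, N, and n below are our assumptions, not figures from the paper.

```python
from math import erfc, sqrt

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

delta = 0.5    # spacing of the root codebook (assumed)
N = 0.0025     # channel noise variance (assumed), i.e. noise std 0.05
n = 1000       # number of samples carrying hidden bits (assumed)

P = 2 * Q(delta / (2 * sqrt(N)))   # per-coefficient bit error, eq. (6) as reconstructed above
P_n = 1 - (1 - P) ** n             # probability of at least one error in n samples, eq. (7)
print(f"P = {P:.3e}, P_n = {P_n:.3e}")
```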


The general method for hiding data within audio can now be summarized by the following steps (refer to Fig. 2 and Fig. 3): (1) Choose a codebook for a group of wavelet coefficients, and encode this group of coefficients using the subcodebook that is specified by the data to be hidden in the coefficients. (2) Send the encoded coefficients across the channel. The decoder will receive the encoded coefficients and decode each group of coefficients using the composite codebook for that group of coefficients. One problem that the decoder will encounter is that it will not know which codebook was used for encoding which group of coefficients. If we use an n-dimensional lattice partition tree as our codebook, then the encoder can use different levels of the tree for encoding each group of coefficients and send the level of the tree as side information to the decoder. In general, this method of data hiding will require $\log_2(n)$ bits of side information per group of coefficients encoded. The throughput, representing the number of bits hidden within the audio coefficients, can be optimized in a rate-distortion sense similar to the work done in [10]. This throughput optimization, however, will also depend upon the amount of channel noise that the decoder is designed to tolerate.
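A rough end-to-end sketch of these two steps follows. Everything here is illustrative: the single masking threshold per group, the scalar coset embedding, the depth-selection rule, and the side-information accounting are our assumptions, not the paper's perceptual model or codebook.

```python
import numpy as np

DELTA = 0.5          # root-codebook spacing, fixed by the noise level to tolerate (assumed)
GROUP = 8            # coefficients per group (assumed)

def depth_for_mask(mask, delta=DELTA, max_depth=6):
    """Pick the tree depth (bits per coefficient) so that the quantizer step
    of the leaf coset, 2**depth * delta, stays within the masking threshold."""
    d = int(np.floor(np.log2(max(mask / delta, 1.0))))
    return int(np.clip(d, 0, max_depth))

def embed_group(coeffs, bits, depth, delta=DELTA):
    """Hide `depth` bits in each coefficient by snapping it to the coset
    (2**depth * Z + index) * delta selected by those bits."""
    out = []
    for i, c in enumerate(coeffs):
        chunk = bits[i * depth:(i + 1) * depth]
        index = int("".join(map(str, chunk)), 2) if depth else 0
        M = 2 ** depth
        k = np.round((c / delta - index) / M)
        out.append((k * M + index) * delta)
    return np.array(out)

rng = np.random.default_rng(0)
coeffs = rng.normal(0, 2.0, GROUP)          # stand-in for one wavelet subband group
mask = 4.0                                  # allowed quantizer step for this group (assumed)
depth = depth_for_mask(mask)                # tree level used, i.e. bits hidden per coefficient
payload = rng.integers(0, 2, GROUP * depth) # data to hide in this group
stego = embed_group(coeffs, list(payload), depth)

side_info_bits = int(np.ceil(np.log2(depth + 1)))  # tells the decoder which tree level was used
print(f"depth={depth}, payload={GROUP * depth} bits, side info={side_info_bits} bits")
```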

    4. APPLICATIONS

Up to now, we have only described the lattice-tree partition as a possible codebook to use for hiding data within audio. Using the principles of [11], one can design better codebooks using similar principles to the tree partition. For example, a trellis codebook that is partitioned into trellis subcodebooks can be designed in a manner similar to designing the lattice-tree partition. Using the lattice-tree partition, we found that we could hide 140 kbps of data within CD quality audio (44.1 kHz) without altering the quality of the audio. Furthermore, the hidden bits could be perfectly decoded given a signal-to-noise ratio (SNR) of 15 dB. These results, however, did not account for the side information necessary to specify to the decoder which codebook was used to encode which group of coefficients. Accounting for the side information, we found that we could successfully hide 100 kbps of data without altering the quality of the audio.

One possible application of our audio data hiding scheme is to hide data within CDs for quality enhancement. Another, more exciting, application is to hide data within analog communication channels. To do so, one would send the analog audio through an Analog-to-Digital (A/D) converter and feed the output of the A/D straight to the data hiding system described by Fig. 2. The output of the hiding system is then fed through a Digital-to-Analog (D/A) converter, and the output of the D/A is modulated onto the analog communications channel. This application is useful for users that want to receive extra data but do not have the requisite bandwidth for transmitting the extra data. Because we have targeted high-capacity robust data hiding, our method may be used to transmit significant amounts of extra information for various applications.

5. CONCLUSION

In this paper we have introduced a robust method of imperceptible audio data hiding. We developed a method of data hiding that represents the codebook as a tree structure and varies the height of the tree based on perceptual constraints given by the audio signal. Our method of audio data hiding can embed over 100 kbps of data in CD quality audio and still be robust to noise; this is significantly higher than existing audio data hiding techniques in the literature. As a result, we can employ our audio data hiding system in various applications to significantly improve performance.

6. REFERENCES

[1] J. Johnston, "Transform coding of audio signals using perceptual noise criteria," IEEE Journal on Selected Areas in Communications, vol. 6, no. 2, pp. 314-323, Feb. 1988.

[2] J. Chou, S. Sandeep Pradhan, L. El Ghaoui, and K. Ramchandran, "Solutions to the data hiding problem using distributed source coding principles," Proc. of ICIP, Vancouver, Sep. 2000.

[3] M. Costa, "Writing on dirty paper," IEEE Trans. on Information Theory, vol. 29, pp. 439-441, May 1983.

[4] C. Heegard and A. El Gamal, "On the capacity of computer memory with defects," IEEE Trans. on Information Theory, vol. 29, pp. 731-739, Sep. 1983.

[5] S. Gelfand and M. Pinsker, "Coding for channel with random parameters," Problems of Control and Information Theory, vol. 9, pp. 19-31, 1980.

[6] B. Chen and G. W. Wornell, "Preprocessed and postprocessed quantization index modulation methods for digital watermarking," Proc. SPIE Security and Watermarking of Multimedia Contents, vol. 3971, Jan. 1999.

[7] J. Chou, S. Sandeep Pradhan, and K. Ramchandran, "Methods of code construction for channel coding with side information," in preparation for submission to IEEE Trans. on Comm., 2000.

[8] International Standard, "Coding of moving pictures and associated audio," ISO/IEC 11172-3, Aug. 1993.

[9] S. Sandeep Pradhan, J. Chou, and K. Ramchandran, "On the duality between distributed source coding and channel coding with side information," submitted to IEEE Trans. on IT, 2000.

[10] P. Prandoni and M. Vetterli, "R/D optimal data hiding," Proc. of SPIE, Jan. 1999.

[11] G. Forney, "Coset codes - Part I: Introduction and geometrical classification," IEEE Trans. on Info Theory, vol. 34, no. 5, pp. 1123-1151, Sep. 1988.
