
P6 3:30 pm - 5:30 pm

Performance Comparison of Equalization and Low-pass Coding for Holographic Storage

Venkatesh Vadde and B.V.K. Vijaya Kumar Data Storage Systems Center, ECE Department,

Carnegie Mellon University, Pittsburgh, PA 15213 Tel: 972 894 4309 Fax: 972 894 4589

1. Introduction

Volume holographic optical storage has attracted a great deal of attention recently owing to its ability to provide parallel, high-throughput access to data recorded at high densities [1,2]. A common approach to optimizing the storage density in a holographic data storage system (HDSS) is to store several adjacent stacks of holograms, with a focal-plane aperture used to selectively read out holograms from a given stack. In general, employing smaller focal-plane apertures allows more closely spaced hologram stacks and thereby higher storage density [3]. However, the disadvantage of small apertures is that they introduce inter-pixel interference through optical diffraction. This is often called inter-symbol interference (ISI) and degrades the quality of read-back data, especially in the presence of other noise sources. There are two approaches to tackling the problem of ISI at high storage densities: digital equalization (post-processing) of the read-back data, or low-pass encoding of the data (pre-processing) through suitable modulation codes [4,5]. While both equalization and low-pass codes reduce the bit error rate (BER), they normally entail different amounts of data overhead. In this paper we perform a comparative investigation of the utility of equalization and low-pass codes for improving the data density in holographic storage systems.

2. Equalization and Low-pass Coding

A vast majority of the errors that occur during read-back under reasonable SNR conditions can be attributed to ISI. Equalization and low-pass coding are two popular approaches to handling the problem of ISI. In the former approach, equalization, ISI is permitted in the system and later corrected digitally. For instance, in zero-forcing equalization or Wiener filtering, the ISI in the read-back data is estimated and subtracted off. Such equalization is computationally simple, but it often leads to undesirable noise enhancement. Another type of equalization lets controlled amounts of ISI remain in the system, avoids noise amplification, and deals with the ISI during detection. This approach is referred to as partial response (PR) equalization. Since the ISI introduced is known, it can be accounted for during detection, thereby reducing ISI-induced errors. In practice, PR-equalized data is detected using the maximum-likelihood sequence detection (MLSD) technique, which is preferred because it is often optimal and yet computationally manageable. PR equalization with ML detection has been widely studied in data storage applications, and we adapt this method for 2D holographic data pages here. In this summary, we confine our results to PRML equalization.
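
To make the detection step concrete, the sketch below implements MLSD (Viterbi detection) for a binary (1+D) partial-response target, the target used later in our simulations. It is an illustrative, minimal implementation rather than the code used for the reported results; the noise level in the usage example and the assumed zero starting state are arbitrary choices for the example.

```python
import numpy as np

def viterbi_1plusD(y):
    """ML sequence detection for a (1+D) partial-response channel with
    binary {0,1} inputs: noiseless output y_k = x_k + x_{k-1}.
    Minimal sketch: two states track the previous bit, and branch metrics
    are squared Euclidean distances to the ideal samples {0, 1, 2}."""
    path_metric = np.array([0.0, np.inf])   # assume the channel starts in state 0
    paths = [[], []]                        # surviving bit sequences per state
    for sample in y:
        new_metric = np.full(2, np.inf)
        new_paths = [None, None]
        for prev in (0, 1):                 # previous bit (trellis state)
            if not np.isfinite(path_metric[prev]):
                continue
            for bit in (0, 1):              # candidate current bit
                expected = bit + prev       # ideal (1+D) sample
                metric = path_metric[prev] + (sample - expected) ** 2
                if metric < new_metric[bit]:
                    new_metric[bit] = metric
                    new_paths[bit] = paths[prev] + [bit]
        path_metric, paths = new_metric, new_paths
    return np.array(paths[int(np.argmin(path_metric))])

# Usage example: detect a short noisy (1+D) sequence (noise level arbitrary)
x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x + np.concatenate(([0], x[:-1])) + 0.2 * np.random.randn(len(x))
print(viterbi_1plusD(y))
```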

The other approach to tackling ISI is to encode the data so that some or all of the detrimental ISI patterns in the system are prohibited. It turns out that the more harmful ISI patterns usually tend to be substantially high-frequency in content, such as alternating 0s and 1s. The solution thus consists of employing low-pass codes to prevent such high-frequency patterns from occurring in the system. The name 'low-pass codes' derives directly from the property of these codes to reduce 'high-pass' content. As with other types of codes, the use of low-pass (LP) codes entails a data overhead related to the code rate. LP codes are designed to prevent ISI-induced errors from occurring by simply eliminating error-prone ISI patterns. As such, LP codes only require specific constraints to be satisfied by the user data before it is recorded in the system. LP-coded data can therefore be decoded by using a simple threshold.

3. Low-pass Codes Studied

We now enumerate the six different codes (LP Code1 - LP Code6) that we studied from a low-pass coding perspective. The severity of the constraint increases from LP Code1 to LP Code6. These codes, first investigated by Marcus and Ashley [5], are designed for 2D-ISI systems such as holographic memory systems. We studied codes that only impose constraints on the eight immediate neighbors of any given bit in the 2D bit-stream; thus, we restrict our focus to the 3x3-pixel neighborhood of any given pixel. The different constraints in the codes, along with the maximum achievable code rates, are summarized in Table 1. For the construction of the different low-pass codes, we have followed the method suggested by Ashley and Marcus [5], in which the data page is built by vertically stacking several strips of data. The advantage of this approach is that one can systematically design data strips rather than entire data pages at a time. By means of suitably defined strip constraints, we ensure that data strips can always be stacked on top of one another without violating any (horizontal or vertical) LP code constraints.



LP Code1: r = 1. Same as raw user data; no constraint.

LP Code2: r = 0.9. Prohibit the patterns

  1 1 1        0 0 0
  1 0 1   and  0 1 0
  1 1 1        0 0 0

LP Code3: r = 0.86. Prohibit the patterns

  x 1 x        x 0 x
  1 0 1   and  0 1 0
  x 1 x        x 0 x

LP Code4: r = 0.67. Prohibit the patterns

  x x x     x 1 x     x x x     x 0 x
  1 0 1  ,  x 0 1  ,  0 1 0  ,  x 1 0
  x 1 x     x 1 x     x 0 x     x 0 x

LP Code5: r = 0.47. Prohibit the patterns 101 and 010, both horizontally and vertically, in the 2D data grid.

LP Code6: r = 0.25. Over-sample every bit in the data grid four times (twice horizontally and twice vertically).

Table 1. Data constraints and maximum rate (r) for the six different low-pass codes studied. An 'x' indicates that the bit could be a 0 or a 1.
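
As an illustration of how a 3x3 constraint such as that of LP Code3 can be verified on a candidate data page, the sketch below flags any interior pixel whose four nearest neighbors are all its complement (the patterns prohibited in Table 1). This is a hypothetical checker written for exposition, not the encoder or strip-construction procedure used to generate the coded pages.

```python
import numpy as np

def violates_lp_code3(page):
    """Boolean mask of pixels violating the LP Code3 constraint: a 0 whose
    four nearest neighbours are all 1, or a 1 whose four nearest neighbours
    are all 0.  Illustrative sketch; boundary pixels are left unchecked."""
    p = np.asarray(page, dtype=int)
    mask = np.zeros_like(p, dtype=bool)
    c = p[1:-1, 1:-1]                        # interior pixels
    up, down = p[:-2, 1:-1], p[2:, 1:-1]
    left, right = p[1:-1, :-2], p[1:-1, 2:]
    mask[1:-1, 1:-1] = (up != c) & (down != c) & (left != c) & (right != c)
    return mask

# Usage example on a random (uncoded) page
page = np.random.randint(0, 2, (8, 8))
print("LP Code3 violations:", int(violates_lp_code3(page).sum()))
```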

4. Simulation Results

In order to study the two approaches to tackling ISI, we simulated several data pages with and without low-pass codes. We simulated data pages for the case of a 100% SLM fill factor and a 40% CCD fill factor with a sub-Nyquist aperture. The aperture size chosen was 0.85 W_N x W_N, where W_N is the Nyquist aperture width [6,7]. ISI was simulated considering optical diffraction from the 12 most interfering neighbor pixels. Two types of channels were considered: an optical-noise-dominated and an electronic-noise-dominated channel. Since our approach is to first simulate ISI and then add complex optical as well as electronic noise, we can vary the noise level independently at each stage of the simulation. For the study of equalization, we used uncoded (LP Code1) data pages. Since the chosen aperture has a channel spectrum that can be conveniently equalized to a partial response, we employed PR equalization followed by ML detection. The PR target chosen was a (1+D) target. Since PRML is usually applied to 1D channels, we first employed zero-forcing equalization to eliminate ISI along the columns of the page. This reduced the 2D ISI in the page to 1D ISI along the rows, which was then resolved using PRML. We will refer to this as ZF-PRML (i.e., ZF in y, PRML in x). For the LP-coded data pages, we simply employ threshold detection.
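
The sketch below illustrates the general structure of such a forward channel model: the binary page is blurred by a 2D point-spread function representing aperture diffraction, complex optical noise is added to the field, and electronic noise is added after square-law detection. The 3x3 kernel and noise levels are placeholders chosen for the example; they are not the diffraction-limited response of the 0.85 W_N aperture or the calibrated noise levels used in our simulations.

```python
import numpy as np
from scipy.signal import convolve2d

def simulate_readout(page, psf, sigma_opt=0.05, sigma_elec=0.02, rng=None):
    """Toy forward model of a holographic read-out channel.
    The 0/1 page is blurred by `psf` (inter-pixel interference), complex
    optical noise is added in the amplitude domain, and additive electronic
    noise is applied after square-law detection."""
    rng = np.random.default_rng() if rng is None else rng
    amplitude = convolve2d(page.astype(float), psf, mode="same")
    noise_opt = sigma_opt * (rng.standard_normal(page.shape)
                             + 1j * rng.standard_normal(page.shape))
    intensity = np.abs(amplitude + noise_opt) ** 2
    return intensity + sigma_elec * rng.standard_normal(page.shape)

# Hypothetical separable blur standing in for the sub-Nyquist aperture response
psf = np.outer([0.1, 0.8, 0.1], [0.1, 0.8, 0.1])
page = np.random.randint(0, 2, (64, 64))
readout = simulate_readout(page, psf)
```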

Fig. 1. (a) BER versus SNR for equalized and LP-coded data with an optical-noise-dominated channel. (b) The corresponding BER versus SNR curves for an electronic-noise-dominated channel.

Fig. 1 shows the BER of ZF-PRML equalized and LP-coded data plotted versus SNR, for the optical-noise-dominated and electronic-noise-dominated channels. In order to perform the density calculations, we first fix a target BER of 10^-3. With this target fixed, we compare the SNR required for coded or equalized data with a baseline SNR, i.e., that of raw data (no equalization or coding) using the Nyquist aperture. In Table 2, we list the SNR required after equalization or low-pass coding for a 10^-3 BER for the optical and electronic noise dominated channels. A code not listed is not capable of achieving the target BER of 10^-3 at any reasonable SNR. We first convert the SNR gain into an equivalent hologram density gain. In order to translate the SNR gain (or loss) into density results, we assume as usual that the SNR scales as 1/N for an optical-noise-dominated channel and as 1/N^2 for an electronic-noise-dominated channel [7], N being the number of holograms per stack. We then factor in the density gain due to the aperture size, and finally we account for the code rate, wherever needed.

SNR required for a target BER of 10^-3

              Optical noise channel   Electronic noise channel
  Baseline    18.57 dB                21.8 dB
  Equalized   15.31 dB                18.54 dB
  LP Code4    14.20 dB                15.94 dB
  LP Code5    13.47 dB                15.36 dB
  LP Code6    12.50 dB                14.63 dB

Table 2. SNR needed for a 10^-3 target BER for equalized and low-pass coded data.

First we note from Table 2 that, for the desired target BER, only ZF-PRML equalization and low-pass codes 4, 5 and 6 are contenders. For the equalized data and for LP Codes 4, 5 and 6, we obtain SNR gains of 3.26 dB, 4.37 dB, 5.1 dB and 6.07 dB, respectively, over the baseline case. Applying the 1/N scaling law, the 17.5% density gain due to the aperture size, and the code rates, these translate to density changes of 71%, 30%, -1% and -40% for the equalized data, LP Code4, LP Code5 and LP Code6, respectively. Thus, equalization appears to outperform LP codes in providing a density gain. For the electronic-noise-dominated case with the sub-Nyquist aperture chosen, the baseline SNR is 21.8 dB. We note again that LP Codes 2 and 3 are not contenders, as they cannot provide the 10^-3 target BER at any reasonable SNR. The SNR gains obtained with ZF-PRML equalization and LP Codes 4, 5 and 6 are 3.26 dB, 5.86 dB, 6.84 dB and 7.17 dB, respectively. Invoking the scaling laws, these translate to overall density changes of 42%, 10%, -20% and -55%. Consistent with the optical-noise-dominated case, equalization does better with a 42% density improvement, while the LP codes perform poorly, with the best code (LP Code4) providing only about a 10% improvement in density.
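
These density figures can be reproduced (to within a few percentage points of the rounded values quoted above) from the SNR gains, assuming the dB gains convert to hologram-count factors as 10^(dSNR/20) for the 1/N (optical-noise) channel and 10^(dSNR/40) for the 1/N^2 (electronic-noise) channel, together with the 17.5% aperture-related density gain and the code rates of Table 1. The snippet below is a worked check under those assumptions; the conversion exponents are inferred from the quoted numbers rather than stated explicitly in the text.

```python
# Worked check of the quoted density changes (assumptions noted in the text above)
aperture_gain = 1.175   # 17.5% density gain from the smaller aperture
rates = {"Equalized": 1.0, "LP Code4": 0.67, "LP Code5": 0.47, "LP Code6": 0.25}
gains_dB = {  # SNR gains over the baseline, taken from the text
    "optical":    {"Equalized": 3.26, "LP Code4": 4.37, "LP Code5": 5.10, "LP Code6": 6.07},
    "electronic": {"Equalized": 3.26, "LP Code4": 5.86, "LP Code5": 6.84, "LP Code6": 7.17},
}
for channel, exponent in (("optical", 20.0), ("electronic", 40.0)):
    for scheme, gain in gains_dB[channel].items():
        density = 10 ** (gain / exponent) * aperture_gain * rates[scheme]
        print(f"{channel:10s} {scheme:9s} density change = {100 * (density - 1):+.0f}%")
```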

Clearly, the dB gains in SNR possible with coding are impressive. However, the penalty paid in terms of data overhead (code-rate loss) is very high. For instance, while LP Code6 yields a 7.17 dB gain, it takes a 75% data overhead to achieve it. In comparison, equalization provides noticeable density gains while sacrificing very little in code rate. As a result, the overall ability to provide an effective density gain is greater for equalization than for low-pass codes. We note that the percentage density gains for the electronic-noise channel are smaller; this is because the dB gains translate to lower density gains owing to the 1/N^2 scaling of the SNR. We also investigated MMSE equalization in comparison to low-pass codes for the Nyquist aperture case. These and other results will be presented at the conference.

5. Conclusions

We evaluated the comparative use of equalization and low-pass codes to tackle the ISI problem in a 2D-ISI channel, specifically the holographic storage channel. We presented PRML equalization results alongside results for six different low-pass codes having various low-pass constraints. Detailed channel simulations were performed, and the SNR gains through equalization or coding were translated into equivalent density gains using standard scaling laws. We showed that, for both the optical-noise-dominated and electronic-noise-dominated cases, equalization tends to perform better than low-pass codes from the standpoint of overall density gain. This is mostly because the density gains of the low-pass codes were usually overwhelmed by the penalty paid in terms of code rate.

References

[1] J.H. Hong et al., Opt. Eng., 34, 8, 2193-2203 (1995).
[2] L. Hesselink and M.C. Bashaw, Opt. & Quant. Electron., 25, s611-s661 (1993).
[3] V. Vadde, B.V.K. Vijaya Kumar, G.W. Burr et al., Proc. SPIE, 3409, 194-200 (1998).
[4] G.W. Burr, J. Ashley, H. Coufal et al., Opt. Lett., 22, 9, 639-641 (1997).
[5] B.H. Marcus and J.J. Ashley, IEEE Trans. Comm., 46, 6, 724-727 (1998).
[6] M.-P. Bernal, G.W. Burr et al., Appl. Opt., 37, 5377-5385 (1998).
[7] V. Vadde and B.V.K. Vijaya Kumar, Appl. Opt., 38, 20, 4374-4386 (1999).
