
Channel Coding for Quantum Key Distribution

Gottfried Lechner

[email protected]

Institute for Telecommunications Research, University of South Australia

July 7, 2011
HiPANQ Workshop, Vienna


Outline

Basics
- Channel Coding
- Slepian-Wolf Coding
- Binning and the Dual Channel
- Linear Block Codes, Syndrome and Rates

QKD Reconciliation
- System Setup
- Example
- Optimisation

Conclusions


Channel Coding

[Slide background: facsimile of C. E. Shannon, "Communication in the Presence of Noise," Proc. IRE, 1949, showing Fig. 1 (general communications system) and the statement of the sampling theorem.]

- transmit data reliably from source to destination
- maximise the code rate Rc such that the probability of error Pe < ε
- the maximum rate is given by the capacity of the channel C

Channel Coding Theorem [Shannon 1948]

For any ε > 0 and Rc < C, for large enough N, there exists a code of length N and rate Rc, and a decoding algorithm, such that the maximal probability of block error is less than ε.
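To make the rate condition concrete, here is a minimal sketch computing the capacity against which Rc is compared; it assumes a binary symmetric channel with an illustrative crossover probability, not a number from the talk.

```python
# Minimal sketch: capacity of a binary symmetric channel (BSC), the
# benchmark for the rate condition Rc < C. The crossover probability
# p = 0.05 is an arbitrary illustrative value.
import math

def h2(p: float) -> float:
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p = 0.05            # assumed crossover probability
C = 1 - h2(p)       # BSC capacity in bits per channel use
Rc = 0.5            # a candidate code rate
print(f"C = {C:.4f}; Rc = {Rc} is {'below' if Rc < C else 'not below'} capacity")
```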


Typical Approach

- choose a family of channels with a single parameter (e.g., AWGN, BSC, BEC, ...)
- fix a code rate
- optimise the code such that it achieves vanishing error probability close to capacity

[Figure: bit error rate vs. Eb/N0 for an irregular LDPC code under SPA, MSA with variable scaling, MSA with fixed scaling (0.60 and 0.70), and universal MSA; original caption: "Bit error rates for irregular code with and without post-processing."]


Slepian-Wolf Coding

[Slide background: facsimile of the first page of D. Slepian and J. K. Wolf, "Noiseless Coding of Correlated Information Sources," IEEE Transactions on Information Theory, vol. IT-19, no. 4, July 1973, with Fig. 1 (correlated source coding configuration) and Fig. 2 (admissible rate region).]

- transmit two correlated sources over two noiseless channels
- joint encoding and decoding: H(X,Y)
- separate encoding and decoding: H(X) + H(Y) ≥ H(X,Y)

Slepian-Wolf Theorem (1973)

The admissible rate region is given by the rate pairs satisfying

Rx ≥ H(X|Y)
Ry ≥ H(Y|X)
Rx + Ry ≥ H(X,Y)

There is no penalty if X and Y are encoded separately!
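A quick numerical check of the corner point used later in the talk; the sketch below assumes two uniform binary sources correlated through a BSC with an illustrative crossover probability q.

```python
# Sketch: Slepian-Wolf corner point (Rx, Ry) = (H(X|Y), H(Y)) for uniform
# binary X and Y = X xor E with E ~ Bernoulli(q). q = 0.1 is an assumed
# illustrative value.
import math

def h2(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

q = 0.1
H_Y = 1.0                 # Y is uniform, since X is uniform and the BSC is symmetric
H_X_given_Y = h2(q)       # remaining uncertainty about X when Y is known
H_XY = H_Y + H_X_given_Y  # chain rule: H(X,Y) = H(Y) + H(X|Y)

Rx, Ry = H_X_given_Y, H_Y # corner point: meets the sum-rate bound with equality
print(f"Rx = {Rx:.4f}, Ry = {Ry:.4f}, Rx + Ry = {Rx + Ry:.4f} = H(X,Y) = {H_XY:.4f}")
```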


Slepian-Wolf Coding


- assume that Y is transmitted at rate H(Y)
- we operate at a corner point of the Slepian-Wolf region
- for this corner point we can use the syndrome of a channel code as a binning scheme


Binning with Syndrome

[Figure: the space of length-N words partitioned into bins (Bin 1, Bin 2, Bin 3, ...), one bin per syndrome value.]

- encoding of X can be done by random binning
- the syndrome of a linear code is used for binning
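A minimal sketch of syndrome binning, using an arbitrary 2x4 toy parity-check matrix (an assumption for illustration, not a matrix from the talk): every syndrome value indexes one bin, the all-zero bin is the code itself, and all bins are cosets of equal size.

```python
# Sketch: binning via the syndrome s = x H^T of a toy linear code.
import itertools
import numpy as np

H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]], dtype=np.uint8)   # assumed toy matrix: M = 2, N = 4

bins = {}
for x in itertools.product((0, 1), repeat=H.shape[1]):
    s = tuple(H @ np.array(x, dtype=np.uint8) % 2)   # syndrome of x
    bins.setdefault(s, []).append(x)

for s, members in sorted(bins.items()):
    print(f"bin {s}: {members}")    # 2^M bins, each with 2^(N-M) = 4 members
```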


Dual Channel

Correlated sources:
- assume sources X and Y with P(X,Y) = P(X)P(Y|X)
- generate X according to P(X)
- transmit X over the channel P(Y|X) to obtain Y

What is the channel that is seen by the channel decoder?
- in general it is the dual channel, which is equal to neither P(Y|X) nor P(X|Y)
- the channel seen by the decoder is always a symmetric channel with uniform input
- therefore, linear codes can be used
- for the simple case of two binary sources correlated via a BSC, all these channels are the same
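For the BSC special case in the last bullet, a short sketch verifying that the backward channel P(X|Y) is again a BSC with the same crossover probability (q = 0.1 is an assumed illustrative value):

```python
# Sketch: with uniform X and Y = X xor E, E ~ Bernoulli(q), the backward
# channel P(X|Y) equals the forward channel P(Y|X): both are BSC(q).
from itertools import product

q = 0.1
P = {(x, y): 0.5 * (q if x != y else 1 - q) for x, y in product((0, 1), repeat=2)}

Py = {y: sum(P[x, y] for x in (0, 1)) for y in (0, 1)}             # marginal of Y
P_x_given_y = {(x, y): P[x, y] / Py[y] for x, y in product((0, 1), repeat=2)}
print(P_x_given_y)   # q where x != y, 1 - q where x == y: a BSC(q) again
```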


Linear Block Codes, Syndrome and Rates

- code: C = { x ∈ {0,1}^N : x H^T = 0 }
- code rate: Rc = (N − M)/N = 1 − M/N
- coset: Cs = { x ∈ {0,1}^N : x H^T = s }
- syndrome rate: Rs = M/N = 1 − Rc
- efficiency parameter: f = M / (N · H(X|Y)) = Rs / H(X|Y)
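A small sketch relating these quantities; N, M and the correlation are assumed illustrative values, not numbers from the talk.

```python
# Sketch: code rate, syndrome rate and reconciliation efficiency f.
import math

def h2(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

N, M = 2000, 600            # assumed code length and number of parity checks
Rc = (N - M) / N            # code rate = 0.7
Rs = M / N                  # syndrome rate = 1 - Rc = 0.3
H_X_given_Y = h2(0.05)      # e.g. X and Y correlated via a BSC(0.05)

f = Rs / H_X_given_Y        # disclosed bits per bit of remaining uncertainty
print(f"Rc = {Rc}, Rs = {Rs}, f = {f:.3f} (f = 1 is the Slepian-Wolf limit)")
```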


LDPC Codes

H =

0 0 0 0 0 0 0 0 1 0 1 0 1 0 1 0
0 0 0 0 0 0 1 1 0 1 0 1 0 0 0 0
0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0
1 1 0 1 0 0 0 0 0 0 0 0 0 1 0 0
1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0
0 0 0 0 1 0 0 0 0 1 0 0 1 1 0 0
0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 1
0 0 1 0 0 1 1 0 0 0 0 0 0 0 0 1

N = 16 (columns, code length), M = 8 (rows, parity checks), dv = 2 (variable node degree), dc = 4 (check node degree)
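The matrix above can be checked mechanically; this sketch rebuilds it and confirms the code is (dv = 2, dc = 4)-regular.

```python
# Sketch: verify N, M and the node degrees of the parity-check matrix above.
import numpy as np

rows = ["0000000010101010", "0000001101010000", "0110000100100000",
        "1101000000000100", "1000010000010010", "0000100001001100",
        "0001100010000001", "0010011000000001"]
H = np.array([[int(c) for c in r] for r in rows], dtype=np.uint8)

M, N = H.shape
print(N, M)                          # 16 8
print(set(H.sum(axis=0).tolist()))   # column weights {2}: dv = 2
print(set(H.sum(axis=1).tolist()))   # row weights {4}: dc = 4
```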


Outline

Basics
- Channel Coding
- Slepian-Wolf Coding
- Binning and the Dual Channel
- Linear Block Codes, Syndrome and Rates

QKD Reconciliation
- System Setup
- Example
- Optimisation

Conclusions


Quantum Key Distribution

[Figure: Alice and Bob connected by a quantum channel and a public channel; Alice observes X and Bob observes Y.]

- Alice and Bob generate a common key
- they communicate via a quantum channel and a public channel
- Eve attempts to gain knowledge of the key


System Setup

[Figure: Alice observes X and feeds it to an encoder; the encoder output travels over the public channel to Bob, who also holds Y.]

- the quantum channel creates a correlated source
- Alice observes X and Bob observes Y
- Alice has to communicate at least H(X|Y) over the public channel
- this corresponds to the corner point of the Slepian-Wolf region
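A hedged sketch of this reconciliation step: Alice discloses the syndrome of her bits over the public channel, and Bob decodes to the closest word in the indicated coset. The tiny 2x4 matrix H is an illustrative toy; real systems use long LDPC codes and iterative decoders rather than the exhaustive search below.

```python
# Sketch: syndrome-based reconciliation at the Slepian-Wolf corner point.
import itertools
import numpy as np

H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]], dtype=np.uint8)    # assumed toy matrix
M, N = H.shape
rng = np.random.default_rng(7)

x = rng.integers(0, 2, N, dtype=np.uint8)            # Alice's bits (from X)
y = x ^ (rng.random(N) < 0.1).astype(np.uint8)       # Bob's noisy copy (from Y)
s = H @ x % 2                                        # disclosed: M syndrome bits

coset = [np.array(c, dtype=np.uint8)
         for c in itertools.product((0, 1), repeat=N)
         if np.array_equal(H @ np.array(c, dtype=np.uint8) % 2, s)]
x_hat = min(coset, key=lambda c: int(np.sum(c ^ y)))  # ML decoding within the coset

print("Bob recovered Alice's string:", np.array_equal(x_hat, x))
```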


Aims

Aims of QKD:
- Alice and Bob want to create a common key
- the goal is to maximise the key generation rate
- this does not necessarily require error-free communication (as long as the errors are detectable)
- the key generation rate can be limited by
  - the quantum channel (quantum source)
  - the data rate over the public channel
  - the processing capabilities of Bob


Example

[Figure: two plots of word error rate (WER) versus syndrome rate rs, from 0.15 to 0.45, for Algorithm 1 and Algorithm 2; subsequent builds of the slide annotate the resulting key rate and decoding time.]


Optimisation Problem

- maximum achievable key rate: rk,max = fk(rs, pX,Y)
- word error probability: pe = fe(rs, pX,Y, A)
- decoding complexity: td = ft(rs, pX,Y, A)

Optimisation Problem

rk = max { rk,max · (1 − pe) }   subject to   td < td,max

where the maximisation is taken over
- 0 < rs < 0.5
- all decoding algorithms A
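A sketch of how such an optimisation might be set up as a grid search; the models fk, fe, ft and all numbers below are invented placeholders standing in for analysis or simulation, not results from the talk.

```python
# Sketch: grid search over syndrome rate rs and decoding algorithms A,
# maximising rk_max * (1 - pe) under a decoding-time budget td_max.
import itertools

def fk(rs, p):            # placeholder model: achievable key rate
    return max(0.0, 1.0 - rs - p)

def fe(rs, p, alg):       # placeholder model: word error rate
    return min(1.0, {"SPA": 0.01, "MSA": 0.05, "BMP": 0.20}[alg] / max(rs, 1e-9))

def ft(rs, p, alg):       # placeholder model: decoding time per word
    return {"SPA": 3.0, "MSA": 1.0, "BMP": 0.3}[alg]

p_xy, td_max = 0.05, 2.0
best = max(((fk(rs, p_xy) * (1 - fe(rs, p_xy, a)), rs, a)
            for rs, a in itertools.product(
                [0.05 * k for k in range(1, 10)],      # grid over 0 < rs < 0.5
                ["SPA", "MSA", "BMP"])
            if ft(rs, p_xy, a) < td_max),              # complexity constraint
           default=None)
print(best)   # (rk, rs, algorithm) achieving the constrained maximum
```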


Optimisation

The decoding algorithm can either be
- fixed
- chosen from a fixed set of algorithms
- adaptively changed during the decoding process (e.g., gear-shift decoding)

The coding rate can either be
- fixed
- chosen from a fixed set of rates (rate-compatible codes)
- adaptively changed during the decoding process (rateless codes)


Message-Passing Decoders

[Figure: Tanner graph segment with channel message Lch, variable-to-check messages Lvc,i and check-to-variable messages Lcv,i.]

Sum-Product Algorithm (SPA):

Lvc,i = Lch + Σ_{j=1..dv, j≠i} Lcv,j
Lcv,i = 2 tanh⁻¹( ∏_{j=1..dc, j≠i} tanh(Lvc,j / 2) )

Min-Sum Algorithm (MSA):

Lvc,i = Lch + Σ_{j=1..dv, j≠i} Lcv,j
Lcv,i = α · min_{j≠i} |Lvc,j| · ∏_{j=1..dc, j≠i} sign(Lvc,j)

Binary Message-Passing (BMP):

mvc,i = majority(mch, mcv,j)
mcv,i = xor_{j≠i} mvc,j
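A compact sketch of the MSA updates above, run on the toy matrix from the LDPC slide; the channel LLRs and the scaling factor α = 0.8 are assumed illustrative values.

```python
# Sketch: scaled min-sum decoding on the 8x16 toy parity-check matrix.
import numpy as np

rows = ["0000000010101010", "0000001101010000", "0110000100100000",
        "1101000000000100", "1000010000010010", "0000100001001100",
        "0001100010000001", "0010011000000001"]
H = np.array([[int(c) for c in r] for r in rows])
M, N = H.shape
alpha = 0.8                              # assumed scaling factor

rng = np.random.default_rng(0)
L_ch = rng.normal(2.0, 1.0, N)           # example channel LLRs (all-zero word sent)
Lvc = np.where(H == 1, L_ch, 0.0)        # initialise variable-to-check messages
Lcv = np.zeros((M, N))

for _ in range(10):                      # a fixed number of iterations
    for m in range(M):                   # check-node update (min-sum rule)
        idx = np.flatnonzero(H[m])
        for i in idx:
            others = idx[idx != i]
            Lcv[m, i] = (alpha * np.min(np.abs(Lvc[m, others]))
                         * np.prod(np.sign(Lvc[m, others])))
    for n in range(N):                   # variable-node update (sum rule)
        idx = np.flatnonzero(H[:, n])
        total = L_ch[n] + Lcv[idx, n].sum()
        for m in idx:
            Lvc[m, n] = total - Lcv[m, n]

x_hat = (L_ch + Lcv.sum(axis=0) < 0).astype(int)   # hard decision on total LLR
print("decoded word:", x_hat, "syndrome:", H @ x_hat % 2)
```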


Gear-Shift Decoding

[Slide: facsimile page from the paper cited below. Key idea: every gear-shifting sequence corresponds to a path in a trellis whose vertices are quantized message-error rates and whose edges are single iterations of the available decoding algorithms, weighted by their computational cost; the optimal sequence is the minimum-cost path, found by dynamic programming (e.g., the Viterbi algorithm). The convergence threshold of the gear-shift decoder is at least as good as the best threshold among the available algorithms.]

from M. Ardakani and F. R. Kschischang, "Gear-shift decoding," IEEE Transactions on Communications, vol. 54, no. 7, July 2006
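To illustrate the trellis search, here is a heavily simplified sketch; the quantization grid, the per-algorithm error-rate updates (standing in for EXIT-chart behaviour) and the costs are all invented placeholders, not values from the paper.

```python
# Sketch: cheapest gear-shift sequence as a shortest path over quantized
# message-error rates, relaxed dynamic-programming style.
import math

levels = [10 ** (-k / 4) for k in range(0, 25)]   # quantized error-rate grid

def quantize(p):
    return min(levels, key=lambda q: abs(math.log(q) - math.log(max(p, 1e-12))))

# algorithm -> (error-rate update per iteration, cost per iteration);
# cheap algorithms only make progress once the error rate is low enough.
algos = {
    "BMP": (lambda p: 0.7 * p if p < 0.05 else p, 1.0),
    "MSA": (lambda p: 0.6 * p if p < 0.08 else p, 3.0),
    "SPA": (lambda p: 0.4 * p, 10.0),
}

start, target = levels[0], levels[-1]
cost = {start: (0.0, [])}                    # state -> (total cost, path so far)
for _ in range(200):                         # bounded relaxation rounds
    for p, (c, path) in sorted(cost.items(), reverse=True):
        for name, (f, w) in algos.items():
            q = quantize(f(p))
            if q < p and (q not in cost or c + w < cost[q][0]):
                cost[q] = (c + w, path + [name])
print(cost.get(target, "unreachable"))       # cheapest sequence reaching target
```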


Fixed Rate vs Rateless

- error rate on the quantum channel known and block length large: transmit the syndrome over the public channel and discard the key if decoding is not successful (one bit of feedback)
- error rate on the quantum channel varies: not enough data on the public channel leads to a high error rate; too much data on the public channel reduces the key rate


Literature

Information Theory

David Slepian and Jack K. Wolf. Noiseless coding of correlated information sources. IEEE Transactions on Information Theory, 19(4):471-480, 1973.

Aaron D. Wyner. Recent results in the Shannon theory. IEEE Transactions on Information Theory, 20(1):2-10, 1974.

Jun Chen, Da-ke He, and Ashish Jagmohan. On the duality between Slepian-Wolf coding and channel coding under mismatched decoding. IEEE Transactions on Information Theory, 55(9):4006-4018, 2009.


Literature

Coding

Robert G. Gallager. Low-density parity-check codes. IRE Transactions on Information Theory, 8(1):21-28, 1962.

Michael Luby. LT codes. In IEEE Symposium on Foundations of Computer Science, pages 271-280, 2002.

Amin Shokrollahi. Raptor codes. IEEE Transactions on Information Theory, 52(6):2551-2567, 2006.

T. Richardson and R. Urbanke. Modern Coding Theory. Cambridge University Press, 2008.


Literature

QKD Basics

Gilles Brassard and Louis Salvail. Secret-key reconciliation by public discussion. In Advances in Cryptology - EUROCRYPT '93, pages 410-423, 1994.

Tomohiro Sugimoto and Kouichi Yamazaki. A study on secret key reconciliation protocol "Cascade". IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E83-A:1987-1991, 2000.

W. T. Buttler, S. K. Lamoreaux, J. R. Torgerson, G. H. Nickel, C. H. Donahue, and C. G. Peterson. Fast, efficient error reconciliation for quantum cryptography. arXiv, quant-ph, 2002.

Hao Yan, Xiang Peng, Xiaxiang Lin, Wei Jiang, Tian Liu, and Hong Guo. Efficiency of Winnow protocol in secret key reconciliation. In Computer Science and Information Engineering, 2009 WRI World Congress on, volume 3, pages 238-242, 2009.


Literature

Coding for QKD (non-exhaustive)

David Elkouss, Anthony Leverrier, Romain Alleaume, and Joseph J. Boutros. Efficient reconciliation protocol for discrete-variable quantum key distribution. In International Symposium on Information Theory, pages 1879-1883, 2009.

David Elkouss, Jesus Martinez-Mateo, Daniel Lancho, and Vicente Martin. Rate compatible protocol for information reconciliation: An application to QKD. In Information Theory Workshop (ITW), 2010 IEEE, pages 1-5, 2010.

David Elkouss, Jesus Martinez-Mateo, and Vicente Martin. Efficient reconciliation with rate adaptive codes in quantum key distribution. arXiv, quant-ph, 2010.

David Elkouss, Jesus Martinez-Mateo, and Vicente Martin. Secure rate-adaptive reconciliation. In Information Theory and its Applications (ISITA), 2010 International Symposium on, pages 179-184, 2010.

Kenta Kasai, Ryutaroh Matsumoto, and Kohichi Sakaniwa. Information reconciliation for QKD with rate-compatible non-binary LDPC codes. In Information Theory and its Applications (ISITA), 2010 International Symposium on, pages 922-927, 2010.

Jesus Martinez-Mateo, David Elkouss, and Vicente Martin. Interactive reconciliation with low-density parity-check codes. In Turbo Codes and Iterative Information Processing (ISTC), 2010 6th International Symposium on, pages 270-274, 2010.


Literature

Implementation (non-exhaustive)

Chip Elliott, Alexander Colvin, David Pearson, Oleksiy Pikalo, John Schlafer, and Henry Yeh. Current status of the DARPA quantum network. arXiv, 2005.

Jerome Lodewyck, Matthieu Bloch, Raul Garcia-Patron, Simon Fossier, Evgueni Karpov, Eleni Diamanti, Thierry Debuisschert, Nicolas J. Cerf, Rosa Tualle-Brouri, Steven W. McLaughlin, and Philippe Grangier. Quantum key distribution over 25 km with an all-fiber continuous-variable system. arXiv, quant-ph, 2007.

Simon Fossier, Eleni Diamanti, Thierry Debuisschert, Andre Villing, Rosa Tualle-Brouri, and Philippe Grangier. Field test of a continuous-variable quantum key distribution prototype. arXiv, quant-ph, 2008.

Simon Fossier, J. Lodewyck, Eleni Diamanti, Matthieu Bloch, Thierry Debuisschert, Rosa Tualle-Brouri, and Philippe Grangier. Quantum key distribution over 25 km, using a fiber setup based on continuous variables. In Lasers and Electro-Optics, 2008 and 2008 Conference on Quantum Electronics and Laser Science (CLEO/QELS 2008), pages 1-2, 2008.

A. Dixon, Z. Yuan, J. Dynes, A. Sharpe, and Andrew Shields. Megabit per second quantum key distribution using practical InGaAs APDs. In Lasers and Electro-Optics, 2009 and 2009 Conference on Quantum Electronics and Laser Science (CLEO/QELS 2009), pages 1-2, 2009.


Conclusions

- Reconciliation for QKD is a Slepian-Wolf coding problem (in a corner point)
- linear codes are sufficient for the optimal solution
- maximising the key rate is not necessarily equivalent to minimising the error rate
- complexity constraints may lead to a non-trivial optimisation problem to find the best codes and decoding algorithms
- rate-adaptive or rateless schemes might be necessary in cases where the error rate on the quantum channel is unknown
