Teaching the Sampling Theorem
Bruce Francis
Electrical and Computer Engineering
University of Toronto
Toronto, Ontario, Canada, M5S
[email protected]
December 29, 2010
Abstract

This note reviews the sampling theorem and the difficulties involved in teaching it.
1 Introduction
The sampling theorem—a bandlimited continuous-time signal can be reconstructed from its sample values provided the sampling rate is greater than twice the highest frequency in the signal—is taught in every Signals and Systems course because of its fundamental nature. This note reviews two possible ways of teaching it, and discusses the pros and cons of these ways.
The notation is standard: xc(t) is a continuous-time real-valued signal, where t ranges over the whole infinite time interval (−∞,∞); Xc(jω) is the Fourier transform of xc(t), where ω ranges over (−∞,∞); T is the sampling period in seconds, fs = 1/T the sampling frequency in Hz, ωs = 2πfs the sampling frequency in rad/s, and ωN = ωs/2 the Nyquist frequency in rad/s; xc(nT) are the sample values, where n runs over all integers, positive, negative, and 0; xc(nT) is denoted by x[n]; and X(e^{jω}) is the discrete-time Fourier transform of x[n], where here ω ranges over (−π, π).
2 Aliasing
One common way to introduce the sampling theorem is to show a graph of a sinusoid and show that if you undersample, there's a lower-frequency sinusoid that interpolates the same sample points. In the figure just below, the signal sin(1.2πt) of frequency 0.6 Hz is sampled at only 1 Hz (instead of greater than 1.2 Hz), and the "aliased signal" sin(−0.8πt) goes through the same sample values, shown as black dots:
[Figure: graphs of sin(1.2πt) and sin(−0.8πt) over 0 ≤ t ≤ 5, agreeing at the integer sample instants, shown as black dots.]
This gives an intuitive idea of the concept of aliasing. There being no aliasing is a sufficient condition for being able to reconstruct the signal. However, there are other sufficient conditions, so this example doesn't tell the whole story.
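The coincidence of the sample values can be checked directly; a minimal sketch in Python (numpy), using the same two sinusoids and 1 Hz sampling:

```python
import numpy as np

n = np.arange(0, 6)              # sample instants at 1 Hz: t = 0, 1, ..., 5
s1 = np.sin(1.2 * np.pi * n)     # the 0.6 Hz sinusoid, undersampled
s2 = np.sin(-0.8 * np.pi * n)    # its alias

print(np.allclose(s1, s2))       # True: identical sample values
```

The two frequencies differ by exactly the sampling rate (1.2π − (−0.8π) = 2π per sample), which is why the samples coincide.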
3 The Sampling Problem is Linear
The problem of reconstructing a signal from samples can be posed as a linear mathematical problem: sampling is a linear projection of the continuous-time signal, and the bandwidth constraint is a subspace condition. Possible lecture notes follow.
1. Suppose voltage v and current i are involved in an electric circuit, and suppose you measure the value of v. Can you say what the value of i is? Not unless you know how v and i are related, for example v = Ri where you know R.
Let's think of this geometrically: x = (v, i) is a point in the plane. If you know that v = Ri and you know R, then you know that x lies on a specific line L through the origin. Measuring v is a form of sampling the vector x—projecting it onto the horizontal axis. Then knowing x ∈ L and measuring the horizontal component v of x allows you to determine x uniquely, i.e., find the vertical component i.
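This two-dimensional example is easy to try in code. A minimal sketch (numpy), with hypothetical values R = 2 and measured v = 6 that are not in the text:

```python
import numpy as np

# Stack the subspace condition v - R i = 0 (R = 2) with the measurement v = 6.
A = np.array([[1.0, -2.0],    # the line L: v - 2 i = 0
              [1.0,  0.0]])   # the projection: measure v
b = np.array([0.0, 6.0])

x = np.linalg.solve(A, b)     # x = (v, i)
print(x)                      # [6. 3.]
```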
2. Here’s a somewhat larger example. The “signal” is a vector x in R6:
x = (x0, x1, x2, x3, x4, x5).
Suppose you know that the signal lives in the subspace B defined by the three equations
x1 + 2x2 + x4 = 0
−2x0 − x1 + x5 = 0
x0 − 3x4 + x5 = 0.
You sample the signal in this way: You measure that
x0 = 1, x2 = −2, x4 = 3.
You want to find the missing components of x. It's easy, because you now have 6 linear equations for the 6 variables. As long as the equations are linearly independent, they uniquely determine the 6 variables. Let us write the 6 equations in the form Ax = b:
[  0  1  2  0  1  0 ] [ x0 ]   [  0 ]
[ −2 −1  0  0  0  1 ] [ x1 ]   [  0 ]
[  1  0  0  0 −3  1 ] [ x2 ] = [  0 ]
[  1  0  0  0  0  0 ] [ x3 ]   [  1 ]
[  0  0  1  0  0  0 ] [ x4 ]   [ −2 ]
[  0  0  0  0  1  0 ] [ x5 ]   [  3 ]
You can solve by Gaussian elimination, or x = A⁻¹b, or however you like.
3. Let us rephrase the problem in a slightly different way. The given subspace condition is Bx = 0, where B is the matrix
B = [  0  1  2  0  1  0 ]
    [ −2 −1  0  0  0  1 ]
    [  1  0  0  0 −3  1 ].
You measure y = Dx, where

y = [  1 ]       [ 1 0 0 0 0 0 ]
    [ −2 ] , D = [ 0 0 1 0 0 0 ]
    [  3 ]       [ 0 0 0 0 1 0 ].
The problem is this: Given B, D, y and the equations Bx = 0, Dx = y, compute x. The equation to solve is

[ B ]     [ 0 ]
[ D ] x = [ y ],

where the zero on the right-hand side is the 3 × 1 zero vector.
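The stacked-system recipe can be tried numerically. A sketch in Python (numpy) with a randomly generated subspace matrix B — hypothetical data, not the numbers above, chosen so that the stacked matrix is invertible:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a random 3x6 matrix B defines the subspace Bx = 0,
# and D samples the components x0, x2, x4.
B = rng.standard_normal((3, 6))
D = np.zeros((3, 6))
D[0, 0] = D[1, 2] = D[2, 4] = 1.0

# Pick a "true" signal inside the subspace: the last three right singular
# vectors of B span its nullspace.
nullbasis = np.linalg.svd(B)[2][3:].T           # 6x3, columns span null(B)
x_true = nullbasis @ np.array([1.0, -2.0, 0.5])

y = D @ x_true                                  # the measurements

# Stack Bx = 0 and Dx = y into one 6x6 system and solve.
A = np.vstack([B, D])
b = np.concatenate([np.zeros(3), y])
x_hat = np.linalg.solve(A, b)
print(np.allclose(x_hat, x_true))               # True
```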
4. For completeness: a subspace B is by definition a nonempty subset of a vector space with the properties 1) if x, y are in B, so is x + y; 2) if x is in B and c is a real number, then cx is in B. The subspaces of R3 are the lines and planes through the origin, together with the origin and finally R3 itself. In the preceding example, the subspace B is the nullspace of B. (The nullspace is the solution space of the homogeneous equation Bx = 0.)
4 The Sampling Theorem in Discrete Time
At first glance, the task in the sampling theorem seems hopeless: interpolating a continuum of values from only countably many? The discrete-time version seems less daunting: given every other sample, compute the skipped ones.
The following note assumes students know the discrete-time Fourier transform and the z transform.
1. The signal x[n] is defined for all −∞ < n < ∞. It is downsampled by 2, y[n] = x[2n]:
[Figure: stem plots of x[n] and of the downsampled signal y[n] versus n.]
The components y[n] are measured and the task is to compute all the sample values of x[n] that weren't measured. The block diagram model is

x[n] --→ [↓2] --→ y[n]
where an arrow made of dashes indicates a discrete-time signal. We want to find the input given the output, i.e., we want another system with input y and output x:

x[n] --→ [↓2] --→ y[n] --→ [?] --→ x[n]
So again we're given a projection of x[n], namely, all the components with even indices. For the problem to be solvable, x[n] must be restricted in some way. It turns out that a sufficient restriction is that x[n] lies in the subspace Bπ/2 of signals bandlimited to frequency π/2, that is, X(e^{jω}) = 0 for π/2 ≤ |ω| ≤ π. This means intuitively that x[n] isn't changing too quickly.

To repeat, the sampling problem is this: we measure y[n] = x[2n] for all n and we want to compute x[n] for all odd n, with the knowledge that x ∈ Bπ/2.
2. We turn to the frequency domain. We have the z transform

X(z) = · · · + x[−1]z + x[0] + x[1]z⁻¹ + · · ·

and therefore

X(−z) = · · · − x[−1]z + x[0] − x[1]z⁻¹ + · · · .

Adding cancels the odd-indexed terms and gives

X(z) + X(−z) = 2Y(z²).

From this it follows that

Y(z²) = (1/2)[X(z) + X(−z)],   Y(e^{j2ω}) = (1/2)[X(e^{jω}) + X(e^{j(ω−π)})].
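The identity can be spot-checked numerically at an arbitrary point z. A sketch (numpy), with a hypothetical finite random sequence standing in for x[n]:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(8)        # hypothetical finite sequence x[0..7]
y = x[::2]                        # y[n] = x[2n]

n = np.arange(8)
X = lambda z: np.sum(x * z ** (-n))            # X(z) = sum_n x[n] z^{-n}
Y = lambda z: np.sum(y * z ** (-np.arange(4)))

z0 = 0.7 + 0.3j                   # arbitrary test point
print(np.isclose(0.5 * (X(z0) + X(-z0)), Y(z0 ** 2)))   # True
```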
Let us symbolically sketch the Fourier transform of x[n] as a triangle:
[Figure: X(e^{jω}) sketched as a triangle of height 1 centered at ω = 0, supported on |ω| < π/2.]
Note that this plot isn't meant to be realistic: X(e^{jω}) is complex-valued, for one thing. The plot merely shows the support of X(e^{jω}), i.e., the frequency range where X(e^{jω}) is nonzero. Remember: X(e^{jω}) is a periodic function of ω, of period 2π. Thus the graph is duplicated periodically to the left and right. Then the graph of X(e^{j(ω−π)}) is that of X(e^{jω}) shifted to the right by π:
[Figure: X(e^{j(ω−π)}), the triangle shifted right by π.]
Adding the two and dividing by 2 gives Y(e^{j2ω}):

[Figure: Y(e^{j2ω}), triangles of height 1/2 centered at ω = 0 and ω = π.]

Therefore the FT Y(e^{jω}) looks like

[Figure: Y(e^{jω}), a single triangle of height 1/2 stretched to fill |ω| < π.]
Thus in the frequency domain the effect of the downsampler is to stretch out the Fourier transform X(e^{jω}) and scale it by 1/2, but there is no aliasing. That is, the triangle of X is preserved uncorrupted in Y.
3. The following system reconstructs x[n] from y[n]:

y[n] --→ [↑2] --→ w[n] --→ [H(e^{jω}), gain 2, cutoff π/2] --→ xr[n]
The output is denoted xr[n] (r for reconstructed). It will turn out that xr = x if x ∈ Bπ/2. Let's follow the Fourier transforms from y to xr. We have

w[2n] = y[n],   w[2n+1] = 0,   W(z) = Y(z²),   W(e^{jω}) = Y(e^{j2ω}).
Thus the graph of W(e^{jω}) is

[Figure: W(e^{jω}) = Y(e^{j2ω}), triangles of height 1/2 centered at ω = 0 and ω = π.]
The ideal lowpass filter passes only the low-frequency triangle, amplifying by 2:

[Figure: Xr(e^{jω}), the original triangle of height 1 recovered.]
Thus we have shown that xr = x if x ∈ Bπ/2.
4. Finally, we need to see how to implement the reconstruction, that is, the time-domain formula for xr[n] as a function of y[n]. Let h[n] denote the impulse response of the filter:

h[n] = (1/2π) ∫_{−π/2}^{π/2} 2e^{jωn} dω = sin(πn/2) / (πn/2).
This is the familiar sinc function. The convolution equation is

xr[n] = h[n] ∗ w[n] = Σ_m h[n−m] w[m].

Since w[m] = 0 for m odd,

xr[n] = Σ_m h[n−2m] w[2m],

and since w[2m] = y[m],

xr[n] = Σ_m h[n−2m] y[m].

Equivalently,

xr[n] = Σ_m h[n−2m] x[2m].

Look at the right-hand side: if n is even, say n = 2k, then the right-hand side becomes

Σ_m h[2k−2m] x[2m] = x[2k],

because h[0] = 1 and h[2j] = 0 for j ≠ 0.
If n is odd, on the other hand, all the terms are needed for the reconstruction of x[n].
Notice that the reconstruction operation is noncausal: to reconstruct the odd values of x[n], all past and future even values are required.
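The reconstruction formula xr[n] = Σ_m h[n − 2m] x[2m] can be tested numerically. A sketch (numpy) with a hypothetical test signal bandlimited to 0.4π < π/2 — note np.sinc(x) is sin(πx)/(πx) — where the error at the center is limited only by truncating the sums:

```python
import numpy as np

N = 400
n = np.arange(-N, N + 1)
x = np.sinc(0.2 * n) ** 2          # bandlimited to 0.4π < π/2, decays like 1/n^2

w = np.where(n % 2 == 0, x, 0.0)   # even samples kept, odd samples zeroed
h = np.sinc(n / 2)                 # ideal lowpass, gain 2, cutoff π/2

xr = np.convolve(w, h, mode="same")   # xr[n] = sum_m h[n - 2m] x[2m]

center = slice(N - 50, N + 51)
print(np.max(np.abs(xr[center] - x[center])))   # small (truncation error only)
```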
5. In summary, the following system has the property that the output equals the input for every input in Bπ/2:

x[n] --→ [↓2] --→ y[n] --→ [↑2] --→ w[n] --→ [H(e^{jω}), gain 2, cutoff π/2] --→ x[n]

This system has the form

x[n] --→ [S] --→ [R] --→ x[n]
Here, S stands for (discrete-time) sampler and R for reconstructor. Symbolically, RSx = x for all x in Bπ/2. Because R goes on the left of S in the equation RSx = x, we say that R is a left-inverse of S when S is restricted to the subspace Bπ/2.
6. Finally, we mention without proof, though it's very similar to what was done above, that the output also equals the input in the following system for every input bandlimited to the high-frequency range (π/2, π):

x[n] --→ [↓2] --→ [↑2] --→ [H(e^{jω}), gain 2, passband [π/2, π]] --→ x[n]

The filter is an ideal high-frequency filter.
5 The Sampling Theorem by Impulse Modulation
Now we turn to the "standard" sampling problem, sampling a continuous-time signal. Impulse modulation is the most common way of developing the sampling theorem in an undergraduate course.
1. The definition of S, the sampling operator, is
x[n] = xc(nT ).
Thus the input is a continuous-time signal and the output a discrete-time signal. The arrow convention is this:

xc(t) ——→ [S] --→ x[n]
The assumption on the input will be xc ∈ BωN , that is,
Xc(jω) = 0, |ω| ≥ ωN .
In words, the input is bandlimited and the sampling frequency is greater than twice the highest frequency in the input. Under this condition, xc(t) can be recovered from the sample values x[n]. But note that the reconstruction is non-causal. That is, for any given t not a sampling instant, to reconstruct xc(t) requires all the infinitely many sample values, past and future.
2. The first step in this development is to derive the relationship between the input and output in the frequency domain. The formula is this:

X(e^{jωT}) = (1/T) Σ_k Xc(jω − jkωs).    (1)
This formula doesn't assume the input is bandlimited. Here's a schematic sketch of the graph of Xc(jω):

[Figure: a schematic graph of Xc(jω) versus ω.]
Equation (1) is applied, and two terms in the series are shown here, namely, k = 0 and k = 1:

[Figure: (1/T)Xc(jω) and (1/T)Xc(jω − jωs), overlapping.]
There is aliasing, in the sense that higher frequency components of xc(t) become baseband frequency terms under sampling. On the other hand, if xc ∈ BωN, then the terms won't overlap:

[Figure: non-overlapping terms (1/T)Xc(jω) and (1/T)Xc(jω − jωs), with ωN and ωs marked.]
And therefore

X(e^{jωT}) = (1/T) Xc(jω),   |ω| < ωN,

or equivalently

X(e^{jω}) = (1/T) Xc(jω/T),   |ω| < π.

The corresponding graph is

[Figure: X(e^{jω}) = (1/T)Xc(jω/T) on |ω| < π.]
3. The proof of (1) involves an artifice called impulse modulation, shown here:
[Figure: impulse modulation: xc(t) enters a multiplier driven by s(t) = Σ_n δ(t − nT); the output is xs(t) = Σ_n x[n] δ(t − nT).]
The signal s(t) is a series of impulses at the sampling instants. It multiplies xc(t) to produce xs(t). Since

xc(t)δ(t − nT) = xc(nT)δ(t − nT),

we have

xs(t) = Σ_n x[n]δ(t − nT).
The reason for introducing this setup is that all three signals are continuous-time.
There are two steps in the proof. The first connects the outputs of the two systems.
Step 1. Xs(jω) = X(e^{jωT}).

Proof.

Xs(jω) = ∫ xs(t) e^{−jωt} dt
       = ∫ xc(t) s(t) e^{−jωt} dt
       = ∫ xc(t) Σ_m δ(t − mT) e^{−jωt} dt
       = Σ_m ∫ xc(t) δ(t − mT) e^{−jωt} dt
       = Σ_m xc(mT) e^{−jωmT}
       = Σ_m x[m] e^{−jωmT}
       = X(e^{jωT}).
The second step connects the input and output of the impulse modulation system.
Step 2. Xs(jω) = (1/T) Σ_k Xc(jω − jkωs).
Proof. The signal s(t) is periodic of period T and it can therefore be expanded in a Fourier series:

s(t) = (1/T) Σ_m e^{jmωst}.

Then

Xs(jω) = ∫ xs(t) e^{−jωt} dt
       = ∫ xc(t) s(t) e^{−jωt} dt
       = ∫ xc(t) (1/T) Σ_m e^{jmωst} e^{−jωt} dt
       = (1/T) Σ_m ∫ xc(t) e^{−j(ω−mωs)t} dt
       = (1/T) Σ_m Xc(jω − jmωs).
Finally, putting the two steps together proves (1).
Discussion
The continuous-time proof using impulse modulation is the standard way, but I feel it can be dangerous. If you ask students how to reconstruct a signal from its sampled values, many will say "lowpass filter," by which they mean time-domain filtering of the modulated signal. They seem not to realize that the reconstruction procedure is a discrete-time to continuous-time converter, but instead to believe the modulated signal Σ_n x[n]δ(t − nT) is real. This isn't surprising, since many widely used texts emphasize this way of reconstructing xc(t). For example:
"It is possible to reconstruct the function from the train of impulses ... using some process of filtration." [1]

"If we are given a sequence of samples, x[n], we can form an impulse train ... If this impulse train is the input to an ideal lowpass continuous-time filter ..., then the output of the filter will be [the reconstructed signal]." [3]

"The original analog signal can be perfectly reconstructed by passing this impulse train through a low-pass filter. ... While this method is mathematically pure, it is difficult to generate the required narrow pulses in electronics." [4]
Also, students seem to accept the formula

Σ_n δ(t − nT) = (1/T) Σ_m e^{jm(2π/T)t},

which is a form of the Poisson summation formula. But there must be some lack of real understanding: The right-hand side converges? And to something that equals zero everywhere except at the sampling times, where it is infinite? This formula can be interpreted only in the framework of Schwartz distributions [6].
6 The Sampling Theorem using Fourier Series
The derivation below shows that the reconstruction formula is a Fourier series expansion.
1. Students will recall Fourier series (FS) from their Signals and Systems course. FS is normally developed in the context of periodic waveforms, such as a square wave. But it makes just as much sense to focus on signals defined on a bounded time interval. Let f(t) be a function defined for 0 ≤ t ≤ 1. The goal is to represent f(t) as a series of sinusoids,¹ namely

wn(t) = e^{−j2πnt}.
Here n ranges over all integers, positive, negative, and 0. These sinusoids are orthonormal in the following way:

∫₀¹ wn(t) w̄m(t) dt equals 0 if n ≠ m and equals 1 if n = m.

The overbar means complex-conjugate. Denote the integral by ⟨wn, wm⟩. This inner product yields a norm: ‖f‖ = ⟨f, f⟩^{1/2}.
Notice that w0 is a DC signal, w1 a sinusoid of period 1 sec., w2 a sinusoid of period 1/2 sec., and so on. The FS expansion of f(t) is
f(t) = Σ_n cn wn(t).    (2)

Provided ‖f‖ < ∞, the series converges to f in the sense that lim_{N→∞} ‖f − fN‖ = 0, where

fN(t) = Σ_{n=−N}^{N} cn wn(t).
¹The minus sign is for convenience.
The Fourier coefficients can be computed by taking inner products in (2):

⟨f, wm⟩ = Σ_n ⟨cn wn, wm⟩ = cm.

Thus the Fourier coefficients in the FS are cn = ⟨f, wn⟩.
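The coefficient formula cn = ⟨f, wn⟩ and the expansion (2) are easy to check numerically. A sketch (numpy) with a hypothetical smooth test function on [0, 1], approximating the inner-product integral by the midpoint rule:

```python
import numpy as np

M = 2048
t = (np.arange(M) + 0.5) / M              # midpoint grid on [0, 1]
f = np.cos(2 * np.pi * t) + 0.5           # hypothetical test function

w = lambda n: np.exp(-2j * np.pi * n * t)         # w_n(t) = e^{-j2πnt}
coeff = lambda n: np.mean(f * np.conj(w(n)))      # c_n = <f, w_n>

N = 5
fN = sum(coeff(n) * w(n) for n in range(-N, N + 1))   # partial FS sum

print(np.max(np.abs(fN - f)))     # tiny: f contains only the n = -1, 0, 1 terms
```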
2. Now let's turn to the sampling theorem. For simplicity we assume T = 1. The Fourier transform, Xc(jω), of the signal xc(t), is zero outside the interval −π ≤ ω ≤ π. Under a mild condition, Xc(jω) has a FS expansion in terms of sinusoids. For the time interval [0, 1] the sinusoids were e^{−j2πnt}; consequently, for the frequency interval [−π, π], the appropriate sinusoids are

Wn(jω) = e^{−jωn}.

So that these are orthonormal, the inner product is taken to be

⟨Wn, Wm⟩ = (1/2π) ∫_{−π}^{π} Wn(jω) W̄m(jω) dω.
Thus the FS expansion of the Fourier transform Xc(jω) is

Xc(jω) = Σ_n cn Wn(jω)

and the Fourier coefficients are

cn = ⟨Xc, Wn⟩
   = (1/2π) ∫_{−π}^{π} Xc(jω) W̄n(jω) dω
   = (1/2π) ∫_{−π}^{π} Xc(jω) e^{jωn} dω.
You will recognize the last integral as the inverse Fourier transform of Xc(jω) at t = n. That is, cn = x[n]. The FS expansion is thus seen to be

Xc(jω) = Σ_n x[n] Wn(jω).    (3)
3. It remains to take inverse Fourier transforms in (3). Let wn(t) denote the inverse Fourier transform of Wn(jω) = e^{−jωn}, namely

wn(t) = w0(t − n),   w0(t) = sin(πt)/(πt).

Thus the inversion formula is

xc(t) = Σ_n x[n] w0(t − n).
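The inversion formula can be tried numerically. A sketch (numpy) with a hypothetical signal bandlimited to 0.8π < π; np.sinc(t) is exactly w0(t) = sin(πt)/(πt):

```python
import numpy as np

xc = lambda t: np.sinc(0.4 * t) ** 2     # bandlimited to 0.8π < π, decays like 1/t^2

n = np.arange(-200, 201)                 # sampling instants, T = 1
x = xc(n)                                # samples x[n] = xc(n)

t = np.linspace(-5, 5, 501)
xr = np.sinc(t[:, None] - n[None, :]) @ x    # xc(t) ≈ sum_n x[n] w0(t - n)

print(np.max(np.abs(xr - xc(t))))        # small truncation error
```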
Discussion
This derivation is rigorous, though not all details are given. In particular, completeness of the sinusoidal basis in the L2-norm is omitted.
7 The Sampling Theorem in Hilbert Space
This section is for graduate students who have already had an undergraduate Signals and Systems course and who know some Hilbert space theory. A very good reference in the context of Fourier theory is [2].
1. The following Hilbert spaces are used:
(a) The space ℓ2 of square-summable real-valued discrete-time signals, with inner product

⟨x, y⟩ = Σ_n x[n] y[n].

(b) The space L2(−∞,∞) of square-integrable real-valued continuous-time signals, with inner product

⟨x, y⟩ = ∫_{−∞}^{∞} x(t) y(t) dt.
(c) The Fourier transform operator F maps L2(−∞,∞) onto L2(−j∞, j∞), the square-integrable functions on the imaginary axis; the inner product is

⟨X, Y⟩ = (1/2π) ∫_{−∞}^{∞} X(jω) Ȳ(jω) dω.

The operator F preserves inner products and is a Hilbert-space isomorphism; its adjoint is denoted by F*.
(d) The subspace L2(−jπ, jπ) of L2(−j∞, j∞) such that X(jω) = 0 for |ω| > π.
2. Now we can turn to the sampling problem. We have continuous-time signals in L2(−∞,∞). For simplicity, we take the sampling period to be T = 1 second, and then the Nyquist frequency is π rad/s. Accordingly, define Bπ = F*L2(−jπ, jπ), the subspace of L2(−∞,∞) of signals xc(t) whose Fourier transforms equal zero for |ω| > π. For xc ∈ Bπ, the inversion formula xc = F*Xc is

xc(t) = (1/2π) ∫_{−π}^{π} Xc(jω) e^{jωt} dω.

Thus the sampled signal is

x[n] = (1/2π) ∫_{−π}^{π} Xc(jω) e^{jωn} dω.
This is the inner product between Xc(jω) and e^{−jωn}. So let us define

Wn(jω) = e^{−jωn}

so that we have

x[n] = ⟨Xc, Wn⟩.

By Parseval's equality, the preceding equation holds for the inverse Fourier transforms:

x[n] = ⟨xc, wn⟩.
The functions Wn(jω) = e^{−jωn} are an orthonormal basis for the frequency-domain space L2(−jπ, jπ) of bandlimited transforms. Consequently, wn(t) are an orthonormal basis for the time-domain space Bπ. These basis functions are given by

wn(t) = (1/2π) ∫_{−π}^{π} e^{−jωn} e^{jωt} dω = sin π(t − n) / (π(t − n)).
3. We now define the sampling operator S:

S : Bπ → ℓ2,   x = Sxc,   x[n] = ⟨xc, wn⟩.

The adjoint operator satisfies

⟨y, Sxc⟩ = ⟨S*y, xc⟩,

that is,

⟨y, x⟩ = ⟨S*y, xc⟩,

that is,

Σ_n y[n]⟨xc, wn⟩ = ⟨S*y, xc⟩.

Everything is real-valued in this equation, and thus

Σ_n y[n] wn = S*y.

Therefore,

S* : ℓ2 → Bπ,   yc = S*y,   yc(t) = Σ_n y[n] wn(t).

It follows that S* = S⁻¹ and therefore S is an isomorphism.
A nice block diagram to describe what we've done is this:

xc(t) ——→ [S] --→ x[n] --→ [S*] ——→ xc(t)

The input xc lies in Bπ; the sampling block with operator S produces x in ℓ2; then the adjoint operator S* reproduces xc. That is,

S*Sxc = xc,   xc ∈ Bπ.

The continuous arrow signifies continuous time, the dashed arrow, discrete time.
Let’s summarize:
Theorem 1. The sampling operator is a Hilbert space isomorphism when restricted to the subspace of bandlimited signals.
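The defining relation x[n] = ⟨xc, wn⟩ — the sample values are inner products with shifted sincs — can be checked numerically. A sketch (numpy) with a hypothetical xc in Bπ, approximating the integral by a midpoint Riemann sum on a fine grid:

```python
import numpy as np

xc = lambda t: np.sinc(0.4 * t) ** 2       # in B_π: bandlimited to 0.8π

dt = 0.02
t = np.arange(-200, 200, dt) + dt / 2      # fine midpoint grid for the integral

def inner(n):
    # <xc, w_n> = ∫ xc(t) sinc(t - n) dt, with w_n(t) = sin π(t-n)/(π(t-n))
    return dt * np.sum(xc(t) * np.sinc(t - n))

for n in (0, 1, 5):
    print(n, inner(n), xc(n))              # inner(n) ≈ xc(n) = x[n]
```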
7.1 Sample-rate Changing
This section shows an application of the operator formalism.

Consider a continuous-time signal xc(t) that belongs to L2(−∞,∞). Suppose it is sampled at 2 Hz, producing x[n] = xc(n/2). To avoid aliasing, assume it is bandlimited to 2π. The problem is to compute y[n] = xc(n/3), the same signal sampled at 3 Hz.
Let B2π denote the subspace of signals bandlimited to [−2π, 2π]. A sinusoidal basis for B2π, in the frequency domain, is

Wn(jω) = e^{−j(ω/2)n}.

The corresponding basis functions in the time domain, the inverse Fourier transforms, are denoted wn(t):

wn(t) = w0(t − n/2),   w0(t) = 2 sin(2πt)/(2πt).
These are orthogonal but not of unit norm: ‖wn‖ = √2.

Likewise, let B3π denote the subspace of signals bandlimited to [−3π, 3π]. The basis in the frequency domain is

Vn(jω) = e^{−j(ω/3)n},

and in the time domain is

vn(t) = v0(t − n/3),   v0(t) = 3 sin(3πt)/(3πt).
These are orthogonal but ‖vn‖ = √3.

Further, let S2 denote the sampling operator B2π → ℓ2. It maps xc(t) to x[n] = xc(n/2). From the Fourier transform inversion formula,

x[n] = ⟨xc, wn⟩.

The inverse operator maps x[n] to

xc(t) = (1/2) Σ_n x[n] wn(t).
The coefficient 1/2 arises because ⟨wn, wn⟩ = 2.

And let S3 denote the sampling operator B3π → ℓ2. It maps xc(t) to y[n] = xc(n/3), where

y[n] = ⟨xc, vn⟩.

The inverse maps y[n] to

xc = (1/3) Σ_n y[n] vn.
Then the sample-rate changer is simply S3S2⁻¹. It acts on a sequence x[n] to produce y[n], as follows. Start with x[n] in ℓ2. Apply S2⁻¹ to get

xc = (1/2) Σ_n x[n] wn.

Now apply S3 to get y[n]:

y[n] = ⟨xc, vn⟩
     = ⟨(1/2) Σ_m x[m] wm, vn⟩
     = Σ_m x[m] (1/2)⟨wm, vn⟩.
This is in fact the answer, but let's provide a bit more detail:

(1/2)⟨wm, vn⟩ = (1/2)⟨Wm, Vn⟩
             = (1/4π) ∫_{−∞}^{∞} Wm(jω) Vn(−jω) dω
             = (1/4π) ∫_{−2π}^{2π} e^{−j(ω/2)m} e^{j(ω/3)n} dω.

The latter integral evaluates to

h[2n − 3m],   h[k] = sin(πk/3) / (πk/3).
Thus the sample-rate changer maps x[n] to y[n] according to

y[n] = Σ_m h[2n − 3m] x[m].
This is equivalent to: upsample by 3, then lowpass filter with gain 3 and cutoff π/3, and then downsample by 2.
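The kernel formula can be tested numerically. A sketch (numpy) with a hypothetical signal bandlimited to 1.6π < 2π, so that 2 Hz sampling is alias-free; h[k] = sin(πk/3)/(πk/3) is np.sinc(k/3):

```python
import numpy as np

xc = lambda t: np.sinc(0.8 * t) ** 2       # bandlimited to 1.6π < 2π

m = np.arange(-400, 401)
x = xc(m / 2.0)                            # the 2 Hz samples x[m]

def rate_change(n):
    # y[n] = sum_m h[2n - 3m] x[m], with h[k] = np.sinc(k / 3)
    return np.sum(np.sinc((2 * n - 3 * m) / 3.0) * x)

nn = np.arange(-6, 7)
y = np.array([rate_change(n) for n in nn])
print(np.max(np.abs(y - xc(nn / 3.0))))    # small: y[n] ≈ xc(n/3)
```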
8 Final Discussion
Digital controllers have samplers too. Normally controllers are designed in continuous time and then implemented digitally using, for example, the bilinear transformation. The digital controller should achieve close to the performance of the designed controller. So signal reconstruction from sampled data is not a direct issue. However, impulse modulation is sometimes used in the analysis of sampled-data control systems—the old star notation. It is used for the same reason as in the sampling theorem: to convert a hybrid system, with both discrete-time and continuous-time signals, into one with only continuous-time signals. The danger here is that students then don't fully appreciate that the system is not time-invariant, but periodically time-varying. Instead, they are told that some input-output pairs do not have transfer functions. A rigorous control theory for sampled-data systems, modeling intersample behaviour, was in fact developed [5], without using impulse modulation, but it has not yet been brought to the undergraduate level.
Given that the Nyquist rate features in the sampling theorem and the Nyquist criterion is central in feedback stability, it might be expected that confusion could arise. Here's a startling quote from a book on power-switching voltage converters: "According to Nyquist sampling theory, the unity-gain crossover frequency, f1, must be less than half the switching frequency to ensure system stability."
References
[1] R. N. Bracewell. The Fourier Transform and its Applications. McGraw-Hill, 3rd edition, 2000.
[2] H. Dym and H. P. McKean. Fourier Series and Integrals. Academic Press, 1972.
[3] A. V. Oppenheim, R. W. Schafer, and J. R. Buck. Discrete-Time Signal Processing. Prentice-Hall, 2nd edition, 1999.
[4] S. W. Smith. Digital Signal Processing: A Practical Guide for Engineers and Scientists.Newnes, 2002.
[5] Y. Yamamoto. A function space approach to sampled-data control systems and tracking problems. IEEE Trans. Automatic Control, pages 703–712, 1994.
[6] A. H. Zemanian. Distribution Theory and Transform Analysis. Dover, 1965.