Portland State University
PDXScholar
University Honors Theses — University Honors College
5-26-2018
Basics of Bessel Functions
Joella Rae Deal, Portland State University
Follow this and additional works at: https://pdxscholar.library.pdx.edu/honorstheses
Recommended Citation: Deal, Joella Rae, "Basics of Bessel Functions" (2018). University Honors Theses. Paper 546. https://doi.org/10.15760/honors.552
Basics of Bessel Functions
by
Joella Deal
An undergraduate honors thesis submitted in partial fulfillment of the
requirements for the degree of
Bachelor of Science
in
University Honors
and
Mathematics
Thesis Adviser
Dr. Bin Jiang
Portland State University
2018
Abstract
This paper is a deep exploration of the project Bessel Functions by Martin Kreh of Pennsylvania State University. We begin with a derivation of the Bessel functions \(J_a(x)\) and \(Y_a(x)\), which are two solutions to Bessel's differential equation. Next we find the generating function and use it to prove some useful standard results and recurrence relations. We use these recurrence relations to examine the behavior of the Bessel functions at some special values. Then we use contour integration to derive their integral representations, from which we can produce their asymptotic formulae. We also show an alternate method for deriving the first Bessel function using the generating function. Finally, a graph created using Python illustrates the Bessel functions of order 0, 1, 2, 3, and 4.
1 Introduction to Bessel Functions
Bessel functions are the standard form of the solutions to Bessel's differential equation,
\[ x^2\frac{\partial^2 y}{\partial x^2} + x\frac{\partial y}{\partial x} + (x^2 - n^2)y = 0, \tag{1} \]
where n is the order of the Bessel equation. It is often obtained by the separation of the wave equation
\[ \frac{\partial^2 u}{\partial t^2} = c^2\nabla^2 u \tag{2} \]
in cylindrical or spherical coordinates. For this reason, the Bessel functions fall under the umbrella of cylindrical (or spherical) harmonics when n is an integer or half-integer, and we see them appear in the separable solutions to both the Helmholtz equation and Laplace's equation in cylindrical or spherical coordinates. Since the Bessel equation is a second-order differential equation, it has two linearly independent solutions, \(J_n(x)\) and \(Y_n(x)\).
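As a quick numerical sanity check (not part of the thesis), one can verify that the series solution derived in the next section really satisfies equation (1). The sketch below uses only the Python standard library; the truncation depth `terms=40` and the difference step `h` are arbitrary choices.

```python
from math import factorial

def J(n, x, terms=40):
    # Truncated power-series solution derived in Section 1.1 (equation (24)).
    return sum((-1) ** k / (factorial(k) * factorial(n + k)) * (x / 2) ** (2 * k + n)
               for k in range(terms))

def residual(n, x, h=1e-4):
    # x^2 y'' + x y' + (x^2 - n^2) y, with derivatives from central differences.
    y = J(n, x)
    yp = (J(n, x + h) - J(n, x - h)) / (2 * h)
    ypp = (J(n, x + h) - 2 * y + J(n, x - h)) / h ** 2
    return x ** 2 * ypp + x * yp + (x ** 2 - n ** 2) * y

for n in (0, 1, 2):
    assert abs(residual(n, 1.7)) < 1e-5
```

The residual is zero up to finite-difference error, confirming that the truncated series solves (1) to high accuracy.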
1.1 Bessel Functions of the First Kind
To find the first solution we begin by taking a power series,
\[ y(x) = x^n \sum_{k=0}^{\infty} b_k x^k, \tag{3} \]
which we will plug into the Bessel equation (1) and solve for its necessary components. For convenience, the first and second partial derivatives of this power series are
\[ y'(x) = n x^{n-1}\sum_{k=0}^{\infty} b_k x^k + x^n \sum_{k=0}^{\infty} k b_k x^{k-1} \tag{4} \]
and
\[ y''(x) = n(n-1)x^{n-2}\sum_{k=0}^{\infty} b_k x^k + 2n x^{n-1}\sum_{k=0}^{\infty} k b_k x^{k-1} + x^n \sum_{k=0}^{\infty} k(k-1) b_k x^{k-2}, \tag{5} \]
as given by the product rule. We next multiply equation (4) by \(x\) and equation (5) by \(x^2\) so that we can easily plug them into (1). (Note that the \(x\) or \(x^2\) can be inserted into the front of each term or be distributed into the summation.) We then have
\[ x y'(x) = n x^{n}\sum_{k=0}^{\infty} b_k x^k + x^n \sum_{k=0}^{\infty} k b_k x^{k} \tag{6} \]
and
\[ x^2 y''(x) = n(n-1)x^{n}\sum_{k=0}^{\infty} b_k x^k + 2n x^{n}\sum_{k=0}^{\infty} k b_k x^{k} + x^n \sum_{k=0}^{\infty} k(k-1) b_k x^{k}. \tag{7} \]
Finally, we will need the term
\[ x^2 y(x) = x^n \sum_{k=0}^{\infty} b_k x^{k+2} = x^n \sum_{k=2}^{\infty} b_{k-2} x^{k}. \tag{8} \]
Note that because we are solving for the appropriate \(b_k\), we can artificially set \(b_{-2} := b_{-1} := 0\). This will become useful when simplifying the full Bessel differential equation below. Plugging equations (6), (7), and (8) into (1), we get
\[
n(n-1)x^n\sum_{k=0}^{\infty} b_k x^k + 2n x^n\sum_{k=0}^{\infty} k b_k x^k + x^n\sum_{k=0}^{\infty} k(k-1) b_k x^k + n x^n\sum_{k=0}^{\infty} b_k x^k + x^n\sum_{k=0}^{\infty} k b_k x^k + x^n\sum_{k=2}^{\infty} b_{k-2} x^k - n^2 x^n\sum_{k=0}^{\infty} b_k x^k = 0, \tag{9}
\]
which can be simplified in the following steps.
Step One. Combine the summation terms (we can do this because we defined \(b_{-2}\) and \(b_{-1}\) to be equal to zero, so \(\sum_{k=2}^{\infty} b_{k-2}x^k = \sum_{k=0}^{\infty} b_{k-2}x^k\)):
\[ x^n \sum_{k=0}^{\infty}\bigl[n(n-1)b_k + 2nk\,b_k + k(k-1)b_k + n b_k + k b_k + b_{k-2} - n^2 b_k\bigr] x^k = 0. \tag{10} \]
Step Two. Cancel like terms:
\[ x^n \sum_{k=0}^{\infty}\bigl[2nk\,b_k + k^2 b_k + b_{k-2}\bigr] x^k = 0. \tag{11} \]
Step Three. Compare the coefficients to yield
\[ 2nk\,b_k + k^2 b_k + b_{k-2} = 0. \tag{12} \]
We use equation (12) to create the general formula
\[ b_k = \frac{-b_{k-2}}{k(k+2n)}. \tag{13} \]
Since \(b_{-1} = 0\) we can infer that \(b_1 = \frac{-b_{-1}}{1+2n} = 0\), and continuing in this manner, \(b_{2k-1} = 0\) for all \(k \in \mathbb{N}\). What about for even values of k? We have no condition on \(b_0\) (since k = 0 puts equation (13) in indeterminate form). We can choose a convenient value for \(b_0\) as needed. First, let us examine the case where \(-n \notin \mathbb{N}\) (we will take care of the \(-n \in \mathbb{N}\) case later). From equation (13) we have
\[ b_{2k} = -\frac{b_{2k-2}}{2k(2k+2n)} = -\frac{b_{2k-2}}{4k(n+k)}. \tag{14} \]
We will perform induction on this equation to find a general formula for \(b_{2k}\).

Step One. Examine the base cases k = 1 and k = 2:
\[ b_{2(1)} = b_2 = \frac{(-1)b_0}{4(1)(n+1)} \tag{15} \]
\[ b_{2(2)} = b_4 = (-1)\,\frac{(-1)\frac{b_0}{4(1)(n+1)}}{4(2)(n+2)} = \frac{(-1)^2 b_0}{4^2(2\cdot 1)(n+2)(n+1)}. \tag{16} \]
Step Two. Perform the inductive step. Based on the above cases, suppose that the following formula holds for some positive integer l:
\[ b_{2l} = \frac{(-1)^l b_0}{4^l\, l! \prod_{m=n+1}^{n+l} m}. \tag{17} \]
We will show that this holds for the l + 1 case as well:
\[ b_{2(l+1)} = (-1)\,\frac{(-1)^l b_0}{4^l\, l! \prod_{m=n+1}^{n+l} m \cdot 4(l+1)(n+l+1)} = \frac{(-1)^{l+1} b_0}{4^{l+1}(l+1)! \prod_{m=n+1}^{n+l+1} m}. \tag{18} \]
Therefore we know that equation (17) holds in general for all positive integers l.
Before we can plug equation (17) back into the original power series (3), we need to choose a convenient value for \(b_0\). We know that the summation in the final power series equation needs to be convergent in order for it to be a solution to the Bessel equation (1). In addition, it would be advantageous to use the factorial \((n+k)!\) in the denominator of \(b_k\) (instead of having to terminate the product at \(n+1\)). For these reasons, we will choose
\[ b_0 = \frac{1}{2^n n!}. \tag{19} \]
Plugging this into equation (17), we have
\[ b_{2k} = \frac{(-1)^k}{2^n 4^k\, k!\,(n+k)(n+k-1)\cdots(n+1)\,n!}, \tag{20} \]
which can clearly be simplified to
\[ b_{2k} = \frac{(-1)^k}{2^n 4^k\, k!\,(n+k)!}. \tag{21} \]
Now we are finally ready to plug our formula for \(b_{2k}\) into the full power series (3). We have
\[ y(x) = x^n \sum_{k=0}^{\infty} \frac{(-1)^k}{2^n 4^k\, k!\,(n+k)!}\, x^{2k}. \tag{22} \]
Observe that the \(x^{2k}\) term at the end comes from the \(b_{2k}\) in equation (17). I.e., we want the summation term
\[ b_0 x^0 + b_2 x^2 + b_4 x^4 + \cdots = \sum_{k=0}^{\infty} b_{2k} x^{2k}, \tag{23} \]
so we must ensure that the power of x is the same as the subscript of b. Rearranging, we have
\[ y(x) = \Bigl(\frac{x}{2}\Bigr)^n \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,(n+k)!} \Bigl(\frac{x}{2}\Bigr)^{2k}. \tag{24} \]
Clearly, this power series is only meaningful if it is convergent. We will check by the well-known ratio test. If it passes, we have found a solution to Bessel's equation. We take
\[ \rho = \lim_{k\to\infty} \left| \frac{b_{k+1}}{b_k} \right| = \lim_{k\to\infty} \left| \frac{\frac{(-1)^{k+1}}{(k+1)!\,(n+k+1)!}(\frac{x}{2})^{2(k+1)}}{\frac{(-1)^{k}}{k!\,(n+k)!}(\frac{x}{2})^{2k}} \right| = \lim_{k\to\infty} \left| \frac{-1}{(k+1)(n+k+1)}\Bigl(\frac{x}{2}\Bigr)^{2} \right| = 0. \tag{25} \]
Since \(\rho < 1\), the series converges. We have indeed found a solution to equation (1).
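The ratio computed in equation (25) can also be observed numerically. The following sketch (not from the thesis; standard library only) evaluates the term ratios for n = 0 and confirms that the partial sums stabilize even for a sizable x:

```python
from math import factorial

x, n = 10.0, 0  # even here the terms eventually decay factorially

def term(k):
    # the k-th summand of the series (24), without the leading (x/2)^n factor
    return (-1) ** k / (factorial(k) * factorial(n + k)) * (x / 2) ** (2 * k)

# |a_{k+1} / a_k| = (x/2)^2 / ((k+1)(n+k+1)) -> 0, matching equation (25)
ratios = [abs(term(k + 1) / term(k)) for k in range(30)]
assert ratios[29] < ratios[5] < ratios[0]
assert abs(ratios[10] - (x / 2) ** 2 / (11 * (n + 11))) < 1e-12

# consequence: the partial sums stabilize
s30 = sum(term(k) for k in range(30))
s60 = sum(term(k) for k in range(60))
assert abs(s30 - s60) < 1e-10
```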
However, we also want to construct a solution for the complex order v, not just for the order \(n \in \mathbb{N}\). We must find a continuous function with which we can replace the factorial in equation (24).
The most versatile way to extend the factorial function to non-integer and complex numbers is the Gamma function. The reciprocal of the Gamma function also happens to be holomorphic, meaning that it is infinitely differentiable and equal to its own Taylor series. Since we are applying the Gamma function to the denominator of equation (24), this property of its reciprocal will be especially convenient.
The Gamma function is defined as
\[ \Gamma(n) = (n-1)! \tag{26} \]
(the factorial function with its argument shifted down by 1) if n is a positive integer. For complex numbers and non-integers, the Gamma function corresponds to the Mellin transform of the negative exponential function,
\[ \Gamma(z) = \{\mathcal{M} e^{-x}\}(z), \tag{27} \]
where the Mellin transform is
\[ \{\mathcal{M} f\}(s) = \varphi(s) = \int_0^{\infty} x^{s-1} f(x)\,dx. \tag{28} \]
Making this modification to equation (24), we now have
\[ J_v(x) = \Bigl(\frac{x}{2}\Bigr)^v \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma(k+1)\,\Gamma(v+k+1)} \Bigl(\frac{x}{2}\Bigr)^{2k}. \tag{29} \]
This is not valid for \(-n \in \mathbb{N}\), where the Gamma function is undefined. In this case, we will begin the summation at k = n to bypass any undefined Gamma terms (since k = n corresponds to \(\Gamma(-n+n+1)\)):
\[
\begin{aligned}
J_{-n}(x) &= \Bigl(\frac{x}{2}\Bigr)^{-n} \sum_{k=n}^{\infty} \frac{(-1)^k}{\Gamma(k+1)\,\Gamma(-n+k+1)} \Bigl(\frac{x}{2}\Bigr)^{2k} \\
&= \Bigl(\frac{x}{2}\Bigr)^{-n} \sum_{k=0}^{\infty} \frac{(-1)^{k+n}}{\Gamma(n+k+1)\,\Gamma(-n+n+k+1)} \Bigl(\frac{x}{2}\Bigr)^{2(k+n)} \\
&= \Bigl(\frac{x}{2}\Bigr)^{-n}\Bigl(\frac{x}{2}\Bigr)^{2n} \sum_{k=0}^{\infty} \frac{(-1)^k(-1)^n}{\Gamma(n+k+1)\,\Gamma(k+1)} \Bigl(\frac{x}{2}\Bigr)^{2k} \\
&= (-1)^n J_n(x). \tag{30}
\end{aligned}
\]
This will also solve the Bessel differential equation (1). So we have found our first Bessel function, \(J_n(x)\) or \(J_v(x)\).
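Treating \(1/\Gamma\) as zero at the nonpositive integers, as in the derivation of equation (30), gives a single series valid for integer orders of either sign. A small check (an illustration, not the author's code):

```python
from math import factorial

def J_int(n, x, terms=40):
    # Series (29) for integer order; terms where the second Gamma factor has a
    # nonpositive integer argument are dropped, since 1/Gamma vanishes there
    # (exactly the k < n terms removed in equation (30)).
    total = 0.0
    for k in range(terms):
        if n + k < 0:
            continue
        total += (-1) ** k / (factorial(k) * factorial(n + k)) * (x / 2) ** (2 * k + n)
    return total

# equation (30): J_{-n}(x) = (-1)^n J_n(x)
for n in range(5):
    for x in (0.5, 2.0, 7.0):
        assert abs(J_int(-n, x) - (-1) ** n * J_int(n, x)) < 1e-9
```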
1.2 Bessel Functions of the Second Kind
We will now determine a second, linearly independent solution. Let us begin by examining the behavior of \(J_v(x)\) as \(x \to 0\).
Step One. Let \(\Re(v) > 0\). We have
\[ \lim_{x\to 0} \Bigl(\frac{x}{2}\Bigr)^v \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma(k+1)\,\Gamma(k+1+v)} \Bigl(\frac{x}{2}\Bigr)^{2k} = 0, \tag{31} \]
since
\[ \lim_{x\to 0} \Bigl(\frac{x}{2}\Bigr)^v = 0. \tag{32} \]
Step Two. Let \(v = 0\). We examine
\[ \lim_{x\to 0} \Bigl(\frac{x}{2}\Bigr)^0 \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma(k+1)\,\Gamma(k+1)} \Bigl(\frac{x}{2}\Bigr)^{2k} = 1. \tag{33} \]
Though it may be difficult to see at first, the limit is 1 because of the convention \(0^0 = 1\): the summation becomes
\[ \lim_{x\to 0} \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma(k+1)\,\Gamma(k+1)} \Bigl(\frac{x}{2}\Bigr)^{2k} = 1 + 0 + 0 + 0 + \cdots \tag{34} \]
Step Three. Let \(\Re(v) < 0\), \(v \notin \mathbb{Z}\) (since the gamma function is undefined here). Then we have
\[ \lim_{x\to 0} \Bigl(\frac{2}{x}\Bigr)^{-v} \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma(k+1)\,\Gamma(k+1+v)} \Bigl(\frac{x}{2}\Bigr)^{2k} = \pm\infty. \tag{35} \]
Summarized, the results from steps one, two, and three are:
\[ \lim_{x\to 0} J_v(x) = \begin{cases} 0, & \Re(v) > 0 \\ 1, & v = 0 \\ \pm\infty, & \Re(v) < 0,\ v \notin \mathbb{Z}. \end{cases} \tag{36} \]
We can see here that \(J_v(x)\) and \(J_{-v}(x)\) are two linearly independent solutions (i.e. they cannot be expressed as linear combinations of each other) if \(v \notin \mathbb{Z}\). If \(v \in \mathbb{Z}\), they are linearly dependent (recall equation (30)). Because of this property (and the homogeneity of Bessel's differential equation) any linear combination of \(J_v\) and \(J_{-v}\), where \(v \notin \mathbb{Z}\), is also a solution. We build the equation
\[ Y_v(x) = \frac{\cos(v\pi)\,J_v(x) - J_{-v}(x)}{\sin(v\pi)}, \tag{37} \]
for \(v \notin \mathbb{Z}\). Notice that the numerator vanishes if we have order \(n \in \mathbb{N}_0\), since \(\cos(n\pi) = (-1)^n\). For \(n \in \mathbb{Z}\), we let
\[ Y_n(x) := \lim_{v\to n} Y_v(x). \tag{38} \]
It can be shown by L'Hospital's rule that this limit exists (the calculation is too lengthy to be included here, but it follows from the properties of the digamma function, which gives the relationship between the gamma function and its derivative).
We need to check that the Wronskian determinant of \(J_v(z)\) and \(Y_v(z)\) does not vanish for any \(v, z \in \mathbb{C}\). Abel's Theorem says that if \(y_1(z)\) and \(y_2(z)\) are two solutions to the differential equation \(p(z)y'' + q(z)y' + r(z)y = 0\), then the Wronskian of the two solutions is of the form \(\frac{C}{p(z)}\), where C is a constant which does not depend on z. Notice that we can write Bessel's equation in the form \(zy'' + y' + (z - \frac{v^2}{z})y = 0\), so that it is self-adjoint. Then the Wronskian determinant of \(J_v(z)\) and \(Y_v(z)\) is of the form \(\frac{A_v}{z}\), where \(A_v\) does not depend on z. We omit the detailed calculation here, but the Wronskian determinant of \(J_v(z)\) and \(Y_v(z)\) turns out to be
\[ W(J_v(z), Y_v(z)) = J_{v+1}(z)Y_v(z) - J_v(z)Y_{v+1}(z) = \frac{2}{\pi z}, \tag{39} \]
which does not vanish for any \(z \in \mathbb{C}\). Therefore, they are linearly independent for all \(v \in \mathbb{C}\). We have successfully found two linearly independent solutions to the Bessel differential equation.
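The Wronskian identity (39) can be tested numerically for real non-integer orders, building \(J_v\) from the series (29) and \(Y_v\) from equation (37). This is an illustrative sketch; the truncation depth is an arbitrary choice.

```python
from math import gamma, sin, cos, pi

def Jv(v, x, terms=60):
    # series (29), valid here for non-integer real order
    return sum((-1) ** k / (gamma(k + 1) * gamma(v + k + 1)) * (x / 2) ** (2 * k + v)
               for k in range(terms))

def Yv(v, x):
    # definition (37), for v not an integer
    return (cos(v * pi) * Jv(v, x) - Jv(-v, x)) / sin(v * pi)

# identity (39): J_{v+1}(x) Y_v(x) - J_v(x) Y_{v+1}(x) = 2 / (pi x)
for v in (0.3, 1.7, 2.4):
    for x in (0.8, 3.1):
        w = Jv(v + 1, x) * Yv(v, x) - Jv(v, x) * Yv(v + 1, x)
        assert abs(w - 2 / (pi * x)) < 1e-9
```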
2 Properties of Bessel Functions
Now that we have derived the two Bessel functions, we will prove some of their fundamental properties.
2.1 The Generating Function
Several properties of the Bessel functions can be proven using their generating function. We will begin this section by introducing the concept of generating functions and showing that one exists for the Bessel functions.
Definition. A power series is an infinite sum of the form
\[ \sum_{i=0}^{\infty} a_i z^i, \tag{40} \]
where the \(a_i\)'s are quantities given by a particular function or rule.

Definition. A generating function of a sequence \(a_n\) is the function whose power series has \(a_n\) as the coefficient of \(x^n\). I.e., the generating function of \(a_n\) is the function \(G(a_n; x)\) where
\[ G(a_n; x) = \sum_{n=0}^{\infty} a_n x^n. \tag{41} \]
Proposition 2.1. We have
\[ e^{\frac{x}{2}(z - z^{-1})} = \sum_{n=-\infty}^{\infty} J_n(x)\,z^n. \tag{42} \]
I.e., the function \(e^{\frac{x}{2}(z-z^{-1})}\) is the generating function of the first Bessel function.

(Note: This power series is actually a Laurent series, since it includes terms of negative degree. This will be important later.)
Proof. Recall the power series representation
\[ e^x = \sum_{l=0}^{\infty} \frac{x^l}{l!}. \]
We have
\[ e^{\frac{x}{2}z - \frac{x}{2}\frac{1}{z}} = e^{\frac{x}{2}z}\,e^{-\frac{x}{2}\frac{1}{z}} = \sum_{m=0}^{\infty} \frac{(\frac{x}{2}z)^m}{m!} \sum_{k=0}^{\infty} \frac{(-\frac{x}{2z})^k}{k!} = \sum_{m=0}^{\infty} \frac{(x/2)^m}{m!}\,z^m \sum_{k=0}^{\infty} \frac{(-1)^k(\frac{x}{2})^k}{k!}\,z^{-k}. \]
Recall that the multiplication of two infinite power series can be written as a Cauchy product; i.e., if two power series \(f(z) = \sum_{n=0}^{\infty} a_n z^n\) and \(g(z) = \sum_{n=0}^{\infty} b_n z^n\) each have a radius of convergence of \(R > 0\), then their product can also be expressed as a power series in the disc \(|z| < R\):
\[ (fg)(z) = \sum_{n=0}^{\infty} c_n z^n, \qquad \text{where } c_n = \sum_{k=0}^{n} a_k b_{n-k}. \]
Applying this property (collecting all terms with \(m - k = n\)), we have
\[
\begin{aligned}
\sum_{m=0}^{\infty} \frac{(x/2)^m}{m!}\,z^m \sum_{k=0}^{\infty} \frac{(-1)^k(\frac{x}{2})^k}{k!}\,z^{-k}
&= \sum_{n=-\infty}^{\infty} \Biggl( \sum_{\substack{m-k=n\\ m,k\ge 0}} \frac{(-1)^k(\frac{x}{2})^{m+k}}{m!\,k!} \Biggr) z^{n} \\
&= \sum_{n=-\infty}^{\infty} \Biggl( \sum_{k=0}^{\infty} \frac{(-1)^k}{(n+k)!\,k!}\Bigl(\frac{x}{2}\Bigr)^{n+2k} \Biggr) z^{n} \\
&= \sum_{n=-\infty}^{\infty} \Biggl( \sum_{k=0}^{\infty} \frac{(-1)^k}{(n+k)!\,k!}\Bigl(\frac{x}{2}\Bigr)^{2k}\Bigl(\frac{x}{2}\Bigr)^{n} \Biggr) z^{n} \\
&= \sum_{n=-\infty}^{\infty} J_n(x)\,z^{n}. \qquad\square
\end{aligned}
\]
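The generating function identity (42) lends itself to a direct numerical check: truncate the Laurent series on the right and compare with the exponential on the left (illustrative, standard library only; the truncation bounds are arbitrary choices).

```python
from math import factorial, exp

def J(n, x, terms=40):
    if n < 0:
        return (-1) ** n * J(-n, x, terms)  # equation (30)
    return sum((-1) ** k / (factorial(k) * factorial(n + k)) * (x / 2) ** (2 * k + n)
               for k in range(terms))

x = 1.3
for z in (0.7, 1.0, 2.5, -1.2):
    lhs = exp(x / 2 * (z - 1 / z))
    rhs = sum(J(n, x) * z ** n for n in range(-25, 26))
    assert abs(lhs - rhs) < 1e-9
```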
We will now use the generating function to prove some standard results.
Lemma 2.1.1. We have
\[ \cos(x) = J_0(x) + 2\sum_{n=1}^{\infty} (-1)^n J_{2n}(x), \tag{43} \]
\[ \sin(x) = 2\sum_{n=0}^{\infty} (-1)^n J_{2n+1}(x), \tag{44} \]
\[ 1 = J_0(x) + 2\sum_{n=1}^{\infty} J_{2n}(x). \tag{45} \]
Proof. Take \(z = e^{i\varphi}\), so that \(\frac{1}{2}(z - \frac{1}{z}) = i\sin(\varphi)\). From Euler's formula,
\[ \cos(x\sin\varphi) + i\sin(x\sin\varphi) = e^{ix\sin\varphi} = \sum_{n=-\infty}^{\infty} J_n(x)\,e^{in\varphi} = \sum_{n=-\infty}^{\infty} J_n(x)\bigl(\cos(n\varphi) + i\sin(n\varphi)\bigr). \]
Separating real and imaginary parts, we get
\[ \cos(x\sin\varphi) = \sum_{n=-\infty}^{\infty} J_n(x)\cos(n\varphi) \quad\text{and}\quad \sin(x\sin\varphi) = \sum_{n=-\infty}^{\infty} J_n(x)\sin(n\varphi). \]
Setting \(\varphi = \frac{\pi}{2}\), we have
\[ \cos(x) = \sum_{n=-\infty}^{\infty} J_n(x)\cos\Bigl(\frac{n\pi}{2}\Bigr). \]
Consider the cases.

Case One. Suppose \(n \equiv 1 \pmod 4\). Then \(\cos(\frac{n\pi}{2}) = 0\) and \(\cos(\frac{-n\pi}{2}) = 0\), so the sum of the n and −n terms vanishes. The same occurs for \(n \equiv 3 \pmod 4\) (or any odd n).

Case Two. Suppose \(n \equiv 2 \pmod 4\). Then \(\cos(\frac{n\pi}{2}) = -1\) and \(\cos(\frac{-n\pi}{2}) = -1\). Recalling equation (30), the sum of the n and −n terms becomes
\[ J_n(x)\cos\Bigl(\frac{n\pi}{2}\Bigr) + J_{-n}(x)\cos\Bigl(\frac{-n\pi}{2}\Bigr) = J_n(x)(-1) + J_n(x)(-1) = -2J_n(x). \]

Case Three. Suppose \(n \equiv 0 \pmod 4\). Then \(\cos(\frac{n\pi}{2}) = 1\) and \(\cos(\frac{-n\pi}{2}) = 1\), so the sum becomes
\[ J_n(x)\cos\Bigl(\frac{n\pi}{2}\Bigr) + J_{-n}(x)\cos\Bigl(\frac{-n\pi}{2}\Bigr) = J_n(x)(1) + J_n(x)(1) = 2J_n(x). \]
Based on these cases (and keeping the n = 0 term), we can rewrite the equation as
\[ \cos(x) = J_0(x) + 2\sum_{n=1}^{\infty} (-1)^n J_{2n}(x). \]
Plugging \(\varphi = \frac{\pi}{2}\) into the series for \(\sin(x\sin\varphi)\), we get
\[ \sin(x) = \sum_{n=-\infty}^{\infty} J_n(x)\sin\Bigl(\frac{n\pi}{2}\Bigr). \]
We will consider cases once again.

Case One. Suppose \(n \equiv 1 \pmod 4\). Then \(-n \equiv 3 \pmod 4\), so \(\sin(\frac{n\pi}{2}) = 1\) and \(\sin(\frac{-n\pi}{2}) = -1\). So the sum of the n and −n terms becomes
\[ J_n(x)\sin\Bigl(\frac{n\pi}{2}\Bigr) + J_{-n}(x)\sin\Bigl(\frac{-n\pi}{2}\Bigr) = J_n(x)(1) + J_{-n}(x)(-1) = J_n(x) + (-1)^2 J_n(x) = 2J_n(x), \]
using \(J_{-n}(x) = -J_n(x)\) for odd n.

Case Two. Suppose \(n \equiv 2 \pmod 4\). Then \(\sin(\frac{n\pi}{2}) = 0\), so the term drops out of the summation. The same applies for the case \(n \equiv 0 \pmod 4\).

Case Three. Suppose \(n \equiv 3 \pmod 4\). Then \(\sin(\frac{n\pi}{2}) = -1\) and \(\sin(\frac{-n\pi}{2}) = 1\). So the sum of the n and −n terms becomes
\[ J_n(x)\sin\Bigl(\frac{n\pi}{2}\Bigr) + J_{-n}(x)\sin\Bigl(\frac{-n\pi}{2}\Bigr) = J_n(x)(-1) + J_{-n}(x)(1) = J_n(x)(-1) + (-1)J_n(x) = -2J_n(x). \]
Based on these cases, we can clearly rewrite this summation as
\[ \sin(x) = 2\sum_{n=0}^{\infty} (-1)^n J_{2n+1}(x). \]
Finally, setting \(\varphi = 2\pi\), we have
\[ \cos(x\sin(2\pi)) = \cos(0) = 1 = \sum_{n=-\infty}^{\infty} J_n(x)\cos(2\pi n) = \sum_{n=-\infty}^{\infty} J_n(x) = J_0(x) + \sum_{n=-\infty}^{-1} J_n(x) + \sum_{n=1}^{\infty} J_n(x) = J_0(x) + 2\sum_{n=1}^{\infty} J_{2n}(x), \]
since odd values of n cause the terms to cancel, and even values sum to \(2J_{2n}(x)\). This is the third result. \(\square\)
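All three identities of Lemma 2.1.1 can be confirmed numerically with truncated sums (an illustration, not part of the thesis):

```python
from math import factorial, cos, sin

def J(n, x, terms=40):
    # truncated series (24)
    return sum((-1) ** k / (factorial(k) * factorial(n + k)) * (x / 2) ** (2 * k + n)
               for k in range(terms))

for x in (0.3, 1.0, 4.2):
    # (43), (44), (45), with the infinite sums cut off at n = 30
    assert abs(J(0, x) + 2 * sum((-1) ** n * J(2 * n, x) for n in range(1, 30)) - cos(x)) < 1e-10
    assert abs(2 * sum((-1) ** n * J(2 * n + 1, x) for n in range(30)) - sin(x)) < 1e-10
    assert abs(J(0, x) + 2 * sum(J(2 * n, x) for n in range(1, 30)) - 1.0) < 1e-10
```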
Lemma 2.1.2. We have
\[ J_n(-x) = J_{-n}(x) = (-1)^n J_n(x) \tag{46} \]
for all \(n \in \mathbb{Z}\).
Proof. We make the change of variables \(x \to -x\) and \(z \to z^{-1}\) and insert into the generating function:
\[
\begin{aligned}
\sum_{n=-\infty}^{\infty} J_n(-x)\,z^{-n} &= e^{-\frac{x}{2}(z^{-1}-z)} = e^{-\frac{x}{2z}}\,e^{\frac{x}{2}z} \\
&= \sum_{m=0}^{\infty} \frac{(-\frac{x}{2z})^m}{m!} \sum_{k=0}^{\infty} \frac{(\frac{x}{2}z)^k}{k!} \\
&= \sum_{m=0}^{\infty} \frac{(\frac{x}{2})^m(-1)^m}{m!}\,z^{-m} \sum_{k=0}^{\infty} \frac{(\frac{x}{2})^k}{k!}\,z^{k} \\
&= \sum_{n=-\infty}^{\infty} \Biggl( \sum_{\substack{m-k=n\\ m,k\ge 0}} \frac{(-1)^m(\frac{x}{2})^{m+k}}{m!\,k!} \Biggr) z^{-n} \\
&= \sum_{n=-\infty}^{\infty} \Biggl( \sum_{m=0}^{\infty} \frac{(-1)^m}{m!\,(m-n)!}\Bigl(\frac{x}{2}\Bigr)^{2m}\Bigl(\frac{x}{2}\Bigr)^{-n} \Biggr) z^{-n} \\
&= \sum_{n=-\infty}^{\infty} J_{-n}(x)\,z^{-n}.
\end{aligned}
\]
So we have
\[ \sum_{n=-\infty}^{\infty} J_n(-x)\,z^{-n} = \sum_{n=-\infty}^{\infty} J_{-n}(x)\,z^{-n}. \]
Comparing the coefficients, we get
\[ J_n(-x) = J_{-n}(x), \]
and from equation (30), \(J_n(-x) = J_{-n}(x) = (-1)^n J_n(x)\). \(\square\)
Proposition 2.2. For any \(n \in \mathbb{Z}\),
\[ \frac{d}{dx}\bigl(x^{-n} J_n(x)\bigr) = -x^{-n} J_{n+1}(x) \tag{47} \]
and
\[ \frac{d}{dx}\bigl(x^{n} J_n(x)\bigr) = x^{n} J_{n-1}(x). \tag{48} \]
Proof. Multiply the series representation of \(J_n(x)\) by \(x^{-n}\) and differentiate:
\[
\begin{aligned}
\frac{d}{dx}\bigl(x^{-n}J_n(x)\bigr) &= \frac{d}{dx}\Biggl( \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,(n+k)!}\Bigl(\frac{x}{2}\Bigr)^{2k+n} x^{-n} \Biggr) = \frac{d}{dx}\Biggl( \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,(n+k)!}\,\frac{x^{2k}}{2^{2k+n}} \Biggr) \\
&= \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,(n+k)!}\,\frac{2k\,x^{2k-1}}{2^{2k+n}} = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,(n+k)!}\,\frac{k\,x^{2k-1}}{2^{2k+n-1}}.
\end{aligned}
\]
Observe that when k = 0 the summation term is
\[ \frac{(-1)^0}{0!\,n!}\,(0)\,\frac{x^{-1}}{2^{n-1}} = 0, \]
so we can rewrite the sum starting with the index k = 1:
\[
\begin{aligned}
\frac{d}{dx}\bigl(x^{-n}J_n(x)\bigr) &= \sum_{k=1}^{\infty} \frac{(-1)^k}{k!\,(n+k)!}\,\frac{k\,x^{2k-1}}{2^{2k+n-1}} \\
&= \sum_{k=1}^{\infty} \frac{(-1)^k}{(k-1)!\,(n+k)!}\,\frac{x^{2k-1}}{2^{2k-1+n}} \\
&= x^{-n} \sum_{k=1}^{\infty} \frac{(-1)^k}{(k-1)!\,(n+k)!}\Bigl(\frac{x}{2}\Bigr)^{2k-1+n}.
\end{aligned}
\]
Let j = k − 1. Then k = j + 1, and we have:
\[
\begin{aligned}
\frac{d}{dx}\bigl(x^{-n}J_n(x)\bigr) &= x^{-n} \sum_{j=0}^{\infty} \frac{(-1)^{j+1}}{j!\,(n+j+1)!}\Bigl(\frac{x}{2}\Bigr)^{2j+n+1} \\
&= x^{-n} \sum_{j=0}^{\infty} \frac{(-1)(-1)^{j}}{j!\,(n+j+1)!}\Bigl(\frac{x}{2}\Bigr)^{2j+(n+1)} = -x^{-n} J_{n+1}(x).
\end{aligned}
\]
Similarly,
\[
\begin{aligned}
\frac{d}{dx}\bigl(x^{n}J_n(x)\bigr) &= \frac{d}{dx}\Biggl( \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,(n+k)!}\Bigl(\frac{x}{2}\Bigr)^{2k+n} x^{n} \Biggr) = \frac{d}{dx}\Biggl( \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,(n+k)!}\,\frac{x^{2k+2n}}{2^{2k+n}} \Biggr) \\
&= \sum_{k=0}^{\infty} \frac{(-1)^k(2k+2n)}{k!\,(n+k)!}\,\frac{x^{2k+2n-1}}{2^{2k+n}} \\
&= \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,(n-1+k)!}\,\frac{x^{2k+2n-1}}{2^{2k+n-1}} \\
&= x^{n} \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,(k+n-1)!}\Bigl(\frac{x}{2}\Bigr)^{2k+n-1} = x^n J_{n-1}(x). \qquad\square
\end{aligned}
\]
Lemma 2.2.1. For any \(n \in \mathbb{Z}\),
\[ 2\,\frac{d}{dx} J_n(x) = J_{n-1}(x) - J_{n+1}(x) \tag{49} \]
and
\[ \frac{2n}{x}\,J_n(x) = J_{n+1}(x) + J_{n-1}(x). \tag{50} \]
Proof. We take
\[ \frac{d}{dx} J_n(x) = \frac{d}{dx}\Bigl( x^{n}\bigl(x^{-n} J_n(x)\bigr) \Bigr). \]
Applying the results from Proposition 2.2 and the product rule, we have
\[ \frac{d}{dx}J_n(x) = n x^{n-1}\bigl(x^{-n} J_n(x)\bigr) + x^{n}\,\frac{d}{dx}\bigl(x^{-n} J_n(x)\bigr) = n x^{-1} J_n(x) + x^{n}\bigl(-x^{-n} J_{n+1}(x)\bigr) = n x^{-1} J_n(x) - J_{n+1}(x). \]
Similarly, we take
\[ \frac{d}{dx}J_n(x) = \frac{d}{dx}\Bigl( x^{-n}\bigl(x^{n} J_n(x)\bigr) \Bigr) = -n x^{-n-1}\bigl(x^{n} J_n(x)\bigr) + x^{-n}\,\frac{d}{dx}\bigl(x^{n} J_n(x)\bigr) = -n x^{-1} J_n(x) + x^{-n}\bigl(x^{n} J_{n-1}(x)\bigr) = -n x^{-1} J_n(x) + J_{n-1}(x). \]
Adding these two expressions, we get the first result from the Lemma:
\[ 2\,\frac{d}{dx} J_n(x) = J_{n-1}(x) - J_{n+1}(x). \]
Subtracting the same expressions, we get the second result:
\[ 2n x^{-1} J_n(x) = J_{n-1}(x) + J_{n+1}(x). \qquad\square \]
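Both recurrences of Lemma 2.2.1 are easy to confirm numerically, approximating the derivative in (49) by a central difference (an illustrative sketch, not from the thesis):

```python
from math import factorial

def J(n, x, terms=40):
    # truncated series (24)
    return sum((-1) ** k / (factorial(k) * factorial(n + k)) * (x / 2) ** (2 * k + n)
               for k in range(terms))

x, h = 2.6, 1e-5
for n in range(1, 5):
    # (50): (2n/x) J_n(x) = J_{n+1}(x) + J_{n-1}(x)
    assert abs(2 * n / x * J(n, x) - (J(n + 1, x) + J(n - 1, x))) < 1e-10
    # (49): 2 J_n'(x) = J_{n-1}(x) - J_{n+1}(x), derivative by central difference
    deriv = (J(n, x + h) - J(n, x - h)) / (2 * h)
    assert abs(2 * deriv - (J(n - 1, x) - J(n + 1, x))) < 1e-7
```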
Remark 2.2.1. Lemma 2.2.1 can be proved similarly by differentiating the generating function of \(J_n(x)\) with respect to x and z, one at a time, and comparing the coefficients. Adding the resulting relations and multiplying by \(\frac{x^n}{2}\) will produce the second result from Proposition 2.2.

Remark 2.2.2. Note that the relation shown in equation (50) also holds for all \(v \in \mathbb{R}\), not just for \(n \in \mathbb{N}\). This will become useful in the next section during our discussion of Lommel polynomials.
Lemma 2.2.2. For any \(n \in \mathbb{Z}\) we have
\[ \int x^{n+1} J_n(x)\,dx = x^{n+1} J_{n+1}(x) + C \tag{51} \]
and
\[ \int x^{-n+1} J_n(x)\,dx = -x^{-n+1} J_{n-1}(x) + D, \tag{52} \]
where C and D are arbitrary constants.
Proof. We take the integral of equation (48) from Proposition 2.2 (with n replaced by n + 1):
\[ \int x^{n+1} J_n(x)\,dx = \int \frac{d}{dx}\bigl(x^{n+1} J_{n+1}(x)\bigr)\,dx = x^{n+1} J_{n+1}(x) + C. \]
To obtain the second equation, recall from Lemma 2.1.2 that \(J_{-n}(x) = (-1)^n J_n(x)\), or equivalently, \(J_n(x) = (-1)^n J_{-n}(x)\). We apply this to the equation below:
\[ \int x^{-n+1} J_n(x)\,dx = \int x^{-n+1}(-1)^n J_{-n}(x)\,dx = (-1)^n \int x^{-n+1} J_{-n}(x)\,dx = (-1)^n x^{-n+1} J_{-n+1}(x) + D \]
by the first part of the lemma. Then we have
\[ \int x^{-n+1} J_n(x)\,dx = (-1)^n x^{-n+1} J_{-n+1}(x) + D = (-1)^{-n+2} x^{-n+1} J_{-n+1}(x) + D = -x^{-n+1}(-1)^{-n+1} J_{-n+1}(x) + D. \]
Applying Lemma 2.1.2,
\[ \int x^{-n+1} J_n(x)\,dx = -x^{-n+1} J_{n-1}(x) + D. \qquad\square \]
Lemma 2.2.3. For any \(m \ne 0\),
\[ J_0^2(x) + 2\sum_{n=1}^{\infty} J_n^2(x) = 1 \tag{53} \]
and
\[ \sum_{n=-\infty}^{\infty} J_{n+m}(x)\,J_n(x) = 0. \tag{54} \]
Proof. We have
\[ e^{\frac{x}{2}(z-z^{-1})}\,e^{\frac{x}{2}(z^{-1}-z)} = e^0 = 1, \]
and by Proposition 2.1,
\[ 1 = \sum_{k=-\infty}^{\infty} J_k(x)\,z^{k} \sum_{n=-\infty}^{\infty} J_n(x)\,z^{-n}. \]
Letting m = k − n we write
\[
1 = \sum_{m=-\infty}^{\infty} \Biggl( \sum_{n=-\infty}^{\infty} J_{n+m}(x)\,J_n(x) \Biggr) z^{m}
= \sum_{n=-\infty}^{\infty} J_n^2(x)\,z^0 + \sum_{\substack{m\in\mathbb{Z}\\ m\ne 0}} \Biggl( \sum_{n=-\infty}^{\infty} J_{n+m}(x)\,J_n(x) \Biggr) z^{m}.
\]
Comparing the coefficients of \(z^0\) on both sides, we have
\[ 1 = \sum_{n=-\infty}^{\infty} J_n^2(x), \]
and comparing the coefficients of \(z^m\) for \(m \ne 0\),
\[ 0 = \sum_{n=-\infty}^{\infty} J_{n+m}(x)\,J_n(x). \]
We will now examine the equation \(1 = \sum_{n=-\infty}^{\infty} J_n^2(x)\) to find an equivalent form. We can write this summation as \(1 = J_0^2(x) + \sum_{n=-\infty}^{-1} J_n^2(x) + \sum_{n=1}^{\infty} J_n^2(x)\).

Step One. Fix any even \(n \in \mathbb{Z}\). Lemma 2.1.2 gives that \(J_n(x) = J_{-n}(x)\). Squaring both sides, we get that \(J_n^2(x) = J_{-n}^2(x)\). Then the sum of the n and −n terms is \(2J_n^2(x)\).

Step Two. Fix any odd \(n \in \mathbb{Z}\). Then we have that \(J_n(x) = -J_{-n}(x)\). Squaring both sides, we again have that \(J_n^2(x) = J_{-n}^2(x)\), so \(J_n^2(x) + J_{-n}^2(x) = 2J_n^2(x)\).

Then our equivalent equation is
\[ 1 = J_0^2(x) + 2\sum_{n=1}^{\infty} J_n^2(x). \qquad\square \]
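Equation (53) says the squared Bessel values at any fixed x sum to 1; a truncated numerical check (illustrative, not the author's code):

```python
from math import factorial

def J(n, x, terms=40):
    # truncated series (24)
    return sum((-1) ** k / (factorial(k) * factorial(n + k)) * (x / 2) ** (2 * k + n)
               for k in range(terms))

for x in (0.5, 3.0, 8.0):
    total = J(0, x) ** 2 + 2 * sum(J(n, x) ** 2 for n in range(1, 40))
    assert abs(total - 1.0) < 1e-10
```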
Lemma 2.2.4.
\[ \sum_{n\in\mathbb{Z}} J_n(x) = 1. \tag{55} \]
Proof. Set z = 1 in the generating function. Then we have
\[ e^{\frac{x}{2}(1-1^{-1})} = 1 = \sum_{n=-\infty}^{\infty} J_n(x)(1)^n = \sum_{n\in\mathbb{Z}} J_n(x). \qquad\square \]
Lemma 2.2.5. For all \(n \in \mathbb{Z}\), we have
\[ J_n(x+y) = \sum_{k\in\mathbb{Z}} J_k(x)\,J_{n-k}(y). \tag{56} \]
Proof. Observe that
\[
\sum_{n\in\mathbb{Z}} J_n(x+y)\,t^n = e^{\frac{1}{2}(x+y)(t-t^{-1})} = e^{\frac{1}{2}x(t-t^{-1})}\,e^{\frac{1}{2}y(t-t^{-1})} = \sum_{k\in\mathbb{Z}} J_k(x)\,t^k \sum_{m\in\mathbb{Z}} J_m(y)\,t^m = \sum_{n\in\mathbb{Z}} \Biggl( \sum_{k\in\mathbb{Z}} J_k(x)\,J_{n-k}(y) \Biggr) t^n.
\]
Comparing the coefficients yields the result. \(\square\)
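The addition theorem (56) can likewise be checked with truncated sums, using \(J_{-n} = (-1)^n J_n\) for the negative indices (illustrative sketch; the cutoff at |k| = 30 is an arbitrary choice):

```python
from math import factorial

def J(n, x, terms=40):
    if n < 0:
        return (-1) ** n * J(-n, x, terms)  # Lemma 2.1.2
    return sum((-1) ** k / (factorial(k) * factorial(n + k)) * (x / 2) ** (2 * k + n)
               for k in range(terms))

for x, y in ((0.4, 1.1), (2.0, 0.7)):
    for n in (0, 1, 3):
        lhs = J(n, x + y)
        rhs = sum(J(k, x) * J(n - k, y) for k in range(-30, 31))
        assert abs(lhs - rhs) < 1e-10
```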
2.2 Some Special Values
In this section we will examine what the Bessel functions look like when some particular values are chosen.
Lemma 2.2.6. If \(v = \frac{1}{2}\) or \(-\frac{1}{2}\) we have
\[ J_{1/2}(x) = Y_{-1/2}(x) = \sqrt{\frac{2}{\pi x}}\,\sin(x) \tag{57} \]
and
\[ J_{-1/2}(x) = -Y_{1/2}(x) = \sqrt{\frac{2}{\pi x}}\,\cos(x). \tag{58} \]
Proof. Observe the series representation of \(J_v\) and apply the properties of the gamma function. We begin with
\[ J_{1/2}(x) = \sqrt{\frac{x}{2}} \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma(k+1)\,\Gamma(k+\frac{3}{2})}\Bigl(\frac{x}{2}\Bigr)^{2k}. \]
Since \(\Gamma(k+\frac{3}{2}) = \frac{(2k+1)!\sqrt{\pi}}{k!\,2^{2k+1}}\), we have
\[
\begin{aligned}
J_{1/2}(x) &= \sqrt{\frac{x}{2}} \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,\frac{(2k+1)!\sqrt{\pi}}{k!\,2^{2k+1}}}\Bigl(\frac{x}{2}\Bigr)^{2k} = \sqrt{\frac{x}{2}} \sum_{k=0}^{\infty} \frac{(-1)^k 2^{2k+1}}{(2k+1)!\sqrt{\pi}}\Bigl(\frac{x}{2}\Bigr)^{2k} \\
&= \sqrt{\frac{2}{x}} \sum_{k=0}^{\infty} \frac{(-1)^k 2^{2k+1}}{(2k+1)!\sqrt{\pi}}\Bigl(\frac{x}{2}\Bigr)^{2k+1} = \sqrt{\frac{2}{x\pi}} \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}\,x^{2k+1}.
\end{aligned}
\]
Since we know the Taylor series
\[ \sin(x) = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}\,x^{2k+1}, \]
we can conclude that
\[ J_{1/2}(x) = \sqrt{\frac{2}{x\pi}}\,\sin(x). \]
We also have that
\[ Y_{-1/2}(x) = \frac{\cos(-\frac{\pi}{2})\,J_{-1/2}(x) - J_{1/2}(x)}{\sin(-\frac{\pi}{2})} = \frac{(0)J_{-1/2}(x) - J_{1/2}(x)}{-1} = J_{1/2}(x). \]
Thus we have proven the first result. The second result follows similarly, considering that \(\Gamma(k+\frac{1}{2}) = \frac{(2k)!\sqrt{\pi}}{2^{2k}k!}\) and \(\cos(x) = \sum_{n=0}^{\infty} (-1)^n\frac{x^{2n}}{(2n)!}\). We have
\[
J_{-1/2}(x) = \Bigl(\frac{x}{2}\Bigr)^{-1/2} \sum_{k=0}^{\infty} \frac{(-1)^k}{\Gamma(k+1)\,\Gamma(k+\frac{1}{2})}\Bigl(\frac{x}{2}\Bigr)^{2k} = \sqrt{\frac{2}{x}} \sum_{k=0}^{\infty} \frac{(-1)^k}{\frac{(2k)!\sqrt{\pi}}{2^{2k}}}\Bigl(\frac{x}{2}\Bigr)^{2k} = \sqrt{\frac{2}{x\pi}} \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!}\,x^{2k} = \sqrt{\frac{2}{x\pi}}\,\cos(x).
\]
Also,
\[ Y_{1/2}(x) = \frac{\cos(\frac{\pi}{2})\,J_{1/2}(x) - J_{-1/2}(x)}{\sin(\frac{\pi}{2})} = \frac{(0)J_{1/2}(x) - J_{-1/2}(x)}{1} = -J_{-1/2}(x). \]
This proves the second result. \(\square\)
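The closed forms (57) and (58) follow from the Γ-function series (29) and can be verified directly with `math.gamma` (illustrative, standard library only):

```python
from math import gamma, sin, cos, sqrt, pi

def Jv(v, x, terms=60):
    # series (29) with the Gamma function in place of the factorials
    return sum((-1) ** k / (gamma(k + 1) * gamma(v + k + 1)) * (x / 2) ** (2 * k + v)
               for k in range(terms))

for x in (0.5, 2.0, 9.0):
    assert abs(Jv(0.5, x) - sqrt(2 / (pi * x)) * sin(x)) < 1e-10   # equation (57)
    assert abs(Jv(-0.5, x) - sqrt(2 / (pi * x)) * cos(x)) < 1e-10  # equation (58)
```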
Lemma 2.2.7. If \(v \in \frac{1}{2} + \mathbb{Z}\), there are polynomials \(P_n(x)\) and \(Q_n(x)\) with
\[ \deg P_n = \deg Q_n = n, \qquad P_n(-x) = (-1)^n P_n(x), \qquad Q_n(-x) = (-1)^n Q_n(x), \]
such that for any \(k \in \mathbb{N}_0\)
\[ J_{k+\frac{1}{2}}(x) = \sqrt{\frac{2}{x\pi}}\Bigl( P_k\Bigl(\frac{1}{x}\Bigr)\sin(x) - Q_{k-1}\Bigl(\frac{1}{x}\Bigr)\cos(x) \Bigr) \tag{59} \]
and
\[ J_{-k-\frac{1}{2}}(x) = (-1)^k\sqrt{\frac{2}{x\pi}}\Bigl( P_k\Bigl(\frac{1}{x}\Bigr)\cos(x) + Q_{k-1}\Bigl(\frac{1}{x}\Bigr)\sin(x) \Bigr). \tag{60} \]
Proof. From Lemma 2.2.1, equation (50) (which, by Remark 2.2.2, holds for half-integer orders as well), we had the recurrence formula
\[ J_{v+1}(x) = \frac{2v}{x}\,J_v(x) - J_{v-1}(x). \]
We will use this formula and show by induction that
\[ J_{n+v}(x) = P_n\Bigl(\frac{1}{x}\Bigr) J_v(x) - Q_{n-1}\Bigl(\frac{1}{x}\Bigr) J_{v-1}(x) \]
for the \(P_n\) and \(Q_n\) defined in the lemma. We begin with the base case \(J_{v+2}(x)\):
\[ J_{v+2}(x) = \frac{2v}{x} J_{v+1}(x) - J_v(x) = \frac{2v}{x}\Bigl( \frac{2v}{x} J_v(x) - J_{v-1}(x) \Bigr) - J_v(x) = \Bigl( \Bigl(\frac{2v}{x}\Bigr)^2 - 1 \Bigr) J_v(x) - \frac{2v}{x} J_{v-1}(x). \]
We will also show the case \(J_{v+3}(x)\):
\[
\begin{aligned}
J_{v+3}(x) &= \frac{2v}{x} J_{v+2}(x) - J_{v+1}(x) \\
&= \frac{2v}{x}\Bigl( \Bigl(\Bigl(\frac{2v}{x}\Bigr)^2 - 1\Bigr) J_v(x) - \frac{2v}{x} J_{v-1}(x) \Bigr) - J_{v+1}(x) \\
&= \Bigl( \Bigl(\frac{2v}{x}\Bigr)^3 - \frac{2v}{x} \Bigr) J_v(x) - \Bigl(\frac{2v}{x}\Bigr)^2 J_{v-1}(x) - \Bigl( \frac{2v}{x} J_v(x) - J_{v-1}(x) \Bigr) \\
&= \Bigl( \Bigl(\frac{2v}{x}\Bigr)^3 - \frac{4v}{x} \Bigr) J_v(x) - \Bigl( \Bigl(\frac{2v}{x}\Bigr)^2 - 1 \Bigr) J_{v-1}(x).
\end{aligned}
\]
Now suppose that the formula is true for some \(n \in \mathbb{N}\). We will show that it must hold for the n + 1 case. We have
\[
\begin{aligned}
J_{n+1+v}(x) &= P_n\Bigl(\frac{1}{x}\Bigr) J_{v+1}(x) - Q_{n-1}\Bigl(\frac{1}{x}\Bigr) J_v(x) \\
&= P_n\Bigl(\frac{1}{x}\Bigr)\Bigl( \frac{2v}{x} J_v(x) - J_{v-1}(x) \Bigr) - Q_{n-1}\Bigl(\frac{1}{x}\Bigr) J_v(x) \\
&= \Bigl( \frac{2v}{x} P_n\Bigl(\frac{1}{x}\Bigr) - Q_{n-1}\Bigl(\frac{1}{x}\Bigr) \Bigr) J_v(x) - P_n\Bigl(\frac{1}{x}\Bigr) J_{v-1}(x).
\end{aligned}
\]
Letting \(P_{n+1}(\frac{1}{x}) = \frac{2v}{x} P_n(\frac{1}{x}) - Q_{n-1}(\frac{1}{x})\) and \(Q_n(\frac{1}{x}) = P_n(\frac{1}{x})\), we now have
\[ J_{n+1+v}(x) = P_{n+1}\Bigl(\frac{1}{x}\Bigr) J_v(x) - Q_n\Bigl(\frac{1}{x}\Bigr) J_{v-1}(x). \]
By the principle of mathematical induction, the formula holds for all \(n \in \mathbb{N}\).

Now let \(v = \frac{1}{2}\). We have
\[ J_{k+\frac{1}{2}}(x) = P_k\Bigl(\frac{1}{x}\Bigr) J_{1/2}(x) - Q_{k-1}\Bigl(\frac{1}{x}\Bigr) J_{-1/2}(x). \]
Plugging in equations (57) and (58) from Lemma 2.2.6, we get the first result:
\[ J_{k+\frac{1}{2}}(x) = \sqrt{\frac{2}{x\pi}}\Bigl( P_k\Bigl(\frac{1}{x}\Bigr)\sin(x) - Q_{k-1}\Bigl(\frac{1}{x}\Bigr)\cos(x) \Bigr). \]
To achieve the second result, we can rewrite our recurrence relation as
\[ J_{-v-1}(x) = \frac{-2v}{x}\,J_{-v}(x) - J_{-v+1}(x). \]
We then have
\[ J_{-v-2}(x) = \Bigl( \Bigl(\frac{2v}{x}\Bigr)^2 - 1 \Bigr) J_{-v}(x) + \frac{2v}{x}\,J_{-v+1}(x) \]
and
\[ J_{-v-3}(x) = \Bigl( -\Bigl(\frac{2v}{x}\Bigr)^3 + \frac{4v}{x} \Bigr) J_{-v}(x) - \Bigl( \Bigl(\frac{2v}{x}\Bigr)^2 - 1 \Bigr) J_{-v+1}(x). \]
Continuing to iterate in the same manner as before, we get the formula
\[ J_{-v-k}(x) = (-1)^k\Bigl( P_k\Bigl(\frac{1}{x}\Bigr) J_{-v}(x) + Q_{k-1}\Bigl(\frac{1}{x}\Bigr) J_{-v+1}(x) \Bigr). \]
Letting \(v = \frac{1}{2}\), we have the second result:
\[ J_{-k-\frac{1}{2}}(x) = (-1)^k\sqrt{\frac{2}{x\pi}}\Bigl( P_k\Bigl(\frac{1}{x}\Bigr)\cos(x) + Q_{k-1}\Bigl(\frac{1}{x}\Bigr)\sin(x) \Bigr). \qquad\square \]
Remark 2.2.3. The polynomials \(P_n\) and \(Q_n\) in Lemma 2.2.7 are called Lommel polynomials and were introduced by the physicist Eugen von Lommel (1837-1899). They solve the recurrence relation
\[ J_{m+v}(z) = J_v(z)\,R_{m,v}(z) - J_{v-1}(z)\,R_{m-1,v+1}(z) \]
and are given by the formula
\[ R_{m,v}(z) = \sum_{n=0}^{\lfloor m/2 \rfloor} \frac{(-1)^n (m-n)!\,\Gamma(v+m-n)}{n!\,(m-2n)!\,\Gamma(v+n)} \Bigl(\frac{z}{2}\Bigr)^{2n-m}. \]
Lemma 2.2.8. For \(k \in \mathbb{N}\),
\[ Y_{-k-\frac{1}{2}}(x) = (-1)^k J_{k+\frac{1}{2}}(x) \tag{61} \]
and
\[ Y_{k+\frac{1}{2}}(x) = (-1)^{k-1} J_{-k-\frac{1}{2}}(x). \tag{62} \]
Proof. We use formula (37) to obtain
\[ Y_{-k-\frac{1}{2}}(x) = \frac{\cos(-k\pi - \frac{\pi}{2})\,J_{-k-\frac{1}{2}}(x) - J_{k+\frac{1}{2}}(x)}{\sin(-k\pi - \frac{\pi}{2})} = \frac{-J_{k+\frac{1}{2}}(x)}{(-1)^{k+1}} = (-1)^k J_{k+\frac{1}{2}}(x). \]
And similarly,
\[ Y_{k+\frac{1}{2}}(x) = \frac{\cos(k\pi + \frac{\pi}{2})\,J_{k+\frac{1}{2}}(x) - J_{-k-\frac{1}{2}}(x)}{\sin(k\pi + \frac{\pi}{2})} = \frac{-J_{-k-\frac{1}{2}}(x)}{(-1)^{k}} = (-1)^{k-1} J_{-k-\frac{1}{2}}(x). \qquad\square \]
2.3 Integral Representations
The purpose of this section is to give the integral representations of each of our two Bessel functions. These will aid us later on in our discussion of asymptotics.
Theorem 2.3. For all \(v, x \in \mathbb{C}\) we have
\[ J_v(x) = \frac{1}{\pi}\int_0^{\pi} \cos(x\sin t - vt)\,dt - \frac{\sin(\pi v)}{\pi}\int_0^{\infty} e^{-x\sinh(t)-vt}\,dt \tag{63} \]
and
\[ Y_v(x) = \frac{1}{\pi}\int_0^{\pi} \sin(x\sin t - vt)\,dt - \frac{1}{\pi}\int_0^{\infty} e^{-x\sinh(t)}\bigl(e^{vt} + \cos(\pi v)\,e^{-vt}\bigr)\,dt. \tag{64} \]
Proof. A representation of the Gamma function extended to the complex plane (given to us by the mathematician Hermann Hankel) is
\[ \frac{1}{\Gamma(z)} = \frac{1}{2\pi i}\int_{\gamma_1} t^{-z} e^{t}\,dt, \]
where \(\gamma_1\) is some contour in the complex plane coming from \(-\infty\), turning upwards around 0, and heading back towards \(-\infty\).

Figure 1: The contour \(\gamma_1\).
Then we have
\[ J_v(x) = \frac{(\frac{x}{2})^v}{2\pi i}\int_{\gamma_1} \sum_{k=0}^{\infty} \frac{(-1)^k(\frac{x}{2})^{2k}\,t^{-v-k-1}}{k!}\,e^{t}\,dt, \]
since
\[ \frac{1}{\Gamma(v+k+1)} = \frac{1}{2\pi i}\int_{\gamma_1} t^{-v-k-1} e^{t}\,dt. \]
Recall the power series representation
\[ e^{-\frac{x^2}{4t}} = \sum_{k=0}^{\infty} \frac{4^{-k}(-\frac{x^2}{t})^k}{k!} = \sum_{k=0}^{\infty} \frac{(-1)^k(\frac{x}{2})^{2k}\,t^{-k}}{k!}. \]
Then \(J_v(x)\) becomes
\[ J_v(x) = \frac{(\frac{x}{2})^v}{2\pi i}\int_{\gamma_1} t^{-v-1}\,e^{t - \frac{x^2}{4t}}\,dt. \]
We will apply u-substitution. Let \(t = \frac{x}{2}u\). Then
\[
J_v(x) = \frac{(\frac{x}{2})^v}{2\pi i}\int_{\gamma_2} \Bigl(\frac{x}{2}u\Bigr)^{-v-1} e^{\frac{x}{2}u - (\frac{x}{2})^2\frac{1}{(x/2)u}}\,\Bigl(\frac{x}{2}\Bigr)\,du
= \frac{1}{2\pi i}\int_{\gamma_2} \Bigl(\frac{x}{2}\Bigr)^{v+1}\Bigl(\frac{x}{2}\Bigr)^{-v-1} u^{-v-1}\,e^{\frac{x}{2}(u-\frac{1}{u})}\,du
= \frac{1}{2\pi i}\int_{\gamma_2} u^{-v-1}\,e^{\frac{x}{2}(u-\frac{1}{u})}\,du
\]
for some complex contour \(\gamma_2\) of the same type. Next we will perform another substitution. Let \(u = e^w\). We will have a new contour, since the exponential function never vanishes. So our new contour will now originate from \(+\infty\), turn around 0 (positively oriented), and head back to \(+\infty\). A suitable contour following this path is the rectangle with complex vertices \(\infty - i\pi\), \(-i\pi\), \(i\pi\), and \(\infty + i\pi\). This will be our new contour, \(\gamma\).

Figure 2: The contour \(\gamma\).
Making this substitution, we have
\[ J_v(x) = \frac{1}{2\pi i}\int_{\gamma} (e^w)^{-v-1}\,e^{\frac{x}{2}(e^w - e^{-w})}\,e^w\,dw = \frac{1}{2\pi i}\int_{\gamma} (e^w)^{-v}\,e^{\frac{x}{2}(e^w - e^{-w})}\,dw = \frac{1}{2\pi i}\int_{\gamma} e^{-vw}\,e^{x\sinh(w)}\,dw. \]
This integral over the rectangular \(\gamma\) can be split into three parts: the integral along the left vertical edge, the integral along the top edge, and the negative of the integral along the bottom edge. We can write this:
\[ J_v(x) = \frac{1}{2\pi i}(P_1 + P_2 - P_3) \]
with
\[ P_1 = \int_{-\pi}^{\pi} e^{-ivt}\,e^{x\sinh(it)}\,i\,dt, \qquad P_2 = \int_0^{\infty} e^{-v(i\pi+t)}\,e^{x\sinh(i\pi+t)}\,dt, \qquad P_3 = \int_0^{\infty} e^{-v(-i\pi+t)}\,e^{x\sinh(-i\pi+t)}\,dt. \]
Step One. Let us examine the first part, \(P_1\). We have
\[ \frac{1}{2\pi i}P_1 = \frac{1}{2\pi i}\int_{-\pi}^{\pi} e^{-ivt}\,e^{x\sinh(it)}\,i\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{x\sinh(it)-ivt}\,dt. \]
Recall the formula for the hyperbolic sine which says that \(\sinh(it) = i\sin(t)\). We use this to get
\[
\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{x\sinh(it)-ivt}\,dt = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i(x\sin(t)-vt)}\,dt = \frac{1}{2\pi}\Bigl( \int_0^{\pi} e^{i(x\sin(t)-vt)}\,dt + \int_{-\pi}^{0} e^{i(x\sin(t)-vt)}\,dt \Bigr) = \frac{1}{2\pi}\int_0^{\pi} \bigl( e^{i(x\sin(t)-vt)} + e^{-i(x\sin(t)-vt)} \bigr)\,dt.
\]
By Euler's formula, this becomes
\[ \frac{1}{2\pi i}P_1 = \frac{1}{2\pi}\int_0^{\pi} \bigl( \cos(x\sin(t)-vt) + i\sin(x\sin(t)-vt) + \cos(-x\sin(t)+vt) + i\sin(-x\sin(t)+vt) \bigr)\,dt. \]
Since sine is an odd function, \(i\sin(x\sin(t)-vt) = -i\sin(-x\sin(t)+vt)\), and since cosine is an even function, \(\cos(x\sin(t)-vt) = \cos(-x\sin(t)+vt)\). Therefore, we have
\[ \frac{1}{2\pi i}P_1 = \frac{1}{\pi}\int_0^{\pi} \cos(x\sin(t)-vt)\,dt. \]
This gives the first half of the first result.
Step Two. We will examine
\[ \frac{1}{2\pi i}(P_2 - P_3) = \frac{1}{2\pi i}\Bigl( \int_0^{\infty} e^{-v(i\pi+t)}\,e^{x\sinh(i\pi+t)}\,dt - \int_0^{\infty} e^{-v(-i\pi+t)}\,e^{x\sinh(-i\pi+t)}\,dt \Bigr) = \frac{1}{2\pi i}\int_0^{\infty} \bigl( e^{-i\pi v}e^{-vt}e^{x\sinh(i\pi+t)} - e^{i\pi v}e^{-vt}e^{x\sinh(-i\pi+t)} \bigr)\,dt. \]
We will use the property of the hyperbolic sine function which says that \(\sinh(-i\pi+t) = -\sinh(t)\) and \(\sinh(i\pi+t) = -\sinh(t)\). Then we have
\[ \frac{1}{2\pi i}(P_2 - P_3) = \frac{1}{2\pi i}\int_0^{\infty} e^{-x\sinh(t)-vt}\bigl( e^{-i\pi v} - e^{i\pi v} \bigr)\,dt. \]
Recall the identity \(\sin(x) = \frac{e^{ix}-e^{-ix}}{2i}\), which follows as a result of Euler's formula. Utilizing this, we have
\[ \frac{1}{2\pi i}(P_2 - P_3) = \frac{-\sin(\pi v)}{\pi}\int_0^{\infty} e^{-x\sinh(t)-vt}\,dt. \]
This proves the second half of the result for \(J_v(x)\).
Next we will find the result for \(Y_v(x)\). Rearranging equation (37), we have that
\[
\begin{aligned}
\sin(v\pi)\,Y_v(x) &= \cos(v\pi)\,J_v(x) - J_{-v}(x) \\
&= \frac{\cos(v\pi)}{\pi}\int_0^{\pi}\cos(x\sin(t)-vt)\,dt - \frac{\cos(v\pi)\sin(v\pi)}{\pi}\int_0^{\infty} e^{-x\sinh(t)}e^{-vt}\,dt \\
&\qquad - \frac{1}{\pi}\int_0^{\pi}\cos(x\sin(t)+vt)\,dt + \frac{\sin(-v\pi)}{\pi}\int_0^{\infty} e^{-x\sinh(t)}e^{vt}\,dt \\
&= \frac{1}{\pi}\int_0^{\pi}\cos(v\pi)\cos(x\sin(t)-vt)\,dt - \frac{1}{\pi}\int_0^{\pi}\cos(x\sin(t)+vt)\,dt - \frac{\sin(v\pi)}{\pi}\int_0^{\infty} e^{-x\sinh(t)}\bigl(\cos(v\pi)e^{-vt} + e^{vt}\bigr)\,dt \\
&= \frac{L_1}{\pi} - \frac{\sin(v\pi)}{\pi}L_2.
\end{aligned}
\]
Then
\[ L_1 = \int_0^{\pi}\cos(v\pi)\cos(x\sin(t)-vt)\,dt - \int_0^{\pi}\cos(x\sin(t)+vt)\,dt \]
and
\[ L_2 = \int_0^{\infty} e^{-x\sinh(t)}\bigl(\cos(v\pi)e^{-vt} + e^{vt}\bigr)\,dt. \]
We will use some rules of trigonometric products to rewrite \(L_1\). Recall that \(\cos(a)\cos(b) = \frac{1}{2}\bigl(\cos(a+b) + \cos(a-b)\bigr)\) and \(\sin(a)\sin(b) = \frac{1}{2}\bigl(\cos(a-b) - \cos(a+b)\bigr)\). We have
\[
\begin{aligned}
\cos(v\pi)\cos(x\sin(t)-vt) &= \frac{1}{2}\bigl(\cos(x\sin(t)-vt+v\pi) + \cos(x\sin(t)-vt-v\pi)\bigr) \\
&= \cos(x\sin(t)-vt+v\pi) - \frac{1}{2}\cos(x\sin(t)-vt+v\pi) + \frac{1}{2}\cos(x\sin(t)-vt-v\pi) \\
&= \cos(x\sin(t)-vt+v\pi) + \sin(v\pi)\sin(x\sin(t)-vt) \\
&= \cos(x\sin(t)+v(\pi-t)) + \sin(v\pi)\sin(x\sin(t)-vt).
\end{aligned}
\]
Special Step. Before we continue, we need to show that the following relation is true:
\[ \int_0^{\pi}\cos(x\sin(t)+v(\pi-t))\,dt = \int_0^{\pi}\cos(x\sin(t)+vt)\,dt. \]
This follows from the substitution \(s = \pi - t\): since \(\sin(\pi - s) = \sin(s)\), we have
\[ \int_0^{\pi}\cos(x\sin(t)+v(\pi-t))\,dt = \int_0^{\pi}\cos(x\sin(s)+vs)\,ds, \]
and thus the relation holds. We will use it to simplify \(L_1\) as follows:
\[ L_1 = \int_0^{\pi}\bigl( \cos(x\sin(t)+v(\pi-t)) + \sin(v\pi)\sin(x\sin(t)-vt) - \cos(x\sin(t)+vt) \bigr)\,dt = \sin(v\pi)\int_0^{\pi}\sin(x\sin(t)-vt)\,dt. \]
This proves the result for \(Y_v(x)\). \(\square\)
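For integer order the \(\sin(\pi v)\) term in (63) vanishes, leaving only the first integral. A simple trapezoid rule (an illustrative sketch, with an arbitrary step count) reproduces the series values:

```python
from math import factorial, sin, cos, pi

def J_series(n, x, terms=40):
    # the series solution (24)
    return sum((-1) ** k / (factorial(k) * factorial(n + k)) * (x / 2) ** (2 * k + n)
               for k in range(terms))

def J_integral(n, x, steps=4000):
    # (1/pi) * integral_0^pi cos(x sin t - n t) dt, by the trapezoid rule
    h = pi / steps
    f = lambda t: cos(x * sin(t) - n * t)
    total = 0.5 * (f(0.0) + f(pi)) + sum(f(i * h) for i in range(1, steps))
    return total * h / pi

for n in range(4):
    for x in (0.7, 3.0):
        assert abs(J_integral(n, x) - J_series(n, x)) < 1e-4
```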
2.4 Using the Generating Function to Derive Jn(x)
The previous section gave the integral representation of \(J_v(x)\) for all \(v \in \mathbb{C}\). Notice that for all \(n \in \mathbb{N}\), \(\sin(n\pi) = 0\). We see this in the following theorem, which gives the integral representation for orders in the natural numbers.
Theorem 2.4. Let the functions \(y_n(x)\) be defined by the Laurent series
\[ e^{\frac{x}{2}(z - z^{-1})} = \sum_{n=-\infty}^{\infty} y_n(x)\,z^n. \tag{65} \]
Then
\[ y_n(x) = \Bigl(\frac{x}{2}\Bigr)^n \sum_{k=0}^{\infty} \frac{(-1)^k}{(n+k)!\,k!}\Bigl(\frac{x}{2}\Bigr)^{2k} \tag{66} \]
and
\[ y_n(x) = \frac{1}{\pi}\int_0^{\pi} \cos(x\sin\varphi - n\varphi)\,d\varphi. \tag{67} \]
Proof. Cauchy's integral formula for a Laurent series gives us
\[ y_n(x) = \frac{1}{2\pi i}\oint_{\gamma} \frac{e^{\frac{x}{2}(t-t^{-1})}}{t^{n+1}}\,dt \]
for any simple closed contour \(\gamma\) around 0.

Step One. Perform the u-substitution \(t = \frac{2u}{x}\) and recall the series expansion \(e^x = \sum_{k=0}^{\infty}\frac{x^k}{k!}\). Then we have
\[
\begin{aligned}
y_n(x) &= \frac{1}{2\pi i}\oint_{\gamma'} \frac{e^{\frac{x}{2}(\frac{2u}{x} - (\frac{2u}{x})^{-1})}}{(\frac{2}{x}u)^{n+1}}\,\frac{2}{x}\,du \\
&= \frac{1}{2\pi i}\oint_{\gamma'} \frac{e^{u - \frac{x^2}{4u}}}{(\frac{2}{x})^{n+1} u^{n+1}}\,\frac{2}{x}\,du \\
&= \frac{1}{2\pi i}\Bigl(\frac{x}{2}\Bigr)^n \oint_{\gamma'} e^{u-\frac{x^2}{4u}}\,u^{-n-1}\,du \\
&= \frac{1}{2\pi i}\Bigl(\frac{x}{2}\Bigr)^n \oint_{\gamma'} \sum_{m=0}^{\infty} \frac{u^m}{m!} \sum_{k=0}^{\infty} \frac{(-\frac{x^2}{4})^k u^{-k}}{k!}\,u^{-n-1}\,du \\
&= \Bigl(\frac{x}{2}\Bigr)^n \sum_{m=0}^{\infty}\sum_{k=0}^{\infty} \frac{(-1)^k}{m!\,k!}\Bigl(\frac{x}{2}\Bigr)^{2k}\,\frac{1}{2\pi i}\oint_{\gamma'} u^{m-k-n-1}\,du.
\end{aligned}
\]
Choosing \(\gamma'\) to be a circle \(C_R\) of radius R, Cauchy's formula gives us
\[ \oint_{C_R} u^{l}\,du = 2\pi i \]
for l = −1; otherwise, the integral equals 0. Thus, the only summation terms remaining correspond to m = n + k, and the equation is
\[ y_n(x) = \Bigl(\frac{x}{2}\Bigr)^n \sum_{k=0}^{\infty} \frac{(-1)^k}{(n+k)!\,k!}\Bigl(\frac{x}{2}\Bigr)^{2k}. \]
This proves the first result.
Step Two. We can choose the contour \(t = e^{i\varphi}\) and integrate from 0 to \(2\pi\), with \(dt = ie^{i\varphi}\,d\varphi\). Then we have
\[
\begin{aligned}
y_n(x) &= \frac{1}{2\pi}\int_0^{2\pi} \frac{e^{\frac{x}{2}(e^{i\varphi} - e^{-i\varphi})}}{(e^{i\varphi})^{n+1}}\,e^{i\varphi}\,d\varphi \\
&= \frac{1}{2\pi}\int_0^{2\pi} \frac{e^{\frac{x}{2}(e^{i\varphi} - e^{-i\varphi})}}{e^{in\varphi}}\,d\varphi \\
&= \frac{1}{2\pi}\int_0^{2\pi} \frac{e^{\frac{x}{2}((\cos\varphi + i\sin\varphi) - (\cos(-\varphi) + i\sin(-\varphi)))}}{e^{in\varphi}}\,d\varphi \\
&= \frac{1}{2\pi}\int_0^{2\pi} \frac{e^{\frac{x}{2}(\cos\varphi + i\sin\varphi - \cos\varphi + i\sin\varphi)}}{e^{in\varphi}}\,d\varphi \\
&= \frac{1}{2\pi}\int_0^{2\pi} e^{ix\sin\varphi - in\varphi}\,d\varphi \\
&= \frac{1}{2\pi}\int_0^{2\pi} \bigl( \cos(x\sin\varphi - n\varphi) + i\sin(x\sin\varphi - n\varphi) \bigr)\,d\varphi \\
&= \frac{1}{\pi}\int_0^{\pi} \cos(x\sin\varphi - n\varphi)\,d\varphi,
\end{aligned}
\]
where in the last step the substitution \(\varphi \to 2\pi - \varphi\) shows that the sine term integrates to zero and that the cosine integrand is symmetric about \(\varphi = \pi\). This proves the second result. \(\square\)
2.5 Asymptotic Analysis
In this section we will use the proven integral representations to derive some asymptotic formulae.
Theorem 2.5. For \(x \in \mathbb{R}\), as \(x \to \infty\) we have
\[ J_v(x) \sim \sqrt{\frac{2}{\pi x}}\,\cos\Bigl(x - \frac{\pi}{4} - \frac{v\pi}{2}\Bigr) \tag{68} \]
and
\[ Y_v(x) \sim \sqrt{\frac{2}{\pi x}}\,\sin\Bigl(x - \frac{\pi}{4} - \frac{v\pi}{2}\Bigr). \tag{69} \]
Proof. The second integrals in both of the integral representations go to 0 exponentially as $x$ gets large. So as $x \to \infty$, we use Euler's formula to write
$$J_v(x)+iY_v(x) = \frac{1}{\pi}\int_0^{\pi} e^{i(x\sin t - vt)}\,dt + O(x^{-A})$$
for all $A$. Let $u = t - \frac{\pi}{2}$. Then we have
$$\begin{aligned}
J_v(x)+iY_v(x) &= \frac{1}{\pi}\int_{-\pi/2}^{\pi/2} e^{i\left(x\sin\left(u+\frac{\pi}{2}\right)-v\left(u+\frac{\pi}{2}\right)\right)}\,du + O(x^{-A})\\
&= \frac{2}{\pi}\int_0^{\pi/2} e^{ix\cos u}\,e^{-ivu}\,e^{-\frac{iv\pi}{2}}\,du + O(x^{-A})\\
&= \frac{2e^{-\frac{iv\pi}{2}}}{\pi}\int_0^{\pi/2} e^{ix\cos u}\left(\cos(vu)-i\sin(vu)\right)du + O(x^{-A})\\
&= \frac{2e^{-\frac{iv\pi}{2}}}{\pi}\left(\int_0^{\pi/3} e^{ix\cos u}\left(\cos(vu)-i\sin(vu)\right)du + \int_{\pi/3}^{\pi/2} e^{ix\cos u}\left(\cos(vu)-i\sin(vu)\right)du\right) + O(x^{-A})\\
&= \frac{2e^{-\frac{iv\pi}{2}}}{\pi}\left(P_1+P_2\right) + O(x^{-A}),
\end{aligned}$$
where $P_1 = \int_0^{\pi/3} e^{ix\cos u}\left(\cos(vu)-i\sin(vu)\right)du$ and $P_2 = \int_{\pi/3}^{\pi/2} e^{ix\cos u}\left(\cos(vu)-i\sin(vu)\right)du$.
Step One. We take $P_2$ and make the substitution $\cos u = z$. Then $u = \cos^{-1}(z)$ and $du = -\frac{dz}{\sqrt{1-z^2}}$. So we have
$$\begin{aligned}
P_2 &= \int_{1/2}^{0} e^{ixz}\,\frac{\cos\left(v\cos^{-1}(z)\right)-i\sin\left(v\cos^{-1}(z)\right)}{-\sqrt{1-z^2}}\,dz\\
&= \int_0^{1/2} e^{ixz}\,\frac{\cos\left(v\cos^{-1}(z)\right)-i\sin\left(v\cos^{-1}(z)\right)}{\sqrt{1-z^2}}\,dz\\
&= \int_0^{1/2} e^{ixz}\,\phi(z)\,dz,
\end{aligned}$$
where
$$\phi(z) = \frac{\cos\left(v\cos^{-1}(z)\right)-i\sin\left(v\cos^{-1}(z)\right)}{\sqrt{1-z^2}}.$$
Integrating by parts, we have
$$\begin{aligned}
\int_0^{1/2} e^{ixz}\phi(z)\,dz &= \frac{e^{ixz}}{ix}\phi(z)\Big|_0^{1/2} - \frac{1}{ix}\int_0^{1/2} e^{ixz}\phi'(z)\,dz\\
&= \left(\frac{\cos(xz)+i\sin(xz)}{ix}\right)\phi(z)\Big|_0^{1/2} - \frac{1}{ix}\int_0^{1/2} e^{ixz}\phi'(z)\,dz\\
&= \left(\frac{\sin(xz)}{x}\phi(z)+\frac{\cos(xz)}{ix}\phi(z)\right)\Big|_0^{1/2} - \frac{1}{ix}\int_0^{1/2} e^{ixz}\phi'(z)\,dz\\
&= O(x^{-1}),
\end{aligned}$$
since $z \in [0,\frac{1}{2}]$ avoids the singularities of $\phi$ and $\phi'$ at $z = \pm 1$. (Notice that $\phi$ does not depend on $x$ and is built from the bounded functions sine and cosine, so both the boundary terms and the remaining integral are bounded by a constant times $x^{-1}$.)
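The $O(x^{-1})$ decay of $P_2$ can also be observed numerically. The sketch below is an illustration added to the text (the order $v = 1$ and the sample values of $x$ are arbitrary choices): it evaluates $\int_0^{1/2} e^{ixz}\phi(z)\,dz$ and checks that $x\,|P_2(x)|$ stays bounded.

```python
import numpy as np
from scipy.integrate import quad

v = 1.0  # arbitrary order, chosen only for this illustration

def phi(z):
    # cos(v*arccos z) - i*sin(v*arccos z) = e^{-i v arccos z}
    return np.exp(-1j * v * np.arccos(z)) / np.sqrt(1 - z**2)

def P2(x):
    # Integrate real and imaginary parts separately with adaptive quadrature.
    re, _ = quad(lambda z: np.real(np.exp(1j*x*z) * phi(z)), 0, 0.5, limit=200)
    im, _ = quad(lambda z: np.imag(np.exp(1j*x*z) * phi(z)), 0, 0.5, limit=200)
    return re + 1j*im

for x in (10.0, 100.0, 1000.0):
    print(x, abs(P2(x)), x * abs(P2(x)))
```

The product $x\,|P_2(x)|$ remains of order one while $|P_2(x)|$ itself shrinks, consistent with the integration-by-parts bound.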
Step Two. We take $P_1$ and substitute $t = \sqrt{2x}\sin\left(\frac{u}{2}\right)$. Then we have $du = \frac{\sqrt{2}}{\sqrt{x}}\,\frac{dt}{\sqrt{1-\frac{t^2}{2x}}}$ and $\cos(u) = 1-\frac{t^2}{x}$. The equation becomes
$$\begin{aligned}
P_1(x) &= \frac{\sqrt{2}}{\sqrt{x}}\int_0^{\sqrt{x/2}} e^{ix\left(1-\frac{t^2}{x}\right)}\left(\cos\left(2v\sin^{-1}\left(\tfrac{t}{\sqrt{2x}}\right)\right)-i\sin\left(2v\sin^{-1}\left(\tfrac{t}{\sqrt{2x}}\right)\right)\right)\frac{dt}{\sqrt{1-\frac{t^2}{2x}}}\\
&= \frac{\sqrt{2}}{\sqrt{x}}\,e^{ix}\int_0^{\sqrt{x/2}} e^{-it^2}\left(\cos\left(2v\sin^{-1}\left(\tfrac{t}{\sqrt{2x}}\right)\right)-i\sin\left(2v\sin^{-1}\left(\tfrac{t}{\sqrt{2x}}\right)\right)\right)\frac{dt}{\sqrt{1-\frac{t^2}{2x}}}.
\end{aligned}$$
As $x \to \infty$, we get
$$P_1(x) \sim \frac{\sqrt{2}}{\sqrt{x}}\,e^{ix}\int_0^{\infty} e^{-it^2}\,dt.$$
Since $\int_0^{\infty} e^{-it^2}\,dt = \frac{\sqrt{\pi}}{2}e^{-\frac{i\pi}{4}}$, we finally have
$$J_v(x)+iY_v(x) \sim \frac{2e^{-\frac{iv\pi}{2}}}{\pi}\left(\frac{\sqrt{2}}{\sqrt{x}}e^{ix}\right)\left(\frac{\sqrt{\pi}}{2}e^{-\frac{i\pi}{4}}\right) = \sqrt{\frac{2}{\pi x}}\,e^{i\left(x-\frac{\pi}{4}-\frac{v\pi}{2}\right)}.$$
Applying Euler's formula and matching real and imaginary parts leads to the result.
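The quality of these asymptotic formulae is easy to inspect numerically. The sketch below is an addition to the text (SciPy's `jv` and `yv` serve as reference values, and the order $v = 1$ is an arbitrary choice); the error shrinks as $x$ grows:

```python
import numpy as np
from scipy.special import jv, yv

def j_asym(v, x):
    # Right-hand side of (68)
    return np.sqrt(2/(np.pi*x)) * np.cos(x - np.pi/4 - v*np.pi/2)

def y_asym(v, x):
    # Right-hand side of (69)
    return np.sqrt(2/(np.pi*x)) * np.sin(x - np.pi/4 - v*np.pi/2)

for x in (10.0, 100.0, 1000.0):
    print(x, abs(jv(1, x) - j_asym(1, x)), abs(yv(1, x) - y_asym(1, x)))
```

The absolute error decays roughly like $x^{-3/2}$, one power of $x^{-1}$ faster than the functions themselves, as expected from the next term of the asymptotic expansion.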
3 Graphs
The following code was implemented in Python to create graphs of the Bessel functions of orders n = 0, 1, 2, 3, 4.
3.1 Graph of First Bessel Function
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as integrate

# J_n(x) via the integral representation (67); quad returns (value, error).
def f(x, n):
    return integrate.quad(lambda t: 1/np.pi * np.cos(x*np.sin(t) - n*t),
                          0, np.pi)

X = np.arange(0.0, 30.0, 0.01)
plt.figure(1, figsize=(10, 8))
plt.plot(X, [f(x, 0)[0] for x in X], '--', linewidth=1.7, label='n=0')
plt.plot(X, [f(x, 1)[0] for x in X], linewidth=1.5, label='n=1')
plt.plot(X, [f(x, 2)[0] for x in X], '--', linewidth=1.25, label='n=2')
plt.plot(X, [f(x, 3)[0] for x in X], linewidth=1, label='n=3')
plt.plot(X, [f(x, 4)[0] for x in X], '--', linewidth=0.75, label='n=4')
legend = plt.legend(loc='upper right', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
for label in legend.get_texts():
    label.set_fontsize('large')
for line in legend.get_lines():
    line.set_linewidth(1.5)
plt.title('Bessel Function of the First Kind')
plt.show()
[Plot: Bessel Function of the First Kind, curves labeled n=0 through n=4, for 0 ≤ x ≤ 30.]
3.2 Graph of Second Bessel Function
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as integrate

# First term of the integral representation of Y_n(x).
def f(x, n):
    return integrate.quad(lambda t: 1/np.pi * np.sin(x*np.sin(t) - n*t),
                          0, np.pi)

# Second term; the infinite upper limit is truncated at t = 150, where the
# integrand is already negligible.
def g(x, n):
    return integrate.quad(lambda t: -1/np.pi * np.exp(-x*np.sinh(t))
                          * (np.exp(n*t) + np.cos(n*np.pi)*np.exp(-n*t)),
                          0, 150)

X = np.arange(0.1, 30.0, 0.01)  # start at 0.1: Y_n diverges at x = 0
plt.figure(1, figsize=(10, 8))
plt.plot(X, [f(x, 0)[0] + g(x, 0)[0] for x in X], '--', linewidth=1.75, label='n=0')
plt.plot(X, [f(x, 1)[0] + g(x, 1)[0] for x in X], linewidth=1.50, label='n=1')
plt.plot(X, [f(x, 2)[0] + g(x, 2)[0] for x in X], '--', linewidth=1.25, label='n=2')
plt.plot(X, [f(x, 3)[0] + g(x, 3)[0] for x in X], linewidth=1.00, label='n=3')
plt.plot(X, [f(x, 4)[0] + g(x, 4)[0] for x in X], '--', linewidth=0.75, label='n=4')
legend = plt.legend(loc='upper right', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
for label in legend.get_texts():
    label.set_fontsize('large')
for line in legend.get_lines():
    line.set_linewidth(1.5)
plt.title('Bessel Function of the Second Kind')
plt.ylim([-1.5, 1.0])
plt.show()
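As a cross-check on the representation Y_n(x) = f(x, n) + g(x, n) used for the plot, a few values can be compared against SciPy's `yv`. This check is an addition to the text; the upper limit of 30 in `g` below is my own truncation choice (the factor e^{-x sinh t} is already far below machine precision there).

```python
import numpy as np
import scipy.integrate as integrate
from scipy.special import yv

def f(x, n):
    return integrate.quad(lambda t: 1/np.pi * np.sin(x*np.sin(t) - n*t),
                          0, np.pi)

def g(x, n):
    # Truncated at t = 30 (assumption: tail negligible, since e^{-x sinh t}
    # underflows to 0 long before then).
    return integrate.quad(lambda t: -1/np.pi * np.exp(-x*np.sinh(t))
                          * (np.exp(n*t) + np.cos(n*np.pi)*np.exp(-n*t)),
                          0, 30)

for n in range(3):
    print(n, f(5.0, n)[0] + g(5.0, n)[0], yv(n, 5.0))
```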
[Plot: Bessel Function of the Second Kind, curves labeled n=0 through n=4, for 0.1 ≤ x ≤ 30 with the y-axis limited to [−1.5, 1.0].]
References

[1] Kreh, M. Bessel Functions. Retrieved from Pennsylvania State University: http://www.math.psu.edu/papikian/Kreh.pdf

[2] Watson, G.N. 1995. A Treatise on the Theory of Bessel Functions. New York, NY: Cambridge Mathematical Library.

[3] Chiang, E.Y.M. 2011. A Brief Introduction to Bessel and Related Special Functions. Retrieved from Hong Kong University of Science and Technology: https://www.math.ust.hk/~machiang/150/Intro_bessel_bk_Nov08.pdf

[4] Fisher, S.D. 1990. Complex Variables. Mineola, NY: Dover Publications, Inc.

[5] Arfken, G.B. 2013. Mathematical Methods for Physicists. Waltham, MA: Elsevier Inc.

[6] Bieri, J. Bessel Equations and Bessel Functions. Retrieved from Redlands University: http://facweb1.redlands.edu/fac/joanna_bieri/pde/GoodSummaryofBesselsFunctions.pdf