CLIFFORD ALGEBRAS, COMBINATORICS, AND
STOCHASTIC PROCESSES
by
George Stacey Staples
M.S., Mathematics, Southern Illinois University, 1999
A Dissertation Submitted in Partial Fulfillment of the Requirements for the
Doctor of Philosophy Degree

Department of Mathematics
in the Graduate School
Southern Illinois University at Carbondale

March, 2004
AN ABSTRACT OF THE DISSERTATION OF
George Stacey Staples, for the Doctor of Philosophy degree in Mathematics,
presented on March 23, 2004, at Southern Illinois University at Carbondale.
TITLE: Clifford Algebras, Combinatorics, and Stochastic Processes
MAJOR PROFESSOR: Dr. P. Feinsilver
Clifford algebras have been applied extensively in physics and engineering but
have not been widely used in combinatorics. Moreover, while fermionic stochastic
processes have been studied, stochastic processes on Clifford algebras of arbitrary
signature – which contain the real-, complex-, quaternion-valued and fermionic cases
– have not. In the first half of this work, Clifford-algebraic methods are applied to
combinatorics by creating Clifford adjacency matrices associated with finite graphs
and Clifford stochastic matrices associated with Markov chains. These matrices reveal information about self-avoiding paths and self-avoiding stochastic processes on
finite graphs and allow us to compute the expected number of Hamilton circuits in
a random graph. In the second half of this work, stochastic processes on Clifford
algebras are defined and specific examples, including Markov chains and Poisson
processes, are constructed. We prove the existence of Clifford-algebraic stochastic
integrals on the product space [0, t]m and utilize the graph-theoretic methods devel-
i
oped in Part I to recover the iterated stochastic integral by considering the limit in
mean of a sequence of Berezin integrals of traces of matrices associated with finite
graphs. As corollaries of known results, Hermite and Poisson-Charlier polynomials
are recovered in this manner.
ACKNOWLEDGEMENTS
I would like to thank my advisor, Philip Feinsilver, for guidance, constructive
criticism, numerous books, and for introducing me to Clifford algebras and Engel’s
work on stochastic integrals. I would like to thank Professors Om Agrawal, Scott
Spector, Walter Wallis, and Marvin Zeman for their time and willingness to serve
on my committee.
Thanks also to Jerzy Kocik for constructive comments and career advice and
to John McSorley for reading and offering criticism on a preliminary version of the
graph-theoretic material in chapter three.
I owe a tremendous debt of gratitude to my family–to my wife, Nancy, for
her encouragement, support, and patience and to my son, Josh, whose visits always
bring us great joy.
TABLE OF CONTENTS
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Clifford Algebras in Graph Theory . . . . . . . . . . . . . . . . . . 3
1.2 Functions on Partitions . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Clifford-Algebraic Markov Chains . . . . . . . . . . . . . . . . . . . 6
1.4 Clifford-Algebraic Poisson Processes . . . . . . . . . . . . . . . . . . 8
1.5 Multiple Stochastic Integrals . . . . . . . . . . . . . . . . . . . . . . 9
1.6 A Graph-Theoretic Construction of Stochastic Integrals . . . . . . . 12
1.7 The Enveloping Algebra . . . . . . . . . . . . . . . . . . . . . . . . 14
1.8 Notation and Terminology . . . . . . . . . . . . . . . . . . . . . . . 15
I Clifford-Algebraic Methods in Combinatorics and
Probability 18
2 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.1 Clifford Algebras, Spinors and Pinors . . . . . . . . . . . . . . . . . 19
2.1.1 Standard Definitions and Notation . . . . . . . . . . . . . . 19
2.2 The Fermion Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3 Combinatorial Spin Operators: Clifford Algebras in Graph Theory . . . . 27
3.1 Combinatorial Spin operators . . . . . . . . . . . . . . . . . . . . . 27
3.2 The Spin Operator Matrices . . . . . . . . . . . . . . . . . . . . . . 33
3.2.1 Properties of Spin Operator Matrices . . . . . . . . . . . . . 33
3.3 Clifford Adjacency Matrices . . . . . . . . . . . . . . . . . . . . . . 34
3.3.1 Simple Graphs . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.3.2 Finite Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.3.3 Euler Circuits . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.3.4 Conditional Branching . . . . . . . . . . . . . . . . . . . . . 47
3.4 Random Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.5 Edge-Weighted Clifford Adjacency Matrices . . . . . . . . . . . . . 51
3.6 A Representation of R ⊗ Sn . . . . . . . . . . . . . . . . . . . . . . 53
4 Clifford Stochastic Matrices and Self-Avoiding Random Walks on Finite
Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.1 Clifford Stochastic Matrices . . . . . . . . . . . . . . . . . . . . . . 56
4.2 Random Walk on the n-dimensional Hypercube . . . . . . . . . . . 63
5 Functions on Partitions and the Grassmann Algebra . . . . . . . . . . . 70
5.1 The Grassmann Algebra . . . . . . . . . . . . . . . . . . . . . . . . 70
5.2 The Grassmann Adjacency Matrix . . . . . . . . . . . . . . . . . . 71
5.3 Functions on Partitions . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.4 Additive Weights . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.5 Counting Partitions of [n] . . . . . . . . . . . . . . . . . . . . . . . 76
5.6 An Alternate Construction . . . . . . . . . . . . . . . . . . . . . . . 78
6 MAPLE Computations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
II Stochastic Processes on Clifford Algebras 89
7 More Properties of Clifford Algebras . . . . . . . . . . . . . . . . . . . . 90
7.1 Norms and Inner Products . . . . . . . . . . . . . . . . . . . . . . . 92
7.2 Computing Powers via Generating Functions . . . . . . . . . . . . . 100
8 Clifford-Algebraic Random Variables and Markov Chains . . . . . . . . . 107
8.1 Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
8.1.1 Discrete Clifford-Algebraic Random Variables . . . . . . . . 109
8.1.2 Products of Clifford-Algebraic Random Variables . . . . . . 111
8.2 Clifford-Algebraic Markov Chains . . . . . . . . . . . . . . . . . . . 112
9 The Clifford-Algebraic Poisson Process . . . . . . . . . . . . . . . . . . . 120
9.1 Continuous Clifford Poisson Processes and the Iterated Stochastic
Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
9.2 Examples: Complex Numbers and Quaternions . . . . . . . . . . . 134
10 Clifford-Algebraic Multiple Stochastic Integrals . . . . . . . . . . . . . . 137
10.1 The Clifford-Algebraic Stochastic Integral over [0, t] . . . . . . . . . 138
10.2 L2(Ω) ⊗ Cℓp,q-Valued Measures on the m-Dimensional Simplex . . . 139
10.3 The Multiple Stochastic Integral on the Square [0, t]2 . . . . . . . . 149
10.4 A Graph-Theoretic Approach using Clifford Algebras . . . . . . . . 151
10.5 The Enveloping Algebra . . . . . . . . . . . . . . . . . . . . . . . . 157
10.6 Orthogonal Polynomials . . . . . . . . . . . . . . . . . . . . . . . . 160
10.7 Stochastic Processes on Spin+(n) . . . . . . . . . . . . . . . . . . . 165
11 Directions for Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . 168
LIST OF FIGURES
3.1 An undirected graph. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.2 5-vertex digraph. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.3 Do edges matter? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.4 Generalized labelling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.5 Conditional branching at a vertex. . . . . . . . . . . . . . . . . . . . . 48
5.1 Γ4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
10.1 Graph construction for a 4-partition of [0, t). . . . . . . . . . . . . . . . 153
CHAPTER 1
INTRODUCTION
In the first part of this work, a Clifford-algebraic approach to graph theory
and algebraic combinatorics is developed. Utilizing elements of Clifford algebras we
introduce “Clifford adjacency matrices” associated with finite graphs.
A well-known result [21] in elementary graph theory says the following: let G be a graph on n vertices with associated adjacency matrix A_G. Then for any positive integer k, the (i, j)-th entry of (A_G)^k is the number of length-k walks i → j. In particular, the entries along the main diagonal of (A_G)^k are the numbers of k-circuits in G.
What the “ordinary” adjacency matrix fails to do, however, is distinguish between self-intersecting and self-avoiding walks. By considering entries of A^k, where A is an appropriate Clifford adjacency matrix, we are able to recover the self-avoiding k-paths and k-cycles in any finite graph. Further, we are able to compute the
expected number of Hamiltonian circuits in a random graph.
We denote by Cℓp,q the Clifford algebra of signature (p, q), p + q = n, the algebra consisting of elements of the form u = ∑_{i⊂[n]} ui ei. Here [n] = {1, 2, . . . , n}, i ⊂ [n] is a multi-index of integers from [n], ui ∈ R for each i ⊂ [n], and the multivectors ei satisfy specific multiplication rules. We define the notation ⟨u⟩k to denote the degree-k part of u; i.e., the sum of terms in the expansion of u over multi-indices of size k.
Throughout this work use is made of the Berezin integral. The use of Berezin
integrals in the Clifford algebra context is nonstandard but follows naturally from
Berezin’s original context [5], where it is defined within a Grassmann algebra of
dimension 2^n. In particular, given a Grassmann algebra G_g of g generators ξ1, . . . , ξg satisfying the anticommutation relations {ξi, ξj} = 0 for all 1 ≤ i, j ≤ g, with

F(ξ) = f0 + ∑_{k=1}^{g} ∑_{1≤i1<···<ik≤g} f_{i1 i2 ··· ik} ξ_{i1} ξ_{i2} · · · ξ_{ik},    (1.0.1)

the elementary Berezin integral is defined by

{dξi, dξj} = 0    (1.0.2)

∫_B dξi = 0    (1.0.3)

∫_B ξi ξj dξj = ξi, ∀ j ≠ i.    (1.0.4)

By iteration, we obtain

∫_B ξ1 ξ2 · · · ξg dξg · · · dξ1 = 1.    (1.0.5)

From this it follows that

∫_B F(ξ) dξg · · · dξ1 = f_{1,2,···,g},    (1.0.6)

so that the Berezin integral of F(ξ) is merely the top-form coefficient in the expansion of F(ξ). We have borrowed this concept to define the Berezin integral of u ∈ Cℓp,q as the top-form coefficient in the expansion of u.
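The top-form extraction above is easy to make concrete. The following Python sketch (an illustration with invented names, not code from this work) stores a Grassmann element as a dictionary mapping sorted index tuples to real coefficients, multiplies with the sign dictated by the anticommutation relations, and reads off the Berezin integral as the coefficient of ξ1ξ2···ξg:

```python
def gmul(a, b):
    """Product of two Grassmann elements over generators 1, 2, ....

    Elements are dicts mapping sorted tuples of generator indices to
    real coefficients.  A repeated generator kills a term (xi_k^2 = 0);
    otherwise the merged index list is sorted, with a sign flip for
    each transposition (xi_i xi_j = -xi_j xi_i).
    """
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            if set(ia) & set(ib):          # nilpotency: term vanishes
                continue
            arr, sign = list(ia) + list(ib), 1
            for i in range(len(arr)):      # bubble sort, counting swaps
                for j in range(len(arr) - 1 - i):
                    if arr[j] > arr[j + 1]:
                        arr[j], arr[j + 1] = arr[j + 1], arr[j]
                        sign = -sign
            key = tuple(arr)
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def berezin(u, g):
    """Berezin integral: the top-form coefficient of xi_1 ... xi_g."""
    return u.get(tuple(range(1, g + 1)), 0)
```

For instance, with g = 2 the product ξ1 · ξ2 has Berezin integral 1, while the reversed product ξ2 · ξ1 gives −1.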
Central to the Clifford-algebraic methods developed in Part I is the concept of the “combinatorial spin algebra.” Considering disjoint bivectors of the form e_{i,n+i} ∈ Cℓn,n, the abelian group Sn is constructed. Letting Sn denote the corresponding group ring, we generate the combinatorial spin algebra R ⊗ Sn.
1.1 CLIFFORD ALGEBRAS IN GRAPH THEORY
We define a Clifford adjacency matrix associated with a finite graph on n
vertices as an n × n matrix having entries in R ⊗ Sn. Letting A be the Clifford
adjacency matrix of any finite graph G, we are able to recover self-avoiding m-paths and m-cycles occurring in G by considering maximal-degree elements in A^m.
For example, letting G be any finite graph on n vertices and letting H denote
the number of Hamiltonian circuits in G, we find
∫_B tr(A^n) = nH,    (1.1.1)

where ∫_B denotes the Berezin integral.
Let G be a random graph on n vertices with associated random Clifford adjacency matrix A. Fix 0 ≤ k ≤ n and let X be a random variable taking values in the nonnegative integers such that X is the number of k-cycles contained in G. Then

E X = (1/k) ⟨⟨tr(A^k)⟩⟩_{2k}.    (1.1.2)
Letting H denote the number of Hamiltonian circuits contained in G, we find

E H = (1/n) ∫_B tr(A^n).    (1.1.3)
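The role of the Clifford labels can be emulated in a few lines of Python. The sketch below is an illustration only; it replaces the combinatorial spin operators by set-valued "nilpotent" markers with the same vanishing behavior. Raising the marked adjacency matrix to the n-th power and extracting the top-form diagonal terms recovers the circuit count of (1.1.1); `hamiltonian_count` (an invented name) divides out the factor n, counting each directed Hamiltonian circuit once regardless of starting vertex.

```python
def cliff_adj(adj):
    """Adjacency matrix whose (i, j) entry records, for each walk,
    the set of vertices entered so far; entering vertex j marks j."""
    n = len(adj)
    A = [[{} for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if adj[i][j]:
                A[i][j][frozenset([j])] = 1
    return A

def cliff_mul(A, B):
    n = len(A)
    C = [[{} for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for sa, ca in A[i][k].items():
                    for sb, cb in B[k][j].items():
                        if sa & sb:        # revisited vertex: term vanishes
                            continue
                        key = sa | sb
                        C[i][j][key] = C[i][j].get(key, 0) + ca * cb
    return C

def hamiltonian_count(adj):
    """Top-form part of tr(A^n): closed self-avoiding n-walks covering
    all vertices, i.e. Hamiltonian circuits, counted n times each."""
    n = len(adj)
    P = A = cliff_adj(adj)
    for _ in range(n - 1):
        P = cliff_mul(P, A)
    full = frozenset(range(n))
    return sum(P[i][i].get(full, 0) for i in range(n)) // n
```

On the complete graph K4 this yields 6 directed Hamiltonian circuits (3 undirected circuits, each traversable in two directions).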
We next apply the Clifford-algebraic approach to finite-state, time-homogeneous Markov chains, developing “Clifford stochastic” matrices. By considering entries of T^n, where T is an appropriate Clifford stochastic matrix, we are able to compute probabilities of self-avoiding n-step Markov processes. Expected
times of first self-intersection and expected hitting times of specific states can also
be computed.
The n-dimensional hypercube Qn is the Cayley graph of the combinatorial spin group Sn, yielding a group isomorphism ϕ : Sn → ⊕_{i=1}^{n} Z2. Hence, each vertex
of the hypercube can be uniquely labelled with an element of Sn. Utilizing this
correspondence, random walks on the hypercube are considered, and probabilities
of self-avoiding walks are recovered. Moreover, expected first self-intersection times
and expected hitting times of specific states are computed.
1.2 FUNCTIONS ON PARTITIONS
We define functions on the power set of [n] = {1, 2, . . . , n} and evaluate these functions over k-subset partitions of [n] by considering the top-form coefficients of the trace of (A_n)^k, where A_n is the adjacency matrix of the complete graph on 2^n − 1 vertices with Grassmann-algebraic vertex weights. As a special case we recover the Stirling numbers of the second kind. Moreover, we recover the Bell numbers from the matrix exponential exp(A_n).
We denote by \{{n \atop k}\} the Stirling numbers of the second kind. These are defined to be the number of ways a set of n elements can be partitioned into k nonempty subsets.
Given a multiplicative group G, let f : 2[n] → G be a function on the power
set of [n] with f(∅) = eG, where eG denotes the identity. Let R denote the group
ring of G. Define the function g : P([n]) → R by
g(π) = ∏_{b∈π} f(b).    (1.2.1)

Here we assume the blocks of each partition π ∈ P([n]) are canonically ordered.
The following result is used in Part II to obtain a graph-theoretic construction
of the multiple stochastic integral.
Fix n > 0 and let K_{2^n−1} denote the complete graph on 2^n − 1 vertices with vertex labels i ⊂ [n] and vertex weights f(i), with corresponding Grassmann adjacency matrix Γn. Let 0 < k ≤ n. Then

∫_B tr((Γn)^k) = ∑_{π∈P([n]), |π|=k} ∑_{σ∈Sk} g(σ(π)),    (1.2.2)

where Sk is the symmetric group on k elements; i.e., we sum over all permutations of each π ∈ P([n]) such that |π| = k.
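With constant unit weights (f ≡ 1 in a trivial group), the right-hand side of (1.2.2) collapses to k! · S(n, k), where S(n, k) is the Stirling number of the second kind. The Python sketch below (illustrative names, not the dissertation's construction; it assumes k ≥ 2 so that closed walks exist in a simple graph) counts the surviving closed k-walks on the complete graph over the 2^n − 1 nonempty subsets of [n], a walk vanishing as soon as some element of [n] is marked twice:

```python
from itertools import combinations

def blocks(n):
    """The 2^n - 1 nonempty subsets of [n] = {1, ..., n}."""
    return [frozenset(c) for r in range(1, n + 1)
            for c in combinations(range(1, n + 1), r)]

def top_form_trace(n, k):
    """Top-form coefficient of tr((Gamma_n)^k) with unit vertex weights:
    closed k-walks whose entered blocks are pairwise disjoint and
    together cover [n].  Equals k! * S(n, k) for 2 <= k <= n."""
    vs = blocks(n)
    full = frozenset(range(1, n + 1))

    def extend(start, cur, used, steps):
        if steps == k:
            return 1 if cur == start and used == full else 0
        count = 0
        for nxt in range(len(vs)):
            if nxt == cur:                 # simple graph: no loops
                continue
            if used & vs[nxt]:             # Grassmann nilpotency
                continue
            count += extend(start, nxt, used | vs[nxt], steps + 1)
        return count

    return sum(extend(v, v, frozenset(), 0) for v in range(len(vs)))
```

For example, `top_form_trace(3, 2)` returns 6 = 2! · S(3, 2).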
This concludes Part I of the current work. In Part II, we consider Clifford-
algebraic stochastic processes. The study of stochastic processes on Clifford algebras
is not original. Applebaum [2]; Applebaum and Hudson [3]; and Barnett, Streater,
and Wilde [4] have all contributed original work dealing with fermionic stochastic
processes in infinite dimensions.
The current work differs from those cited in that we consider stochastic processes on finite-dimensional Clifford algebras of arbitrary signature. Stochastic processes on the finite-dimensional fermion algebra can therefore be considered a special case in the current work.
By generalizing the work of Engel [10] on multiple stochastic integrals of L2(Ω)-
valued stochastic processes, we prove the existence of multiple stochastic integrals
of L2(Ω) ⊗ Cℓp,q-valued processes.
We conclude with a graph-theoretic construction of the multiple stochastic
integral of a Clifford-algebraic stochastic process. This approach differs from the
combinatorial approach utilized by Rota and Wallstrom [19] and Anshelevich [1].
1.3 CLIFFORD-ALGEBRAIC MARKOV CHAINS
Given a Clifford algebra Cℓp,q, p + q = n, and a probability space (Ω, F, Pr), we define the Clifford-algebraic random variable Ξ(ω) = ∑_{i⊂[n]} ξi(ω) ei, where ξi(ω) is a
real-valued random variable on Ω for each i ⊂ [n]. As special cases real-, complex-,
and quaternion-valued random variables are defined. Considering Cℓn,n, the theory
contains as a special case the n-particle fermion algebra.
Clifford-algebraic Markov processes are obtained by letting the ξi(t, ω) be mutually independent random variables satisfying the Markov property. We further assume the component Markov chains are adapted to the same given family of σ-fields, say
F0 ⊂ F1 ⊂ · · · ⊂ Fk ⊂ · · · .
We are then able to define transition matrices for time-homogeneous, finite-state
Markov chains as tensor products of transition matrices for real-valued Markov
chains.
Let n ≥ 0 be fixed and consider the Clifford algebra Cℓp,q where p + q = n.
For each multi-index i ⊂ [n], let ξi : N × Ω → Si ⊂ N be a Markov chain. Then the sequence of Clifford-algebraic random variables Ξk defined by

Ξk = ∑_{i⊂[n]} ξi(k) ei    (1.3.1)

satisfies the Markov property.
Given a Clifford-algebraic Markov process Ξk = ∑_{i⊂[n]} ξi(k) ei, the expectation of Ξk is given by

E(Ξk) = ∑_{i⊂[n]} E(ξi(k)) ei.    (1.3.2)
Let Ξk be a finite-state, time-homogeneous Clifford-algebraic Markov chain,
where for each i ⊂ [n], ξi(k) is a real-valued Markov random variable taking values in
the state space Si. Let Mi denote the transition probability matrix for the Markov
chain ξi(k) for each i ⊂ [n]. Further, let {vj}_{0≤j≤2^n−1} be the standard orthonormal basis for R^{2^n}. Then the transition probability matrix for the Clifford-algebraic Markov chain Ξk is given by

M = ∑_{i⊂[n]} (|v_{f(i)}⟩⟨v_{f(i)}|) ⊗ Mi    (1.3.3)

under the mapping

f(i) = ∑_{ℓ=0}^{n} 2^ℓ χ({ℓ} ∩ i),    (1.3.4)

where for any set A,

χ(A) ≡ 0 if A = ∅, and χ(A) ≡ 1 otherwise.    (1.3.5)
Letting x0 represent the initial distribution of the finite-state, time-homogeneous Clifford-algebraic Markov chain Ξk, the distribution at time k > 0 is given by

xk = x0 M^k.    (1.3.6)
A recurrent state s of Ξ(k) is periodic with period d if and only if for each i ⊂ [n], si is a recurrent state of the Markov chain ξi(k) with period di < ∞. In this case

d = l.c.m.{d_∅, . . . , d_{[n]}}.    (1.3.7)
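Because each summand of (1.3.3) occupies the diagonal block selected by |v_{f(i)}⟩⟨v_{f(i)}|, the transition matrix M is block diagonal, with one stochastic block per multi-index. A small numpy sketch (illustrative; it assumes all component chains have state spaces of equal size, listed in the order induced by f):

```python
import numpy as np

def clifford_transition(component_mats):
    """Assemble M = sum_i |v_f(i)><v_f(i)| (x) M_i as in (1.3.3).

    component_mats lists one stochastic matrix per multi-index, in the
    order induced by the map f; each outer product selects a diagonal
    block, so M is block diagonal with the M_i on the diagonal.
    """
    mats = [np.asarray(m, dtype=float) for m in component_mats]
    k = len(mats)
    s = mats[0].shape[0]
    M = np.zeros((k * s, k * s))
    for j, m in enumerate(mats):
        proj = np.zeros((k, k))
        proj[j, j] = 1.0                    # |v_j><v_j|
        M += np.kron(proj, m)               # lands in block (j, j)
    return M

# two toy component chains (n = 1: multi-indices emptyset and {1})
M_empty = [[0.9, 0.1], [0.2, 0.8]]
M_one = [[0.5, 0.5], [0.3, 0.7]]
M = clifford_transition([M_empty, M_one])

x0 = np.array([1.0, 0.0, 0.0, 0.0])         # start in state 0 of the first chain
x3 = x0 @ np.linalg.matrix_power(M, 3)      # distribution after three steps, as in (1.3.6)
```

Each row of M sums to 1, and the evolved distribution x3 remains a probability vector supported on the starting block.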
1.4 CLIFFORD-ALGEBRAIC POISSON PROCESSES
Let {υi(t, ω)}_{i⊂[n]} be a collection of independent regular Poisson processes; i.e., for each i ⊂ [n], we have

Pr{υi(t, ω) = ℓ} = ((λi t)^ℓ / ℓ!) e^{−λi t}    (1.4.1)

for some parameter λi > 0. We then define the Clifford-algebraic Poisson process with parameter λ = ∑_{i⊂[n]} λi by

Υ(t, ω) = ∑_{i⊂[n]} υi(t, ω) ei.    (1.4.2)
Let Cℓp,q ⊂ Cℓp,q denote the restriction to elements whose coefficients are strictly
non-negative integers. Given u = ∑_{i⊂[n]} ui ei ∈ Cℓp,q, we have

Pr{Υ(t, ω) = u} = e^{−λt} ∏_{i⊂[n]} (λi t)^{ui} / ui!.    (1.4.3)

If λ_∅ = · · · = λ_{[n]} = κ, then given u ∈ Cℓp,q we have

Pr{Υ(t, ω) = u} = e^{−2^n κt} (κt)^{|u|} ∏_{i⊂[n]} 1/ui!,    (1.4.4)
where |u| denotes the Clifford-algebraic 1-norm of u.
Let m ≥ 0 be fixed. Then

Pr{|Υ(t, ω)| = m} = ((λt)^m / m!) e^{−λt}.    (1.4.5)
In other words, the 1-norm of the Clifford-algebraic Poisson process is a regular
Poisson process.
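This last fact is the closure of independent Poisson random variables under addition, and it can be verified exactly by convolving the component distributions. A pure-Python sketch (illustrative names):

```python
from math import exp, factorial

def poisson_pmf(lam, k):
    """Pr{N = k} for a Poisson random variable with mean lam."""
    return (lam ** k) / factorial(k) * exp(-lam)

def norm_pmf(lams, t, m):
    """Pr{|Upsilon(t)| = m}: total probability that independent Poisson
    components with rates lams have outcomes summing to m."""
    def rec(idx, remaining):
        if idx == len(lams) - 1:
            return poisson_pmf(lams[idx] * t, remaining)
        return sum(poisson_pmf(lams[idx] * t, u) * rec(idx + 1, remaining - u)
                   for u in range(remaining + 1))
    return rec(0, m)
```

Comparing with a single Poisson of rate λ = ∑ λi confirms (1.4.5) exactly, not just in simulation.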
We say Υ(t, ω) = ∑_{i⊂[n]} υi(t, ω) ei is a continuous Clifford-algebraic Poisson process if its parameter λ = λ(t) = ∑_{i⊂[n]} mi((0, t]) for some family of non-atomic measures mi. In other words,

Pr{υi(t, ω) = k} = (mi((0, t])^k / k!) e^{−mi((0,t])},    (1.4.6)
where mi((0, t]) is a continuous, nonnegative, monotonically nondecreasing function
of t.
Let m^{(n)} = m(m − 1) · · · (m − n + 1) denote the falling factorial. We find

∫···∫_{0≤t1<···<tm≤t} |dΥ(t1, ω)| · · · |dΥ(tm, ω)| = (1/m!) |Υ(t, ω)|^{(m)},    (1.4.7)

and

| ∫···∫_{0≤t1<···<tm≤t} dΥ(t1, ω) · · · dΥ(tm, ω) | ≤ (1/m!) |Υ(t, ω)|^{(m)}.    (1.4.8)
1.5 MULTIPLE STOCHASTIC INTEGRALS
We extend the work of Engel [10] on multiple stochastic integration of L2(Ω)-
valued stochastic processes to a theory of L2(Ω) ⊗ Cℓp,q-valued processes, proving
the existence of Clifford-algebraic random measures on the product space [0, t]^m.
The approach taken here differs from the work of Barnett, Streater, and Wilde
[4], who defined a stochastic integral with respect to the fermion field. Letting H be a complex Hilbert space and J a conjugation on H, the anti-symmetric Fock space over H is the Hilbert space Λ(H) = ⊕_{n=0}^{∞} Λ^n(H), where Λ^0(H) = C and Λ^n(H) is the Hilbert-space anti-symmetric n-fold tensor product of H with itself. For z ∈ H, the fermion creation and annihilation operators are defined by

C(z) : Λ^n(H) → Λ^{n+1}(H),  u ↦ (n + 1)^{1/2} A(z ⊗ u), and    (1.5.1)

A(z) = C(z)*,    (1.5.2)

where A is the anti-symmetrization projection and A(z) is the adjoint of C(z). The fermion field Ψ(z) is defined on Λ(H) by Ψ(z) = C(z) + A(Jz). We have that Ψ(·) : H → B(Λ(H)) is linear and that the anti-commutation relations hold:

{Ψ(z), Ψ(w)} ≡ Ψ(z)Ψ(w) + Ψ(w)Ψ(z) = 2⟨Jw, z⟩ I.    (1.5.3)
If dim(H) = n < ∞, the n-particle fermion algebra is isomorphic to Cℓn,n and
thus occurs in the current work as a special case. It should be noted that stochastic
integrals defined on infinite-dimensional Clifford algebras of arbitrary signature lie
beyond the scope of this paper.
In Engel’s work, families of stochastic processes satisfying specific regularity conditions have unique countably additive extensions to the Borel σ-algebra generated by elementary subsets of the product space. We extend these regularity
conditions to Clifford-algebraic regularity conditions, which reduce to the former in Cℓ0,0 ≅ R.
We shall refer to Clifford-algebraic stochastic processes satisfying the appropriate regularity conditions as “good.” Given systems of good stochastic processes, we define L2(Ω) ⊗ Cℓp,q-valued measures.
We begin with L2(Ω) ⊗ Cℓp,q-valued measures on the m-dimensional simplex.
Given a Clifford-algebraic stochastic process Ξ(t, ω), we wish to express

∫···∫_{0≤t1<t2<···<tm≤t} dΞ(t1, ω) · · · dΞ(tm, ω)    (1.5.4)

as the limit in mean of sums of the form

∑_{1≤i1<···<im≤q} Ξ(I_{i1}) Ξ(I_{i2}) · · · Ξ(I_{im}).    (1.5.5)

Given an interval I = [s, t) and a stochastic process X(t), we shall adhere to the convention X(I) ≡ X(t) − X(s).
Let E(S) denote the Borel σ-field of the set

S = {(t1, t2, . . . , tm) : 0 ≤ t1 < t2 < · · · < tm ≤ t}.    (1.5.6)

E(S) is the smallest σ-field containing all elementary sets of the form

E = ⋃_{1≤i1<···<im≤q} χ_{i1···im} I_{i1} × I_{i2} × · · · × I_{im},    (1.5.7)

where {I1, . . . , Iq} is a partition of [0, t] into disjoint intervals (depending on E) with Ik < Ik+1 for k = 1, 2, . . . , q − 1, and

χ_{i1···im} = 1 if I_{i1} × · · · × I_{im} is included in the union, and 0 otherwise.    (1.5.8)
Following Engel, we define a finitely additive L2(Ω) ⊗ Cℓp,q-valued measure Ψ on elementary sets of the m-simplex and show that this can be extended to a countably additive L2(Ω) ⊗ Cℓp,q-valued measure defined on E(S), the Borel σ-field of the m-dimensional simplex S.
By considering sets of the form

S_π = {(t1, . . . , tm) : 0 ≤ t_{π(1)} < · · · < t_{π(m)} ≤ t},    (1.5.9)

where π is any permutation, we can define a finitely additive L2(Ω) ⊗ Cℓp,q-valued measure on the field F^m_0 of all elementary subsets of [0, t]^m. Finally, the finitely additive L2(Ω) ⊗ Cℓp,q-valued measure Ψ defined on the field F^m_0 of elementary subsets of [0, t]^m can be extended to a countably additive L2(Ω) ⊗ Cℓp,q-valued measure Ψ defined on the Borel σ-field F^m generated by F^m_0.
1.6 A GRAPH-THEORETIC CONSTRUCTION OF STOCHASTIC
INTEGRALS
The problem of extending a random measure ϕ on a set S to the product
set Sn is non-trivial in that the product measure must vanish on lower-dimensional
subsets, also known as diagonal sets [19], of Sn. Rota and Wallstrom handle the
intersection properties of diagonal sets by observing that the family of diagonal sets
is isomorphic to the lattice of partitions of the set 1, 2, . . . , n. The “sieving out” of
the overlaps among diagonal sets can then be accomplished by applying the Mobius
inversion formula on the lattice of partitions. The analysis underlying their work is
Engel’s work on multiple stochastic integrals, and therefore all measures referred to
as “good” in their work are assumed to satisfy Engel’s regularity conditions.
We apply the Clifford-algebraic graph-theoretic approach developed in Part I
to this problem of “sieving out” diagonal sets. In our graph-theoretic construction, we recover the multiple stochastic integral over [0, t]^m by partitioning the set [0, t] into N > 0 disjoint intervals and constructing a weighted graph on \binom{N+1}{2} − 1 vertices, together with its associated adjacency matrix Γ_N, whose entries lie in the algebra of disjoint Grassmann bivectors. We refer to this adjacency matrix as the Grassmann evolution matrix associated with the process.
If Ξ(t, ω) ∈ L2(Ω) ⊗ Cℓp,q is a good Clifford-algebraic stochastic process, then

L.I.M._{N→∞} I ⊗ ∫_B tr((Γ_N)^m) = Ξ^{(m)}(t, ω),    (1.6.1)

where Γ_N is the Grassmann evolution matrix associated with Ξ(t, ω) and Ξ^{(m)}(t, ω) is the iterated stochastic integral of Ξ(t, ω) defined on the Borel σ-algebra of elementary subsets of [0, t]^m.
If Φ(t, ω) is a good stochastic process defined on a commutative sub-algebra of Cℓp,q, then

Φ^{(m)}(t, ω)_s = L.I.M._{N→∞} (1/m!) I ⊗ ∫_B tr((Γ_N)^m),    (1.6.2)

where Γ_N is the Grassmann evolution matrix associated with Φ(t, ω) and Φ^{(m)}(t, ω)_s is the iterated stochastic integral of Φ(t, ω) defined on the Borel σ-algebra of elementary subsets of the m-dimensional simplex S = {(t1, t2, . . . , tm) ∈ [0, t]^m : 0 ≤ t1 ≤ t2 ≤ · · · ≤ tm ≤ t}.
Let D(t, ω) be the compensated Poisson process. For each N ≥ 1, constructing the N-th Grassmann evolution matrix Γ_N associated with D(t, ω), we obtain

L.I.M._{N→∞} I ⊗ ∫_B tr((Γ_N)^m) = m! K_m(P(t, ω), t).    (1.6.3)
Let X(t, ω) be standard Brownian motion. For each N ≥ 1, constructing the N-th Grassmann evolution matrix Γ_N associated with X(t, ω), we obtain

L.I.M._{N→∞} I ⊗ ∫_B tr((Γ_N)^m) = m! H_m(X(t, ω), t).    (1.6.4)
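For a scalar (commutative) process, the discrete sums (1.5.5) approximating the simplex integral are exactly the elementary symmetric polynomials in the increments; symmetrizing over all of [0, t]^m is what produces the 1/m! seen in (1.6.2). A short Python sketch (illustrative, with invented names) checks the brute-force ordered sum against the standard one-pass recursion:

```python
from itertools import combinations
from math import prod

def simplex_sum(incs, m):
    """Discrete analogue of (1.5.5): sum of products X(I_{i1})...X(I_{im})
    over strictly increasing index tuples i1 < ... < im."""
    return sum(prod(c) for c in combinations(incs, m))

def simplex_sum_fast(incs, m):
    """Same value via the elementary-symmetric recursion
    e_m(x_1..x_k) = e_m(x_1..x_{k-1}) + x_k * e_{m-1}(x_1..x_{k-1})."""
    e = [1.0] + [0.0] * m
    for x in incs:
        for j in range(m, 0, -1):
            e[j] += x * e[j - 1]
    return e[m]
```

With the increments taken from a discretized Brownian path, these sums converge (in mean) to the iterated integral, hence to the Hermite polynomials of (1.6.4); the sketch only verifies the deterministic combinatorial identity.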
1.7 THE ENVELOPING ALGEBRA
The current work is completed by recasting the graph-theoretic approach to
Clifford-algebraic multiple stochastic integrals in the setting of a single algebra. Let
us define the 2^n-dimensional Clifford algebra Cℓp,q,r whose constituent vectors satisfy

ei² = 1 for 1 ≤ i ≤ p;  ei² = −1 for p + 1 ≤ i ≤ p + q;  ei² = 0 for p + q + 1 ≤ i ≤ n.    (1.7.1)
Let us also define the involution ⋆ : Cℓp,q,r → Cℓp,q,r and the evaluation map
ϵB : Cℓp,q,r → Cℓp,q by
⋆(u) = ∑_{i⊂[n]} ui e_{[n]\i}    (1.7.2)

ϵ_B(u) = ⋆(⋆(u) e_{([n]\[p+q])}).    (1.7.3)
Remark 1.7.1. The involution ⋆ defined above is not the Hodge dual [14], although
it is similar.
Let Ξ(t, ω) be a good Clifford-algebraic stochastic process, let n′ = max{p, q}, and let Γ_N be the Grassmann evolution matrix associated with Ξ(t, ω) for some N > 0, written using basis vectors from H = R^{\binom{N+1}{2}−1}. The following diagram is commutative.
(L2(Ω) ⊗ Cℓp,q) ⊗ (G_N ⊗ H ⊗ H*)  ──(I ⊗ ∫_B) tr──→  L2(Ω) ⊗ Cℓp,q
              │ι′                                          │ι
              ↓                                            ↓
 L2(Ω) ⊗ Cℓ_{n′,n′,2N} ⊗ H ⊗ H*   ────ϵ_B tr────→   L2(Ω) ⊗ Cℓ_{n′,n′}    (1.7.4)

where ι, ι′ are defined by linear extension of

ι(α e_{(i∩[p])} e_{(i\[p])}) = α e_{(i∩[p])} ∏_{ℓ∈i\[p]} e_{(n′+ℓ−p)}    (1.7.5)

ι′(α ei ⊗ γℓ ⊗ vj ⊗ vk⊤) = α e_{(i∩[p])} ∏_{k∈(i\[p])} e_{(k−p+n′)} ∏_{k∈ℓ} e_{2n′+2k−1} e_{2n′+2k} ⊗ vj ⊗ vk⊤.    (1.7.6)
1.8 NOTATION AND TERMINOLOGY
We conclude the introduction with a list of notation used throughout the work.
Cℓp,q,r denotes the 2^{p+q+r}-dimensional Clifford algebra of signature (p, q, r).
Cℓp,q denotes the Clifford algebra Cℓp,q,0.
Cℓp,q denotes elements of Cℓp,q having only nonnegative integer coefficients.
Cℓ+p,q denotes the even sub-algebra of Cℓp,q.
∧n denotes the 2^n-dimensional Grassmann algebra, also written Cℓ0,0,n.
Rp,q denotes the real quadratic space, consisting of the vectors in Cℓp,q.
N0 denotes the nonnegative integers N ∪ {0}.
[n] denotes the set {1, 2, . . . , n}.
i denotes a subset of [n], used as a multi-index.
ei denotes a basis element of Cℓp,q.
P([n]) denotes the collection of all partitions of [n].
û denotes the grade involution of u ∈ Cℓp,q.
ũ denotes the reversion of u ∈ Cℓp,q.
ū denotes the Clifford conjugate of u ∈ Cℓp,q.
⟨u⟩k denotes the degree-k part of u ∈ Cℓp,q.
⟨⟨u⟩⟩k denotes the sum of the coefficients in the degree-k part of u.
⟨⟨u⟩⟩ denotes the sum of all coefficients in the expansion of u.
∫_B denotes the Berezin integral.
denotes the Berezin integral.
∥u∥ denotes the inner-product norm of u ∈ Cℓp,q.
|u| denotes the 1-norm of u ∈ Cℓp,q.
R ⊗ Sn denotes the 2^n-dimensional combinatorial spin algebra (CSA).
si denotes a basis element of R ⊗ Sn.
R ⊗ Gn denotes the 2^n-dimensional Grassmann bivector algebra.
γi denotes a basis element of R ⊗ Gn.
E(G) denotes the set of edges in graph G.
V(G) denotes the set of vertices in graph G.
π denotes an element of symmetric group Sk, sometimes σ.
σ denotes an element of symmetric group Sk, sometimes π.
uv denotes an edge in the graph G for u, v ∈ V(G).
L(V ) denotes linear operators on vector space V .
(Ω,F , Pr) denotes a probability space.
L2(Ω) ⊗ Cℓp,q denotes the space of Clifford-algebraic stochastic processes.
L.I.M. denotes limit in mean.
Ξ(t, ω) denotes a Clifford-algebraic stochastic process.
Ξ(m)(t, ω) denotes the stochastic integral of Ξ(t, ω) on the product space [0, t]m.
Ξ(m)(t, ω)s denotes the stochastic integral of Ξ(t, ω) on the m-dimensional simplex.
Part I
Clifford-Algebraic Methods in
Combinatorics and Probability
CHAPTER 2
PRELIMINARIES
2.1 CLIFFORD ALGEBRAS, SPINORS AND PINORS
Spinors in their most general form were discovered by Élie Cartan in 1913
[7]. Since that time spinors have been studied in great detail by Brauer and Weyl
[6], Dirac [8], and many others. The material in the first subsection is standard,
and interested readers are referred to [16] and [14] for more on Clifford algebras in
general.
2.1.1 Standard Definitions and Notation
Definition 2.1.1. Let k be a field of characteristic not 2. Let V be a vector space
of dimension n over k and let q be a non-degenerate quadratic form on V . It is
known that there exists an orthogonal basis e1, . . . , en of V such that
q(ei) = qi, for some qi ≠ 0.    (2.1.1)

The Clifford algebra Cℓ(V, q) is the associative algebra generated by 1 and the ei with

ei² = qi · 1,    (2.1.2)

ei ej + ej ei = 0, ∀ i ≠ j.    (2.1.3)
We identify k and V inside Cℓ(V, q) in the obvious way. The dimension of Cℓ(V, q) is 2^n and it has a canonical basis {e_{i1} · · · e_{ip} | 1 ≤ i1 < i2 < · · · < ip ≤ n}.
Definition 2.1.2. Let {xi} be an orthonormal basis for R^{p+q}. Then equipping R^{p+q} with a quadratic form q such that

q(xi) = 1 for 1 ≤ i ≤ p, and q(xi) = −1 for p + 1 ≤ i ≤ p + q,    (2.1.4)
we obtain the real quadratic space Rp,q.
Definition 2.1.3. For any n > 0, we define the 2^n-dimensional Clifford algebra Cℓp,q,r of signature (p, q, r), where p + q + r = n, as the algebra generated by basis elements of the form

scalars: e0 = 1
vectors: e1, . . . , en
bivectors: ei ej, where 0 < i < j ≤ n
...
n-vector: e1 e2 · · · en    (2.1.5)

subject to the multiplication rules

ei ej = −ej ei, i ≠ j,
e1² = e2² = · · · = ep² = 1,
e_{p+1}² = · · · = e_{p+q}² = −1,
e_{p+q+1}² = · · · = e_{p+q+r}² = 0.    (2.1.6)
Notation. We denote by Cℓp,q the Clifford algebra Cℓp,q,0.
Remark 2.1.4. The scalars may be chosen from any field, but we choose to work
over R. Also, the vector space spanned by the vectors ei is Rp,q.
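The multiplication rules above determine the product of any two basis blades: concatenate the generator lists, sort with a sign change per transposition, and contract repeated generators according to the signature. A Python sketch (illustrative, not from this work):

```python
def blade_mul(a, b, p, q, r):
    """Product of basis blades of Cl_{p,q,r}.

    A blade is a sorted tuple of generator indices from 1..p+q+r.
    Returns (sign, blade); sign is 0 when a degenerate generator
    (index > p + q) appears twice, since it squares to zero.
    """
    combined = list(a) + list(b)
    sign = 1
    n = len(combined)
    for i in range(n):                     # bubble sort: e_i e_j = -e_j e_i
        for j in range(n - 1):
            if combined[j] > combined[j + 1]:
                combined[j], combined[j + 1] = combined[j + 1], combined[j]
                sign = -sign
    out, k = [], 0
    while k < len(combined):               # contract equal neighbours
        if k + 1 < len(combined) and combined[k] == combined[k + 1]:
            g = combined[k]
            if g <= p:
                pass                       # e_g^2 = +1
            elif g <= p + q:
                sign = -sign               # e_g^2 = -1
            else:
                return 0, ()               # e_g^2 = 0
            k += 2
        else:
            out.append(combined[k])
            k += 1
    return sign, tuple(out)
```

For example, in Cℓ1,1 one finds e1 e2 = −e2 e1 and e2² = −1, and in Cℓ1,1,1 the degenerate generator satisfies e3² = 0.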
Definition 2.1.5. Given two multivectors u, v ∈ Cℓp,q, we define the Clifford product of u and v as

uv = u ⌟ v + u ∧ v,    (2.1.7)

where ⌟ is the (left) contraction operator and ∧ is the exterior (Grassmann) product.
Let us now fix the notation we shall use throughout this work.
1. [n] ≡ {1, 2, . . . , n} for any nonnegative integer n.

2. Underlined Roman characters will denote subsets of [n] and are assumed to be lexicographically ordered so that we may use them as multi-indices of multivectors and their coefficients. For example,

αi ei = αi ∏_{ι∈i} eι,    (2.1.8)

where αi ∈ R.

3. △ will always denote the symmetric difference operator. In other words,

i △ j = (i ∪ j) \ (i ∩ j).    (2.1.9)

4. An arbitrary u ∈ Cℓp,q will be written

u = ∑_{i⊂[n]} ui ei.    (2.1.10)
Definition 2.1.6. Let u ∈ Cℓp,q be arbitrary. We define the degree-k part of u as the sum of all degree-k Clifford monomials in u. That is, u = ∑_{k=0}^{p+q} ⟨u⟩k, where the degree-k part of u is

⟨u⟩k = ∑_{i⊂[n], |i|=k} ui ei.    (2.1.11)

We denote the sum of the coefficients of the degree-k part of u by

⟨⟨u⟩⟩k = ∑_{i⊂[n], |i|=k} ui.    (2.1.12)
Definition 2.1.7. It can be shown that evaluation of the real coefficient of e_{[n]} in the multivector expansion of u ∈ Cℓp,q is a linear functional [5]. We shall take this as the definition of the Berezin integral:

u = ∑_{i⊂[n]} ui ei  ⇒  ∫_B u = u_{[n]}.    (2.1.13)
Clifford algebras come equipped with three involutory automorphisms: grade
involution, reversion and Clifford conjugation [14]. We define them now.
Definition 2.1.8. Let u ∈ Cℓp,q. The grade involution of u, denoted û, is defined by

û = ∑_{k=0}^{p+q} (−1)^k ⟨u⟩k.    (2.1.14)

The reversion of u, denoted ũ, is defined by

ũ = ∑_{k=0}^{p+q} (−1)^{k(k−1)/2} ⟨u⟩k;    (2.1.15)

in other words, reversed multivectors are obtained by reversing the order of their constituent vectors. The Clifford conjugate of u, denoted ū, is defined by

ū = ∑_{k=0}^{p+q} (−1)^{k(k+1)/2} ⟨u⟩k.    (2.1.16)
Clifford conjugation is easily seen to be the composition of grade involution and
reversion.
Definition 2.1.9. It can be shown that the collection of even-degree elements of
Cℓp,q forms a subalgebra. We call this the even subalgebra of Cℓp,q and we denote it
by Cℓ+p,q.
Definition 2.1.10. Let p + q = n be given and consider the n-dimensional vector space R^{p,q}. We define the Clifford group (sometimes called the Lipschitz group) Γ(p, q) in the following way:
Γ(p, q) = {s ∈ Cℓ_{p,q} : ∀x ∈ R^{p,q}, s x s^{−1} ∈ R^{p,q}}. (2.1.17)
Definition 2.1.11. The Pin group is defined as
Pin(p, q) = {u ∈ Γ(p, q) : u ū = ±1}. (2.1.18)
The Spin group is defined as
Spin(p, q) = Pin(p, q) ∩ Cℓ⁺_{p,q}. (2.1.19)
For convenience, we shall refer to elements of Spin(p, q) as spin operators.
The spin group Spin(p, q) has a subgroup
Spin⁺(p, q) = {s ∈ Spin(p, q) : s s̄ = 1}. (2.1.20)
Remark 2.1.12. It can be shown that Spin(p, q) gives a double-covering of SO(p, q)
and that Pin(p, q) gives a double-covering of O(p, q). We refer to the irreducible
representations of Spin(p, q) and Pin(p, q) as spinors and pinors, respectively.
2.2 THE FERMION ALGEBRA
In the notation of [12], we let f_i^+ and f_i denote the fermion creation and annihilation operators, respectively. We can think of these as the operators that create or annihilate a particle at position i.
Given n > 0, the n-particle fermion algebra is generated by elements f_i, f_i^+, 1 ≤ i ≤ n, satisfying the canonical anticommutation relations (CAR):
{f_i^+, f_j} = δ_ij (2.2.1)
{f_i, f_j} = {f_i^+, f_j^+} = 0. (2.2.2)
We call f_i^+ the fermion creation operator and f_i the fermion annihilation operator at position i.
Lemma 2.2.1. The fermion pairs f_i f_i^+ and f_i^+ f_i are idempotent.
Proof. From the CAR we see
f_i f_i^+ + f_i^+ f_i = I (2.2.3)
f_i f_i^+ (f_i f_i^+ + f_i^+ f_i) = f_i f_i^+ (2.2.4)
(f_i f_i^+)^2 + f_i f_i^+ f_i^+ f_i = f_i f_i^+ (2.2.5)
(f_i f_i^+)^2 + f_i (f_i^+)^2 f_i = f_i f_i^+ (2.2.6)
(f_i f_i^+)^2 = f_i f_i^+. (2.2.7)
An identical argument shows (f_i^+ f_i)^2 = f_i^+ f_i.
Lemma 2.2.2. The fermion algebra is isomorphic to Cℓ_{n,n} via the correspondence
f_i ≡ (1/2)(e_i − e_{n+i}) (2.2.8)
f_i^+ ≡ (1/2)(e_i + e_{n+i}). (2.2.9)
Proof. We show that the elements (1/2)(e_i ± e_{n+i}) satisfy the CAR.
{f_i^+, f_j^+} ≃ {(1/2)(e_i + e_{n+i}), (1/2)(e_j + e_{n+j})}
= (1/4)(e_{ij} + e_{i,n+j} + e_{n+i,j} + e_{n+i,n+j}) + (1/4)(e_{ji} + e_{j,n+i} + e_{n+j,i} + e_{n+j,n+i})
= 0 (2.2.10)
by antisymmetry of the Clifford product. An identical argument shows {(1/2)(e_i − e_{n+i}), (1/2)(e_j − e_{n+j})} = 0. Let us assume i ≠ j and consider
{f_i, f_j^+} ≃ {(1/2)(e_i − e_{n+i}), (1/2)(e_j + e_{n+j})}
= (1/4)(e_{ij} + e_{i,n+j} − e_{n+i,j} − e_{n+i,n+j}) + (1/4)(e_{ji} − e_{j,n+i} + e_{n+j,i} − e_{n+j,n+i})
= 0. (2.2.11)
On the other hand,
{f_i, f_i^+} ≃ {(1/2)(e_i − e_{n+i}), (1/2)(e_i + e_{n+i})}
= (1/4)(e_i^2 + e_{i,n+i} − e_{n+i,i} − e_{n+i}^2) + (1/4)(e_i^2 + e_{n+i,i} − e_{i,n+i} − e_{n+i}^2)
= (1/4)(1 − (−1) + 1 − (−1)) = 1 ≃ I. (2.2.12)
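The CAR admit a concrete matrix representation, which makes relations such as these easy to check numerically. The 2-mode Jordan-Wigner matrices below are a standard construction, not notation from the text; the final check also confirms the idempotency of f_i f_i^+ asserted in Lemma 2.2.1:

```python
import numpy as np

I2 = np.eye(2)
a = np.array([[0., 1.], [0., 0.]])          # single-mode annihilator
Z = np.diag([1., -1.])                      # parity operator

# Jordan-Wigner representation of f_1, f_2 for n = 2 modes
f = [np.kron(a, I2), np.kron(Z, a)]
fdag = [m.T for m in f]                     # creation operators f_i^+

def anti(x, y):                             # anticommutator {x, y}
    return x @ y + y @ x

assert np.allclose(anti(fdag[0], f[0]), np.eye(4))   # {f_1^+, f_1} = I
assert np.allclose(anti(fdag[1], f[1]), np.eye(4))   # {f_2^+, f_2} = I
assert np.allclose(anti(fdag[0], f[1]), 0)           # {f_1^+, f_2} = 0
assert np.allclose(anti(f[0], f[1]), 0)              # {f_1, f_2} = 0
```

The pair f_1 f_1^+ is idempotent here as well, in agreement with Lemma 2.2.1.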
We observe that we can write the vectors of Cℓ_{n,n} in terms of fermion creation/annihilation pairs:
e_i = f_i^+ + f_i (2.2.13)
e_{n+i} = f_i^+ − f_i. (2.2.14)
In general, u ∈ Cℓ_{p,q} can be written as
u ≃ Σ_{i⊂[n]} u_i ∏_{j∈i∩[p]} (f_j^+ + f_j) ∏_{k∈i∩([n]\[p])} (f_k^+ − f_k). (2.2.15)
For more on the fermion algebra, the interested reader is referred to [12].
CHAPTER 3
COMBINATORIAL SPIN OPERATORS: CLIFFORD
ALGEBRAS IN GRAPH THEORY
3.1 COMBINATORIAL SPIN OPERATORS
Our motivation for a Clifford-algebraic approach to graph theory and prob-
ability is that it provides a natural way in which we can keep track of non-self-
intersecting paths and processes by considering tensor products of maximal rank.
The anticommutativity of the Clifford product may or may not pose a problem in
our method. If we are considering the adjacency matrix of a graph, the sign of
a product may prove inconsequential, whereas in the edge-probability matrix of a
random graph it may prove fatal.
Lemma 3.1.1. Let Cℓ_{p,q} be the 2^{p+q}-dimensional Clifford algebra of signature (p, q). By disjoint basis 2m-vectors, we shall mean any pair of basis 2m-vectors of the form e_{i_1⋯i_{2m}}, e_{k_1⋯k_{2m}}, where i_1, . . . , i_{2m}, k_1, . . . , k_{2m} are all distinct. Then
e_{i_1⋯i_{2m}} e_{k_1⋯k_{2m}} = e_{k_1⋯k_{2m}} e_{i_1⋯i_{2m}}. (3.1.1)
In other words, disjoint k-vectors commute when k is even.
Proof. Proof is by induction on k = 2m. When k = 2 we have
e_{i_1 i_2} e_{i_3 i_4} = e_{i_1 i_2 i_3 i_4} = −e_{i_1 i_3 i_2 i_4} = e_{i_3 i_1 i_2 i_4} = −e_{i_3 i_1 i_4 i_2} = e_{i_3 i_4 i_1 i_2}. (3.1.2)
Now label the i-th k-vector by e_{i′}, where i′ = {k(i − 1) + 1, k(i − 1) + 2, . . . , ki}, and consider (k+2)-vectors:
e_{1′j} e_{2′k} = e_{1′j2′k} = (−1)^{2n} e_{1′2′jk} = (−1)^{2n} e_{2′1′jk} = (−1)^{2n+2} e_{2′1′kj} = (−1)^{4n+2} e_{2′k1′j} = e_{2′k} e_{1′j}, (3.1.3)
where e_j, e_k are bivectors and the sets j, k, 1′, 2′ are pairwise disjoint.
Any problems posed by anticommutativity can thus be overcome by considering disjoint bivectors. This approach has one drawback if we want to assume all values are nonnegative.
Lemma 3.1.2. Let e_{ij} be any bivector in the basis of Cℓ_{n,0}. Then e_{ij}^2 = −1.
Proof.
e_{ij}^2 = e_i e_j e_i e_j = (−1) e_i e_j e_j e_i = −1. (3.1.4)
We can formulate a Clifford-algebraic binomial theorem for disjoint bivectors.
Proposition 3.1.3 (Clifford bivector binomial theorem). Let e_{i′}, e_{j′} be any pair of disjoint basis bivectors in the Clifford algebra Cℓ_{n,0}, where n ≥ 2. Then for any a, b ∈ R,
(a e_{i′} + b e_{j′})^m = Σ_{k=0}^{m} (−1)^{⌊k/2⌋+⌊(m−k)/2⌋} C(m, k) a^k b^{m−k} e_{i′}^{k mod 2} e_{j′}^{(m−k) mod 2},
which, collecting even and odd values of k, equals
(−1)^{m/2} Σ_{k=0}^{m/2} C(m, 2k) a^{2k} b^{m−2k} + (−1)^{m/2−1} Σ_{k=0}^{m/2−1} C(m, 2k+1) a^{2k+1} b^{m−2k−1} e_{i′j′} for m even, and
(−1)^{⌊m/2⌋} [ Σ_{k=0}^{⌊m/2⌋} C(m, 2k) a^{2k} b^{m−2k} e_{j′} + Σ_{k=0}^{⌊m/2⌋} C(m, 2k+1) a^{2k+1} b^{m−2k−1} e_{i′} ] for m odd. (3.1.5)
Proof. Since disjoint basis bivectors commute by Lemma 3.1.1, we apply the standard binomial theorem and note that the powers of a basis bivector reduce according to
e_{i′}^k = 1 if k ≡ 0 (mod 2) and ⌊k/2⌋ ≡ 0 (mod 2),
e_{i′}^k = e_{i′} if k ≡ 1 (mod 2) and ⌊k/2⌋ ≡ 0 (mod 2),
e_{i′}^k = −1 if k ≡ 0 (mod 2) and ⌊k/2⌋ ≡ 1 (mod 2),
e_{i′}^k = −e_{i′} if k ≡ 1 (mod 2) and ⌊k/2⌋ ≡ 1 (mod 2). (3.1.6)
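The expansion can be spot-checked numerically. Below, the disjoint commuting bivectors are represented by the commuting matrices J ⊗ I and I ⊗ J, each squaring to −I (a stand-in of ours, not the dissertation's construction); the check compares the matrix power against the binomial expansion with the powers e^k reduced as in (3.1.6):

```python
import numpy as np
from math import comb

J = np.array([[0., -1.], [1., 0.]])   # J @ J = -I
Ei = np.kron(J, np.eye(2))            # stand-in for e_{i'}
Ej = np.kron(np.eye(2), J)            # stand-in for e_{j'}; commutes with Ei

def expansion(a, b, m):
    # binomial expansion with powers reduced via e^2 = -1:
    # e^k = (-1)^(floor(k/2)) e^(k mod 2)
    out = np.zeros((4, 4))
    for k in range(m + 1):
        sign = (-1) ** (k // 2 + (m - k) // 2)
        blade = (np.linalg.matrix_power(Ei, k % 2)
                 @ np.linalg.matrix_power(Ej, (m - k) % 2))
        out = out + sign * comb(m, k) * a ** k * b ** (m - k) * blade
    return out

for m in range(7):
    lhs = np.linalg.matrix_power(2.0 * Ei + 3.0 * Ej, m)
    assert np.allclose(lhs, expansion(2.0, 3.0, m))
```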
Since our formulation of random Clifford adjacency matrices will require non-
negative coefficients exclusively, we need yet another structure. To overcome this
last obstacle, we make one further modification to our approach.
Let Cℓ_{n,n} be the 2^{2n}-dimensional Clifford algebra of signature (n, n) with basis vectors e_1, . . . , e_{2n}. Let S_n = {e_{i,n+i}}_{1≤i≤n}. Clearly S_n consists of n disjoint bivectors, which commute by Lemma 3.1.1. Let us relabel these bivectors by s_i = e_{i,n+i} for all 1 ≤ i ≤ n.
Lemma 3.1.4. For any s_i ∈ S_n, s_i^2 = 1.
Proof.
s_i^2 = e_i e_{n+i} e_i e_{n+i} = −e_i e_{n+i} e_{n+i} e_i = −e_i (e_{n+i}^2) e_i = e_i^2 = 1. (3.1.7)
Lemma 3.1.5. S_n ⊂ Spin(n, n).
Proof. Let s_i ∈ S_n. We begin by proving s_i ∈ Γ(n, n), the Clifford-Lipschitz group. Let x ∈ R^{n,n}. Then, writing x in terms of basis vectors, we have
s_i^2 = 1 ⇒ s_i^{−1} = s_i
⇒ s_i x s_i^{−1} = s_i x s_i = s_i (Σ_{k=1}^{2n} α_k e_k) s_i = Σ_{k=1}^{2n} α_k e_{i,n+i} e_k e_{i,n+i}
= Σ_{1≤k≤2n, k≠i,n+i} α_k e_k − α_i e_i − α_{n+i} e_{n+i} ∈ R^{n,n}. (3.1.8)
Now we show that s_i ∈ Pin(n, n) by showing s_i s̄_i = ±1:
s̄_i = −s_i ⇒ s_i s̄_i = −s_i^2 = −1. (3.1.9)
Since s_i is a bivector, it is even, and hence it is a spin operator.
Lemma 3.1.6. The elements of S_n generate a subgroup of Spin(n, n) of order 2^n.
Proof. Considering all products of the s_i ∈ S_n, we obtain a group whose elements are of the form s_i, where i denotes a multi-index of elements taken from {1, 2, . . . , n}. Appending to this the index 0 in place of the null set, we find that 2^n such multi-indices are possible.
Let Sn denote the subgroup of Spin(n, n) generated by Sn. We shall call Sn
the combinatorial spin group due to its natural applications to combinatorics. We
shall construct adjacency matrices and transition probability matrices whose entries
are in a special algebra we call the “combinatorial spin algebra.”
Proposition 3.1.7. Let s_i, s_j ∈ S_n. Then for any a, b ∈ R,
(a s_i + b s_j)^m = Σ_{k=0}^{m/2} C(m, 2k) a^{2k} b^{m−2k} + Σ_{k=0}^{m/2−1} C(m, 2k+1) a^{2k+1} b^{m−2k−1} s_{ij} for m even, and
(a s_i + b s_j)^m = Σ_{k=0}^{⌊m/2⌋} C(m, 2k) a^{2k} b^{m−2k} s_j + Σ_{k=0}^{⌊m/2⌋} C(m, 2k+1) a^{2k+1} b^{m−2k−1} s_i for m odd. (3.1.10)
Proof. This follows as in Proposition 3.1.3, with all signs absent since s_i^2 = 1 by Lemma 3.1.4.
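Since the s_i square to +1, the expansion here carries no signs. A quick numerical check, using commuting diagonal involutions as stand-ins of ours for s_i and s_j:

```python
import numpy as np
from math import comb

si = np.diag([1., 1., -1., -1.])      # commuting involutions: si @ si = I
sj = np.diag([1., -1., 1., -1.])
sij = si @ sj

def expansion(a, b, m):
    # even k contributes scalars (m even) or s_j terms (m odd);
    # odd k contributes s_{ij} (m even) or s_i terms (m odd)
    out = np.zeros((4, 4))
    for k in range(m + 1):
        coef = comb(m, k) * a ** k * b ** (m - k)
        if m % 2 == 0:
            out += coef * (np.eye(4) if k % 2 == 0 else sij)
        else:
            out += coef * (sj if k % 2 == 0 else si)
    return out

for m in range(7):
    lhs = np.linalg.matrix_power(2.0 * si + 3.0 * sj, m)
    assert np.allclose(lhs, expansion(2.0, 3.0, m))
```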
Definition 3.1.8. We give the combinatorial spin group S_n an additive structure by defining S_n as the group ring of S_n over Z.
Definition 3.1.9. The combinatorial spin algebra (CSA), denoted R ⊗ S_n, is the algebra generated by elements of S_n utilizing the additive structure of S_n. That is,
R ⊗ S_n = {Σ_{k⊂[n]} α_k s_k : α_k ∈ R, s_k ∈ S_n}. (3.1.11)
It is evident from the definition that we can work with coefficients from an
arbitrary field.
Lemma 3.1.10. For n > 0, the combinatorial spin algebra R ⊗ S_n is isomorphic to the algebra generated by fermion creation/annihilation pairs under the correspondence
s_i ≡ 2 f_i f_i^+ − I. (3.1.12)
Proof. By (2.2.13), we see
s_i = e_i e_{n+i} ≡ (f_i^+ + f_i)(f_i^+ − f_i) = (f_i^+)^2 − f_i^+ f_i + f_i f_i^+ − (f_i)^2
= −f_i^+ f_i + f_i f_i^+ = −(I − f_i f_i^+) + f_i f_i^+ = 2 f_i f_i^+ − I. (3.1.13)
Clearly f_i f_i^+ and f_j f_j^+ commute for all 1 ≤ i, j ≤ n. Further, since f_i f_i^+ is idempotent by Lemma 2.2.1, we see
(2 f_i f_i^+ − I)^2 = 4 (f_i f_i^+)^2 − 4 f_i f_i^+ + I = I ≃ 1 = s_i^2. (3.1.14)
It is now clear that the CSA can be rewritten using fermions:
s_i ≃ ∏_{ι∈i} (2 f_ι f_ι^+ − I) = Σ_{ℓ=0}^{|i|} 2^ℓ (−1)^{|i|−ℓ} Σ_{k⊂i, |k|=ℓ} ∏_{κ∈k} f_κ f_κ^+. (3.1.15)
Setting f_i f_i^+ = I for all 1 ≤ i ≤ n in (3.1.15), we obtain a family of polynomials satisfying the conditions
p_{|i|}(ℓ) = 2^ℓ (−1)^{|i|−ℓ} C(|i|, ℓ), (3.1.16)
and hence satisfying the recurrence
p_m(0) = (−1)^m (3.1.17)
p_m(m) = 2^m (3.1.18)
p_m(ℓ) = (−1)^{m−ℓ} (2 |p_{m−1}(ℓ−1)| + |p_{m−1}(ℓ)|), for 1 ≤ ℓ ≤ m − 1. (3.1.19)
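The closed form (3.1.16) and the recurrence (3.1.17)-(3.1.19) can be cross-checked directly (the recurrence reduces to Pascal's rule once the powers of 2 are factored out):

```python
from math import comb

def p(m, l):
    # closed form (3.1.16): p_m(l) = 2^l (-1)^(m - l) C(m, l)
    return 2 ** l * (-1) ** (m - l) * comb(m, l)

for m in range(1, 9):
    assert p(m, 0) == (-1) ** m          # (3.1.17)
    assert p(m, m) == 2 ** m             # (3.1.18)
    for l in range(1, m):                # (3.1.19)
        assert p(m, l) == (-1) ** (m - l) * (2 * abs(p(m - 1, l - 1)) + abs(p(m - 1, l)))
```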
Remark 3.1.11. We note that by a construction similar to ours, one may obtain a
representation of the Hecke algebra HK(n+1, q) by working within a Clifford algebra
of arbitrary bilinear form. Interested readers are referred to [11].
3.2 THE SPIN OPERATOR MATRICES
Definition 3.2.1. Let M ∈ Mat(n, R⊗ Sn) be any n× n matrix with entries from
the combinatorial spin algebra. We refer to M as a (combinatorial) spin operator
matrix.
Given n > 0, let Cℓ_{n,n} denote the 2^{2n}-dimensional Clifford algebra of signature (n, n) with basis vectors e_1, . . . , e_{2n}. We then fix the combinatorial spin operator matrix C as the diagonal matrix
C ≡ diag(s_1, s_2, . . . , s_n) (3.2.1)
whose entries are the generators of the combinatorial spin group S_n < Spin(n, n).
3.2.1 Properties of Spin Operator Matrices
Remark 3.2.2. We observe
1. C^2 = I,
2. (I − C)^n = 2^{n−1} (I − C).
Proposition 3.2.3. Let the n × n spin operator matrix A be constructed using combinatorial spin operators from the 2^{2n}-dimensional Clifford algebra Cℓ_{0,0,2n}. Then
A^k = 0 for all k > n. (3.2.2)
Proof. Since the squares of the generators are zero, the only nonzero entries of A^n must be scalar multiples of products of n pairwise disjoint bivectors, and the Clifford algebra by hypothesis has only 2n generating vectors, hence only n pairwise disjoint basis bivectors.
3.3 CLIFFORD ADJACENCY MATRICES
In this section we detail a method of applying Clifford algebras to graph theory
by considering adjacency matrices whose entries are elements of a Clifford algebra.
In particular, we develop methods of counting self-avoiding paths in finite graphs
by computing powers of matrices.
We begin with some essential notation and terminology. The reader is referred
to [21] for more graph theory. A graph G = (V , E) is a collection of vertices V and
edges E between vertices. A graph is finite if V and E are finite sets, and we shall
let |V| and |E| denote respectively the numbers of vertices and edges in G. We say
u, v ∈ V are adjacent if there exists an edge uv ∈ E between u and v in the graph
G.
A loop in a graph is an edge joining a vertex to itself. A graph is said to be
simple if it has no multiple edges or loops. In other words, G has no loops and each
pair of adjacent vertices in G is joined by a single edge.
A path u → v in a graph is a sequence of edges and/or vertices with initial
vertex u and terminal vertex v. A k-path is a path containing k edges. A self-
avoiding path is a path in which no vertex appears more than once. A k-circuit is
a k-path whose initial vertex is also its terminal vertex. A k-cycle is a self-avoiding
k-circuit. A Hamiltonian circuit is a circuit that visits each vertex in V exactly
once. An Euler circuit is a closed path encompassing every edge in E exactly once.
When working with a finite graph G on n vertices, one often utilizes the
adjacency matrix AG associated with G. If the vertices are labelled 1, . . . , n, we
can define AG by
(A_G)_{ij} = 1 if i and j are adjacent, and (A_G)_{ij} = 0 otherwise. (3.3.1)
A simple but useful result of this definition, which can also be generalized to
directed graphs, is given here without proof.
Proposition 3.3.1. Let G be a graph on n vertices with associated adjacency matrix
AG. Then for any positive integer k, the (i, j)th entry of AkG is the number of k-paths
i → j. In particular, the entries along the main diagonal of AkG are the numbers of
k-circuits in G.
What the adjacency matrix fails to provide, however, is a method of counting
self-avoiding paths and cycles in G. Even more specifically, AG offers no tools for
counting the Hamiltonian paths contained in G. For that we need a “new” type of
adjacency matrix whose powers count self-avoiding paths. We now construct such
a matrix from entries of a Clifford algebra.
In considering an n-vertex graph, we might expect to work with the 2^n-dimensional Clifford algebra Cℓ_n; however, anticommutativity can "kill" some of the self-avoiding paths when we compute powers of our Clifford adjacency matrices. The approach we shall adopt makes use of Cℓ_{n,n} and the combinatorial spin operators, as these always commute.
3.3.1 Simple Graphs
Definition 3.3.2. Let G be a simple graph on n vertices. Label the vertices with
the combinatorial spin operators of Cℓn,n. We define the Clifford adjacency matrix
A as the adjacency matrix whose entries are combinatorial spin operators of Cℓn,n
such that, denoting vertex adjacency by si → sj, we have
si → sj ⇒ Aij = sj. (3.3.2)
Definition 3.3.3. Let G be any simple graph and let P(i, j) denote a k-path i → j in G. Since G is simple, we can uniquely identify paths by the vertices they contain. Therefore we can write P(i, j) = (s_{i_1}, s_{i_2}, . . . , s_{i_k}). We define the associated Clifford path P_C(i, j) as the spin operator ∏_{ℓ=1}^{k} s_{i_ℓ}.
Proposition 3.3.4. P(i, j) is a self-avoiding path of length k if and only if PC(i, j)
is a spin operator of degree 2k.
Proof. PC(i, j) is the product of k bivectors. Therefore, PC(i, j) has degree at
most 2k. The product of k bivectors has degree 2k if and only if the bivectors
are pairwise disjoint, which can only happen when the path they represent has no
repeated vertices, i.e. when the path is self-avoiding.
Remark 3.3.5. This proposition and proof are also valid using the Clifford algebra Cℓ_{0,0,2n} if we use disjoint bivectors, since e_i^2 = 0 implies that the product of n bivectors is nonzero if and only if they are pairwise disjoint.
Proposition 3.3.6. For 0 ≤ k ≤ n and m ≥ 1, the degree-2k parts of A^m form a decomposition of A^m. In other words,
A^m = Σ_{k=0}^{n} ⟨A^m⟩_{2k}. (3.3.3)
Proof. For 1 ≤ i, j ≤ n, we have
u_{ij} = (A^m)_{ij} = Σ_{k=0}^{n} ⟨u_{ij}⟩_{2k}. (3.3.4)
Since matrix addition is defined entry-by-entry, the result follows.
We have defined the Clifford adjacency matrix as the usual adjacency matrix
multiplied on the right by the diagonal matrix of generators for the combinatorial
spin group. Matrix powers then yield paths emanating from a vertex specified by the
row of the matrix. In order to account for the vertex from which the path emanates,
multiplication on the left by the vertex label is necessary. Hence paths of length m
will correspond to spin operators of degree 2(m+1). This is easily accomplished by
considering CAm in place of Am.
Proposition 3.3.7. Let A be the Clifford adjacency matrix of any simple graph G on n vertices. For any m > 0 and i ≠ j, summing the coefficients of (⟨CA^m⟩_{2(m+1)})_{ij} yields the number of self-avoiding m-paths i → j occurring in G.
Proof. Proof is by induction on m. When m = 1, we have
(⟨CA⟩_4)_{ij} = ⟨Σ_{l=1}^{n} C_{il} A_{lj}⟩_4 = (CA)_{ij} (3.3.5)
by definition of A and C, because A_{lj} = χ_{lj} s_j, where
χ_{lj} = 1 if l and j are adjacent, and χ_{lj} = 0 otherwise, (3.3.6)
and C_{il} = s_i if and only if l = i. Thus we have a degree-4 entry s_{ij} representing a trivially self-avoiding 1-path i → j if and only if i and j are adjacent.
Now assuming the proposition holds for m and considering the case m + 1, we have
(CA^{m+1})_{ij} = (CA^m × A)_{ij} = Σ_{l=1}^{n} (CA^m)_{il} A_{lj}, (3.3.7)
where each term in the sum is a sum of elements of R ⊗ S_n, each having degree at most 2(m + 2). Considering a general term of the sum, we find
⟨(CA^m)_{il}⟩_{2(m+1)} ≡ s.a. m-paths i → l, and (3.3.8)
⟨A_{lj}⟩_2 ≡ s.a. 1-paths l → j, (3.3.9)
where s.a. is short for self-avoiding. It should then be clear that terms of the product
⟨(CA^m)_{il}⟩_{2(m+1)} ⟨A_{lj}⟩_2 (3.3.10)
have degree 2(m + 2) if and only if they correspond to self-avoiding (m+1)-paths i → j.
Corollary 3.3.8. Let A be the Clifford adjacency matrix for a graph G on n vertices. Let H denote the number of Hamiltonian cycles in G. Then
⟨⟨tr(CA^n)⟩⟩_{2(n−1)} = nH. (3.3.11)
Proof. Noting that the initial vertex cancels the terminal vertex in any cycle, and that each Hamiltonian circuit can be based at each of the n vertices, the result follows immediately from Proposition 3.3.7.
It is noteworthy that the matrix C is not necessary when we consider Hamil-
tonian circuits. Taking powers of A itself is sufficient.
Lemma 3.3.9. Let A be the Clifford adjacency matrix for a graph G on n vertices. Then
⟨⟨tr(A^n)⟩⟩_{2n} = ⟨⟨tr(CA^n)⟩⟩_{2(n−1)}. (3.3.12)
Proof. The degree-2n terms in (A^n)_{ii} correspond to self-avoiding n-paths i → i. These are exactly the degree-2(n−1) terms of (CA^n)_{ii}.
Remark 3.3.10. Our matrix powers method of counting cycles does not ignore orien-
tation. In other words, a cycle a → b → c → a is not equivalent to a → c → b → a.
Thus, if orientation is to be ignored in counting cycles, the final number must be
halved.
Example 3.3.11. Let G be the graph pictured here.
Figure 3.1. An undirected graph. (Vertices 1, 2, 3, 4; edges 12, 13, 23, 24, 34.)
Label the vertices of G with entries of Cℓ_{4,4} as follows:
1 ↔ e_{15},  2 ↔ e_{26},  3 ↔ e_{37},  4 ↔ e_{48}. (3.3.13)
Notice that our adjacency matrix is
A =
[ 0       e_{26}  e_{37}  0      ]
[ e_{15}  0       e_{37}  e_{48} ]
[ e_{15}  e_{26}  0       e_{48} ]
[ 0       e_{26}  e_{37}  0      ]. (3.3.14)
Thus we see (CA^3)_{1,4} = e_{15372648} + e_{15263748} = 2 e_{12345678}, and the coefficient 2 accurately represents the number of self-avoiding 3-paths e_{15} → e_{48}.
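The counts in this example can be reproduced with a short symbolic computation. Since the s_i commute and square to 1, a product of spin operators can be modelled as a frozenset of vertex labels under symmetric difference, and an algebra element as a dict mapping frozensets to integer coefficients. This is a sketch of ours, not the dissertation's own code; the graph is that of Example 3.3.11:

```python
def mul(u, v):
    # product in the CSA: s_a s_b = s_{a triangle b}
    out = {}
    for a, x in u.items():
        for b, y in v.items():
            c = a ^ b
            out[c] = out.get(c, 0) + x * y
    return out

def mat_mul(A, B):
    n = len(A)
    C = [[{} for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for l in range(n):
                for k, c in mul(A[i][l], B[l][j]).items():
                    C[i][j][k] = C[i][j].get(k, 0) + c
    return C

# Graph of Example 3.3.11: vertices 1..4, edges 12, 13, 23, 24, 34
edges = {(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)}
adj = lambda i, j: (min(i, j), max(i, j)) in edges
A = [[({frozenset([j]): 1} if adj(i, j) else {})
      for j in range(1, 5)] for i in range(1, 5)]

A3 = mat_mul(mat_mul(A, A), A)
# (CA^3)_{1,4} = s_1 (A^3)_{1,4}: read off the coefficient of s_{234}
paths_14 = A3[0][3].get(frozenset({2, 3, 4}), 0)   # 2 self-avoiding 3-paths

A4 = mat_mul(A3, A)
# Lemma 3.3.9: the top-degree diagonal sum equals nH
ham = sum(A4[i][i].get(frozenset({1, 2, 3, 4}), 0) for i in range(4))
# ham = 8, i.e. H = 2 oriented Hamiltonian cycles (1 if orientation is ignored)
```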
Remark 3.3.12. If we choose to ignore the commutativity of the combinatorial spin operators and write products formally, the vertex sequence of each path is preserved.
Example 3.3.13. Consider the directed graph and its Clifford adjacency matrix:
Figure 3.2. 5-vertex digraph. (Directed edges s_1 → s_2, s_2 → s_3, s_2 → s_4, s_3 → s_5, s_4 → s_5, s_5 → s_2.)
A =
[ 0  s_2  0    0    0   ]
[ 0  0    s_3  s_4  0   ]
[ 0  0    0    0    s_5 ]
[ 0  0    0    0    s_5 ]
[ 0  s_2  0    0    0   ] (3.3.15)
CA^2 =
[ 0  0        s_{123}  s_{124}  0                 ]
[ 0  0        0        0        s_{235} + s_{245} ]
[ 0  s_{352}  0        0        0                 ]
[ 0  s_{452}  0        0        0                 ]
[ 0  0        s_{523}  s_{524}  0                 ] (3.3.16)
CA^3 =
[ 0  0               0         0         s_{1235} + s_{1245} ]
[ 0  s_{35} + s_{45} 0         0         0                   ]
[ 0  0               s_{52}    s_{3524}  0                   ]
[ 0  0               s_{4523}  s_{52}    0                   ]
[ 0  0               0         0         s_{23} + s_{24}     ]. (3.3.17)
Again we see that the entry (A^m)_{ij} represents m-paths i → j. Thus the sum of coefficients of degree 2m gives the number of paths. Along the diagonal of CA^m, we have all the m-cycles contained in the graph, this time having degree 2(m − 1) since the initial/terminal vertex is cancelled.
The adjacency matrix A allows us to count Hamiltonian circuits by taking
matrix powers, assuming the graph G is simple. In fact, we may still count Hamil-
tonian circuits in graphs with multiple edges between pairs of adjacent vertices if we
do not care which edge is traversed between a pair. Consider the following example.
Example 3.3.14. The graph below contains two Hamiltonian circuits if edges trav-
elled are considered, i.e. A → C → D → E and B → C → D → E. If our only con-
cern is vertices visited, it contains one Hamiltonian circuit, s1 → s2 → s3 → s4 → s1,
as will be indicated by the trace of CA4.
Figure 3.3. Do edges matter? (Vertices s_1, s_2, s_3, s_4; edges A and B both join s_1 and s_2, C joins s_2 and s_3, D joins s_3 and s_4, E joins s_4 and s_1.)
If multiple edges are present and distinct edge-vertex sequences are to be
counted, we shall need a more general spin operator matrix. We now construct a
new adjacency matrix that allows us to count Hamiltonian cycles in any finite graph.
3.3.2 Finite Graphs
We begin this subsection by defining a new adjacency matrix AH that gener-
alizes the adjacency matrix A to graphs with multiple edges. We further introduce
another Clifford adjacency matrix AE which allows us to enumerate the Euler cir-
cuits contained in any finite graph.
Definition 3.3.15. Let G be any finite graph having n vertices and e edges. Label the edges and vertices of G with the integers 1, 2, . . . , n + e in one-to-one fashion. We form the n × n (generalized) Clifford adjacency matrix A_H by
i, j ∈ V(G) ⇒ (A_H)_{ij} = Σ_{edges k: i→j} s_k s_j. (3.3.18)
Example 3.3.16. Let us reconsider the situation pictured in Figure 3.3. Labelling the edges and vertices with elements of R ⊗ S_9, we obtain:
Figure 3.4. Generalized labelling. (Vertices s_1, s_2, s_3, s_4; edges s_5 and s_6 join s_1 and s_2, s_7 joins s_2 and s_3, s_8 joins s_3 and s_4, s_9 joins s_4 and s_1.)
The generalized Clifford adjacency matrix is then seen to be
A_H =
[ 0                s_{25} + s_{26}  0       s_{49} ]
[ s_{15} + s_{16}  0                s_{37}  0      ]
[ 0                s_{27}           0       s_{48} ]
[ s_{19}           0                s_{38}  0      ]. (3.3.19)
Proposition 3.3.17. Let A_H be the Clifford adjacency matrix of any graph G. For any m > 0 and i ≠ j, summing the coefficients of (⟨CA_H^m⟩_{2(m+e+1)})_{ij} yields the number of self-avoiding m-paths i → j occurring in G.
Proof. Proof is by induction on m. The result holds for m = 1 by definition, so we assume it is true for m and consider (CA_H)^{m+1}:
⟨((CA_H)^{m+1})_{ij}⟩_{2(m+1+e+1)} = ⟨Σ_{k=1}^{n} ((CA_H)^m)_{ik} (A_H)_{kj}⟩_{2(m+1)+2(e+1)}
= Σ_{k=1}^{n} {self-avoiding m-paths i → k} · {1-paths k → j}
= {(m + 1)-paths i → j}. (3.3.20)
Summing the coefficients then gives the number of self-avoiding (m+1)-paths i → j.
Lemma 3.3.18. Let A_H be the Clifford adjacency matrix for any finite graph G. Then for any m ≥ 1 we have
⟨tr((A_H)^m)⟩_{2(m+e)} = ⟨tr((CA_H)^m)⟩_{2(m+e−1)}. (3.3.21)
Proof. The highest-degree terms in the trace of (A_H)^m correspond to self-avoiding paths in G, and the terms in tr((CA_H)^m) have initial/terminal vertex cancellation. Hence the result.
We now have the tools for counting m-cycles and Hamiltonian circuits in any
graph G simply by exponentiating the Clifford adjacency matrix. In addition, we
can now compute the “circumference” of a connected graph.
Corollary 3.3.19. Let A_H be the n × n Clifford adjacency matrix for any graph G. For any m ≥ 2, let z_m denote the number of m-cycles contained in G. Then
⟨⟨tr((A_H)^m)⟩⟩_{2(m+e)} = m z_m. (3.3.22)
Definition 3.3.20. Let G = (V , E) be any finite graph. The circumference of G is
the length of the longest cycle contained in G and is denoted by c(G).
Corollary 3.3.21. Given a graph G = (V, E), where |V| = n and |E| = e, having associated Clifford adjacency matrix A_H, the circumference of G is given by
c(G) = max_{0≤k≤n} {k : ⟨tr((A_H)^k)⟩_{2(k+e)} ≠ 0}. (3.3.23)
Corollary 3.3.22. Let AH be the Clifford adjacency matrix for any connected graph
G. If |V| = n, the sum of coefficients of the trace of ⟨(AH)n⟩2(n+e) is n times the
number of Hamiltonian circuits occurring in the graph G.
Finally we have a connection between graph theory and the Berezin integral,
defined as the coefficient of s12···n, used in the mathematics of second quantization
[5]. The reader is referred to the discussion on page two of the current work.
Corollary 3.3.23. Let G = (V, E) be any finite graph on n vertices with associated Clifford adjacency matrix A_H. Then
∫_B tr((A_H)^n) = nH, (3.3.24)
where H is the number of Hamiltonian circuits in G and ∫_B denotes the Berezin integral.
Corollary 3.3.24. Let K_n be the complete graph on n vertices with associated Clifford adjacency matrix A_H. Then
∫_B tr((A_H)^n) = n!. (3.3.25)
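The n! count can be spot-checked for K_4. The sketch below is ours and uses the simple-graph Clifford adjacency matrix A of Section 3.3.1 rather than A_H; for the simple graph K_n, by Lemma 3.3.9, reading off the coefficient of s_{1⋯n} in the trace of A^n plays the role of the Berezin integral:

```python
def mul(u, v):
    # spin operators as frozensets under symmetric difference
    out = {}
    for a, x in u.items():
        for b, y in v.items():
            c = a ^ b
            out[c] = out.get(c, 0) + x * y
    return out

def mat_mul(A, B):
    n = len(A)
    C = [[{} for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for l in range(n):
                for k, c in mul(A[i][l], B[l][j]).items():
                    C[i][j][k] = C[i][j].get(k, 0) + c
    return C

n = 4
# complete graph K_4: every pair of distinct vertices is adjacent
A = [[({frozenset([j]): 1} if i != j else {}) for j in range(1, n + 1)]
     for i in range(1, n + 1)]

P = A
for _ in range(n - 1):
    P = mat_mul(P, A)

# coefficient of the top spin operator s_{1234} summed along the diagonal
berezin = sum(P[i][i].get(frozenset(range(1, n + 1)), 0) for i in range(n))
# berezin = 24 = 4!, counting each oriented Hamiltonian circuit at each base vertex
```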
3.3.3 Euler Circuits
The Clifford adjacency matrices discussed so far have allowed us to count self-avoiding paths and cycles, i.e. structures depending on non-repeated vertices. Another problem we wish to solve is that of counting Euler circuits, which allow repeated vertices but not repeated edges. In order to apply our matrix powers approach to this problem, we need a slightly modified spin operator matrix that we shall refer to as the Clifford-Euler adjacency matrix.
Definition 3.3.25. Let G be any finite graph having n vertices and e edges. Assign a one-to-one labelling of the edges of G with the integers 1, 2, . . . , e. Utilizing R ⊗ S_e, we form the n × n Clifford-Euler adjacency matrix A_E by
i, j ∈ V(G) ⇒ (A_E)_{ij} = Σ_{edges k: i→j} s_k. (3.3.26)
Proposition 3.3.26. Let AE be the Clifford-Euler adjacency matrix of a finite graph
G having e edges and n vertices. Summing the coefficients of ⟨tr ((AE)e)⟩2e yields n
times the number of Euler circuits occurring in G.
Proof. As in the proof of Proposition 3.3.7, entries of (A_E)^k are degree-2k Clifford polynomials corresponding to k-paths in the graph G. Given such a path i → j and its corresponding polynomial ((A_E)^k)_{ij}, we see that the only terms of degree 2k are terms corresponding to self-avoiding paths i → j. In the context of Clifford-Euler matrices, this means no edge appears more than once in the path. Thus ⟨((A_E)^e)_{ii}⟩_{2e} represents the collection of all self-avoiding e-circuits based at vertex i, i.e. the collection of all Euler circuits i → i. Since every vertex appears in each such circuit, a representation of each circuit appears at every diagonal entry of (A_E)^e, and the degree-2e terms in the diagonal elements of (A_E)^e are identical. Hence
⟨⟨tr((A_E)^e)⟩⟩_{2e} = Σ_{i=1}^{n} ⟨⟨((A_E)^e)_{ii}⟩⟩_{2e} = n · ⟨⟨((A_E)^e)_{kk}⟩⟩_{2e} = n · |{Euler circuits in G}|, (3.3.27)
where | · | denotes cardinality and 1 ≤ k ≤ n is arbitrary and fixed.
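A small worked instance, a sketch of ours: for the triangle on vertices 1, 2, 3 with edge labels 1 = {1,2}, 2 = {2,3}, 3 = {1,3}, the spin operators now carry edge labels, modelled as frozensets under symmetric difference:

```python
def mul(u, v):
    # spin operators as frozensets of edge labels under symmetric difference
    out = {}
    for a, x in u.items():
        for b, y in v.items():
            c = a ^ b
            out[c] = out.get(c, 0) + x * y
    return out

def mat_mul(A, B):
    n = len(A)
    C = [[{} for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for l in range(n):
                for k, c in mul(A[i][l], B[l][j]).items():
                    C[i][j][k] = C[i][j].get(k, 0) + c
    return C

edge_label = {(1, 2): 1, (2, 3): 2, (1, 3): 3}
def entry(i, j):
    k = edge_label.get((min(i, j), max(i, j)))
    return {frozenset([k]): 1} if k else {}

AE = [[entry(i, j) for j in range(1, 4)] for i in range(1, 4)]
P = mat_mul(mat_mul(AE, AE), AE)           # (A_E)^e with e = 3

# degree-2e terms along the diagonal: coefficient of s_{123}
total = sum(P[i][i].get(frozenset({1, 2, 3}), 0) for i in range(3))
# total = 6 = n * 2: two oriented Euler circuits, one per orientation
```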
3.3.4 Conditional Branching
We have seen a method for counting Hamiltonian and Euler circuits via matrix
powers. Another application of Clifford adjacency matrices allows us to count paths
in which the edges leading out of a vertex can depend on the edge travelled into the
vertex. Consider the following situation: Suppose for some vertex there are three
ways into the vertex and two possible ways out. Valid paths in the graph may rely
upon some rule such as “entering the vertex via edges a or b requires exiting via edge
A.” Such conditional branching can be done using standard adjacency matrices by
splitting the single vertex into two vertices identified with each other and employing
a directed graph with separate paths representing the branches.
Our method, however, allows the situation to be modelled more “realistically.”
Figure 3.5. Conditional branching at a vertex. (Three edges a, b, c lead into the vertex; two edges A, B lead out.)
We simply label the edges and vertices in such a way that m-paths violating the
branch condition correspond to spin operators of degree less than 2m by cancellation.
In this way, only the valid paths are counted in our matrix powers method.
Example 3.3.27. In the graph we just considered, assuming "a or b in ⇒ A out,"
label the vertex s_ι and label the edges according to

a ↔ s_γ, b ↔ s_γ, c ↔ s_η, A ↔ s_η, B ↔ s_γ. (3.3.28)

It should be clear that the paths

a, b → s_ι → B and c → s_ι → A (3.3.29)

will have reduced degree and will therefore be ignored in our counting of Hamiltonian
circuits.
3.4 RANDOM GRAPHS
Adjacency matrices prove indispensable in the study of random graphs as they
are easily generated by computer. We shall investigate the use of Clifford adjacency
matrices in the theory of random graphs in order to calculate expected numbers of
self-avoiding structures they contain.
Definition 3.4.1. A random graph of order n is a simple graph on n vertices, such
that each pair u, v ∈ V is adjacent with probability p(uv).
Note that we have restricted our discussion to simple graphs for simplicity. In
this way we only need consider probabilities associated with each pair of vertices.
We further note our graph will satisfy 0 ≤ |E| ≤ (n choose 2).
Definition 3.4.2. We now let A be the adjacency matrix for G defined by Aij =
p(ij), where the vertices of G have been enumerated from 1 . . . n. Relabelling the
vertices using combinatorial spin operators, we define our random Clifford adjacency
matrix A.
Now we are ready to consider the result of exponentiating A. Given an n-
vertex, e-edge graph G, we can compute the expected number of cycles in G.
Proposition 3.4.3. Let G be a random graph on n vertices with associated random
Clifford adjacency matrix A. Fix 0 ≤ k ≤ n and let X be a random variable taking
values in the nonnegative integers such that X is the number of k-cycles contained
in G. Then
EX = (1/k) ⟨⟨tr(A^k)⟩⟩_{2k}. (3.4.1)
Proof. For any path P : i → j in G, the probability of P is given by the product
of edge probabilities in P. For 1 ≤ k ≤ n, ⟨(A^k)_{ij}⟩_{2k} corresponds to self-avoiding
k-paths i → j in G with path probabilities as coefficients. Hence we find that

EX = Σ_{z ∈ k-cycles of G} P(z) = (1/k) ⟨⟨tr(A^k)⟩⟩_{2k}. (3.4.2)
Corollary 3.4.4. Let G and A be as in Proposition 3.4.3. Then letting H denote
the number of Hamiltonian circuits contained in G, we find
EH = (1/n) ∫_B tr(A^n). (3.4.3)
Corollary 3.4.5. Let G be a random graph on n vertices with associated random
Clifford adjacency matrix A and let X be a random variable taking values in the
nonnegative integers such that X is the number of cycles contained in G. Then
EX = Σ_{k=1}^{n} (1/k) ⟨⟨tr(A^k)⟩⟩_{2k}. (3.4.4)
Proof. This follows immediately from the proposition by linearity of the expectation.
Corollary 3.4.6. Let G_i be a sequence of independent random graphs on n vertices
having common Clifford adjacency matrix A. For 0 ≤ k ≤ n, let X_k^{(i)} be a random
variable taking values in the nonnegative integers such that X_k^{(i)} is the number of
k-cycles contained in G_i. Then
Σ_{k=1}^{n} (1/k) ⟨⟨tr(A^k)⟩⟩_{2k} = lim_{M→∞} (1/M) Σ_{i=1}^{M} [ (1/n) Σ_{k=1}^{n} X_k^{(i)} ]. (3.4.5)
Proof. Let X be the random variable described in Corollary 3.4.5. Then by the
law of large numbers

EX = lim_{M→∞} (1/M) Σ_{i=1}^{M} [ (1/n) Σ_{k=1}^{n} X_k^{(i)} ], (3.4.6)
from which the corollary follows.
3.5 EDGE-WEIGHTED CLIFFORD ADJACENCY MATRICES
Definition 3.5.1. Let G be a simple graph on n vertices with weights wij for each
ij ∈ E(G). We define the edge-weight matrix Wij by
W_{ij} = I_n + (w_{ij} − 1) e_j e_j^T, (3.5.1)

where I_n is the n × n identity matrix and e_j ∈ R^n is the unit vector with all
components zero except the jth, which is 1.
Letting ∆(n, R) ⊂ Mat(n, R) denote the algebra of real diagonal matrices, we
have Wij ∈ ∆(n, R) for 1 ≤ i, j ≤ n. We can now define edge-weighted spin operator
matrices as matrices over the algebra ∆(n, R) ⊗ Sn.
Definition 3.5.2. Let G be a graph as in Def. 3.5.1. We define the edge-weighted
Clifford adjacency matrix AW of G by
(A_W)_{ij} = W_{ij} s_j if ij ∈ E(G), and 0 otherwise. (3.5.2)
Lemma 3.5.3. For 2 ≤ k ≤ n, let w_{i_1 j_1}, . . . , w_{i_k j_k} be weights such that j_l ≠ j_m for
1 ≤ l < m ≤ k. Then

Σ_{l=1}^{k} w_{i_l j_l} = tr( Π_{l=1}^{k} W_{i_l j_l} ) + k − n. (3.5.3)
Proof. By construction of the edge weight matrices,

( Π_{l=1}^{k} W_{i_l j_l} )_{r,c} = 0 if r ≠ c; w_{i_l j_l} if r = c = j_l for some 1 ≤ l ≤ k; and 1 otherwise. (3.5.4)

The result follows by noticing we have n − k ones along the diagonal regardless of
the weights assigned.
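Because every W_{i_l j_l} is diagonal, the identity (3.5.3) can be checked numerically by storing only the diagonals. The sketch below is a plain-Python illustration with hypothetical positions j_l and weights:

```python
def edge_weight_matrix(n, j, w):
    # W = I_n + (w - 1) e_j e_j^T, stored as its diagonal
    d = [1.0] * n
    d[j] = w
    return d

def diag_mul(a, b):
    # product of diagonal matrices = entrywise product of diagonals
    return [x * y for x, y in zip(a, b)]

n = 5
weights = {1: 2.0, 3: 7.0, 4: 0.5}   # hypothetical weights at distinct positions j_l
prod = [1.0] * n
for j, w in weights.items():
    prod = diag_mul(prod, edge_weight_matrix(n, j, w))

k = len(weights)
trace = sum(prod)                     # the k weights plus n - k leftover ones
print(trace, sum(weights.values()) + (n - k))  # 11.5 11.5
```

Subtracting the n − k surplus ones from the trace recovers the additive weight, exactly as the lemma states.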
Proposition 3.5.4. Let W denote the additive weight (or cost) of a self-avoiding
k-path i = i1 → i2 → · · · → ik = j in G. Then
W = tr( ((C A_W)^{k−1})_{ij} |_{s_{i_1···i_k}} ) + k − n. (3.5.5)
Proof. As in the proof of Proposition 3.3.7, ((C A_W)^{k−1})_{ij} corresponds to the set of
self-avoiding k-paths i → j, with the term ( Π_{m=2}^{k} W_{i_{m−1} i_m} ) s_{i_1 i_2 ··· i_k} corresponding
to the path i = i_1 → i_2 → · · · → i_k = j. The result follows from Lemma 3.5.3.
Corollary 3.5.5. Let W denote the multiplicative weight of a self-avoiding k-path
i = i1 → i2 → · · · → ik = j in G. Then
W = det( ((C A_W)^{k−1})_{ij} |_{s_{i_1···i_k}} ). (3.5.6)
3.6 A REPRESENTATION OF R ⊗ SN
The Clifford algebra Cℓ_{n,n} is isomorphic as an associative algebra to the algebra
of 2^n × 2^n real-valued matrices Mat(2^n, R) [14]. Moreover, given an orthonormal basis
e_1, . . . , e_n of R^{p,q}, the Clifford algebra Cℓ_{p,q} is generated by the 2 × 2 matrices

[ e_i 0 ; 0 −e_i ], [ 0 1 ; 1 0 ], and [ 0 −1 ; 1 0 ],

where 1 ≤ i ≤ n. From this we are able to construct a faithful representation of the
spin algebra R ⊗ S_n.
Recall the Pauli spin matrices:

σ_0 = [ 1 0 ; 0 1 ], σ_x = [ 0 1 ; 1 0 ], σ_y = [ 0 −i ; i 0 ], σ_z = [ 1 0 ; 0 −1 ]. (3.6.1)
We shall construct the representation of R ⊗ S_n through the correspondence

e_k ≃ σ_0^{⊗(k−1)} ⊗ σ_x ⊗ σ_0^{⊗(n−k)}, when 1 ≤ k ≤ n,
e_k ≃ σ_0^{⊗(k−(n+1))} ⊗ (1/i)σ_y ⊗ σ_0^{⊗(2n−k)}, when n + 1 ≤ k ≤ 2n. (3.6.2)
Since si ≡ eien+i, we obtain the following proposition.
Proposition 3.6.1. Let n ≥ 1 be given. Let us define

ϖ_i ≡ σ_0^{⊗(i−1)} ⊗ σ_z ⊗ σ_0^{⊗(n−i)} for 1 ≤ i ≤ n, and ϖ_0 ≡ I, the 2^n × 2^n identity matrix. (3.6.3)
Then a faithful representation of the combinatorial spin algebra R ⊗ Sn is given by
the correspondence
si ≃ ϖi. (3.6.4)
Proof. Let S′_n denote the subgroup of Mat(2^n, R) generated by the matrices ϖ_i. We
show that S′_n is a commutative group isomorphic to S_n.
ϖ_i ϖ_j = ( σ_0^{⊗(i−1)} ⊗ σ_z ⊗ σ_0^{⊗(n−i)} )( σ_0^{⊗(j−1)} ⊗ σ_z ⊗ σ_0^{⊗(n−j)} )
= σ_0^{⊗((i∧j)−1)} ⊗ σ_z ⊗ σ_0^{⊗((i∨j)−(i∧j)−1)} ⊗ σ_z ⊗ σ_0^{⊗(n−(i∨j))} = ϖ_j ϖ_i. (3.6.5)
Further we see that

ϖ_i^2 = ( σ_0^{⊗(i−1)} )^2 ⊗ (σ_z)^2 ⊗ ( σ_0^{⊗(n−i)} )^2 = σ_0^{⊗n} = I ≃ s_0. (3.6.6)
Moreover

ϖ_i ϖ_j = I ⇔ i = j, (3.6.7)

because for each 1 ≤ i ≤ n, ϖ_i is a block diagonal matrix whose ith block is the
Pauli matrix σ_z, and σ_z^2 = σ_0.
We define the multiplicative group homomorphism ϕ : S_n → Mat(2^n, R) by
ϕ(s_i) = ϖ_i for 0 ≤ i ≤ n. We see that for any i ⊂ [n] we have

ϕ(s_i) = Π_{ι∈i} ϖ_ι, (3.6.8)

ϕ(s_i) = I ⇔ i = ∅, (3.6.9)

where (3.6.9) follows from (3.6.7). Thus S_n ≅ S′_n.
Now let us define the Z-module generated by the group S′_n, and extend ϕ linearly to an
algebra homomorphism ϕ : R ⊗ S_n → R ⊗ S′_n defined by

ϕ(u) = ϕ( Σ_{i⊂[n]} u_i s_i ) = Σ_{i⊂[n]} u_i ϕ(s_i) = Σ_{i⊂[n]} u_i Π_{ι∈i} ϖ_ι. (3.6.10)

We observe that ϕ is well-defined and onto.
We note that ασ_0 + βσ_z ≠ 0 for any nonzero scalars α, β ∈ R. Consider

αϖ_i + βϖ_k = α σ_0^{⊗(i−1)} ⊗ σ_z ⊗ σ_0^{⊗(n−i)} + β σ_0^{⊗(k−1)} ⊗ σ_z ⊗ σ_0^{⊗(n−k)} = 0
⇔ i = k and α + β = 0. (3.6.11)

Then by linearity we have ker ϕ = 0, and ϕ is an algebra isomorphism.
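The representation can be verified directly for small n. The following sketch is an independent check, not the author's code: it builds the matrices ϖ_i as Kronecker products of σ_0 and σ_z and confirms that they commute, square to the identity, and satisfy ϖ_i ϖ_j = I only when i = j.

```python
def kron(A, B):
    # Kronecker product of square matrices given as lists of lists
    p = len(B)
    size = len(A) * p
    return [[A[i // p][j // p] * B[i % p][j % p] for j in range(size)]
            for i in range(size)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

s0 = [[1.0, 0.0], [0.0, 1.0]]   # sigma_0
sz = [[1.0, 0.0], [0.0, -1.0]]  # sigma_z

def varpi(i, n):
    # varpi_i = sigma_0^{(x)(i-1)} (x) sigma_z (x) sigma_0^{(x)(n-i)}, 1 <= i <= n
    M = [[1.0]]
    for k in range(1, n + 1):
        M = kron(M, sz if k == i else s0)
    return M

n = 3
I = [[1.0 if i == j else 0.0 for j in range(2 ** n)] for i in range(2 ** n)]
ws = [varpi(i, n) for i in range(1, n + 1)]
```

Since σ_0 and σ_z are diagonal, every ϖ_i is diagonal with ±1 entries, which is exactly what makes the group commutative with all squares equal to I.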
Utilizing the fermion representation of the generators of the CSA, we have

2 f_j f_j^+ − I ≃ σ_0^{⊗(j−1)} ⊗ σ_z ⊗ σ_0^{⊗(n−j)}
⇒ 2 f_j f_j^+ ≃ 2 σ_0^{⊗(j−1)} ⊗ [ 1 0 ; 0 0 ] ⊗ σ_0^{⊗(n−j)}
⇒ f_j f_j^+ ≃ σ_0^{⊗(j−1)} ⊗ [ 1 0 ; 0 0 ] ⊗ σ_0^{⊗(n−j)}. (3.6.12)
This agrees with the formulation found in [12], p. 140:

f_j ≃ σ_0^{⊗(j−1)} ⊗ (1/2)( σ_x − (1/i)σ_y ) ⊗ σ_z^{⊗(n−j)}, (3.6.13)

f_j^+ ≃ σ_0^{⊗(j−1)} ⊗ (1/2)( σ_x + (1/i)σ_y ) ⊗ σ_z^{⊗(n−j)}. (3.6.14)
CHAPTER 4
CLIFFORD STOCHASTIC MATRICES AND
SELF-AVOIDING RANDOM WALKS ON FINITE
GRAPHS
4.1 CLIFFORD STOCHASTIC MATRICES
Finite-state Markov chains are discrete-time stochastic processes that can be
represented by finite graphs and stochastic matrices. We begin by recalling some
basic definitions from probability theory and work our way up to representations of
Markov chains by stochastic matrices. The basic probability theory appearing here
is standard, and interested readers are referred to works such as [13] and [18].
Definition 4.1.1. A probability space is a triple (Ω,F , Pr) where Ω is an arbitrary
set, F is a σ-algebra of subsets of Ω, and Pr : F → [0, 1] is a probability measure.
In other words, F satisfies:
1. Ω ∈ F ,
2. A ∈ F ⇒ A′ ∈ F , where A′ is the complement (in Ω) of A,
3. A, B ∈ F ⇒ A ∪ B ∈ F , and
4. A_1, A_2, . . . ∈ F ⇒ ⋃_{n=1}^{∞} A_n ∈ F.
The first three conditions make F an algebra, while the last condition makes F a
σ-algebra. Further, Pr satisfies:

1. Pr{Ω} = 1,

2. Pr{A′} = 1 − Pr{A}, ∀A ∈ F,

3. A_1, A_2, . . . , A_n ∈ F pairwise disjoint implies

Pr{ ⋃_{i=1}^{n} A_i } = Σ_{i=1}^{n} Pr{A_i}, (4.1.1)

4. A_1, A_2, . . . ∈ F pairwise disjoint implies

Pr{ ⋃_n A_n } = Σ_n Pr{A_n}. (4.1.2)

We further note that the first two conditions imply

Pr{∅} = 0. (4.1.3)
We fix a probability space (Ω,F , Pr) and detail a method of applying Clifford
algebras to time-homogeneous Markov chains by considering stochastic matrices
whose entries are elements of the combinatorial spin algebra R ⊗ Sn. In particular,
we develop methods of computing probabilities of self-avoiding random walks in
time-homogeneous Markov chains by computing powers of matrices.
This leads us to another “natural” definition.
Definition 4.1.2. Let M = (mij) be the n × n transition matrix for any time-
homogeneous Markov chain with states 1, 2, . . . , n. Relabelling the states with the
combinatorial spin operators s1, . . . sn and replacing the entries mij with mijsj, we
obtain the Clifford transition matrix T = MC.
Definition 4.1.3. An n×n spin operator matrix T is said to be Clifford stochastic
if its coefficients are all nonnegative and summing all coefficients over a fixed row
gives 1. In other words, for any 1 ≤ i ≤ n we have
Σ_{l=1}^{n} Σ_{k=0}^{n} ⟨⟨T_{i,l}⟩⟩_{2k} = 1, (4.1.4)
where ⟨⟨Ti,l⟩⟩2k denotes the sum of coefficients over all degree-2k homogeneous terms
in Ti,l.
Definition 4.1.4. An element u of Cℓn,n will be called convex if it can be represented
by a convex combination of basis elements. Letting ⟨⟨u⟩⟩ denote the sum of all
coefficients in u, an element is convex if and only if all its coefficients are nonnegative
and we have ⟨⟨u⟩⟩ = 1.
Lemma 4.1.5. The set of convex elements in the space of combinatorial spin oper-
ators is closed under multiplication.
Proof. Let u be an element of the combinatorial spin algebra (R ⊗ S_n) taken over
Cℓ_{n,n}. Then u has the general form

u = Σ_{i⊂[n]} α_i s_i, (4.1.5)

where Σ_{i⊂[n]} α_i = 1, with all scalars nonnegative. Now we let v be another convex
combination of combinatorial spin operators and consider the product

uv = ( Σ_{i⊂[n]} α_i s_i )( Σ_{j⊂[n]} β_j s_j ). (4.1.6)
We need to show ⟨⟨uv⟩⟩ = ⟨⟨u⟩⟩⟨⟨v⟩⟩ = 1. Since combinatorial spin operators
commute and square to 1, signs of all products are strictly positive. Thus we see

⟨⟨uv⟩⟩ = ⟨⟨ ( Σ_{i⊂[n]} α_i s_i )( Σ_{j⊂[n]} β_j s_j ) ⟩⟩ = ⟨⟨ Σ_{i⊂[n]} α_i Σ_{j⊂[n]} β_j s_{i△j} ⟩⟩
= Σ_{i⊂[n]} α_i Σ_{j⊂[n]} β_j = 1. (4.1.7)
Proposition 4.1.6. Products of Clifford stochastic matrices are Clifford stochastic.
Proof. Let T ,U be n × n Clifford stochastic matrices. We show T U is Clifford
stochastic. For each 1 ≤ i ≤ n we have the ith row sum
⟨⟨(T U)_i⟩⟩ = Σ_{k=1}^{n} ( Σ_{j=1}^{n} ⟨⟨T_{ij} U_{jk}⟩⟩ ) = ⟨⟨ Σ_{k=1}^{n} Σ_{j=1}^{n} T_{ij} U_{jk} ⟩⟩
= ⟨⟨ Σ_{j=1}^{n} T_{ij} Σ_{k=1}^{n} U_{jk} ⟩⟩ = ⟨⟨ Σ_{j=1}^{n} T_{ij} ⟩⟩ ⟨⟨ Σ_{k=1}^{n} U_{jk} ⟩⟩ = 1. (4.1.8)
Corollary 4.1.7. Powers of Clifford stochastic matrices are Clifford stochastic.
Lemma 4.1.8. If T is an n × n Clifford stochastic matrix, then C T^m is also Clifford
stochastic, where C is the diagonal matrix of generators of the combinatorial spin
operator group.

Proof. By Corollary 4.1.7, T^m is Clifford stochastic. Since multiplication on the
left by a diagonal matrix of combinatorial spin operators does nothing to alter the
coefficients of entries in T^m, we have the result.
Definition 4.1.9. We define a path P = {s_0, . . . , s_m} as a finite sequence of states
within the time-homogeneous Markov chain {X_k}_{k≥0}. We often refer to a path
consisting of a sequence of m states as an m-path. We define the probability of path
P to be

Pr{P} = Pr{X_1 = s_1; . . . ; X_{m−1} = s_{m−1} | X_0 = s_0}. (4.1.9)
Proposition 4.1.10. Let T be the Clifford transition matrix associated with an n-
state Markov chain and consider an m-step random walk {w_k}_{1≤k≤m} from state i to
state j, where 1 ≤ m ≤ n. Summing the coefficients of ⟨(C T^m)_{ij}⟩_{2(m+1)} gives the
probability that {w_k} is self-avoiding.
Proof. By Lemma 4.1.8, C T^m is Clifford stochastic for all m ≥ 1. The proof then
proceeds by induction on m. When m = 1 the result holds by definition, so we
assume true for m and consider

⟨(C T^{m+1})_{ij}⟩_{2(m+2)} = ⟨(C T^m T)_{ij}⟩_{2(m+2)} = ⟨ Σ_{k=1}^{n} ⟨(C T^m)_{ik}⟩_{2(m+1)} ⟨T_{kj}⟩_2 ⟩_{2(m+2)}
= Σ_{k=1}^{n} Σ_{s.a. m-walks P_m : i→k} Pr{P_m} · Pr{1-walk k → j | j ∉ P_m}
= Σ_{k=1}^{n} Σ_{s.a. (m+1)-walks P_{m+1} : i→k→j} Pr{P_{m+1}} = Σ_{all s.a. (m+1)-walks P : i→j} Pr{P}. (4.1.10)
Since each term in the sum is a self-avoiding walk whose real coefficient is the
probability of its occurrence, summing the coefficients gives the total probability of
a self-avoiding (m + 1)-step walk i → j.
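Proposition 4.1.10 lends itself to a direct computational check. In the sketch below — a hypothetical 3-state chain, not an example from this work — the coefficient sum of the degree-2(m+1) part of (C T^m)_{ij} is compared against a brute-force enumeration of self-avoiding m-step walks from i to j:

```python
from itertools import product

def mul(u, v):
    # commuting spin operators with s_i^2 = 1: indices combine by symmetric difference
    w = {}
    for a, x in u.items():
        for b, y in v.items():
            k = a ^ b
            w[k] = w.get(k, 0.0) + x * y
    return w

def mat_mul(A, B):
    n = len(A)
    C = [[{} for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for l in range(n):
                for idx, c in mul(A[i][l], B[l][j]).items():
                    C[i][j][idx] = C[i][j].get(idx, 0.0) + c
    return C

M = [[0.0, 0.5, 0.5], [0.3, 0.0, 0.7], [0.6, 0.4, 0.0]]  # hypothetical chain
n, m, i, j = 3, 2, 0, 2

T = [[{frozenset({c}): M[r][c]} if M[r][c] else {} for c in range(n)] for r in range(n)]
C = [[{frozenset({r}): 1.0} if r == c else {} for c in range(n)] for r in range(n)]

P = T
for _ in range(m - 1):
    P = mat_mul(P, T)
CP = mat_mul(C, P)
alg = sum(c for idx, c in CP[i][j].items() if len(idx) == m + 1)

# brute force: total probability of self-avoiding m-step walks i -> j
brute = 0.0
for mid in product(range(n), repeat=m - 1):
    walk = (i,) + mid + (j,)
    p = 1.0
    for a, b in zip(walk, walk[1:]):
        p *= M[a][b]
    if p and len(set(walk)) == len(walk):
        brute += p
print(alg)
```

Here the left multiplication by C contributes the starting state's spin operator, which is why a self-avoiding m-step walk carries degree 2(m + 1) rather than 2m.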
Corollary 4.1.11. Let T be the Clifford transition matrix associated with an n-state
Markov chain and consider an m-step random walk {w_j}_{0≤j≤m}, m ≤ n. Let i ⊂ [n] such
that |i| = m. Then

Pr{ w_j ⊂ i; {w_j} is s.a.; w_m = w_0 | w_0 ∈ i } = (1/m) ⟨tr(T^m) s_i⟩_0. (4.1.11)

In other words, we obtain the conditional probability that {w_k} forms an m-cycle on
i, given w_0 ∈ i, by examining the scalar coefficient of s_i in the trace of T^m.
Proposition 4.1.12. Let T be the Clifford transition matrix for an n-state time-
homogeneous Markov chain. For 0 ≤ m ≤ n and 0 ≤ k ≤ m, we see ⟨T^m⟩_{2k}
corresponds to m-walks that revisit m − k (not necessarily distinct) vertices.

Proof. As we have seen, (T^m)_{ij} represents the probabilities of all m-step walks
i → j. The spin operator corresponding to such a walk has degree 2(m − r), where
r represents the number of vertices revisited. Thus, degree 2k corresponds to m − k
vertices revisited.
Proposition 4.1.13. Let T be the Clifford stochastic matrix associated with a
Markov chain on n states, and consider the collection Ω of walks with initial state
X0 = i. Let H be a random variable taking values in the nonnegative integers such
that H(ω) is the time step at which the walk ω first revisits a state, i.e. the time of
first self-intersection. Then
EH = Σ_{k=2}^{∞} k ( Σ_{j=1}^{n} ( ⟨⟨C T^{k−2}⟩⟩_{2(k−1)} − ⟨⟨C T^{k−1}⟩⟩_{2k} )_{ij} ). (4.1.12)
Proof. We begin by observing that if a path ω is self-avoiding through k − 1 steps,
then either (i) it is still self-avoiding at step k, or (ii) step k is the time of first
self-intersection. We therefore have

Pr{s.a. through k − 1} = Pr{s.a. through k} + Pr{first intersection at step k}. (4.1.13)

From this we obtain

E(H) = Σ_{k=2}^{∞} k ( Pr{s.a. through k − 1 steps} − Pr{s.a. through k steps} ). (4.1.14)
We would now like to consider “hitting times” of a fixed state β in a Markov
chain. Given an n-state Markov chain, consider the Clifford algebra Cℓn,n−1,1 with
squaring rules
e_i^2 = 1 if 1 ≤ i ≤ n, e_i^2 = −1 if n + 1 ≤ i ≤ 2n − 1, and e_{2n}^2 = 0. (4.1.15)

Let the combinatorial spin algebra be constructed over this Clifford algebra. We
then see that for any s_i, s_j ∈ Cℓ_{n,n−1,1} we have s_i s_j = s_j s_i. Moreover, we see

s_i^2 = 1 if i ≤ n − 1, and s_n^2 = 0. (4.1.16)
Proposition 4.1.14 (Markov chain hitting time). Given an n-state Markov
chain M, let one state β be set aside and use the combinatorial spin algebra obeying
(4.1.16) to label the states of M, with β labelled by s_n. Let T denote the Clifford
transition matrix associated with M under this labelling, and consider the collection
Ω of walks with initial state i ≠ β. If H is a random variable taking values in the
nonnegative integers such that H(ω) is the time step at which the walk ω first visits
state β, i.e. the first hitting time, we find

EH = Σ_{k=1}^{∞} k ⟨⟨(T^k)_{iβ}⟩⟩. (4.1.17)
Proof. We observe that we have

(T^k)_{iβ} = Σ_{k-paths p : i→β} Pr{path p} s_{i_1···i_k}, where i_k = n, (4.1.18)

and consequently that

⟨⟨(T^k)_{iβ}⟩⟩ = Σ_{k-paths p : i→β first time visiting β} Pr{path p}. (4.1.19)

The sum is over all k-paths beginning at state i visiting state β for the first time,
for if a path p revisits β, s_n^2 = 0 removes the probability of that path from the sum.
Hence,

E(H) = Σ_{k=1}^{∞} k Σ_{k-paths p : i→β first time visiting β} Pr{path p}, (4.1.20)

from which the result follows.
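A numerical sanity check of the proposition (our own sketch, on a hypothetical 3-state chain): labelling the target state β by a nilpotent spin operator annihilates every walk that revisits β, so the truncated series (4.1.17) should agree with the expected hitting time obtained from the usual first-step equations h_i = 1 + Σ_{j≠β} M_{ij} h_j.

```python
BETA = 2  # the distinguished state beta; s_BETA squares to 0

def mul(u, v):
    w = {}
    for a, x in u.items():
        for b, y in v.items():
            if BETA in a and BETA in b:
                continue  # s_beta^2 = 0: walks revisiting beta are annihilated
            k = a ^ b
            w[k] = w.get(k, 0.0) + x * y
    return w

def mat_mul(A, B):
    n = len(A)
    C = [[{} for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for l in range(n):
                for idx, c in mul(A[i][l], B[l][j]).items():
                    C[i][j][idx] = C[i][j].get(idx, 0.0) + c
    return C

M = [[0.0, 0.5, 0.5], [0.3, 0.0, 0.7], [0.6, 0.4, 0.0]]  # hypothetical chain
n = 3
T = [[{frozenset({c}): M[r][c]} if M[r][c] else {} for c in range(n)] for r in range(n)]

# EH = sum over k of k * <<(T^k)_{i beta}>>, truncated; start state i = 0
EH, P = 0.0, T
for k in range(1, 60):
    EH += k * sum(P[0][BETA].values())
    P = mat_mul(P, T)
print(round(EH, 6))  # 1.764706, matching h_0 = 30/17 from the first-step equations
```

Revisits of states other than β merely lower the degree of a term; only the nilpotent s_β deletes probability mass, which is exactly the first-passage filtering the proof describes.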
4.2 RANDOM WALK ON THE N-DIMENSIONAL HYPERCUBE
We observe that the n-dimensional hypercube Q_n is the Cayley graph of the
combinatorial spin group S_n, yielding a group isomorphism ϕ : S_n → ⊕_{i=1}^{n} Z_2. In
fact, we can define an algebra isomorphism ψ : R ⊗ S_n → R ⊗ ⊕_{i=1}^{n} Z_2, defined by

ψ(a ⊗ s_i) = a ⊗ z_i, (4.2.1)

where z_i can be thought of as a binary n-vector with 1's only in the positions specified
by the multi-index i. In order for ψ to be an algebra isomorphism, we require

ψ( (a ⊗ s_i)(b ⊗ s_j) ) = ab ⊗ (z_i + z_j) (4.2.2)

and

ψ(a ⊗ s_i + b ⊗ s_j) = (a + b) ⊗ z_i if i = j, and a ⊗ z_i + b ⊗ z_j otherwise. (4.2.3)
Each vertex of the hypercube can be uniquely labelled with a length-n binary
string, and we can uniquely associate each of these labels to an element of S_n by
first representing the binary strings as binary vectors in ⊕_{i=1}^{n} Z_2 and then using the
isomorphism ϕ : ⊕_{i=1}^{n} Z_2 → S_n defined by

ϕ(b) = Π_{i=1}^{n} s_i^{χ_b(i)}, (4.2.4)

where χ_b(i) represents the indicator function of the ith component of b.
Example 4.2.1. If b = 100110 we find ϕ(b) = s145.
Please note the following abuse of notation: given qk ∈ R⊗ Sn, we shall write
qk ∈ Qn to mean the vertex of Qn uniquely identified by qk via the isomorphism ϕ.
We now observe that the random walk on the hypercube can be described by

q_{k+1} = q_k s_{Y_k}, (4.2.5)

where q_k ∈ Q_n, ∀k ≥ 0, and {Y_k} is a sequence of independent random variables taking
values in [n] = {1, 2, . . . , n}.
Proposition 4.2.2. Let Y be a random variable taking values in [n] with probabili-
ties p_i = Pr{Y = i} for each 1 ≤ i ≤ n, and let {Y_k}_{k>0} be the sequence of indepen-
dent random variables obtained from repeated observations of Y. Let x_0 ∈ R ⊗ S_n
be any initial probability distribution on the vertices of Q_n. Then for k > 0, the
distribution on Q_n is given by

x_k = x_0 ( Σ_{i=1}^{n} p_i s_i )^k ∈ R ⊗ S_n, ∀k > 0. (4.2.6)
Proof. When k = 0, this is true by hypothesis. When k = 1, we obtain a probability
distribution by Lemma 4.1.5. Let us define the notation p_i(k) as the probability of
being at vertex s_i at time step k. We see that the distribution at time k = 1 is given
by

x_1 = Σ_{i⊂[n]} p_i(1) s_i = Σ_{i⊂[n]} Σ_{j=1}^{n} p_{i△j}(1) s_{i△j} = Σ_{i⊂[n]} Σ_{j=1}^{n} p_i(0) p_j s_i s_j
= Σ_{i⊂[n]} p_i(0) s_i Σ_{j=1}^{n} p_j s_j = x_0 ( Σ_{j=1}^{n} p_j s_j ), (4.2.7)

where △ denotes the symmetric difference of sets. Assuming true for k > 0 and
proceeding by induction, we find
x_{k+1} = Σ_{i⊂[n]} p_i(k + 1) s_i = Σ_{i⊂[n]} Σ_{j=1}^{n} p_{i△j}(k + 1) s_{i△j}
= Σ_{i⊂[n]} p_i(k) s_i Σ_{j=1}^{n} p_j s_j = x_k ( Σ_{j=1}^{n} p_j s_j ) = x_0 ( Σ_{j=1}^{n} p_j s_j )^k ( Σ_{j=1}^{n} p_j s_j )
= x_0 ( Σ_{j=1}^{n} p_j s_j )^{k+1}. (4.2.8)
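The multiplicative update (4.2.6) can be cross-checked against the usual description of the walk as random bit flips of vertex labels. The sketch below (hypothetical flip probabilities, our own code) runs both descriptions for four steps and matches them up through the binary vertex encoding, taken 0-indexed here:

```python
def mul(u, v):
    # product in R (x) S_n: multi-indices combine by symmetric difference
    w = {}
    for a, x in u.items():
        for b, y in v.items():
            k = a ^ b
            w[k] = w.get(k, 0.0) + x * y
    return w

n, steps = 3, 4
p = [0.5, 0.3, 0.2]                  # hypothetical flip-direction probabilities, sum 1
step = {frozenset({i}): p[i] for i in range(n)}
x = {frozenset(): 1.0}               # start at the vertex labelled s_0 = 1
for _ in range(steps):
    x = mul(x, step)                 # x_k = x_0 (sum_i p_i s_i)^k

# direct chain on the 2^n vertices, moving by single bit flips
dist = {0: 1.0}
for _ in range(steps):
    new = {}
    for v, q in dist.items():
        for i in range(n):
            u = v ^ (1 << i)
            new[u] = new.get(u, 0.0) + q * p[i]
    dist = new

def eta(idx):
    # integer vertex label of a multi-index (0-indexed binary encoding)
    return sum(1 << i for i in idx)
```

After an even number of steps only multi-indices of even size carry mass, reflecting the bipartite structure of Q_n.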
We refer to the sequence in (4.2.6) as the vertex distribution sequence associated
with the random walk on the hypercube. Now let η : 2^{[n]} → Z_{≥0} be defined by

η(i) ≡ Σ_{ℓ=1}^{n} 2^{ℓ−1} χ(ℓ ∈ i), (4.2.9)

where i ⊂ [n], η(∅) ≡ 0, and χ is the indicator function.
Corollary 4.2.3. If there exists a stationary probability distribution X ∈ R^{2^n} on
the vertices of Q_n corresponding to the random walk (4.2.5), then

x = lim_{k→∞} x_k = Σ_{i⊂[n]} X_{η(i)} s_i. (4.2.10)
While representing random walks on the hypercube as random walks on the
combinatorial spin algebra is appealing because of the notational convenience, study-
ing self-avoiding random walks on Qn is not straightforward. The construction we
have described presents no way of determining whether or not a walk intersects
itself. To overcome this we construct an auxiliary spin algebra on which we induce a
random walk.
Let V be a real 2^n-dimensional vector space with orthonormal basis
{v_i}_{0≤i≤2^n−1} and let V^* denote its dual. Let L(V) denote the space of linear oper-
ators on V. We shall denote elements of V by |v⟩ and elements of V^* by ⟨v|.
Let us now define a sequence in (R ⊗ S_{2^n}) ⊗ L(V). We begin with the initial
distribution on vertices of Q_n, written as p_i(0) s_i, where i ranges over subsets of [n].
Using this, we define

Ξ_0 = Σ_{i⊂[n]} p_i(0) s_{η(i)+1} ⊗ |v_{η(i)}⟩⟨v_{η(i)}| ∈ (R ⊗ S_{2^n}) ⊗ L(V), (4.2.11)

and

Ψ_1 = Σ_{i⊂[n]} Σ_{j=1}^{n} p_j s_{η(i△j)+1} ⊗ |v_{η(i)}⟩⟨v_{η(i△j)}| ∈ (R ⊗ S_{2^n}) ⊗ L(V). (4.2.12)
Definition 4.2.4. For k > 1, we now define the step distribution sequence by

Ψ_k = (Ψ_1)^k. (4.2.13)

We define the walk distribution sequence by

Ξ_k = Ξ_0 Ψ_k. (4.2.14)
Theorem 4.2.5. Let W be a random variable on the collection

Ω = { m-step random walks s_i → s_t ∈ Q_n }, (4.2.15)

where the random walk is defined as in (4.2.5). Note that by choice of Ω, we have
made the restriction

Ξ_0 ≡ s_{η(i)+1} ⊗ |v_{η(i)}⟩⟨v_{η(i)}|. (4.2.16)

Let W(ω) be defined by

W(ω) = 1 if ω is self-avoiding, and 0 otherwise. (4.2.17)

Then the expectation of W is given by

E{W} ⊗ I = ( 1 ⊗ ⟨v_{η(i)}| ) ⟨⟨Ξ_m⟩⟩_{2(m+1)} ( 1 ⊗ |v_{η(t)}⟩ ). (4.2.18)
Proof. We begin by noting that for all m ≥ 1 and all i, t ⊂ [n] we have

( 1 ⊗ ⟨v_{η(i)}| ) Ξ_m ( 1 ⊗ |v_{η(t)}⟩ ) ∈ R ⊗ S_{2^n}. (4.2.19)

We prove the theorem by showing that ( 1 ⊗ ⟨v_{η(i)}| ) Ξ_m ( 1 ⊗ |v_{η(t)}⟩ ) is a sum of
terms corresponding to all random m-paths s_i → s_t ∈ Q_n with respective probabil-
ities as coefficients. We show that each step appears as a spin operator factor in
each path.
Claim. For k ≥ 1 and s_i, s_t ∈ Q_n,

( 1 ⊗ ⟨v_{η(i)}| ) Ξ_k ( 1 ⊗ |v_{η(t)}⟩ ) = Σ_{P ∈ k-paths s_i→s_t} p_i(0) Pr{P} s_{η(i)+1} Π_{s_ℓ∈P} s_{η(ℓ)+1}. (4.2.20)
Proof of claim. The claim is proved by induction on k. When k = 1, we see
that for any s_i, s_t ∈ Q_n

( 1 ⊗ ⟨v_{η(i)}| ) Ξ_0 Ψ_1 ( 1 ⊗ |v_{η(t)}⟩ ) = p_i(0) Pr{s_i → s_t} s_{η(i)+1} s_{η(t)+1}. (4.2.21)
This is clear from the construction of Ψ_1. Assuming true for k, we find the inductive
step to be

( 1 ⊗ ⟨v_{η(i)}| ) Ξ_0 Ψ_{k+1} ( 1 ⊗ |v_{η(t)}⟩ ) = ( 1 ⊗ ⟨v_{η(i)}| ) Ξ_0 Ψ_k Ψ_1 ( 1 ⊗ |v_{η(t)}⟩ )
= Σ_{s_ℓ∈Q_n, |ℓ△t|=1} ( 1 ⊗ ⟨v_{η(i)}| ) Ξ_0 Ψ_k ( 1 ⊗ |v_{η(ℓ)}⟩ ) ( 1 ⊗ ⟨v_{η(ℓ)}| ) Ψ_1 ( 1 ⊗ |v_{η(t)}⟩ )
= Σ_{s_ℓ∈Q_n, |ℓ△t|=1} Σ_{P ∈ k-paths s_i→s_ℓ} p_i(0) Pr{P} s_{η(i)+1} ( Π_{s_ι∈P} s_{η(ι)+1} ) ( Pr{s_ℓ → s_t} s_{η(t)+1} )
= Σ_{P ∈ (k+1)-paths s_i→s_t} p_i(0) Pr{P} s_{η(i)+1} Π_{s_ι∈P} s_{η(ι)+1}. (4.2.22)
Since Ξ0 contributes 2 to the degree of each term, paths of length-m are self-
avoiding if and only if their corresponding spin operators are of degree 2(m + 1).
Summing coefficients over terms of degree 2(m + 1) then gives the expectation of
W .
As in Proposition 4.1.14, let us turn to the problem of determining the "hitting
time" of a fixed vertex of Q_n. Given the n-dimensional hypercube, consider the
Clifford algebra Cℓ_{2^n, 2^n−1, 1} with squaring rules

e_i^2 = 1 if 1 ≤ i ≤ 2^n, e_i^2 = −1 if 2^n + 1 ≤ i ≤ 2^{n+1} − 1, and e_{2^{n+1}}^2 = 0. (4.2.23)

Let the combinatorial spin algebra be constructed over this Clifford algebra. We
then see that for any s_i, s_j ∈ Cℓ_{2^n, 2^n−1, 1} we have s_i s_j = s_j s_i. Moreover, we see

s_i^2 = 1 if 1 ≤ i ≤ 2^n − 1, and s_{2^n}^2 = 0. (4.2.24)
Proposition 4.2.6 (Expected hitting time). Given the n-dimensional hypercube
Q_n, let one vertex β be set aside and use the combinatorial spin algebra obeying
(4.2.24) to label the vertices of Q_n, with β labelled by s_{2^n}. Consider the collection
Ω of random walks on the hypercube Q_n beginning at vertex i. Let {Ξ_k} be the
associated walk-distribution sequence and let H denote a random variable taking
nonnegative integer values such that H(ω) is the first time ω "hits" vertex β. Then

E{H} ⊗ I = Σ_{k=2}^{∞} ( k ⊗ ⟨v_{η(i)}| ) ⟨⟨Ξ_k⟩⟩ ( 1 ⊗ |v_{2^n}⟩ ). (4.2.25)
Proof. We observe that

( k ⊗ ⟨v_{η(i)}| ) ⟨⟨Ξ_k⟩⟩ ( 1 ⊗ |v_{η(β)}⟩ ) = k ⟨⟨ Σ_{k-paths p : i→β} Pr{k-path p exists} s_{i_1···i_k} ⟩⟩ ⊗ I, (4.2.26)

where i_k = η(β) and the sum is over k-paths from i to β visiting β for the first time,
for if a path p revisits state β, s_{2^n}^2 = 0 removes the probability of that path from
the sum. Summing over k, the result follows immediately.
CHAPTER 5
FUNCTIONS ON PARTITIONS AND THE
GRASSMANN ALGEBRA
5.1 THE GRASSMANN ALGEBRA
Definition 5.1.1. For any n > 0, we define the 2^n-dimensional Grassmann algebra
∧R^n as the algebra generated by basis elements of the form

scalars: e_0 = 1
vectors: e_1, . . . , e_n
bivectors: e_i ∧ e_j, where 0 < i < j ≤ n
...
n-vector: e_1 ∧ e_2 ∧ · · · ∧ e_n (5.1.1)

subject to the multiplication rules

e_i ∧ e_j = −e_j ∧ e_i,
e_1^2 = e_2^2 = · · · = e_n^2 = 0. (5.1.2)
As shorthand, we denote the exterior product e_i ∧ e_j as e_{ij}. Further, we allow
i to represent a multi-index consisting of some subset of [n] = {1, 2, . . . , n}, where
we shall assume i = 0 corresponds to ∅ ⊂ [n].
Definition 5.1.2. Let G_n denote the collection of all products of elements

γ_i = e_i ∧ e_{n+i} ∈ ∧^2 R^{2n} (5.1.3)

for 1 ≤ i ≤ n. We define γ_0 = 1. Since G_n is generated by disjoint bivectors, G_n
forms an abelian group (cf. Lemma 3.1.1). Putting an additive structure on G_n
we obtain the group ring of G_n. Allowing real coefficients we obtain a commutative
sub-algebra of ∧R^{2n}, which we denote R ⊗ G_n and refer to as the Grassmann bivector
algebra.
5.2 THE GRASSMANN ADJACENCY MATRIX
If G = (V , E) is a graph, we say two vertices i, j ∈ V(G) are adjacent if there
exists an edge ij ∈ E(G).
Definition 5.2.1. Let G be a simple graph on k vertices with vertex weights
w_1, . . . , w_k ∈ R, and fix n such that k < 2^n. Let ϕ : [k] → 2^{[n]} be a one-to-one
labelling of the vertices. We define the Grassmann adjacency matrix Γ associated
with G by

Γ_{ij} = w_j γ_{ϕ(j)} if ij ∈ E(G), and 0 otherwise. (5.2.1)

We note w_j γ_{ϕ(j)} ∈ R ⊗ G_n.
Lemma 5.2.2. Let Γ ∈ Mat(k, R ⊗ G_n) be defined as above. Then

Γ^ℓ = 0, ∀ℓ > n. (5.2.2)

Proof. We note that each nonzero entry of Γ^ℓ must have degree at least 2ℓ > 2n,
by construction of the Grassmann adjacency matrix. Since G_n has only n distinct
bivectors and γ_i^2 = 0 for all 1 ≤ i ≤ n, the result follows.
Lemma 5.2.3. Let G be a simple vertex-weighted graph on k vertices and fix n such
that k < 2^n. For 0 < ℓ ≤ k and 1 ≤ i ≤ k, (Γ^ℓ)_{ii} corresponds to the set of ℓ-cycles
based at vertex i in G.

Proof. Proof is by induction on ℓ. When ℓ = 1, the result is true by definition of the
Grassmann adjacency matrix. We assume the result is true for 0 < ℓ < k − 1 and
proceed to the inductive step.

(Γ^{ℓ+1})_{ii} = Σ_{j=1}^{k} (Γ^ℓ)_{ij} Γ_{ji} = Σ_{ℓ-walks i→j} (1-walks j → i). (5.2.3)
5.3 FUNCTIONS ON PARTITIONS
In this section we consider arbitrary group-valued functions on subsets of [n]
and use these to define ring-valued functions on the set of all partitions of [n], which
we denote by P([n]).
We note that a typical element π ∈ P([n]) is a collection of disjoint subsets,
called blocks, whose union is [n]. Given a partition π ∈ P([n]), we denote by |π| the
number of blocks contained in π.
Definition 5.3.1. We denote by {n k} the Stirling numbers of the second kind. These
are defined to be the number of ways a set of n elements can be partitioned into k
nonempty subsets.
Given a multiplicative group G, let f : 2[n] → G be a function on the power set
of [n] with f(∅) = eG, where eG denotes the identity. Let R denote the group ring
of G; that is, the ring generated by G using formal sums with integer coefficients.
Define the function g : P([n]) → R by

g(π) = Π_{b∈π} f(b). (5.3.1)
Here we assume each partition element π ∈ P([n]) is canonically ordered.
Remark 5.3.2. Without altering our construction, f and g can be allowed to take
values in an algebra A, in which case we shall consider the algebra A⊗ Gn.
Fix n > 0 and let K_{2^n−1} denote the complete graph on 2^n − 1 vertices with vertex
labels i ⊂ [n] and vertex weights f(i), with corresponding Grassmann adjacency
matrix Γ_n.
Since we use π to denote a partition of [n], we use σ to denote a permutation
of the blocks in π. In other words, if π ∈ P([n]) is a k-block partition of [n], denoted
|π| = k, then we use σ ∈ Sk.
Theorem 5.3.3. Let 0 < k ≤ n. Then

∫_B tr( (Γ_n)^k ) = Σ_{π∈P([n]), |π|=k} Σ_{σ∈S_k} g(σ(π)), (5.3.2)

where S_k is the symmetric group on k elements; i.e., we sum over all permutations
of each π ∈ P([n]) such that |π| = k.
Proof. By Lemma 5.2.3, ( (Γ_n)^k )_{ii} corresponds to the set of k-cycles based at the ith
vertex in K_{2^n−1}. Terms of this type have degree 2n if and only if:
1. they are indexed by pairwise disjoint sets, and
2. their product is indexed by [n].
Thus they form a k-set partition of [n]. By completeness of K2n−1, each such
product corresponds to a complete subgraph on k vertices and we obtain all permu-
tations of it.
Corollary 5.3.4. Let 0 < k ≤ n, and suppose G is an abelian group so that R is a
commutative ring with identity. Then

∫_B tr( (Γ_n)^k ) = k! Σ_{π∈P([n]), |π|=k} g(π). (5.3.3)
Theorem 5.3.5. If R is a commutative ring with identity, then

∫_B tr( e^{Γ_n} ) = Σ_{π∈P([n])} g(π). (5.3.4)
Proof. Applying Lemma 5.2.2, Corollary 5.3.4, and the linearity of the Berezin in-
tegral, we obtain
∫_B tr( e^{Γ_n} ) = ∫_B Σ_{k=0}^{∞} (1/k!) tr( (Γ_n)^k ) = ∫_B Σ_{k=1}^{n} (1/k!) tr( (Γ_n)^k )
= Σ_{k=1}^{n} (1/k!) ∫_B tr( (Γ_n)^k ) = Σ_{k=1}^{n} Σ_{π∈P([n]), |π|=k} g(π) = Σ_{π∈P([n])} g(π). (5.3.5)
5.4 ADDITIVE WEIGHTS
In the previous section, we defined a function g : P([n]) → R as the product
over blocks of the partition. In order to accommodate additive functions we provide
the following construction.
Notation. We denote by ∆(2^n − 1, R) the algebra of diagonal (2^n − 1) × (2^n − 1)
matrices with entries in R.
Let f : 2^{[n]} → R be an arbitrary R-valued function on the power set of [n]
with f(∅) ≡ 0. Let g : P([n]) → R be defined by

g(π) = Σ_{b∈π} f(b). (5.4.1)
Let ϕ : [2^n − 1] → 2^{[n]} \ {∅} be a one-to-one mapping of integers to multi-indices.
For each i ∈ [2^n − 1], let W_{ϕ(i)} be defined by

W_{ϕ(i)} = I_{2^n−1} + ( f(ϕ(i)) − 1 ) e_i e_i^T, (5.4.2)

where I_{2^n−1} is the (2^n − 1) × (2^n − 1) identity matrix and e_i ∈ R^{2^n−1} is the unit
vector with 1 in the ith coordinate and zeros elsewhere. For each 1 ≤ i ≤ 2^n − 1,
W_{ϕ(i)} ∈ ∆(2^n − 1, R).
We now consider the algebra ∆(2^n − 1, R) ⊗ G_n generated by the disjoint
Grassmann bivectors with diagonal R-valued matrix coefficients. Fix n > 0 and let
K_{2^n−1} denote the complete graph on 2^n − 1 vertices with vertex labels i ⊂ [n] and
vertex weights W_i, with corresponding Grassmann adjacency matrix Γ_n.

Finally let us extend our definition of the Berezin integral to include the diagonal
matrix which serves as the coefficient of the degree-2n part of u ∈ ∆(2^n − 1, R) ⊗ G_n.
In other words,

u = Σ_{i⊂[n]} W_i γ_i ⇒ ∫_B u = W_{[n]} ∈ ∆(2^n − 1, R). (5.4.3)
Theorem 5.4.1. Let 0 < k ≤ n. Then

tr( ∫_B tr( (Γ_n)^k ) ) = k! ( Σ_{π∈P([n]), |π|=k} g(π) + (n − k) {n k} ). (5.4.4)
Proof. By our previously stated results, the nonzero terms of tr( (Γ_n)^k ) correspond
to k-partitions of [n] with multiplicative weights preserved. For each k-partition π,
let M_π be defined as

M_π = Π_{i∈π} W_i. (5.4.5)

In this case, the Berezin integral is

∫_B tr( (Γ_n)^k ) = k! Σ_{|π|=k} M_π, (5.4.6)

i.e., a sum of diagonal matrices M_π whose diagonal elements are values f(b) for blocks
b in a k-partition π. Since each matrix in the sum corresponds to a k-partition of
[n], its trace is of the form

tr(M_π) = Σ_{b∈π} f(b) + (n − k). (5.4.7)
Since n − k appears in the trace of each matrix and one such matrix appears for
each k-partition, we have the result.
5.5 COUNTING PARTITIONS OF [N ]
Let us now assume f, g are Z-valued functions. Let [n] = {1, 2, . . . , n} and
consider the complete graph K_{2^n−1} on 2^n − 1 vertices with unit vertex weights. Let
ϕ : [2^n − 1] → 2^{[n]} \ {∅} be a one-to-one vertex labelling of K_{2^n−1}. Let Γ_n denote the
Grassmann adjacency matrix associated with K_{2^n−1}.
Proposition 5.5.1.

∫_B tr( (Γ_n)^k ) = k! {n k}. (5.5.1)
Proof. Let f ≡ 1, g ≡ 1 in Theorem 5.3.3.
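Proposition 5.5.1 can be verified computationally for small n. The sketch below (our own model, not code from this work) represents the Grassmann bivector algebra as dictionaries from multi-indices to coefficients, with γ_a γ_b = γ_{a∪b} when a ∩ b = ∅ and 0 otherwise; the Berezin integral is read off as the coefficient of γ_{[n]}. (For k = 1 the trace vanishes because a simple complete graph has no loops, so the check starts at k = 2.)

```python
from itertools import combinations
from math import factorial

def gmul(u, v):
    # Grassmann bivector algebra: gamma_a gamma_b = gamma_(a union b) if disjoint
    w = {}
    for a, x in u.items():
        for b, y in v.items():
            if a & b:
                continue  # gamma_i^2 = 0
            k = a | b
            w[k] = w.get(k, 0.0) + x * y
    return w

def mat_mul(A, B):
    m = len(A)
    C = [[{} for _ in range(m)] for _ in range(m)]
    for i in range(m):
        for j in range(m):
            for l in range(m):
                for idx, c in gmul(A[i][l], B[l][j]).items():
                    C[i][j][idx] = C[i][j].get(idx, 0.0) + c
    return C

def stirling2(n, k):
    if k == n:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

n = 3
labels = [frozenset(s) for r in range(1, n + 1)
          for s in combinations(range(n), r)]   # vertex labels: nonempty i subset [n]
m = len(labels)                                 # 2^n - 1 vertices of K_{2^n - 1}
G = [[({labels[j]: 1.0} if i != j else {}) for j in range(m)] for i in range(m)]

full = frozenset(range(n))
results, P = {}, G
for k in range(2, n + 1):
    P = mat_mul(P, G)                           # P = (Gamma_n)^k
    results[k] = sum(P[i][i].get(full, 0.0) for i in range(m))  # Berezin integral
print(results)  # {2: 6.0, 3: 6.0} = {k: k! {n k}}
```

Each top-degree term of the trace is an ordered k-tuple of pairwise disjoint blocks covering [n], which is exactly why the count is k! {n k}.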
Corollary 5.5.2.

∫_B tr( (Γ_n)^k ) = k ( ∫_B tr( (Γ_{n−1})^k ) + ∫_B tr( (Γ_{n−1})^{k−1} ) ). (5.5.2)
Proposition 5.5.3. Given n > 0, let Bn denote the nth Bell number, defined as the
number of ways of partitioning a set of n elements into nonempty subsets. Then
∫_B tr( e^{Γ_n} ) = B_n. (5.5.3)
Proof. Let f ≡ 1 ⇒ g ≡ 1 in Theorem 5.3.5.
Proposition 5.5.4. For n > 0, define the polynomial p_n(t) by

p_n(t) = ∫_B tr( e^{tΓ_n} ). (5.5.4)

Then p_n(t) satisfies the recurrence

p_0(t) = 1, (5.5.5)
p_1(t) = t, (5.5.6)
p_n(t) = t ( p′_{n−1}(t) + p_{n−1}(t) ), for n ≥ 2, (5.5.7)

where p′_{n−1}(t) = (d/dt) p_{n−1}(t).
Proof. Values corresponding to n = 0, n = 1 are found directly. For n ≥ 2, we have

p_n(t) = ∫_B tr( e^{tΓ_n} ) = Σ_{k=1}^{n} t^k {n k} = Σ_{k=1}^{n} t^k ( k {n−1 k} + {n−1 k−1} )
= Σ_{k=1}^{n} k t^k {n−1 k} + Σ_{k=2}^{n} t^k {n−1 k−1}
= t Σ_{k=1}^{n−1} k t^{k−1} {n−1 k} + t Σ_{k=1}^{n−1} t^k {n−1 k}
= t p′_{n−1}(t) + t p_{n−1}(t). (5.5.8)
In the appendix we have computed some examples using Maple. We find, for example,

tr (I − tΓ_3)^{−1} = 7 + (2γ_{23} + 6γ_{123} + 2γ_{12} + 2γ_{13}) t^2 + 6γ_{123} t^3. (5.5.9)
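The Maple computations model the symbols γ_k as commuting nilpotents (squares are killed with algsubs(gamma[k]^2 = 0)). The same computation can be sketched in Python by representing an algebra element as a dictionary from index sets to coefficients, with γ_i γ_j = 0 whenever the index sets overlap; this reproduces the coefficients in (5.5.9) for Γ_3. The implementation below is mine, for illustration only.

```python
from itertools import combinations

# Commuting-nilpotent model of the gamma symbols, as in the Maple
# sessions: an algebra element is a dict mapping frozensets of
# indices to real coefficients.

def mul(u, v):
    out = {}
    for i, a in u.items():
        for j, b in v.items():
            if not (i & j):                  # overlapping indices kill the term
                out[i | j] = out.get(i | j, 0) + a * b
    return out

def add_all(elems):
    out = {}
    for e in elems:
        for i, c in e.items():
            out[i] = out.get(i, 0) + c
    return out

def mat_mul(A, B):
    n = len(A)
    return [[add_all(mul(A[r][t], B[t][c]) for t in range(n))
             for c in range(n)] for r in range(n)]

def trace(A):
    return add_all(A[m][m] for m in range(len(A)))

# Gamma_3: vertices are the 7 nonempty subsets of {1, 2, 3}; the (i, j)
# entry is gamma_j when i and j are disjoint, and 0 otherwise.
V = [frozenset(s) for r in (1, 2, 3) for s in combinations((1, 2, 3), r)]
G = [[({j: 1} if not (i & j) else {}) for j in V] for i in V]

G2 = mat_mul(G, G)
G3 = mat_mul(G2, G)
print(trace(G2))   # 2 on {1,2}, {1,3}, {2,3}; 6 on {1,2,3} -- cf. (5.5.9)
print(trace(G3))   # 6 on {1,2,3}
```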
5.6 AN ALTERNATE CONSTRUCTION
In this section we provide an alternate construction that yields the same results
while reducing the complexity of calculations.
For n > 0, let V = {i : i ⊊ [n], i ≠ ∅}, and let Γ_n denote the graph on 2^n − 2 vertices labelled with the Grassmann multivectors γ_i for i ∈ V and vertex adjacency defined by

γ_i γ_j ∈ E(Γ_n) ⇔ i ∩ j = ∅ (5.6.1)

for all i, j ∈ V. We note that the complete graph on 2^n − 1 vertices of our initial construction is a regular graph of degree 2^n − 2 containing \binom{2^n−1}{2} edges. In contrast, the graph Γ_n is a graph on 2^n − 2 vertices with \binom{n}{k} vertices of degree 2^{n−k} − 1 for each 1 ≤ k ≤ n − 1.
Let Γn denote the Grassmann adjacency matrix associated with Γn. We claim
our previous results hold using this adjacency matrix.
Proposition 5.6.1. Let Γ denote the Grassmann adjacency matrix associated with Γ_n, and let B denote the Grassmann adjacency matrix associated with the complete graph K_{2^n−2} defined as the completion of Γ_n; i.e., Γ_n with added edges making every pair of vertices adjacent. Then for any k ≥ 1 and 1 ≤ m ≤ 2^n − 2,

(Γ^k)_{m,m} = (B^k)_{m,m}. (5.6.2)
Proof. Fix 1 ≤ m ≤ 2^n − 2. Consider the complete graph on 2^n − 2 vertices labelled γ_i, where i ∈ 2^[n] \ {∅, [n]}. Let p_k denote any k-cycle based at vertex ϕ(m) in K_{2^n−2} as represented in (B^k)_{m,m}. In other words, p_k = γ_{i_1 i_2 ··· i_k} where i_1, . . . , i_k are pairwise disjoint subsets of [n].

However, i_1, . . . , i_k pairwise disjoint implies γ_{i_1}γ_{i_2}, γ_{i_2}γ_{i_3}, . . . , γ_{i_{k−1}}γ_{i_k} ∈ E(Γ_n), and so p_k is also a term in (Γ^k)_{m,m}. Since Γ_n is properly contained in K_{2^n−2}, every term in (Γ^k)_{m,m} must also be contained in (B^k)_{m,m}, and the proof is complete.
Figure 5.1. Γ_4. [Diagram: the 14 vertices γ_1, γ_2, γ_3, γ_4, γ_{12}, γ_{13}, γ_{14}, γ_{23}, γ_{24}, γ_{34}, γ_{123}, γ_{124}, γ_{134}, γ_{234}, with edges joining vertices whose index sets are disjoint.]
CHAPTER 6
MAPLE COMPUTATIONS
We conclude Part I with a few examples computed using MAPLE.
Let A be the Clifford adjacency matrix of any finite graph G. We can use the
following identity to count m-cycles of G for any m > 0:
tr (I − tA)^{−1} = ∑_{k=0}^{∞} t^k tr(A^k). (6.0.1)

In particular, we obtain the number of m-cycles of G by examining the coefficient of t^m in the series expansion of tr (I − tA)^{−1}.
> with(linalg):
Example 6.0.2. We begin with the three-dimensional hypercube Q3.

> A:=matrix([[0,s[2],s[3],s[4],0,0,0,0],[s[1],0,0,0,s[5],s[6],0,0],
>   [s[1],0,0,0,0,s[6],s[7],0],[s[1],0,0,0,s[5],0,s[7],0],
>   [0,s[2],0,s[4],0,0,0,s[8]],[0,s[2],s[3],0,0,0,0,s[8]],
>   [0,0,s[3],s[4],0,0,0,s[8]],[0,0,0,0,s[5],s[6],s[7],0]]);
A :=
0 s2 s3 s4 0 0 0 0
s1 0 0 0 s5 s6 0 0
s1 0 0 0 0 s6 s7 0
s1 0 0 0 s5 0 s7 0
0 s2 0 s4 0 0 0 s8
0 s2 s3 0 0 0 0 s8
0 0 s3 s4 0 0 0 s8
0 0 0 0 s5 s6 s7 0
> J:=Matrix(8,shape=identity);
J :=
1 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0
0 0 1 0 0 0 0 0
0 0 0 1 0 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 0 1 0 0
0 0 0 0 0 0 1 0
0 0 0 0 0 0 0 1
> f:=taylor(expand(trace(evalm(inverse(J-t*A)))),t,9):
> for k from 1 to 8 do: f:=algsubs(s[k]^2=0,expand(f));od:
> expand(f);
8 + (2 s7 s3 + 2 s2 s5 + 2 s7 s8 + 2 s1 s4 + 2 s6 s8 + 2 s6 s2 + 2 s6 s3 + 2 s3 s1 + 2 s4 s5 + 2 s5 s8 + 2 s1 s2 + 2 s4 s7) t^2 + (8 s4 s5 s7 s8 + 8 s5 s2 s4 s1 + 8 s1 s4 s3 s7 + 8 s6 s1 s2 s3 + 8 s7 s3 s6 s8 + 8 s6 s2 s5 s8) t^4 + (12 s5 s4 s6 s1 s2 s3 + 12 s6 s1 s3 s5 s2 s8 + 12 s6 s7 s1 s2 s3 s4 + 12 s5 s6 s7 s2 s3 s4 + 12 s5 s4 s3 s2 s1 s7 + 12 s6 s5 s4 s1 s8 s2 + 12 s6 s1 s7 s2 s8 s3 + 12 s6 s7 s8 s5 s2 s3 + 12 s7 s8 s5 s2 s3 s1 + 12 s6 s1 s4 s3 s7 s8 + 12 s7 s8 s5 s3 s4 s1 + 12 s6 s7 s8 s5 s3 s4 + 12 s7 s8 s5 s2 s4 s1 + 12 s6 s7 s8 s5 s2 s4 + 12 s6 s1 s4 s7 s2 s8 + 12 s6 s5 s4 s3 s1 s8) t^6 + 96 s7 s6 s4 s8 s1 s5 s3 s2 t^8 + O(t^10)
Here the term (96 s7 s6 s4 s8 s1 s5 s3 s2) t^8 corresponds to the 96 Hamiltonian circuits contained in Q3.
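The count 96 is easy to confirm by brute force, since Q3 has only 8 vertices. The sketch below (Python, an illustrative check of mine) enumerates all based, directed Hamiltonian circuits of the 3-cube.

```python
from itertools import permutations

# Count the closed walks in the 3-cube Q3 that visit every vertex exactly
# once and return to their starting point; each based, directed Hamilton
# circuit contributes one diagonal term of A^8.

def adjacent(u, v):
    # Vertices of Q3 are 3-bit strings; edges join strings differing
    # in exactly one bit.
    return bin(u ^ v).count("1") == 1

count = 0
for start in range(8):
    for walk in permutations(v for v in range(8) if v != start):
        cycle = (start,) + walk + (start,)
        if all(adjacent(cycle[i], cycle[i + 1]) for i in range(8)):
            count += 1

print(count)   # 96
```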
Example 6.0.3. We now consider partitions of the set {1, 2, 3, 4} using the Grassmann adjacency matrix.
A :=
0 γ2 γ3 γ4 0 0 0 γ2γ3 γ2γ4 γ3γ4 0 0 0 γ2γ3γ4
γ1 0 γ3 γ4 0 γ1γ3 γ1γ4 0 0 γ3γ4 0 0 γ1γ3γ4 0
γ1 γ2 0 γ4 γ1γ2 0 γ1γ4 0 γ2γ4 0 0 γ1γ2γ4 0 0
γ1 γ2 γ3 0 γ1γ2 γ1γ3 0 γ2γ3 0 0 γ1γ2γ3 0 0 0
0 0 γ3 γ4 0 0 0 0 0 γ3γ4 0 0 0 0
0 γ2 0 γ4 0 0 0 0 γ2γ4 0 0 0 0 0
0 γ2 γ3 0 0 0 0 γ2γ3 0 0 0 0 0 0
γ1 0 0 γ4 0 0 γ1γ4 0 0 0 0 0 0 0
γ1 0 γ3 0 0 γ1γ3 0 0 0 0 0 0 0 0
γ1 γ2 0 0 γ1γ2 0 0 0 0 0 0 0 0 0
0 0 0 γ4 0 0 0 0 0 0 0 0 0 0
0 0 γ3 0 0 0 0 0 0 0 0 0 0 0
0 γ2 0 0 0 0 0 0 0 0 0 0 0 0
γ1 0 0 0 0 0 0 0 0 0 0 0 0 0
> J:=Matrix(14,shape=identity):
> f:=taylor(expand(trace(evalm(inverse(J-t*A)))),t,5):
> for k from 1 to 4 do: f:=algsubs(gamma[k]^2=0,expand(f));od:
> expand(f);
14 + (2 γ2 γ4 + 6 γ1 γ2 γ3 + 6 γ2 γ3 γ4 + 14 γ1 γ2 γ3 γ4 + 6 γ1 γ3 γ4 + 2 γ1 γ3 + 2 γ1 γ2 + 6 γ1 γ2 γ4 + 2 γ3 γ4 + 2 γ1 γ4 + 2 γ2 γ3) t^2 + (6 γ1 γ2 γ3 + 6 γ1 γ2 γ4 + 6 γ1 γ3 γ4 + 6 γ2 γ3 γ4 + 36 γ1 γ2 γ3 γ4) t^3 + 24 γ1 γ2 γ3 γ4 t^4 + O(t^5)
Here the term 14 γ1 γ2 γ3 γ4 t^2 corresponds to the 14/2! = 7 ways of partitioning {1, 2, 3, 4} into two nonempty subsets; 36 γ1 γ2 γ3 γ4 t^3 corresponds to the 36/3! = 6 ways of partitioning into three nonempty subsets; and 24 γ1 γ2 γ3 γ4 t^4 corresponds to the 24/4! = 1 way of partitioning into four nonempty subsets.
Example 6.0.4. Five-state Markov chain.
[Diagram: the five-state transition graph; the transition probabilities appear in the matrix T below.]
> T:=matrix([[0,0,.75*s[3],0,.25*s[5]],[.33*s[1],0,.33*s[3],.33*s[4],0],
>   [0,.17*s[2],0,.33*s[4],.5*s[5]],[.25*s[1],0,0,.25*s[4],.5*s[5]],
>   [.5*s[1],.33*s[2],0,0,.17*s[5]]]);
T :=
0 0 .75 s3 0 .25 s5
.33 s1 0 .33 s3 .33 s4 0
0 .17 s2 0 .33 s4 .5 s5
.25 s1 0 0 .25 s4 .5 s5
.5 s1 .33 s2 0 0 .17 s5
> J:=Matrix(5,shape=identity);
J :=
1 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
0 0 0 0 1
> f:=taylor(expand(trace(evalm(inverse(J-t*T)))),t,6):
> for k from 1 to 5 do:f:=algsubs(s[k]^2=1,expand(f));od:
> expand(f);
5.000000000 + (.250000000 s4 + .1700000000 s5) t + (.0914000000 + .2500000000 s1 s5 + .1122000000 s2 s3) t^2 + (.1633500000 s2 s4 s5 + .1633500000 s2 s3 s5 + .5625000000 s1 s3 s5 + .00491300000 s5 + .08167500000 s1 s2 s5 + .1856250000 s1 s3 s4 + .01562500000 s4 + .1262250000 s1 s2 s3 + .06375000000 s1) t^3 + (.04228588000 + .02722500000 s4 s1 s2 s5 + .2475000000 s1 s3 s4 s5 + .07187400000 s2 s3 s4 s5 + .1633500000 s1 s2 s3 s5 + .05445000000 s2 s5 + .01445000000 s1 s5 + .03702600000 s2 s4 + .1893750000 s1 s3 + .04207500000 s4 s1 s2 s3 + .01851300000 s1 s2 + .03702600000 s2 s3) t^4 + (.02488365000 s2 s4 s5 + .05394510000 s3 s4 s5 + .1120741125 s1 s3 s5 + .02869646070 s5 + .06503557500 s1 s2 s5 + .1822528125 s1 s2 s3 s4 s5 + .05662552500 s2 s3 s5 + .02858625000 s2 + .000976562500 s4 + .01487266250 s1 + .05717250000 s4 s1 s2 + .07192968750 s1 s3 s4 + .08189156250 s1 s2 s3 + .01527322500 s2 s3 s4 + .1171875000 s3) t^5 + O(t^6)
The term .04207500000 s4 s1 s2 s3 t^4 gives the conditional probability that a 4-step random walk starting in state w_0 ∈ {1, 2, 3, 4} will satisfy w_i ∈ {1, 2, 3, 4} for 0 ≤ i ≤ 4 and that w_4 = w_0; in this case, .042075/4 = .01051875.

The term .1822528125 s1 s2 s3 s4 s5 t^5 reveals the probability that in 5 steps the process forms a Hamilton circuit: .1822528125/5 = .0364505625.
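This Hamilton-circuit probability can be confirmed directly from the numeric entries of T: summing the products of transition probabilities over all directed Hamilton circuits of the chain reproduces .1822528125/5. The Python below is an illustrative check of mine; the variable names are not from the text.

```python
from itertools import permutations

# P holds the numeric part of the matrix T (the s-symbols stripped):
# P[a][b] is the probability of the transition a -> b.

P = {1: {3: .75, 5: .25},
     2: {1: .33, 3: .33, 4: .33},
     3: {2: .17, 4: .33, 5: .5},
     4: {1: .25, 4: .25, 5: .5},
     5: {1: .5, 2: .33, 5: .17}}

total = 0.0
for mid in permutations((2, 3, 4, 5)):
    cycle = (1,) + mid + (1,)           # a directed Hamilton circuit through 1
    prob = 1.0
    for a, b in zip(cycle, cycle[1:]):
        prob *= P[a].get(b, 0.0)        # missing edges contribute 0
    total += prob

print(round(total, 10))   # 0.0364505625 = .1822528125 / 5
```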
Example 6.0.5. We now consider a directed random graph on 5 vertices with equiprobable edges, p = 1/2. The expected number of Hamilton circuits in such a graph is given by n! p^n = 5! (1/2)^5 = 15/4.
> A:=matrix([[0,.5*s[2],.5*s[3],.5*s[4],.5*s[5]],[.5*s[1],0,.5*s[3],.5*s[4],.5*s[5]],
>   [.5*s[1],.5*s[2],0,.5*s[4],.5*s[5]],[.5*s[1],.5*s[2],.5*s[3],0,.5*s[5]],
>   [.5*s[1],.5*s[2],.5*s[3],.5*s[4],0]]);
A :=
0 .5 s2 .5 s3 .5 s4 .5 s5
.5 s1 0 .5 s3 .5 s4 .5 s5
.5 s1 .5 s2 0 .5 s4 .5 s5
.5 s1 .5 s2 .5 s3 0 .5 s5
.5 s1 .5 s2 .5 s3 .5 s4 0
> J:=Matrix(5,shape=identity);
J :=
1 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
0 0 0 0 1
> f:=taylor(expand(trace(evalm(inverse(J-t*A)))),t,6):
> for k from 1 to 5 do:f:=algsubs(s[k]^2=1,expand(f));od:
> expand(f);
5.000000000 + (.5000000000 s2 s5 + .5000000000 s5 s4 + .5000000000 s1 s3 + .5000000000 s2 s4 + .5000000000 s3 s5 + .5000000000 s2 s3 + .5000000000 s3 s4 + .5000000000 s1 s2 + .5000000000 s1 s5 + .5000000000 s1 s4) t^2 + (.7500000000 s3 s5 s4 + .7500000000 s1 s2 s3 + .7500000000 s2 s3 s4 + .7500000000 s2 s5 s4 + .7500000000 s1 s2 s4 + .7500000000 s2 s3 s5 + .7500000000 s1 s3 s5 + .7500000000 s1 s5 s4 + .7500000000 s1 s2 s5 + .7500000000 s1 s3 s4) t^3 + (.7500000000 s5 s4 + 1.500000000 s1 s2 s3 s5 + .7500000000 s2 s5 + .7500000000 s1 s5 + 1.500000000 s2 s3 s5 s4 + .7500000000 s3 s5 + 1.500000000 s1 s2 s5 s4 + 1.500000000 s1 s3 s5 s4 + .7500000000 s2 s4 + .7500000000 s1 s2 + .7500000000 s1 s4 + 1.500000000 s1 s2 s3 s4 + 1.250000000 + .7500000000 s3 s4 + .7500000000 s1 s3 + .7500000000 s2 s3) t^4 + (1.875000000 s2 s5 s4 + 1.875000000 s1 s2 s5 + 1.875000000 s3 s5 s4 + 1.875000000 s2 s3 s5 + 1.875000000 s1 s3 s5 + 1.875000000 s1 s5 s4 + 1.875000000 s5 + 3.750000000 s1 s2 s3 s5 s4 + 1.875000000 s2 + 1.875000000 s1 s2 s3 + 1.875000000 s1 s2 s4 + 1.875000000 s3 + 1.875000000 s1 s3 s4 + 1.875000000 s2 s3 s4 + 1.875000000 s1 + 1.875000000 s4) t^5 + O(t^6)
The expected number of Hamilton circuits in the graph is given by the real coefficient of the degree-5 term s1 s2 s3 s5 s4 in the coefficient of t^5: 3.750000000.
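The coefficient 3.75 agrees with the elementary computation: each of the 5! based, directed Hamilton circuits of the complete directed graph survives with probability (1/2)^5. A short illustrative check in Python:

```python
from itertools import permutations

# Each permutation of the five vertices is a based, directed Hamilton
# circuit of the complete directed graph on 5 vertices; with every edge
# present independently with probability p = 1/2, a circuit of five
# edges survives with probability p**5.

p = 0.5
circuits = list(permutations(range(5)))   # 5! = 120 based circuits
expected = len(circuits) * p ** 5
print(expected)   # 3.75
```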
Part II
Stochastic Processes on Clifford
Algebras
CHAPTER 7
MORE PROPERTIES OF CLIFFORD ALGEBRAS
In this chapter, we establish general properties of Clifford algebras useful for proving results related to stochastic processes on Clifford algebras. We begin with the expansion of the product ∏_{ℓ=1}^{m} u_ℓ for an arbitrary finite collection {u_ℓ} ⊂ Cℓ_{p,q}.
Given two arbitrary canonically ordered subsets i, j ⊂ [n], let us define the notation

µ(i > j) = ∑_{ℓ=1}^{|j|} ∑_{k=1}^{|i|} χ(i_k > j_ℓ), (7.0.1)

where

χ(i_k > j_ℓ) = 1 if the kth element of i > the ℓth element of j, and 0 otherwise. (7.0.2)
Given j_1, j_2, . . . , j_m ⊂ [n], we define σ_m : (2^[n])^m → {0, 1} by

σ_m(j_1, j_2, . . . , j_m) = (∑_{ℓ=2}^{m} ∑_{k=1}^{ℓ−1} µ(j_k > j_ℓ)) (mod 2). (7.0.3)
Definition 7.0.6. Let p, q ∈ Z with p, q ≥ 0 and p + q = n be arbitrary. Let us define the product signature map φ_m : (2^{[n]})^m → {0, 1} by
$$\varphi_m(j_1, \dots, j_m) = \Bigg( \sigma_m(j_1, \dots, j_m) + \sum_{k=p+1}^{n} \bigg\lfloor \frac{1}{2} \sum_{\ell=1}^{m} |j_\ell \cap \{k\}| \bigg\rfloor \Bigg) \pmod 2. \tag{7.0.4}$$
Notation. For convenience we shall omit the subscript m, with the understanding that φ(j_1, …, j_m) denotes the mapping φ_m : (2^{[n]})^m → {0, 1}, the number of arguments uniquely identifying the mapping.
Lemma 7.0.7. Given u_1, …, u_m ∈ Cℓ_{p,q}, we have
$$\prod_{\ell=1}^{m} u_\ell = \sum_{i \subset [n]} \Bigg( \sum_{\substack{j_1, \dots, j_m \subset [n] \\ j_1 \triangle \cdots \triangle j_m = i}} (-1)^{\varphi(j_1, \dots, j_m)}\, u_{1,j_1} u_{2,j_2} \cdots u_{m,j_m} \Bigg) e_i, \tag{7.0.5}$$
where u_{k,j_k} denotes the real coefficient of e_{j_k} in u_k and △ denotes symmetric difference.
Proof. Since e_i e_j = ±e_{i△j} for arbitrary i, j ⊂ [n], for any fixed i ⊂ [n] the multivector e_i arises exactly from products of blades whose multi-indices have symmetric difference i. Thus in a product of m arbitrary elements of the algebra, the coefficient of e_i must be a sum over all products of coefficients whose multi-indices satisfy this condition.
The sign of each term depends on two things: (i) the order of the indices
and (ii) the number of vectors squaring to −1 in the product. It should be clear
that σ(j1, . . . , jm) takes care of the first part, since we have totalled the number
of transpositions required to canonically order the multi-indices involved in the
product. It should be equally clear that the term
$$\sum_{k=p+1}^{n} \bigg\lfloor \frac{1}{2} \sum_{\ell=1}^{m} |j_\ell \cap \{k\}| \bigg\rfloor \pmod 2 \tag{7.0.6}$$
counts (mod 2) the number of pairwise cancellations that contribute factors of −1 to the product.
One remaining concern is whether cancelling prior to reordering makes a difference in the sign of the product. The answer is "no": since cancellation occurs pairwise, the parity of the number of transpositions required for reordering is unchanged by cancelling prior to regrouping.
Corollary 7.0.8. For u, v ∈ Cℓ_{p,q} we have
$$\langle uv \rangle_0 = \sum_{i \subset [n]} (-1)^{\frac{(|i|-1)|i|}{2} + |i \setminus [p]|}\, u_i v_i. \tag{7.0.7}$$
Proof. This follows immediately from the lemma once we see
$$\sigma(i, i) = \sum_{\ell=1}^{|i|} \sum_{k=1}^{|i|} \chi(i_k > i_\ell) = \sum_{\ell=1}^{|i|} (|i| - \ell) = \frac{(|i|-1)|i|}{2}. \tag{7.0.8}$$
For each u ∈ Cℓ_{p,q}, we define the map τ_u : Cℓ_{p,q} → R by
$$\tau_u(v) = \langle uv \rangle_0. \tag{7.0.9}$$
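The sign computation underlying Lemma 7.0.7 and Corollary 7.0.8 is easy to prototype. The following is a minimal Python sketch (the function name `blade_product` is ours, not the author's): a blade is a frozenset of generator indices, and the sign accumulates one factor of −1 per transposition needed to reorder the product, plus one per cancelling pair of generators with negative square (indices greater than p).

```python
def blade_product(i, j, p, q):
    """e_i e_j = sign * e_(i symmetric-difference j) in Cl(p, q); q is implied by n = p + q."""
    sign = 1
    for b in sorted(j):                                # absorb generators of j one at a time
        sign *= (-1) ** sum(1 for a in i if a > b)     # transpositions past larger indices of i
        if b in i:
            if b > p:                                  # e_b^2 = -1 for a negative-square generator
                sign = -sign
            i = i - {b}                                # the pair e_b e_b cancels
        else:
            i = i | {b}
    return sign, frozenset(i)

# e_12 e_2 = e_1 in Cl(3,0); e_1 e_1 = -1 in Cl(0,1); e_12 e_12 = -1 in Cl(0,2)
assert blade_product(frozenset({1, 2}), frozenset({2}), 3, 0) == (1, frozenset({1}))
assert blade_product(frozenset({1}), frozenset({1}), 0, 1) == (-1, frozenset())
assert blade_product(frozenset({1, 2}), frozenset({1, 2}), 0, 2) == (-1, frozenset())
```

The last assertion matches the sign of Corollary 7.0.8: for i = {1, 2} in Cℓ_{0,2}, (−1)^{|i|(|i|−1)/2 + |i\[p]|} = (−1)^{1+2} = −1.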
7.1 NORMS AND INNER PRODUCTS
Lemma 7.1.1. The map ∥·∥_∞ : Cℓ_{p,q} → R defined by u ↦ max_{i⊂[n]} |u_i| is a norm on Cℓ_{p,q}, which we refer to as the infinity-norm on Cℓ_{p,q}.
Proof. Clearly ∥u∥_∞ = 0 ⇔ u ≡ 0. Given α ∈ R, we have
$$\|\alpha u\|_\infty = \max_{i\subset[n]} |\alpha u_i| = |\alpha| \max_{i\subset[n]} |u_i| = |\alpha|\, \|u\|_\infty. \tag{7.1.1}$$
Letting u, v ∈ Cℓ_{p,q}, we obtain the triangle inequality:
$$\|u+v\|_\infty = \max_{i\subset[n]} |u_i + v_i| \le \max_{i\subset[n]} \big( |u_i| + |v_i| \big) \le \max_{i\subset[n]} |u_i| + \max_{i\subset[n]} |v_i| = \|u\|_\infty + \|v\|_\infty. \tag{7.1.2}$$
Remark 7.1.2. We note that the infinity norm is not multiplicative. For example, letting u = e_1 + e_3 and v = e_{23} + e_{12} in Cℓ_{3,0}, we have ∥u∥_∞ = ∥v∥_∞ = 1, but ∥uv∥_∞ = ∥2e_{123}∥_∞ = 2 ≠ 1 = ∥u∥_∞∥v∥_∞.
Lemma 7.1.3. Given u, v ∈ Cℓ_{p,q}, we have the following inequality:
$$\|uv\|_\infty \le 2^n \|u\|_\infty \|v\|_\infty. \tag{7.1.3}$$
Proof.
$$\begin{aligned} \|uv\|_\infty &= \max_{i\subset[n]} |(uv)_i| = \max_{i\subset[n]} \Big| \sum_{j\subset[n]} \pm u_j v_{i\triangle j} \Big| \le \max_{i\subset[n]} \sum_{j\subset[n]} |u_j|\,|v_{i\triangle j}| \\ &\le \Big( \max_{j\subset[n]} |u_j| \Big) \max_{i\subset[n]} \sum_{j\subset[n]} |v_{i\triangle j}| \le \|u\|_\infty \Big( 2^n \max_{i\subset[n]} |v_i| \Big) = 2^n \|u\|_\infty \|v\|_\infty. \end{aligned} \tag{7.1.4}$$
Lemma 7.1.4. For fixed j ⊂ [n], as i runs through all subsets of [n], i △ j runs through all subsets of [n] exactly once.
Proof. Let j ⊂ [n] be fixed. We show that for arbitrary i ⊂ [n] there exists a unique ℓ ⊂ [n] such that j △ ℓ = i. Denoting set complementation by k′ = [n] \ k, we have
$$\begin{aligned} j \triangle (j \triangle i) &= j \triangle \big( (j \cup i) \cap (j \cap i)' \big) = j \triangle \big( (j \cup i) \cap (j' \cup i') \big) \\ &= \Big( j \cup \big( (j \cup i) \cap (j' \cup i') \big) \Big) \cap \Big( j \cap \big( (j \cup i) \cap (j' \cup i') \big) \Big)' \\ &= \big( (j \cup i) \cap (j \cup j' \cup i') \big) \cap \big( j \cap (j' \cup i') \big)' \\ &= (j \cup i) \cap (j \cap i')' = (j \cup i) \cap (j' \cup i) = i \cup (j \cap j') = i. \end{aligned} \tag{7.1.5}$$
Hence, setting ℓ = j △ i proves existence. To see that this choice of ℓ is unique, let k, ℓ ⊂ [n] and observe that j △ ℓ = j △ k implies j △ (j △ ℓ) = j △ (j △ k), whence ℓ = k.
Remark 7.1.5. This makes (2^{[n]}, △) an abelian group of order 2^n.
The result of Lemma 7.1.4 is used implicitly in proofs throughout the remainder of the current work.
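Lemma 7.1.4 and Remark 7.1.5 can be seen concretely by encoding subsets of [n] as bitmasks, under which symmetric difference becomes bitwise XOR; a small Python sketch:

```python
n = 4
subsets = list(range(2 ** n))        # bitmask b encodes the subset {k : bit k-1 of b is set}
j = 0b0101                           # a fixed subset, here j = {1, 3}

image = sorted(i ^ j for i in subsets)           # i -> i (symmetric difference) j
assert image == subsets                          # every subset is hit exactly once: a bijection
assert all((i ^ j) ^ j == i for i in subsets)    # each element is its own inverse
assert 0 ^ j == j                                # the empty set is the identity
```

In this encoding (2^{[n]}, △) is visibly isomorphic to (Z/2Z)^n.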
Lemma 7.1.6. The map ∥·∥_1 : Cℓ_{p,q} → R defined by u ↦ ∑_{i⊂[n]} |u_i| is a sub-multiplicative norm on Cℓ_{p,q}, which we refer to as the 1-norm on Cℓ_{p,q}.
Proof. Clearly ∥u∥_1 = 0 ⇔ u ≡ 0. Given α ∈ R we have
$$\|\alpha u\|_1 = \sum_{i\subset[n]} |\alpha u_i| = |\alpha| \sum_{i\subset[n]} |u_i| = |\alpha|\, \|u\|_1. \tag{7.1.9}$$
Letting u, v ∈ Cℓ_{p,q}, we obtain the triangle inequality:
$$\|u+v\|_1 = \sum_{i\subset[n]} |u_i + v_i| \le \sum_{i\subset[n]} \big( |u_i| + |v_i| \big) = \|u\|_1 + \|v\|_1. \tag{7.1.10}$$
Finally, we establish sub-multiplicativity of ∥·∥_1:
$$\begin{aligned} \|uv\|_1 &= \sum_{i\subset[n]} |(uv)_i| = \sum_{i\subset[n]} \Big| \sum_{j\subset[n]} \pm u_j v_{i\triangle j} \Big| \le \sum_{i\subset[n]} \sum_{j\subset[n]} |u_j|\,|v_{i\triangle j}| \\ &= \sum_{j\subset[n]} |u_j| \sum_{i\subset[n]} |v_{i\triangle j}| = \sum_{j\subset[n]} |u_j| \sum_{\ell\subset[n]} |v_\ell| = \|u\|_1 \|v\|_1, \end{aligned} \tag{7.1.11}$$
where we have used Lemma 7.1.4.
Notation. For u ∈ Cℓ_{p,q}, we denote the 1-norm of u by
$$|u| = \|u\|_1 = \sum_{i\subset[n]} |u_i|.$$
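The norms just defined, together with the failure of multiplicativity noted in Remark 7.1.2, can be checked numerically. Below is a sketch in which a multivector is stored as a dict mapping frozenset multi-indices to real coefficients, with a small blade multiplier specialized to Cℓ_{3,0} (all helper names are ours):

```python
def blade_mul(i, j):                      # e_i e_j = sign * e_(i XOR j) in Cl(3,0)
    sign = 1
    for b in sorted(j):
        sign *= (-1) ** sum(1 for a in i if a > b)   # count transpositions
        i = i - {b} if b in i else i | {b}           # e_b^2 = +1 here, so pairs just cancel
    return sign, frozenset(i)

def mul(u, v):
    out = {}
    for i, ui in u.items():
        for j, vj in v.items():
            s, k = blade_mul(i, j)
            out[k] = out.get(k, 0) + s * ui * vj
    return {k: c for k, c in out.items() if c}

def norm_inf(u): return max(abs(c) for c in u.values())
def norm_1(u):   return sum(abs(c) for c in u.values())

# Remark 7.1.2: u = e1 + e3, v = e23 + e12 in Cl(3,0) gives uv = 2 e123
u = {frozenset({1}): 1.0, frozenset({3}): 1.0}
v = {frozenset({2, 3}): 1.0, frozenset({1, 2}): 1.0}
uv = mul(u, v)
assert uv == {frozenset({1, 2, 3}): 2.0}
assert norm_inf(uv) == 2.0 > norm_inf(u) * norm_inf(v)   # infinity norm is not multiplicative
assert norm_1(uv) <= norm_1(u) * norm_1(v)               # 1-norm is sub-multiplicative
```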
Lemma 7.1.7. Let u ∈ Cℓ_{p,q} be arbitrary. Then
$$|u| = \sum_{i\subset[n]} |\langle u e_i \rangle_0|. \tag{7.1.12}$$
Proof. By the properties of the Clifford product, we have
$$(u_i e_i)\, e_j = \pm u_i\, e_{i \triangle j} \tag{7.1.13}$$
for any i, j ⊂ [n], where △ denotes symmetric difference. In particular,
$$(u_i e_i)\, e_i = \pm u_i \tag{7.1.14}$$
for all i ⊂ [n], and we observe that this implies
$$\langle u e_i \rangle_0 = \pm u_i \tag{7.1.15}$$
for all i ⊂ [n]. We conclude the proof by noting that for arbitrary u = ∑_{i⊂[n]} u_i e_i ∈ Cℓ_{p,q},
$$|u| = \sum_{i\subset[n]} |u_i| = \sum_{i\subset[n]} |\langle u e_i \rangle_0|. \tag{7.1.16}$$
Definition 7.1.8. Letting u, v ∈ Cℓ_{p,q} be written as u = ∑_{i⊂[n]} u_i e_i and v = ∑_{i⊂[n]} v_i e_i, we define the inner product of u and v by
$$\langle u, v \rangle = \sum_{i\subset[n]} u_i v_i. \tag{7.1.17}$$
Remark 7.1.9. Note that this inner product is symmetric.
We observe that this definition of the inner product implies ⟨u, e_i⟩ = u_i, so that any element u ∈ Cℓ_{p,q} can be expanded as
$$u = \sum_{i\subset[n]} \langle u, e_i \rangle\, e_i. \tag{7.1.18}$$
We also obtain the inner product representation of the Berezin integral for any u ∈ Cℓ_{p,q}:
$$\int_B u = \langle u, e_{[n]} \rangle. \tag{7.1.19}$$
This inner product is induced from the vector space isomorphism of Cℓ_{p,q} with R^{2^n} and hence defines a norm according to
Lemma 7.1.10.
$$\|u\| = \sqrt{\langle u, u \rangle}. \tag{7.1.20}$$
Like the infinity norm, the inner-product norm is not sub-multiplicative. In Cℓ_{1,0}, for example, ∥(1 + e_1)²∥ = √8 > 2 = ∥1 + e_1∥².
Lemma 7.1.11. Given u, v ∈ Cℓ_{p,q}, the inner product norm satisfies the following inequalities:
$$\|uv\| \le 2^{n/2}\, \|u\| \cdot \|v\|, \tag{7.1.21}$$
$$\|uv\| \le \|u\|_1 \|v\|_1, \tag{7.1.22}$$
$$\|uv\| \le 2^{n/2}\, \|u\|_\infty \|v\|_1. \tag{7.1.23}$$
Proof. Consider the product uv = ∑_{i,j⊂[n]} (−1)^{φ(i,j)} u_i v_j e_{i△j}. By definition of the inner product norm, we have
$$\begin{aligned} \|uv\|^2 = \sum_{k\subset[n]} (uv)_k^2 &\le \sum_{k\subset[n]} \Bigg( \sum_{\substack{i,j\subset[n]\\ i\triangle j = k}} |u_i v_j| \Bigg)^2 = \sum_{k\subset[n]} \Bigg( \sum_{j\subset[n]} |u_j v_{j\triangle k}| \Bigg)^2 \\ &\le \sum_{k\subset[n]} \big( \|u\| \cdot \|v\| \big)^2 \quad \text{by the Cauchy–Schwarz inequality and Lemma 7.1.4} \\ &= 2^n \|u\|^2 \|v\|^2. \end{aligned} \tag{7.1.24}$$
For the 1-norm we have
$$\|uv\|^2 = \sum_{k\subset[n]} (uv)_k^2 \le \Bigg( \sum_{i\subset[n]} |u_i| \sum_{j\subset[n]} |v_j| \Bigg)^2 = \big( \|u\|_1 \big)^2 \big( \|v\|_1 \big)^2, \tag{7.1.25}$$
and finally, in terms of the infinity norm and 1-norm,
$$\|uv\|^2 = \sum_{k\subset[n]} \Bigg( \sum_{j\subset[n]} |u_j v_{j\triangle k}| \Bigg)^2 \le \sum_{k\subset[n]} \Bigg( \max_{j\subset[n]} |u_j| \sum_{j\subset[n]} |v_{j\triangle k}| \Bigg)^2 = \sum_{k\subset[n]} \big( \|u\|_\infty \|v\|_1 \big)^2 = 2^n \big( \|u\|_\infty \big)^2 \big( \|v\|_1 \big)^2. \tag{7.1.26}$$
Definition 7.1.12. For every u ∈ Cℓ_{p,q}, we can define the map τ*_u : Cℓ_{p,q} → R by
$$\tau^\star_u(v) \equiv \sum_{i\subset[n]} (-1)^{\varphi(i,\, [n]\setminus i)}\, u_i\, v_{[n]\setminus i}. \tag{7.1.27}$$
Lemma 7.1.13. For arbitrary u, v ∈ Cℓ_{p,q}, we have
$$\tau^\star_u(v) = \int_B uv. \tag{7.1.28}$$
Proof. Given u, v ∈ Cℓ_{p,q}, we have
$$\Big( \int_B uv \Big)\, e_{[n]} = \sum_{i\subset[n]} u_i e_i\, v_{[n]\setminus i}\, e_{[n]\setminus i} = \sum_{i\subset[n]} (-1)^{\varphi(i,\, [n]\setminus i)}\, u_i\, v_{[n]\setminus i}\, e_{[n]} = \tau^\star_u(v)\, e_{[n]}. \tag{7.1.29}$$
Lemma 7.1.14. Let u ∈ Cℓ_{p,q} and i ⊂ [n]. The coefficient of e_i in u is
$$u_i = (-1)^{|i \setminus [p]|}\, \langle u\, \tilde{e}_i \rangle_0, \tag{7.1.30}$$
where $\tilde{e}_i = e_{i_{|i|}} \cdots e_{i_2} e_{i_1}$ denotes the reversion of e_i.
Proof.
$$u\, \tilde{e}_i = \sum_{j\subset[n]} u_j\, e_j\, \tilde{e}_i \;\Rightarrow\; \langle u\, \tilde{e}_i \rangle_0 = u_i\, e_i\, \tilde{e}_i = u_i\, e_{i_1} e_{i_2} \cdots e_{i_{|i|}}\, e_{i_{|i|}} \cdots e_{i_2} e_{i_1} = (-1)^\alpha u_i, \tag{7.1.31}$$
where α = |i \ [p]| is the number of vectors among e_{i_1}, …, e_{i_{|i|}} squaring to −1.
Corollary 7.1.15. For any u ∈ Cℓ_{p,q}, we have
$$\langle u, e_i \rangle = \langle e_i, u \rangle = (-1)^{|i\setminus[p]|}\, \langle u\, \tilde{e}_i \rangle_0. \tag{7.1.32}$$
Corollary 7.1.16. Let u, v ∈ Cℓ_{n,0}. Then
$$uv = \sum_{i,j\subset[n]} (-1)^{\sigma(i,j)}\, u_i v_j\, e_{i\triangle j}. \tag{7.1.33}$$
Corollary 7.1.17. For u, v ∈ Cℓ_{p,q}, we can write uv = (uv)^+ − (uv)^−, where
$$(uv)^+ = \sum_{\substack{i,j\subset[n]\\ \varphi(i,j)\equiv 0 \ (\mathrm{mod}\ 2)}} u_i v_j\, e_{i\triangle j} \tag{7.1.34}$$
and
$$(uv)^- = \sum_{\substack{i,j\subset[n]\\ \varphi(i,j)\equiv 1 \ (\mathrm{mod}\ 2)}} u_i v_j\, e_{i\triangle j}. \tag{7.1.35}$$
Lemma 7.1.18. Let u, v ∈ Cℓ_{n,0} be written as in Definition 7.1.8. Then
$$\tau_u(\tilde{v}) = \langle u, v \rangle, \tag{7.1.36}$$
where $\tilde{v}$ denotes the reversion of v.
Proof. In Cℓ_{n,0} we observe that
$$e_i\, \tilde{e}_i = e_{i_1 i_2 \cdots i_{k-1} i_k}\, e_{i_k i_{k-1} \cdots i_2 i_1} = 1, \tag{7.1.37}$$
from which it follows that
$$\tau_u(\tilde{v}) \equiv \langle u \tilde{v} \rangle_0 = \sum_{i\subset[n]} u_i v_i\, e_i\, \tilde{e}_i = \sum_{i\subset[n]} u_i v_i \equiv \langle u, v \rangle. \tag{7.1.38}$$
7.2 COMPUTING POWERS VIA GENERATING FUNCTIONS
Given u ∈ Cℓ_{p,q}, we wish to obtain a method of expressing u^m for m > 1. To this end, we consider the expansion of (1 − tu)^{−1} as a power series in t with Clifford-algebraic coefficients.
It is clear that (1 − tu)(1 + tu + t²u² + t³u³ + ···) = 1, provided the sum 1 + tu + t²u² + ··· converges. Then u^m is the Clifford-algebraic coefficient of t^m in the series expansion of (1 − tu)^{−1}.
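In the special case Cℓ_{0,1} ≅ C (identify e_1 with the imaginary unit), the partial sums 1 + tu + t²u² + ⋯ and the closed form (1 − tu)^{−1} can be compared directly; a quick numeric sketch:

```python
# Cl(0,1) ~ C: e1 <-> 1j. Here ||u|| = 0.5 < 1 and |t| = 0.5 < 2^{-1/2},
# inside the sufficient region of Lemma 7.2.1 below for n = 1.
u = 0.3 + 0.4j
t = 0.5
partial = sum((t * u) ** m for m in range(60))   # truncated Neumann series
exact = 1 / (1 - t * u)
assert abs(partial - exact) < 1e-12
```

The coefficient of t^m in the expansion is exactly u^m, which is the device exploited in the remainder of this section.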
Lemma 7.2.1. A sufficient condition for invertibility of (1 − tu) is
$$|t| < \begin{cases} \big( 2^{n/2} \|u\|^2 \big)^{-1}, & \text{if } \|u\| > 1,\\ 2^{-n/2}, & \text{if } \|u\| < 1,\\ 2^{-n}, & \text{if } \|u\| = 1. \end{cases} \tag{7.2.1}$$
The norm used here is the inner-product norm.
Proof. We prove the lemma by showing convergence of 1 + tu + t²u² + ···. First assume ∥u∥ > 1. Then, applying Lemma 7.1.11 repeatedly to obtain ∥u^m∥ ≤ 2^{n(m−1)/2}∥u∥^m,
$$\Big\| \sum_{m=0}^{\infty} (tu)^m \Big\| \le \sum_{m=0}^{\infty} |t|^m\, \|u^m\| \le 1 + \sum_{m=1}^{\infty} |t|^m\, 2^{n(m-1)/2}\, \|u\|^m.$$
Now choosing |t| < (2^{n/2}∥u∥²)^{−1},
$$\le 1 + \sum_{m=1}^{\infty} \big( 2^{nm/2} \|u\|^{2m} \big)^{-1}\, 2^{n(m-1)/2}\, \|u\|^m = 1 + 2^{-n/2} \sum_{m=1}^{\infty} \|u\|^{-m} = 1 + \frac{2^{-n/2}}{\|u\| - 1}, \tag{7.2.2}$$
which converges since ∥u∥ > 1. On the other hand, if ∥u∥ < 1, we have
$$\Big\| \sum_{m=0}^{\infty} (tu)^m \Big\| \le 1 + \sum_{m=1}^{\infty} 2^{n(m-1)/2}\, |t|^m\, \|u\|^m;$$
choosing |t| < 2^{−n/2},
$$\le 1 + \sum_{m=1}^{\infty} 2^{-nm/2}\, 2^{n(m-1)/2}\, \|u\|^m = 1 + 2^{-n/2} \sum_{m=1}^{\infty} \|u\|^m = 1 + \frac{2^{-n/2}\, \|u\|}{1 - \|u\|}, \tag{7.2.3}$$
which converges since ∥u∥ < 1. Finally, consider the case ∥u∥ = 1. Then
$$\Big\| \sum_{m=0}^{\infty} (tu)^m \Big\| \le 1 + \sum_{m=1}^{\infty} 2^{n(m-1)/2}\, |t|^m;$$
choosing |t| < 2^{−n},
$$\le 1 + \sum_{m=1}^{\infty} 2^{-nm}\, 2^{n(m-1)/2} = 1 + 2^{-n/2} \sum_{m=1}^{\infty} \big( 2^{-n/2} \big)^m, \tag{7.2.4}$$
which converges.
Throughout the remainder of this section, we assume that (1 − tu)^{−1} exists.
For any i ⊂ [n] and any non-negative integer k_i, let us define the notation
$$\chi(k_i > 0)\, i = \begin{cases} i, & \text{if } k_i > 0,\\ \emptyset, & \text{otherwise.} \end{cases} \tag{7.2.5}$$
Further, we define the map ϕ : (N_0)^{2^n} → N_0 by
$$\phi(k_\emptyset, \dots, k_{[n]}) = \varphi\big( \chi(k_{\{1\}} > 0)\{1\},\ \chi(k_{\{1,2\}} > 0)\{1,2\},\ \dots,\ \chi(k_{[n]} > 0)[n] \big), \tag{7.2.6}$$
where φ is the product signature map defined by (7.0.4).
Lemma 7.2.2.
$$\Bigg( 1 - t \sum_{i\subset[n]} \alpha_i e_i \Bigg)^{-1} = \sum_{m=0}^{\infty} t^m \sum_{\substack{(k_\emptyset, \dots, k_{[n]}) \in \mathbb{N}_0^{2^n}\\ k_\emptyset + \cdots + k_{[n]} = m}} (-1)^{\phi(k_\emptyset, \dots, k_{[n]})} \prod_{i\subset[n]} \alpha_i^{k_i}\, e_i^{k_i \ (\mathrm{mod}\ 2)}, \tag{7.2.7}$$
where on the right-hand side the inner sum is taken over points in the intersection of the hyperplane {(x_1, …, x_{2^n}) : x_1 + ··· + x_{2^n} = m} ⊂ R^{2^n} with the 2^n-dimensional lattice of nonnegative integers N_0^{2^n}.
Proof. From the series expansion of (1 − t∑_{i⊂[n]} α_i e_i)^{−1}, we obtain
$$\Bigg( 1 - t \sum_{i\subset[n]} \alpha_i e_i \Bigg)^{-1} = \sum_{m=0}^{\infty} t^m \Bigg( \sum_{i\subset[n]} \alpha_i e_i \Bigg)^m. \tag{7.2.8}$$
From the standard (commutative) multinomial theorem, we see
$$(a_1 + a_2 + \cdots + a_\ell)^m = \sum_{\substack{k_1 + \cdots + k_\ell = m\\ 0 \le k_1, k_2, \dots, k_\ell \in \mathbb{Z}}} \binom{m}{k_1, \dots, k_\ell}\, a_1^{k_1} a_2^{k_2} \cdots a_\ell^{k_\ell}. \tag{7.2.9}$$
From this we obtain
$$\Bigg( \sum_{i\subset[n]} \alpha_i e_i \Bigg)^m = \sum_{\substack{(k_\emptyset, \dots, k_{[n]}) \in \mathbb{N}_0^{2^n}\\ k_\emptyset + \cdots + k_{[n]} = m}} \pm \prod_{i\subset[n]} \alpha_i^{k_i}\, e_i^{k_i}. \tag{7.2.10}$$
By properties of the Clifford product, each term can only be of the form ±∏_{i⊂[n]} α_i^{k_i} e_i^{k_i (mod 2)}, and the sign varies according to signature and order of multiplication. Since e_i occurs in the product only when k_i > 0, the result follows by applying the product signature map to the multi-indices meeting this criterion.
We now turn our attention to linear mappings Cℓ_{p,q} → Cℓ_{p,q}. By lexicographically ordering the subsets of [n] and enumerating them with 0, …, 2^n − 1, we can represent an arbitrary element u ∈ Cℓ_{p,q} as a vector u ∈ R^{2^n} in the following way:
$$u = \sum_{i\subset[n]} u_i e_i \;\mapsto\; \mathbf{u} = \sum_{i\subset[n]} u_i\, \mathbf{e}_{f(i)}, \tag{7.2.11}$$
where {e_i}_{0≤i≤2^n−1} is the standard orthonormal basis for R^{2^n} and f : 2^{[n]} → {0, 1, …, 2^n − 1} is the enumeration of the lexicographically ordered collection of subsets of [n].
Representing elements of the Clifford algebra in this way, we can define linear operators on Cℓ_{p,q} as 2^n × 2^n matrices with real-valued entries. Let L(Cℓ_{p,q}) denote the space of linear operators on the Clifford algebra. Given A ∈ L(Cℓ_{p,q}), we can represent u′ = Au as
$$u' = Au = \sum_{i\subset[n]} \Bigg( \sum_{j\subset[n]} a_{ij} u_j \Bigg) e_i. \tag{7.2.12}$$
Here (a_{ij}) ∈ GL(2^n, R) is indexed by lexicographically ordered subsets of [n].
Theorem 7.2.3. Let A ∈ L(Cℓ_{p,q}) be represented by (a_{ij}) ∈ GL(2^n, R). Then
$$(1 - tAu)^{-1} = \sum_{m=0}^{\infty} t^m \sum_{\substack{(k_\emptyset, \dots, k_{[n]}) \in \mathbb{N}_0^{2^n}\\ k_\emptyset + \cdots + k_{[n]} = m}} (-1)^{\phi(k_\emptyset, \dots, k_{[n]})} \prod_{i\subset[n]} \Bigg( \sum_{\substack{\ell_\emptyset + \cdots + \ell_{[n]} = k_i\\ 0 \le \ell_\emptyset, \dots, \ell_{[n]} \in \mathbb{Z}}} \binom{k_i}{\ell_\emptyset, \dots, \ell_{[n]}} \prod_{j\subset[n]} \big( a_{ij} u_j \big)^{\ell_j} \Bigg) e_i^{k_i \ (\mathrm{mod}\ 2)}. \tag{7.2.13}$$
Proof. We apply Lemma 7.2.2, letting α_i = ∑_{j⊂[n]} a_{ij} u_j in (7.2.7), to get
$$\Bigg( 1 - t \sum_{i\subset[n]} \Bigg( \sum_{j\subset[n]} a_{ij} u_j \Bigg) e_i \Bigg)^{-1} = \sum_{m=0}^{\infty} t^m \sum_{\substack{(k_\emptyset, \dots, k_{[n]}) \in \mathbb{N}_0^{2^n}\\ k_\emptyset + \cdots + k_{[n]} = m}} (-1)^{\phi(k_\emptyset, \dots, k_{[n]})} \prod_{i\subset[n]} \Bigg( \sum_{j\subset[n]} a_{ij} u_j \Bigg)^{k_i} e_i^{k_i \ (\mathrm{mod}\ 2)}. \tag{7.2.14}$$
Whence we observe
$$\Bigg( \sum_{j\subset[n]} a_{ij} u_j \Bigg)^{k_i} = \sum_{\substack{\ell_\emptyset + \cdots + \ell_{[n]} = k_i\\ \ell_\emptyset, \dots, \ell_{[n]} \in \mathbb{N}_0}} \binom{k_i}{\ell_\emptyset, \dots, \ell_{[n]}} \prod_{j\subset[n]} \big( a_{ij} u_j \big)^{\ell_j}. \tag{7.2.15}$$
Alternatively, we can consider linear transformations on the underlying vector space of Cℓ_{p,q}. Let V be an n-dimensional vector space with orthonormal basis {e_i}_{1≤i≤n}, and let us generate the Clifford algebra Cℓ_{p,q}(V).
Theorem 7.2.4. For any A ∈ GL_n(R), let the set {f_i} be defined by
$$f_i = \sum_{j=1}^{n} A_{ij}\, e_j \in \mathcal{C}\ell_{p,q}(V). \tag{7.2.16}$$
Then the vectors f_i satisfy
$$\{f_i, f_j\} = 2 \sum_{\ell=1}^{p} A_{i\ell} A_{j\ell} - 2 \sum_{k=p+1}^{n} A_{ik} A_{jk}, \tag{7.2.17}$$
$$[f_i, f_j] = 2 \sum_{1 \le \ell < k \le n} \big( A_{i\ell} A_{jk} - A_{ik} A_{j\ell} \big)\, e_{\ell k}, \tag{7.2.18}$$
$$(f_i)^m = \Bigg( \sum_{\ell=1}^{p} A_{i\ell}^2 - \sum_{k=p+1}^{n} A_{ik}^2 \Bigg)^{\lfloor m/2 \rfloor} f_i^{m \ (\mathrm{mod}\ 2)}, \quad \forall m \ge 0. \tag{7.2.19}$$
Proof. Let f_i = ∑_{ℓ=1}^{n} A_{iℓ} e_ℓ for 1 ≤ i ≤ n. Then
$$f_i f_j = \Bigg( \sum_{\ell=1}^{n} A_{i\ell} e_\ell \Bigg) \Bigg( \sum_{k=1}^{n} A_{jk} e_k \Bigg) = \sum_{1 \le \ell < k \le n} \big( A_{i\ell} A_{jk} - A_{ik} A_{j\ell} \big)\, e_{\ell k} + \sum_{\ell=1}^{n} A_{i\ell} A_{j\ell}\, e_\ell^2 = \sum_{1 \le \ell < k \le n} \big( A_{i\ell} A_{jk} - A_{ik} A_{j\ell} \big)\, e_{\ell k} + \sum_{\ell=1}^{p} A_{i\ell} A_{j\ell} - \sum_{k=p+1}^{n} A_{ik} A_{jk}. \tag{7.2.20}$$
The anti-commutation relation is given by
$$\{f_i, f_j\} = f_i f_j + f_j f_i = \sum_{1 \le \ell < k \le n} \Big( \big( A_{i\ell} A_{jk} - A_{ik} A_{j\ell} \big) + \big( A_{j\ell} A_{ik} - A_{jk} A_{i\ell} \big) \Big)\, e_{\ell k} + 2 \sum_{\ell=1}^{n} A_{i\ell} A_{j\ell}\, e_\ell^2 = 2 \Bigg( \sum_{\ell=1}^{p} A_{i\ell} A_{j\ell} - \sum_{k=p+1}^{n} A_{ik} A_{jk} \Bigg), \tag{7.2.21}$$
since the bivector terms cancel pairwise. The commutator is given by
$$[f_i, f_j] = f_i f_j - f_j f_i = 2 \sum_{1 \le \ell < k \le n} \big( A_{i\ell} A_{jk} - A_{ik} A_{j\ell} \big)\, e_{\ell k}, \tag{7.2.22}$$
since here the scalar terms cancel.
The form of f_i^m is obtained by induction on m. When m = 2 we observe
$$f_i^2 = \sum_{1 \le \ell < k \le n} \big( A_{i\ell} A_{ik} - A_{ik} A_{i\ell} \big)\, e_{\ell k} + \sum_{\ell=1}^{n} A_{i\ell}^2\, e_\ell^2 = \sum_{\ell=1}^{p} A_{i\ell}^2 - \sum_{k=p+1}^{n} A_{ik}^2 \in \mathbb{R}. \tag{7.2.23}$$
Assuming (7.2.19) holds for m, we see
$$f_i^{m+1} = \Bigg( \sum_{\ell=1}^{p} A_{i\ell}^2 - \sum_{k=p+1}^{n} A_{ik}^2 \Bigg)^{\lfloor m/2 \rfloor} f_i^{m \ (\mathrm{mod}\ 2)} \cdot f_i = \begin{cases} \Big( \sum_{\ell=1}^{p} A_{i\ell}^2 - \sum_{k=p+1}^{n} A_{ik}^2 \Big)^{\lfloor m/2 \rfloor + 1}, & \text{if } m \text{ is odd},\\[4pt] \Big( \sum_{\ell=1}^{p} A_{i\ell}^2 - \sum_{k=p+1}^{n} A_{ik}^2 \Big)^{\lfloor m/2 \rfloor} f_i, & \text{if } m \text{ is even}, \end{cases} = \Bigg( \sum_{\ell=1}^{p} A_{i\ell}^2 - \sum_{k=p+1}^{n} A_{ik}^2 \Bigg)^{\lfloor (m+1)/2 \rfloor} f_i^{(m+1) \ (\mathrm{mod}\ 2)}. \tag{7.2.24}$$
Corollary 7.2.5.
$$(1 - t f_i)^{-1} = \frac{1 + t f_i}{1 - t^2 \Big( \sum_{\ell=1}^{p} A_{i\ell}^2 - \sum_{k=p+1}^{n} A_{ik}^2 \Big)}. \tag{7.2.25}$$
CHAPTER 8
CLIFFORD-ALGEBRAIC RANDOM VARIABLES AND
MARKOV CHAINS
8.1 RANDOM VARIABLES
We recall from Definition 4.1.1 that a probability space is a triple (Ω,F , Pr)
where Ω is an arbitrary set, F is a σ-algebra of subsets of Ω, and Pr : F → [0, 1] is
a probability measure. We now recall some other useful definitions from probability
theory.
Definition 8.1.1. A measurable space is a pair (S, E), where S is an abstract set and E is a collection of subsets of S satisfying
1. S ∈ E,
2. E ∈ E ⇒ S − E ∈ E,
3. E_i ∈ E, i = 1, 2, … ⇒ ⋃_{i=1}^{∞} E_i ∈ E.
In other words, E is a σ-algebra.
Definition 8.1.2. Let B denote the Borel σ-algebra on R. A set of random variables X_1(ω), X_2(ω), …, X_n(ω) is said to form an independent system if for any B_1, B_2, …, B_n ∈ B we have
$$\Pr\{X_1 \in B_1,\, X_2 \in B_2,\, \dots,\, X_n \in B_n\} = \prod_{k=1}^{n} \Pr\{X_k \in B_k\}. \tag{8.1.1}$$
Definition 8.1.3. A real-valued stochastic process is a family of random variables {X(t, ω) : t ∈ R_+} for which the function X(t, ω) : R_+ × Ω → R is measurable. If X(t, ω) ∈ L²(Ω) for all t ∈ R_+, then we say that X(t, ω) is an L²-stochastic process.
Definition 8.1.4. X(t) is a Lévy process if for any set of disjoint intervals I_1, I_2, …, I_k the increments {X(I_1), X(I_2), …, X(I_k)} form an independent system. Such processes are also said to have independent increments.
Definition 8.1.5. For fixed p, q ≥ 0 with n = p + q, let e_1, …, e_n denote the basis vectors of the 2^n-dimensional Clifford algebra Cℓ_{p,q}, and assume standard notation for basis bivectors, trivectors, etc. Let (Ω, F, Pr) be a fixed probability space, and let the ξ_i(ω) be real-valued random variables with finite expectation. Then
$$\Xi(\omega) = \sum_{i\subset[n]} \xi_i(\omega)\, e_i \tag{8.1.2}$$
will be referred to as a Clifford-algebraic random variable with expectation
$$E(\Xi) = \sum_{i\subset[n]} E(\xi_i)\, e_i. \tag{8.1.3}$$
The following special cases hold when we consider the Clifford algebra Cℓ_{0,n}. The case n = 0 corresponds to the real-valued random variable
$$\Xi(\omega) = \xi_0(\omega). \tag{8.1.4}$$
The case n = 1 corresponds to the complex random variable
$$\Xi(\omega) = \xi_0(\omega) + \xi_1(\omega)\, e_1. \tag{8.1.5}$$
The case n = 2 yields the quaternionic random variable
$$\Xi(\omega) = \xi_0(\omega) + \xi_1(\omega)\, e_1 + \xi_2(\omega)\, e_2 + \xi_3(\omega)\, e_{12}. \tag{8.1.6}$$
Remark 8.1.6. Letting the ξi be Hermitian operators on a Hilbert space, we obtain
a Clifford-algebraic quantum random variable.
8.1.1 Discrete Clifford-Algebraic Random Variables
In this section we develop sample spaces of Clifford-algebraic random variables.
We define the Clifford-algebraic expectation and kth moments over these sample
spaces.
Let (Ω, F, Pr) be a probability space with finite discrete random variables {ξ_i}_{i⊂[n]}, and let Ξ(ω) = ξ_0(ω) + ξ_1(ω) e_1 + ··· + ξ_{1,2,…,n}(ω) e_{12···n} be a Clifford-algebraic random variable on Ω. Then it is easy to see that Ξ(ω) is a discrete random variable taking only finitely many values.
Let E_Ξ denote the range, or event space, of Ξ. Then, letting |·| denote the cardinality of the range of each component, we see
$$|E_\Xi| = |\xi_0| \times |\xi_1| \times \cdots \times |\xi_{1,2,\dots,n}| = \prod_{i\subset[n]} |\xi_i|. \tag{8.1.7}$$
Let us denote the elementary outcomes of E_Ξ by ζ_k ∈ E_Ξ ⊂ Cℓ_{p,q}, where 1 ≤ k ≤ |E_Ξ|.
Definition 8.1.7. Given a Clifford-algebraic random variable Ξ(ω) with associated event space E_Ξ and elementary outcomes ζ_k, the expectation of Ξ is given by
$$E(\Xi) = \sum_{k=1}^{|E_\Xi|} \zeta_k \Pr\{\Xi = \zeta_k\} = \sum_{i\subset[n]} E(\xi_i)\, e_i. \tag{8.1.8}$$
The special cases n = 0, 1, 2 lead to some familiar results.
Example 8.1.8 (Quaternionic expectation). Let n = 2 and consider the Clifford random variable Ξ(ω) = ξ_0 + ξ_1 e_1 + ξ_2 e_2 + ξ_3 e_{12}, where e_1, e_2, e_{12} ∈ Cℓ_{0,2} ≅ H and ξ_i, 0 ≤ i ≤ 3, are Bernoulli random variables described by:
1. ξ_0 takes values 0, 1 with probabilities 1/4, 3/4;
2. ξ_1 takes values 0, 1 with probabilities 1/8, 7/8;
3. ξ_2 takes values 0, 1 with probabilities 1/3, 2/3;
4. ξ_3 takes values 0, 1 with probabilities 3/5, 2/5.
We see that the event space of Ξ consists of 16 outcomes:
$$E_\Xi = \big\{ 0,\ 1,\ e_1,\ e_2,\ e_{12},\ 1+e_1,\ 1+e_2,\ 1+e_{12},\ e_1+e_2,\ e_1+e_{12},\ e_2+e_{12},\ 1+e_1+e_2,\ 1+e_1+e_{12},\ 1+e_2+e_{12},\ e_1+e_2+e_{12},\ 1+e_1+e_2+e_{12} \big\}. \tag{8.1.9}$$
A straightforward calculation yields the expectation:
$$E(\Xi) = \sum_{k=1}^{16} \zeta_k \Pr\{\Xi = \zeta_k\} = \frac{1}{480}\big( 360 + 420 e_1 + 320 e_2 + 192 e_{12} \big) = \frac{3}{4} + \frac{7}{8} e_1 + \frac{2}{3} e_2 + \frac{2}{5} e_{12}. \tag{8.1.10}$$
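The arithmetic of Example 8.1.8 can be verified by enumerating all 16 outcomes; a Python sketch using exact fractions:

```python
from fractions import Fraction as F
from itertools import product

# P(xi_i = 1) for the four Bernoulli components (xi_0, xi_1, xi_2, xi_3)
p = [F(3, 4), F(7, 8), F(2, 3), F(2, 5)]

E = [F(0)] * 4
for bits in product((0, 1), repeat=4):       # the 16 elementary outcomes
    w = F(1)
    for b, pi in zip(bits, p):
        w *= pi if b else 1 - pi             # probability of this outcome, by independence
    for k, b in enumerate(bits):
        E[k] += w * b                        # accumulate coefficient-wise expectation

# E(Xi) = 3/4 + 7/8 e1 + 2/3 e2 + 2/5 e12, in agreement with (8.1.10)
assert E == p
```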
8.1.2 Products of Clifford-Algebraic Random Variables
Things become interesting when we consider products of random variables in
the Clifford algebra setting. Consider the following example.
Example 8.1.9. Let X_0, X_1 be independent, R_+-valued random variables with respective mean values EX_0 = x_0, EX_1 = x_1. Further, define the Clifford-algebraic random variable
$$\Xi = X_0 + X_1 e_1 \in \mathcal{C}\ell_{p,q}, \tag{8.1.11}$$
where p + q = 1. We note that the expectation of Ξ is x_0 + x_1 e_1, and we wish to compute the expectation of Ξ². By independence, E(X_0 X_1) = x_0 x_1; writing x_i² for the second moment E(X_i²), we have two cases to consider.
1. Cℓ_{1,0}:
$$E(\Xi^2) = E\big( X_0^2 + X_1^2 + 2 X_0 X_1 e_1 \big) = x_0^2 + x_1^2 + 2 x_0 x_1 e_1; \tag{8.1.12}$$
2. Cℓ_{0,1}:
$$E(\Xi^2) = E\big( X_0^2 - X_1^2 + 2 X_0 X_1 e_1 \big) = x_0^2 - x_1^2 + 2 x_0 x_1 e_1. \tag{8.1.13}$$
In particular, when x_0 = x_1 = x we obtain
$$E(\Xi^2) = \begin{cases} 2x^2 (1 + e_1), & \text{when } p = 1,\ q = 0,\\ 2x^2 e_1, & \text{when } p = 0,\ q = 1. \end{cases} \tag{8.1.14}$$
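The signature dependence in (8.1.14) is easy to check mechanically. A minimal Python sketch (the function name is ours), taking the components as constants so the expectations are the values themselves: in n = 1 the square is (a + b e_1)² = (a² + e_1² b²) + 2ab e_1 with e_1² = ±1 according to signature.

```python
def square_n1(a, b, sq):
    """(a + b e1)^2 in Cl(1,0) (sq = +1) or Cl(0,1) (sq = -1); returns (scalar part, e1 part)."""
    return (a * a + sq * b * b, 2 * a * b)

x = 1.0
assert square_n1(x, x, +1) == (2.0, 2.0)   # 2x^2 (1 + e1) in Cl(1,0)
assert square_n1(x, x, -1) == (0.0, 2.0)   # 2x^2 e1     in Cl(0,1)
```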
As our example shows, the expectation of a Clifford-algebraic random variable depends on the signature of the enveloping Clifford algebra.
Let Ξ ∈ L²(Ω) ⊗ Cℓ_{p,q} be a Clifford-algebraic random variable written as
$$\Xi = \sum_{i\subset[n]} \xi_i\, e_i, \tag{8.1.15}$$
where for each i ⊂ [n], ξ_i is a real-valued random variable with finite second moment; i.e., E(|ξ_i|²) = x_i < ∞. Then
$$\kappa = E(\Xi^2) = E\Bigg( \sum_{i\subset[n]} \Bigg( \sum_{\substack{j,\ell\subset[n]\\ j\triangle\ell = i}} (-1)^{\varphi(j,\ell)} \xi_j \xi_\ell \Bigg) e_i \Bigg) = \sum_{i\subset[n]} \sum_{\substack{j,\ell\subset[n]\\ j\triangle\ell = i}} (-1)^{\varphi(j,\ell)} E(\xi_j \xi_\ell)\, e_i = \sum_{i,j\subset[n]} (-1)^{\varphi(j,\, i\triangle j)} E(\xi_j \xi_{i\triangle j})\, e_i, \tag{8.1.16}$$
which is then also bounded in the 1-norm, since
$$\|\kappa\|_1 = \Big\| \sum_{i,j\subset[n]} (-1)^{\varphi(j,\, i\triangle j)} E(\xi_j \xi_{i\triangle j})\, e_i \Big\|_1 \le \sum_{i,j\subset[n]} |E(\xi_j \xi_{i\triangle j})| \le \sum_{i,j\subset[n]} \max\{x_j, x_{i\triangle j}\} < \infty. \tag{8.1.17}$$
8.2 CLIFFORD-ALGEBRAIC MARKOV CHAINS
In this section we define Markov chains on Clifford algebras and briefly discuss
a few of their properties. A full treatment of this subject lies beyond the scope of
the current work.
Definition 8.2.1. Let n ≥ 0 and [n] = {1, 2, …, n} be fixed. For each i ⊂ [n], let {ξ_i(k), k ≥ 0} be a finite-state Markov chain having state space S_i. We will assume that the ξ_i are mutually independent processes. Then we define the finite-state Clifford-algebraic Markov chain Ξ_k by
$$\Xi_k = \sum_{i\subset[n]} \xi_i(k)\, e_i. \tag{8.2.1}$$
We observe that Ξ_k has state space S consisting of all linear combinations of the form ∑_{i⊂[n]} s_i e_i, where s_i ∈ S_i for each i ⊂ [n]. We further observe that
$$|S| = \prod_{i\subset[n]} |S_i|. \tag{8.2.2}$$
Remark 8.2.2. We assume the component Markov chains are adapted to the same
given family of σ-fields, say
F0 ⊂ F1 ⊂ · · · ⊂ Fk ⊂ · · · .
Lemma 8.2.3. Let n ≥ 0 be fixed and consider the Clifford algebra Cℓ_{p,q}, where p + q = n. For each multi-index i ⊂ [n], let ξ_i : N × Ω → S_i ⊂ N be a Markov chain. Then the sequence of Clifford-algebraic random variables Ξ_k defined by
$$\Xi_k = \sum_{i\subset[n]} \xi_i(k)\, e_i \tag{8.2.3}$$
satisfies the Markov property.
Proof. For u(k) = ∑_{i⊂[n]} u_i(k) e_i ∈ Cℓ_{p,q}, we find
$$\begin{aligned} &\Pr\{\Xi_k = u(k) \mid \Xi_{k-1} = u(k-1),\ \Xi_{k-2} = u(k-2),\ \dots,\ \Xi_{k_0} = u(k_0)\} \\ &\quad = \Pr\{\xi_i(k) = u_i(k),\ \forall i \subset [n] \mid \xi_j(k-1) = u_j(k-1),\ \dots,\ \xi_j(k_0) = u_j(k_0),\ \forall j \subset [n]\} \\ &\quad = \Pr\{\xi_i(k) = u_i(k) \mid \xi_i(k-1) = u_i(k-1),\ \forall i \subset [n]\}, \quad \text{by independence and the Markov property of each } \xi_i, \\ &\quad = \Pr\{\Xi_k = u(k) \mid \Xi_{k-1} = u(k-1)\}. \end{aligned} \tag{8.2.4}$$
Lemma 8.2.4. Given a Clifford-algebraic Markov process Ξ_k = ∑_{i⊂[n]} ξ_i(k) e_i, the expectation of Ξ_k is given by
$$E(\Xi_k) = \sum_{i\subset[n]} E(\xi_i(k))\, e_i. \tag{8.2.5}$$
Proof. For each i ⊂ [n],
$$E(\langle \Xi_k, e_i \rangle) = E(\xi_i(k)). \tag{8.2.6}$$
Therefore,
$$E(\Xi_k) = E\Bigg( \sum_{i\subset[n]} \langle \Xi_k, e_i \rangle\, e_i \Bigg) = \sum_{i\subset[n]} E\big( \langle \Xi_k, e_i \rangle \big)\, e_i = \sum_{i\subset[n]} E(\xi_i(k))\, e_i. \tag{8.2.7}$$
By definition of the expectation, assuming pairwise independence of the Markov chains {ξ_i(k)}_{i⊂[n]}, we have the following equivalent formulation:
$$E(\Xi_k) = \sum_{(s_\emptyset, \dots, s_{[n]}) \in \prod_{i\subset[n]} S_i} \Bigg( \prod_{i\subset[n]} \Pr\{\xi_i(k) = s_i\} \Bigg) \sum_{i\subset[n]} s_i\, e_i. \tag{8.2.8}$$
Definition 8.2.5. Let {ξ_i(t, ω)}_{i⊂[n]} denote a collection of finite-state, time-homogeneous Markov chains. Then Ξ(t, ω) = ∑_{i⊂[n]} ξ_i(t, ω) e_i is referred to as a finite-state, time-homogeneous Clifford-algebraic Markov chain.
For any set A, we define
$$\chi(A) \equiv \begin{cases} 0, & \text{if } A = \emptyset,\\ 1, & \text{otherwise.} \end{cases} \tag{8.2.9}$$
Letting N = ∑_{i⊂[n]} |S_i|, we can represent the distribution of the Clifford-algebraic Markov chain Ξ at time k with a vector x(k) ∈ R^N defined by
$$(x(k))_{f(i)+j} = \Pr\{\xi_i(k) = s_j\}, \tag{8.2.10}$$
where i ⊂ [n], 0 ≤ j ≤ |S_i| − 1, s_j ∈ S_i, and the offset f : 2^{[n]} → {0, …, N − 1} is defined, in terms of the subset enumeration h(i) = ∑_{ℓ=1}^{n} 2^{ℓ-1} χ({ℓ} ∩ i), by
$$f(i) = \sum_{m=1}^{h(i)} |S_{h^{-1}(m-1)}|, \tag{8.2.11}$$
the total number of states of the chains preceding ξ_i in this enumeration.
Proposition 8.2.6. Let Ξ_k be a finite-state, time-homogeneous Clifford-algebraic Markov chain, where for each i ⊂ [n], ξ_i(k) is a real-valued random variable taking values in the state space S_i. Let M_i denote the transition probability matrix for the Markov chain ξ_i(k) for each i ⊂ [n]. Further, let {v_j}_{0≤j≤2^n−1} be the standard orthonormal basis for R^{2^n}. Then the transition probability matrix for the Clifford-algebraic Markov chain Ξ_k is given by
$$M = \sum_{i\subset[n]} \Big( |v_{f(i)}\rangle\langle v_{f(i)}| \Big) \otimes M_i \tag{8.2.12}$$
under the mapping
$$f(i) = \sum_{\ell=1}^{n} 2^{\ell-1}\, \chi(\{\ell\} \cap i), \tag{8.2.13}$$
where, for any set A, χ(A) ≡ 0 if A = ∅ and 1 otherwise. (8.2.14)
Proof. By construction, the (f(i) + j)th component of the vector x(k) is Pr{ξ_i(k) = s_j}. Since for each i ⊂ [n], ξ_i is a Markov chain on the state space S_i having associated transition matrix M_i, we see that (|v_{f(i)}⟩⟨v_{f(i)}|) ⊗ M_i is a transition matrix acting only on the components of x(k) corresponding to the Markov chain ξ_i. Hence ∑_{i⊂[n]} (|v_{f(i)}⟩⟨v_{f(i)}|) ⊗ M_i is a transition matrix acting simultaneously and independently on all the Markov chains ξ_i.
We note that the Clifford-algebraic Markov chain Ξ_k has |S| = ∏_{i⊂[n]} |S_i| states. We represent states s ∈ S as s = ∑_{i⊂[n]} s_i e_i. The probability distribution on these states at time k > 0 can be represented by a row vector x(k).
For each i ⊂ [n], let x_i(k) ∈ [0, 1]^{|S_i|} denote the probability distribution of ξ_i at time k. We define the probability distribution vector x(k) by
$$x(k) = \sum_{i\subset[n]} \langle v_{f(i)}| \otimes x_i(k). \tag{8.2.15}$$
Corollary 8.2.7. Letting x(0) represent the initial distribution of the finite-state, time-homogeneous Clifford-algebraic Markov chain Ξ_k, the distribution at time k > 0 is given by
$$x(k) = x(0)\, M^k. \tag{8.2.16}$$
Example 8.2.8. Let ξ_∅, ξ_1, ξ_2, ξ_{12} be time-homogeneous Markov chains with transition matrices
$$M_\emptyset = \begin{pmatrix} 7/8 & 1/8 \\ 3/4 & 1/4 \end{pmatrix}, \quad M_1 = \begin{pmatrix} 0 & 1/3 & 2/3 \\ 0 & 1/4 & 3/4 \\ 3/10 & 1/10 & 3/5 \end{pmatrix}, \quad M_2 = \begin{pmatrix} 5/8 & 3/8 \\ 1 & 0 \end{pmatrix}, \quad M_{12} = \begin{pmatrix} 5/8 & 0 & 3/8 \\ 1/4 & 0 & 3/4 \\ 0 & 1/2 & 1/2 \end{pmatrix}.$$
From these we obtain the transition matrix for
$$\Xi(k) = \xi_\emptyset(k) + \xi_1(k)\, e_1 + \xi_2(k)\, e_2 + \xi_{12}(k)\, e_{12}. \tag{8.2.17}$$
$$M = \begin{pmatrix} 7/8 & 1/8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 3/4 & 1/4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1/3 & 2/3 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1/4 & 3/4 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 3/10 & 1/10 & 3/5 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 5/8 & 3/8 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 5/8 & 0 & 3/8 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1/4 & 0 & 3/4 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1/2 & 1/2 \end{pmatrix}.$$
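The 10 × 10 matrix above is the block-diagonal assembly of the component transition matrices M_∅, M_1, M_2, M_{12}, with blocks ordered by the enumeration f. A numpy sketch (`block_diag` is our helper) that builds M and confirms it is stochastic:

```python
import numpy as np

def block_diag(mats):
    """Place square matrices along the diagonal of a single larger matrix."""
    N = sum(m.shape[0] for m in mats)
    M = np.zeros((N, N))
    r = 0
    for m in mats:
        k = m.shape[0]
        M[r:r + k, r:r + k] = m
        r += k
    return M

M0  = np.array([[7/8, 1/8], [3/4, 1/4]])
M1  = np.array([[0, 1/3, 2/3], [0, 1/4, 3/4], [3/10, 1/10, 3/5]])
M2  = np.array([[5/8, 3/8], [1, 0]])
M12 = np.array([[5/8, 0, 3/8], [1/4, 0, 3/4], [0, 1/2, 1/2]])

M = block_diag([M0, M1, M2, M12])
assert M.shape == (10, 10)
assert np.allclose(M.sum(axis=1), 1.0)   # each row is a probability distribution
```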
Definition 8.2.9. Given a Clifford-algebraic Markov chain Ξ_k, a state s ∈ S is an absorbing state if there exists k > 0 such that
$$\Pr\{\Xi_{k'} = s \text{ for all } k' > k \mid \Xi_k = s\} = 1. \tag{8.2.18}$$
Definition 8.2.10. A state s ∈ S is called recurrent if for any k > 0,
$$\Pr\{\Xi_{k'} = s \text{ for some } k' > k \mid \Xi_k = s\} = 1. \tag{8.2.19}$$
A state that is not recurrent is transient.
Given a Clifford-algebraic Markov chain Ξ_k, let us define the notation T_s = min{m ≥ 1 : Ξ_m = s}.
Definition 8.2.11. The mean recurrence time of a state s is defined by
$$\mu_s = E(T_s \mid \Xi_0 = s) = \begin{cases} \sum_m m \Pr\{T_s = m \mid \Xi_0 = s\}, & \text{if } s \text{ is recurrent},\\ \infty, & \text{if } s \text{ is transient.} \end{cases} \tag{8.2.20}$$
Definition 8.2.12. A recurrent state s is called null if μ_s = ∞ and non-null if μ_s < ∞.
Definition 8.2.13. The period d(s) of a state s is defined by
$$d(s) = \mathrm{g.c.d.}\{n : p_{ss}(n) > 0\},$$
the greatest common divisor of the time steps at which return to state s is possible. We call s periodic if d(s) > 1 and aperiodic if d(s) = 1.
Definition 8.2.14. A state that is recurrent, non-null, and aperiodic is called ergodic.
Proposition 8.2.15. A recurrent state s of Ξ_k is periodic with period d if and only if for each i ⊂ [n], s_i is a recurrent state of the Markov chain ξ_i(k) with period d_i < ∞. In this case
$$d = \mathrm{l.c.m.}\{d_\emptyset, \dots, d_{[n]}\}. \tag{8.2.21}$$
Proof. Let s = ∑_{i⊂[n]} s_i e_i be a periodic recurrent state of the Markov chain Ξ_k. If s has period d, then
$$\Pr\{\Xi_{k+md} = s \mid \Xi_k = s\} > 0, \quad \forall m \ge 1.$$
Thus for each i ⊂ [n],
$$\Pr\{\xi_i(k+md) = s_i \mid \xi_i(k) = s_i\} > 0,$$
and therefore s_i is a periodic state of ξ_i(k) with period d_i, where d_i | d.
Conversely, suppose that for each i ⊂ [n], ξ_i(k) has period d_i < ∞. Then letting d′ = l.c.m.{d_i : i ⊂ [n]} gives
$$\Pr\{\Xi_{k+md'} = s \mid \Xi_k = s\} = \prod_{i\subset[n]} \Pr\{\xi_i(k+md') = s_i \mid \xi_i(k) = s_i\} > 0,$$
and thus s is a recurrent state of Ξ_k with period d | d′. However, d′ is the least common multiple of the d_i's, each of which divides d, and thus d′ | d. Hence d′ = d.
CHAPTER 9
THE CLIFFORD-ALGEBRAIC POISSON PROCESS
The work appearing in this chapter is based on the work of D. Engel [10].
By extending his results, we define Poisson processes on Cℓp,q, and consider their
properties.
Definition 9.0.16. A Poisson process with parameter λ ≥ 0 is a nonnegative integer-valued process {Y(t, ω), t ≥ 0} with stationary, independent, integer-valued increments such that Y(0, ω) = 0 and
$$\Pr\{Y(t, \omega) = \ell\} = \frac{(\lambda t)^\ell}{\ell!}\, e^{-\lambda t}. \tag{9.0.1}$$
The following theorem is proved in [9].
Theorem 9.0.17. [Doob] For any Poisson process P (t, ω), there exists an equiv-
alent process P ′(t, ω) such that P ′(t, ω) is non-decreasing as a function of t for all
ω ∈ Ω. Such processes are called regular.
Definition 9.0.18. Fix p, q ≥ 0, n = p + q, and the Clifford algebra Cℓ_{p,q}. Let {υ_i(t, ω)}_{i⊂[n]} be a collection of independent regular Poisson processes; i.e., for each i ⊂ [n], we have
$$\Pr\{\upsilon_i(t, \omega) = \ell\} = \frac{(\lambda_i t)^\ell}{\ell!}\, e^{-\lambda_i t} \tag{9.0.2}$$
for some parameter λ_i > 0. We then define the Clifford-algebraic Poisson process with parameter λ = ∑_{i⊂[n]} λ_i by
$$\Upsilon(t, \omega) = \sum_{i\subset[n]} \upsilon_i(t, \omega)\, e_i. \tag{9.0.3}$$
We restrict our attention to those elements of Cℓ_{p,q} having non-negative integer coefficients by defining the notation
$$\mathcal{C}\ell_{p,q} \equiv \Big\{ \sum_{i\subset[n]} u_i e_i \in \mathcal{C}\ell_{p,q} : 0 \le u_i \in \mathbb{Z},\ \forall i \subset [n] \Big\}. \tag{9.0.4}$$
Lemma 9.0.19. Let Υ(t, ω) be a Clifford-algebraic Poisson process with parameter λ. Given u = ∑_{i⊂[n]} u_i e_i ∈ Cℓ_{p,q}, we have
$$\Pr\{\Upsilon(t, \omega) = u\} = e^{-\lambda t} \prod_{i\subset[n]} \frac{(\lambda_i t)^{u_i}}{u_i!}. \tag{9.0.5}$$
Proof.
$$\Pr\{\Upsilon(t, \omega) = u\} = \prod_{i\subset[n]} \Pr\{\upsilon_i(t, \omega) = u_i\} = \prod_{i\subset[n]} \frac{(\lambda_i t)^{u_i}}{u_i!}\, e^{-\lambda_i t} = e^{-\sum_{i\subset[n]} \lambda_i t} \prod_{i\subset[n]} \frac{(\lambda_i t)^{u_i}}{u_i!} = e^{-\lambda t} \prod_{i\subset[n]} \frac{(\lambda_i t)^{u_i}}{u_i!}. \tag{9.0.6}$$
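Lemma 9.0.19 in code: the joint probability factors into independent Poisson pmfs, and pulling the exponentials together gives the e^{−λt} form. A short sketch with hypothetical rates (the function name and the numbers are ours):

```python
from math import exp, factorial

def clifford_poisson_pmf(u, lam, t):
    """Pr{Upsilon(t) = u}: product of independent Poisson pmfs, one per coefficient."""
    pr = 1.0
    for ui, li in zip(u, lam):
        pr *= exp(-li * t) * (li * t) ** ui / factorial(ui)
    return pr

lam = [0.5, 1.0, 0.25, 2.0]       # one rate per blade (n = 2, so 4 coefficients)
u = [1, 0, 2, 3]
t = 1.5

direct = clifford_poisson_pmf(u, lam, t)
# Lemma 9.0.19: same quantity written as e^{-lambda t} * prod (lam_i t)^{u_i} / u_i!
factored = exp(-sum(lam) * t)
for ui, li in zip(u, lam):
    factored *= (li * t) ** ui / factorial(ui)
assert abs(direct - factored) < 1e-12
```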
Corollary 9.0.20. Let Υ(t, ω) be a Clifford-algebraic Poisson process with parameter λ = ∑_{i⊂[n]} λ_i. If λ_∅ = ··· = λ_{[n]} = κ, then given u ∈ Cℓ_{p,q} we have
$$\Pr\{\Upsilon(t, \omega) = u\} = e^{-2^n \kappa t}\, (\kappa t)^{|u|} \prod_{i\subset[n]} \frac{1}{u_i!}, \tag{9.0.7}$$
where |u| denotes the Clifford-algebraic 1-norm of u.
Corollary 9.0.21. Suppose λ_∅ = ··· = λ_{[n]} = κ. Then given u ∈ Cℓ_{p,q} written as in Lemma 9.0.19 and 0 ≤ k ≤ n, we see
$$\Pr\{\langle \Upsilon(t, \omega) \rangle_k = \langle u \rangle_k\} = e^{-\binom{n}{k} \kappa t} \prod_{\substack{i\subset[n]\\ |i|=k}} \frac{(\kappa t)^{u_i}}{u_i!}. \tag{9.0.8}$$
Proof. We observe that the grade-k part of u has $\binom{n}{k}$ coefficients, all of which must match the corresponding coefficients of Υ(t, ω). By independence,
$$\Pr\{\upsilon_i = u_i : i \subset [n],\ |i| = k\} = \prod_{\substack{i\subset[n]\\ |i|=k}} \Pr\{\upsilon_i = u_i\}, \tag{9.0.9}$$
and we have
$$\Pr\{\langle \Upsilon(t, \omega) \rangle_k = \langle u \rangle_k\} = \prod_{\substack{i\subset[n]\\ |i|=k}} \frac{(\kappa t)^{u_i}}{u_i!}\, e^{-\kappa t} = \big( e^{-\kappa t} \big)^{\binom{n}{k}} \prod_{\substack{i\subset[n]\\ |i|=k}} \frac{(\kappa t)^{u_i}}{u_i!} = e^{-\binom{n}{k} \kappa t} \prod_{\substack{i\subset[n]\\ |i|=k}} \frac{(\kappa t)^{u_i}}{u_i!}. \tag{9.0.10}$$
Lemma 9.0.22. Let u ∈ Cℓ_{p,q} be chosen arbitrarily, and let U be the collection of elements in Cℓ_{p,q} having norm |u|; that is,
$$U \equiv \{ v \in \mathcal{C}\ell_{p,q} : |v| = |u| \}. \tag{9.0.11}$$
Then, noting that u ∈ U, we find
$$|U| = \sum_{k=1}^{|u|} \binom{2^n}{k} \binom{|u|-1}{k-1}. \tag{9.0.12}$$
Proof. We observe that there are 2^n coefficients, of which no more than |u| can be nonzero. Let us represent the coefficients as a 2^n-vector, ordered by lexicographically ordering the index sets of the corresponding multivectors. For any number 0 < k ≤ |u| of nonzero coefficients, we choose which k of the 2^n coordinates are nonzero; there are $\binom{2^n}{k}$ ways to do this. Next we assign positive integer coefficients totalling |u| to the k chosen (already ordered) coordinates; i.e., we choose a composition of the integer |u| into k positive parts, and there are $\binom{|u|-1}{k-1}$ ways to do this. Finally, we sum over k from 1 to |u| and obtain the stated result.
Lemma 9.0.23. Let m ≥ 0 be fixed. Then
$$\Pr\{|\Upsilon(t, \omega)| = m\} = \frac{(\lambda t)^m}{m!}\, e^{-\lambda t}. \tag{9.0.13}$$
In other words, the 1-norm of the Clifford-algebraic Poisson process is a regular Poisson process.
Proof.
$$\begin{aligned} \Pr\{|\Upsilon(t, \omega)| = \ell\} &= \sum_{k_\emptyset + \cdots + k_{[n]} = \ell} \prod_{i\subset[n]} \Pr\{\upsilon_i(t, \omega) = k_i\} = \sum_{k_\emptyset + \cdots + k_{[n]} = \ell} \prod_{i\subset[n]} \frac{(\lambda_i t)^{k_i}}{k_i!}\, e^{-\lambda_i t} \\ &= \frac{e^{-\lambda t}}{\ell!} \sum_{k_\emptyset + \cdots + k_{[n]} = \ell} \ell! \prod_{i\subset[n]} \frac{(\lambda_i t)^{k_i}}{k_i!} = e^{-\lambda t}\, \frac{\big( \sum_{i\subset[n]} \lambda_i t \big)^\ell}{\ell!} = \frac{(\lambda t)^\ell}{\ell!}\, e^{-\lambda t}. \end{aligned} \tag{9.0.14}$$
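The identity behind Lemma 9.0.23 — summing the product pmf over all coefficient vectors with 1-norm ℓ collapses, via the multinomial theorem, to a single Poisson pmf with rate λ = ∑λ_i — can be checked numerically for a small case (rates and values below are ours):

```python
from math import exp, factorial
from itertools import product

lam = [0.5, 1.0, 0.25]    # rates of three independent Poisson coordinates
t, ell = 1.2, 4
L = sum(lam)

# left side: sum of the joint pmf over all (k1, k2, k3) with k1 + k2 + k3 = ell
s = 0.0
for ks in product(range(ell + 1), repeat=3):
    if sum(ks) != ell:
        continue
    pr = 1.0
    for k, li in zip(ks, lam):
        pr *= exp(-li * t) * (li * t) ** k / factorial(k)
    s += pr

# right side: Poisson pmf with the summed rate
poisson = exp(-L * t) * (L * t) ** ell / factorial(ell)
assert abs(s - poisson) < 1e-12
```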
Corollary 9.0.24. If λ_∅ = ··· = λ_{[n]} = κ, then
$$\Pr\{|\Upsilon(t, \omega)| = m\} = e^{-2^n \kappa t}\, \frac{(\kappa t)^m}{m!}. \tag{9.0.15}$$
9.1 CONTINUOUS CLIFFORD POISSON PROCESSES AND THE
ITERATED STOCHASTIC INTEGRAL
Definition 9.1.1. We say Υ(t, ω) = ∑_{i⊂[n]} υ_i(t, ω) e_i is a continuous Clifford-algebraic Poisson process if its parameter satisfies λ(t) = ∑_{i⊂[n]} m_i((0, t]) for some family of non-atomic measures m_i. In other words,
$$\Pr\{\upsilon_i(t, \omega) = k\} = \frac{m_i((0,t])^k}{k!}\, e^{-m_i((0,t])}, \tag{9.1.1}$$
where m_i((0, t]) is a continuous, non-negative, monotonically non-decreasing function of t.
It is clear from the definition of Υ(t, ω) that
$$E(\Upsilon(t, \omega)) = \sum_{i\subset[n]} E(\upsilon_i(t, \omega))\, e_i. \tag{9.1.2}$$
If m is Lebesgue measure and m_∅ = ··· = m_{[n]} = m, then
$$E(\Upsilon(t, \omega)) = t \sum_{i\subset[n]} e_i. \tag{9.1.3}$$
It therefore follows that
$$E(|\Upsilon(t, \omega)|) = 2^n t. \tag{9.1.4}$$
We now define some notation:
$$\Upsilon(t + 0, \omega) \equiv \lim_{\epsilon \downarrow 0} \Upsilon(t + \epsilon, \omega); \tag{9.1.5}$$
$$\Upsilon(t - 0, \omega) \equiv \lim_{\epsilon \downarrow 0} \Upsilon(t - \epsilon, \omega). \tag{9.1.6}$$
D. Engel [10] proved the following lemma for Poisson processes on R. The
proof for the Clifford-algebraic case is based on his approach.
Lemma 9.1.2. If Υ(t, ω) is a continuous Clifford-algebraic Poisson process, then
1. For any t ∈ R_+,
$$\Pr\{|\Upsilon(t + 0, \omega) - \Upsilon(t - 0, \omega)| \ge 1\} = 0; \tag{9.1.7}$$
2. For all b ∈ R_+,
$$\Pr\Big\{ \sup_{0 \le t \le b} |\Upsilon(t + 0, \omega) - \Upsilon(t - 0, \omega)| \ge 2 \Big\} = 0. \tag{9.1.8}$$
Proof. First we prove part 1.
$$\begin{aligned} \Pr\{|\Upsilon(t + 0, \omega) - \Upsilon(t - 0, \omega)| \ge 1\} &\le \Pr\{|\Upsilon(t + h, \omega) - \Upsilon(t - h, \omega)| \ge 1\}, \quad \forall h > 0 \\ &= 1 - \Pr\{\Upsilon(t + h, \omega) - \Upsilon(t - h, \omega) = 0\} \\ &= 1 - \prod_{i\subset[n]} e^{-m_i((t-h,\, t+h])} \to 0 \text{ as } h \to 0, \end{aligned} \tag{9.1.9}$$
because λ_i(t) = m_i((0, t]) is continuous.
Now we prove part 2. Partition [0, b] into M subintervals (t_k, t_{k+1}] for k = 0, 1, …, M − 1, where 0 = t_0 < t_1 < ··· < t_M = b, such that for each i ⊂ [n]
$$m_i((t_k, t_{k+1}]) = \frac{1}{M}\, m_i([0, b]). \tag{9.1.10}$$
$$\begin{aligned} &\Pr\{|\Upsilon(t + 0, \omega) - \Upsilon(t - 0, \omega)| \ge 2 \text{ for some } t \in [0, b]\} \\ &\quad \le \sum_{k=1}^{M} \Pr\{|\Upsilon(t_k, \omega) - \Upsilon(t_{k-1}, \omega)| \ge 2\} \\ &\quad = \sum_{k=1}^{M} \Bigg( 1 - \underbrace{\prod_{i\subset[n]} e^{-m_i((t_{k-1},\, t_k])}}_{\text{probability of no jumps}} - \underbrace{\sum_{i\subset[n]} m_i((t_{k-1}, t_k]) \prod_{j\subset[n]} e^{-m_j((t_{k-1},\, t_k])}}_{\text{probability of exactly one jump}} \Bigg), \end{aligned} \tag{9.1.11}$$
where part 1 ensures that no jumps occur exactly at t = t_k for any k ∈ [M]. We observe that for each i ⊂ [n], m_i((t_{k-1}, t_k]) = (1/M) m_i([0, b]), which implies that (9.1.11) is less than or equal to
$$M \Bigg( 1 - \prod_{i\subset[n]} e^{-\frac{1}{M} m_i([0,b])} - \frac{1}{M} \sum_{i\subset[n]} m_i([0, b]) \prod_{j\subset[n]} e^{-\frac{1}{M} m_j([0,b])} \Bigg) \to 0 \text{ as } M \to \infty. \tag{9.1.12}$$
If for each $i \subset [n]$, $m_i$ is a continuous measure, then the 1-norm of the Clifford-algebraic
Poisson process $\Upsilon(t, \omega)$ is a monotone increasing function of $t$ starting at
0 and taking unit jumps for almost all $\omega \in \Omega$. We shall assume from now on that $\Upsilon(t, \omega)$
is continuous from the right and has left-hand limits.
Since $\Upsilon(t, \omega) = \sum_{i\subset[n]} \upsilon_i(t, \omega)\, e_i$ is of bounded variation by definition, we can
define as a Stieltjes sum the integral
\begin{align}
\int_0^T \Upsilon(s, \omega)\, d\Upsilon(s, \omega)
&= \lim_{\substack{0=t_0<t_1<\cdots<t_m=T\\ |t_i-t_{i-1}|\to 0}} \sum_{k=1}^{m} \Upsilon(t_k - 0, \omega)\, \bigl(\Upsilon(t_k, \omega) - \Upsilon(t_{k-1}, \omega)\bigr) \notag\\
&= \lim_{\substack{0=t_0<t_1<\cdots<t_m=T\\ |t_i-t_{i-1}|\to 0}} \sum_{k=1}^{m} \Bigl( \sum_{i\subset[n]} \upsilon_i(t_k - 0, \omega)\, e_i \Bigr) \Bigl( \sum_{j\subset[n]} \bigl( \upsilon_j(t_k, \omega) - \upsilon_j(t_{k-1}, \omega) \bigr) e_j \Bigr) \notag\\
&= \lim_{\substack{0=t_0<t_1<\cdots<t_m=T\\ |t_i-t_{i-1}|\to 0}} \sum_{k=1}^{m} \sum_{i,j\subset[n]} (-1)^{\varphi(j,\, j\triangle i)}\, \upsilon_j(t_k - 0, \omega) \bigl( \upsilon_{j\triangle i}(t_k, \omega) - \upsilon_{j\triangle i}(t_{k-1}, \omega) \bigr) e_i. \tag{9.1.13}
\end{align}
Lemma 9.1.3.
\[
\Bigl\langle \int_0^t \Upsilon(s, \omega)\, d\Upsilon(s, \omega) \Bigr\rangle_0 = \frac{1}{2} \sum_{i\subset[n]} (-1)^{\varphi(i,i)}\, \upsilon_i(t, \omega)^{(2)}, \tag{9.1.14}
\]
where $m^{(n)} = m(m-1)\cdots(m-n+1)$.
Proof.
\begin{align}
\Bigl\langle \int_0^t \Upsilon(s, \omega)\, d\Upsilon(s, \omega) \Bigr\rangle_0
&= \lim_{\substack{0=t_0<t_1<\cdots<t_m=t\\ |t_i-t_{i-1}|\to 0}} \sum_{k=1}^{m} \sum_{j\subset[n]} (-1)^{\varphi(j,j)}\, \upsilon_j(t_k - 0, \omega) \bigl( \upsilon_j(t_k, \omega) - \upsilon_j(t_{k-1}, \omega) \bigr) \notag\\
&= \lim_{\substack{0=t_0<t_1<\cdots<t_m=t\\ |t_i-t_{i-1}|\to 0}} \sum_{j\subset[n]} (-1)^{\varphi(j,j)} \sum_{k=1}^{m} \upsilon_j(t_k - 0, \omega) \bigl( \upsilon_j(t_k, \omega) - \upsilon_j(t_{k-1}, \omega) \bigr) \notag\\
&= \sum_{j\subset[n]} (-1)^{\varphi(j,j)} \int_0^t \upsilon_j(s, \omega)\, d\upsilon_j(s, \omega). \tag{9.1.15}
\end{align}
Since $\upsilon_j(t, \omega)$ is a step function taking unit jumps and $\Pr\{\upsilon_j(t, \omega) < \infty\} = 1$
for all $t < \infty$, there are only finitely many jumps between $0$ and $t$ for each fixed $\omega \in \Omega$.
Letting $\tau_j(k)$ denote the time of the $k$th jump in $\upsilon_j(t, \omega)$, i.e.
\[
\tau_j(k) = \inf\{s > 0 : \upsilon_j(s, \omega) \geq k\}, \tag{9.1.16}
\]
we see that
\begin{align}
\int_0^t \upsilon_j(s, \omega)\, d\upsilon_j(s, \omega) &= \sum_{k=1}^{\upsilon_j(t,\omega)} \upsilon_j(\tau_j(k) - 0, \omega) \notag\\
&= \sum_{k=1}^{\upsilon_j(t,\omega)} (k - 1) = \frac{\bigl( \upsilon_j(t, \omega) - 1 \bigr)\, \upsilon_j(t, \omega)}{2} = \frac{1}{2}\, \upsilon_j(t, \omega)^{(2)}. \tag{9.1.17}
\end{align}
Combining this with (9.1.15) completes the proof.
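The pathwise identity (9.1.17) is easy to test: for a counting path with $L$ unit jumps, summing the left limits over the jump times gives $0 + 1 + \cdots + (L-1) = L(L-1)/2$. A small sketch (helper names are ours, not the author's):

```python
# Pathwise check of (9.1.17): for a unit-jump counting path upsilon with
# jump times tau_1 < ... < tau_L, the Stieltjes integral of upsilon(s - 0)
# against d(upsilon) equals L(L-1)/2.
def counting_path(jumps):
    """Left-continuous value upsilon(t - 0) of the counting process."""
    return lambda t: sum(1 for tau in jumps if tau < t)

def stieltjes_int(jumps):
    u = counting_path(jumps)
    return sum(u(tau) for tau in jumps)   # sum of upsilon(tau_k - 0)

jumps = [0.3, 0.7, 1.1, 2.5, 2.9]
L = len(jumps)
assert stieltjes_int(jumps) == L * (L - 1) // 2   # 0+1+2+3+4 = 10
```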
Lemma 9.1.4.
\[
\int_0^t |d\Upsilon(s, \omega)| = |\Upsilon(t, \omega)|. \tag{9.1.18}
\]
Proof. Using the fact that for each $i \subset [n]$ and fixed $\omega \in \Omega$, $\upsilon_i(t, \omega)$ is a monotone
nondecreasing function of $t$, we have
\begin{align}
\int_0^t |d\Upsilon(s, \omega)| &= \lim_{\substack{0=t_0<t_1<\cdots<t_M=t\\ |t_i-t_{i-1}|\to 0}} \sum_{k=1}^{M} |\Upsilon(t_k, \omega) - \Upsilon(t_{k-1}, \omega)| \notag\\
&= \lim_{\substack{0=t_0<t_1<\cdots<t_M=t\\ |t_i-t_{i-1}|\to 0}} \sum_{k=1}^{M} \sum_{i\subset[n]} |\upsilon_i(t_k, \omega) - \upsilon_i(t_{k-1}, \omega)| \notag\\
&= \sum_{i\subset[n]} \lim_{\substack{0=t_0<t_1<\cdots<t_M=t\\ |t_i-t_{i-1}|\to 0}} \sum_{k=1}^{M} \bigl( \upsilon_i(t_k, \omega) - \upsilon_i(t_{k-1}, \omega) \bigr) \notag\\
&= \sum_{i\subset[n]} \upsilon_i(t, \omega) = |\Upsilon(t, \omega)|. \tag{9.1.19}
\end{align}
Let us now define the notation
\[
\Upsilon^{(m)}(t, \omega) \equiv \int \cdots \int_{0 \leq t_1 < \cdots < t_m \leq t} d\Upsilon(t_1, \omega) \cdots d\Upsilon(t_m, \omega). \tag{9.1.20}
\]
Remark 9.1.5. The following two lemmas follow the approach of D. Engel [10].
Lemma 9.1.6.
\[
\int\!\!\int_{0 \leq t_1 < t_2 \leq t} |d\Upsilon(t_1, \omega)|\, |d\Upsilon(t_2, \omega)| = \frac{1}{2}\, |\Upsilon(t, \omega)|^{(2)}. \tag{9.1.21}
\]
Proof. If $\Upsilon(t, \omega)$ is a Clifford-algebraic Poisson process, then by definition $|\Upsilon(t, \omega)|$
is a nonnegative, monotonically non-decreasing, integer-valued process taking unit
increments. Thus,
\begin{align}
\frac{1}{2}\, |\Upsilon(t, \omega)|^{(2)} &= \frac{(|\Upsilon(t, \omega)| - 1)\, |\Upsilon(t, \omega)|}{2} \notag\\
&= \sum_{k=1}^{|\Upsilon(t,\omega)|} (k - 1) = \sum_{k=1}^{|\Upsilon(t,\omega)|} |\Upsilon(\tau(k) - 0, \omega)|, \notag\\
&\qquad \text{where $\tau(k)$ is the time of the $k$th jump in $|\Upsilon(t, \omega)|$,} \notag\\
&= \lim_{\substack{0=t_0<t_1<\cdots<t_M=t\\ |t_i-t_{i-1}|\to 0}} \sum_{k=1}^{M} |\Upsilon(t_k - 0, \omega)| \cdot |\Upsilon(t_k, \omega) - \Upsilon(t_{k-1}, \omega)| \notag\\
&= \int_0^t |\Upsilon(s, \omega)| \cdot |d\Upsilon(s, \omega)|. \tag{9.1.22}
\end{align}
Lemma 9.1.7.
\[
\int \cdots \int_{0 \leq t_1 < \cdots < t_m \leq t} |d\Upsilon(t_1, \omega)| \cdots |d\Upsilon(t_m, \omega)| = \frac{1}{m!}\, |\Upsilon(t, \omega)|^{(m)}. \tag{9.1.23}
\]
Proof. The proof is by induction on $m$, with the base case $m = 2$ proved in the previous
lemma. Let us assume that
\[
\int \cdots \int_{0 \leq t_1 < \cdots < t_{m-1} \leq t} |d\Upsilon(t_1, \omega)| \cdots |d\Upsilon(t_{m-1}, \omega)| = \frac{1}{(m-1)!}\, |\Upsilon(t, \omega)|^{(m-1)}. \tag{9.1.24}
\]
Now let us consider
\begin{align}
\int\!\!\int_{0 \leq t_1 < t_2 \leq t} \frac{1}{(m-1)!}\, &|\Upsilon(t_1, \omega)|^{(m-1)}\, |d\Upsilon(t_2, \omega)| \notag\\
&= \frac{1}{(m-1)!} \sum_{k=1}^{|\Upsilon(t,\omega)|} |\Upsilon(\tau(k) - 0, \omega)|^{(m-1)}, \notag\\
&\qquad \text{where $\tau(k)$ denotes the time of the $k$th jump in $|\Upsilon(t, \omega)|$,} \notag\\
&= \frac{1}{(m-1)!} \sum_{k=1}^{|\Upsilon(t,\omega)|} (k - 1)^{(m-1)}. \tag{9.1.25}
\end{align}
Using the combinatorial identity
\[
\sum_{k=1}^{L} (k-1)(k-2)\cdots(k-m+1) = \frac{L(L-1)(L-2)\cdots(L-m+1)}{m}, \tag{9.1.26}
\]
we obtain
\begin{align}
\frac{1}{(m-1)!} \sum_{k=1}^{|\Upsilon(t,\omega)|} (k - 1)^{(m-1)}
&= \frac{1}{(m-1)!} \cdot \frac{1}{m}\, |\Upsilon(t, \omega)|\, (|\Upsilon(t, \omega)| - 1) \cdots (|\Upsilon(t, \omega)| - m + 1) \notag\\
&= \frac{1}{m!}\, |\Upsilon(t, \omega)|^{(m)}. \tag{9.1.27}
\end{align}
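The combinatorial identity (9.1.26) underpinning the induction can be verified by brute force for small $L$ and $m$; a quick sketch:

```python
from math import prod

def falling(x, m):
    """Falling factorial x^(m) = x(x-1)...(x-m+1)."""
    return prod(x - i for i in range(m))

# Check (9.1.26): sum_{k=1}^{L} (k-1)^{(m-1)} = L^{(m)} / m.
for L in range(1, 12):
    for m in range(2, 6):
        lhs = sum(falling(k - 1, m - 1) for k in range(1, L + 1))
        assert lhs * m == falling(L, m)
```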
Proposition 9.1.8.
\[
|\Upsilon^{(m)}(t, \omega)| \leq |\Upsilon(t, \omega)|^{(m)}. \tag{9.1.28}
\]
Proof. The proof is by induction on $m$. When $m = 2$, we have
\begin{align}
|\Upsilon^{(2)}(t, \omega)| &= \Bigl| \int\!\!\int_{0 \leq t_1 < t_2 \leq t} d\Upsilon(t_1, \omega)\, d\Upsilon(t_2, \omega) \Bigr| = \Bigl| \int_0^t \Upsilon(s, \omega)\, d\Upsilon(s, \omega) \Bigr| \notag\\
&= \Bigl| \lim_{\substack{0=t_0<t_1<\cdots<t_M=t\\ |t_i-t_{i-1}|\to 0}} \sum_{k=1}^{M} \Bigl( \sum_{i\subset[n]} \upsilon_i(t_k - 0, \omega)\, e_i \Bigr) \Bigl( \sum_{j\subset[n]} \bigl( \upsilon_j(t_k, \omega) - \upsilon_j(t_{k-1}, \omega) \bigr) e_j \Bigr) \Bigr| \notag\\
&= \Bigl| \lim \sum_{k=1}^{M} \sum_{i,j\subset[n]} (-1)^{\varphi(j,\, j\triangle i)}\, \upsilon_j(t_k - 0, \omega) \bigl( \upsilon_{j\triangle i}(t_k, \omega) - \upsilon_{j\triangle i}(t_{k-1}, \omega) \bigr) e_i \Bigr| \notag\\
&\leq \lim \sum_{k=1}^{M} \sum_{i,j\subset[n]} \upsilon_j(t_k - 0, \omega) \bigl( \upsilon_{j\triangle i}(t_k, \omega) - \upsilon_{j\triangle i}(t_{k-1}, \omega) \bigr) \notag\\
&= \lim \sum_{k=1}^{M} \Bigl( \sum_{i\subset[n]} \upsilon_i(t_k - 0, \omega) \Bigr) \Bigl( \sum_{j\subset[n]} \bigl( \upsilon_j(t_k, \omega) - \upsilon_j(t_{k-1}, \omega) \bigr) \Bigr) \notag\\
&= \lim \sum_{k=1}^{M} \Bigl( \sum_{i\subset[n]} |\upsilon_i(t_k - 0, \omega)| \Bigr) \Bigl( \sum_{j\subset[n]} |\upsilon_j(t_k, \omega) - \upsilon_j(t_{k-1}, \omega)| \Bigr) \notag\\
&= \int_0^t |\Upsilon(s, \omega)|\, |d\Upsilon(s, \omega)| = |\Upsilon(t, \omega)|^{(2)}, \tag{9.1.29}
\end{align}
where each limit is taken over partitions $0 = t_0 < t_1 < \cdots < t_M = t$ with $|t_i - t_{i-1}| \to 0$.
Let us assume the result holds for $m - 1$ and make note of the recursive definition
\[
|\Upsilon^{(m)}(t, \omega)| = \Bigl| \int_0^t \Upsilon^{(m-1)}(s, \omega)\, d\Upsilon(s, \omega) \Bigr|. \tag{9.1.30}
\]
Further, let us write
\[
\Upsilon^{(m-1)}(s, \omega) = \sum_{i\subset[n]} \upsilon^{(m-1)}_i(s, \omega)\, e_i. \tag{9.1.31}
\]
Then
\begin{align}
|\Upsilon^{(m)}(t, \omega)| &= \Bigl| \int_0^t \Upsilon^{(m-1)}(s, \omega)\, d\Upsilon(s, \omega) \Bigr| \notag\\
&= \Bigl| \lim_{\substack{0=t_0<t_1<\cdots<t_M=t\\ |t_i-t_{i-1}|\to 0}} \sum_{k=1}^{M} \Bigl( \sum_{i\subset[n]} \upsilon^{(m-1)}_i(t_k - 0, \omega)\, e_i \Bigr) \Bigl( \sum_{j\subset[n]} \bigl( \upsilon_j(t_k, \omega) - \upsilon_j(t_{k-1}, \omega) \bigr) e_j \Bigr) \Bigr| \notag\\
&= \Bigl| \lim \sum_{k=1}^{M} \sum_{i,j\subset[n]} (-1)^{\varphi(j,\, j\triangle i)}\, \upsilon^{(m-1)}_j(t_k - 0, \omega) \bigl( \upsilon_{j\triangle i}(t_k, \omega) - \upsilon_{j\triangle i}(t_{k-1}, \omega) \bigr) e_i \Bigr| \notag\\
&\leq \lim \sum_{k=1}^{M} \sum_{i,j\subset[n]} \upsilon^{(m-1)}_j(t_k - 0, \omega) \bigl( \upsilon_{j\triangle i}(t_k, \omega) - \upsilon_{j\triangle i}(t_{k-1}, \omega) \bigr) \notag\\
&= \lim \sum_{k=1}^{M} \Bigl( \sum_{i\subset[n]} |\upsilon^{(m-1)}_i(t_k - 0, \omega)| \Bigr) \Bigl( \sum_{j\subset[n]} |\upsilon_j(t_k, \omega) - \upsilon_j(t_{k-1}, \omega)| \Bigr) \notag\\
&= \int_0^t |\Upsilon(s, \omega)|^{(m-1)}\, |d\Upsilon(s, \omega)| = |\Upsilon(t, \omega)|^{(m)}, \tag{9.1.32}
\end{align}
where each limit is taken over partitions $0 = t_0 < \cdots < t_M = t$ with $|t_i - t_{i-1}| \to 0$.
Theorem 9.1.9. Let $\Upsilon(t, \omega)$ be a continuous Clifford-algebraic Poisson process.
Then the following inequality holds:
\[
\Bigl| \int \cdots \int_{0 \leq t_1 < \cdots < t_m \leq t} d\Upsilon(t_1, \omega) \cdots d\Upsilon(t_m, \omega) \Bigr| \leq \frac{1}{m!}\, |\Upsilon(t, \omega)|^{(m)}. \tag{9.1.33}
\]
Proof. The result is obtained by combining the results of Proposition 9.1.8 and
Lemma 9.1.7.
Proposition 9.1.10.
\[
\int_B \int_0^t \Upsilon(s, \omega)\, d\Upsilon(s, \omega) = \sum_{i\subset[n]} (-1)^{\varphi([n]\setminus i,\, i)} \sum_{k=1}^{\upsilon_i(t,\omega)} \upsilon_{[n]\setminus i}(\tau_i(k), \omega). \tag{9.1.34}
\]
Proof. Applying definitions and properties of Clifford multiplication, we obtain
\begin{align}
\Bigl\langle \int_0^t \Upsilon(s, \omega)\, d\Upsilon(s, \omega) \Bigr\rangle_n
&= \lim_{\substack{0=t_0<\cdots<t_M=t\\ |t_i-t_{i-1}|\to 0}} \sum_{k=1}^{M} \sum_{i\subset[n]} (-1)^{\varphi(i,\, [n]\setminus i)}\, \upsilon_i(t_k - 0, \omega) \bigl( \upsilon_{[n]\setminus i}(t_k, \omega) - \upsilon_{[n]\setminus i}(t_{k-1}, \omega) \bigr) e_{[n]} \notag\\
&= \sum_{i\subset[n]} (-1)^{\varphi(i,\, [n]\setminus i)} \lim_{\substack{0=t_0<\cdots<t_M=t\\ |t_i-t_{i-1}|\to 0}} \sum_{k=1}^{M} \upsilon_i(t_k - 0, \omega) \bigl( \upsilon_{[n]\setminus i}(t_k, \omega) - \upsilon_{[n]\setminus i}(t_{k-1}, \omega) \bigr) e_{[n]} \notag\\
&= \sum_{i\subset[n]} (-1)^{\varphi(i,\, [n]\setminus i)} \int_0^t \upsilon_i(s, \omega)\, d\upsilon_{[n]\setminus i}(s, \omega)\, e_{[n]} \notag\\
&= \sum_{i\subset[n]} (-1)^{\varphi([n]\setminus i,\, i)} \int_0^t \upsilon_{[n]\setminus i}(s, \omega)\, d\upsilon_i(s, \omega)\, e_{[n]} \notag\\
&= \sum_{i\subset[n]} (-1)^{\varphi([n]\setminus i,\, i)} \sum_{k=1}^{\upsilon_i(t,\omega)} \upsilon_{[n]\setminus i}(\tau_i(k), \omega)\, e_{[n]}. \tag{9.1.35}
\end{align}
Theorem 9.1.11. Writing $\Upsilon^{(m)}(t, \omega) = \sum_{i\subset[n]} \upsilon^{(m)}_i(t, \omega)\, e_i$, where $m \geq 1$, we obtain
the following recurrence:
\[
\upsilon^{(m)}_i(t, \omega) =
\begin{cases}
\upsilon_i(t, \omega), & \text{when } m = 1,\\[1ex]
\displaystyle \sum_{j\subset[n]} (-1)^{\varphi(j,\, j\triangle i)} \sum_{k=1}^{\upsilon_{j\triangle i}(t,\omega)} \upsilon^{(m-1)}_j(\tau_{j\triangle i}(k), \omega), & \text{when } m \geq 2.
\end{cases} \tag{9.1.36}
\]
Proof. The case $m = 1$ is established by definition. When $m \geq 2$ we have
\begin{align}
\sum_{i\subset[n]} \upsilon^{(m)}_i(t, \omega)\, e_i = \Upsilon^{(m)}(t, \omega) &= \int_0^t \Upsilon^{(m-1)}(s, \omega)\, d\Upsilon(s, \omega) \notag\\
&= \lim_{\substack{0=t_0<\cdots<t_M=t\\ |t_i-t_{i-1}|\to 0}} \sum_{k=1}^{M} \Bigl( \sum_{i\subset[n]} \upsilon^{(m-1)}_i(t_k - 0, \omega)\, e_i \Bigr) \Bigl( \sum_{j\subset[n]} \bigl( \upsilon_j(t_k, \omega) - \upsilon_j(t_{k-1}, \omega) \bigr) e_j \Bigr) \notag\\
&= \lim_{\substack{0=t_0<\cdots<t_M=t\\ |t_i-t_{i-1}|\to 0}} \sum_{k=1}^{M} \sum_{i,j\subset[n]} (-1)^{\varphi(j,\, j\triangle i)}\, \upsilon^{(m-1)}_j(t_k - 0, \omega) \bigl( \upsilon_{j\triangle i}(t_k, \omega) - \upsilon_{j\triangle i}(t_{k-1}, \omega) \bigr) e_i \notag\\
&= \sum_{i,j\subset[n]} (-1)^{\varphi(j,\, j\triangle i)} \int_0^t \upsilon^{(m-1)}_j(s, \omega)\, d\upsilon_{j\triangle i}(s, \omega)\, e_i \notag\\
&= \sum_{i,j\subset[n]} (-1)^{\varphi(j,\, j\triangle i)} \sum_{k=1}^{\upsilon_{j\triangle i}(t,\omega)} \upsilon^{(m-1)}_j(\tau_{j\triangle i}(k), \omega)\, e_i, \notag
\end{align}
and comparing coefficients of $e_i$ yields
\[
\upsilon^{(m)}_i(t, \omega) = \sum_{j\subset[n]} (-1)^{\varphi(j,\, j\triangle i)} \sum_{k=1}^{\upsilon_{j\triangle i}(t,\omega)} \upsilon^{(m-1)}_j(\tau_{j\triangle i}(k), \omega). \tag{9.1.37}
\]
9.2 EXAMPLES: COMPLEX NUMBERS AND QUATERNIONS
We now compute a few examples.
Example 9.2.1 (The Complex Poisson Process). Let $\Upsilon(t, \omega)$ denote the
Clifford-algebraic Poisson process in $L^2(\Omega) \otimes C\ell_{0,1}$. We compute the integral
\begin{align}
\int_0^t \Upsilon(s, \omega)\, d\Upsilon(s, \omega) &= \lim_{\substack{0=t_0<\cdots<t_M=t\\ |t_i-t_{i-1}|\to 0}} \sum_{k=1}^{M} \Upsilon(t_k - 0, \omega)\, \bigl(\Upsilon(t_k, \omega) - \Upsilon(t_{k-1}, \omega)\bigr) \notag\\
&= \lim \sum_{k=1}^{M} \bigl( \upsilon_\emptyset(t_k - 0, \omega) + \upsilon_1(t_k - 0, \omega)\, e_1 \bigr) \notag\\
&\qquad \times \bigl( \upsilon_\emptyset(t_k, \omega) - \upsilon_\emptyset(t_{k-1}, \omega) + (\upsilon_1(t_k, \omega) - \upsilon_1(t_{k-1}, \omega))\, e_1 \bigr) \notag\\
&= \lim \sum_{k=1}^{M} \Bigl[ \upsilon_\emptyset(t_k - 0, \omega)\, (\upsilon_\emptyset(t_k, \omega) - \upsilon_\emptyset(t_{k-1}, \omega)) - \upsilon_1(t_k - 0, \omega)\, (\upsilon_1(t_k, \omega) - \upsilon_1(t_{k-1}, \omega)) \notag\\
&\qquad + \upsilon_\emptyset(t_k - 0, \omega)\, (\upsilon_1(t_k, \omega) - \upsilon_1(t_{k-1}, \omega))\, e_1 + \upsilon_1(t_k - 0, \omega)\, (\upsilon_\emptyset(t_k, \omega) - \upsilon_\emptyset(t_{k-1}, \omega))\, e_1 \Bigr] \notag\\
&= \int_0^t \upsilon_\emptyset(s, \omega)\, d\upsilon_\emptyset(s, \omega) - \int_0^t \upsilon_1(s, \omega)\, d\upsilon_1(s, \omega) \notag\\
&\qquad + \Bigl( \int_0^t \upsilon_\emptyset(s, \omega)\, d\upsilon_1(s, \omega) + \int_0^t \upsilon_1(s, \omega)\, d\upsilon_\emptyset(s, \omega) \Bigr) e_1 \notag\\
&= \frac{1}{2}\, \upsilon_\emptyset(t, \omega)^{(2)} - \frac{1}{2}\, \upsilon_1(t, \omega)^{(2)} + \Bigl( \int_0^t \upsilon_\emptyset(s, \omega)\, d\upsilon_1(s, \omega) + \int_0^t \upsilon_1(s, \omega)\, d\upsilon_\emptyset(s, \omega) \Bigr) e_1. \tag{9.2.1}
\end{align}
Letting $\tau_i(k) \equiv$ time of the $k$th jump in $\upsilon_i(t, \omega)$, we see
\begin{align}
\int_0^t \Upsilon(s, \omega)\, d\Upsilon(s, \omega) &= \frac{1}{2}\, \upsilon_\emptyset(t, \omega)^{(2)} - \frac{1}{2}\, \upsilon_1(t, \omega)^{(2)} \notag\\
&\qquad + e_1 \Bigl( \sum_{k=1}^{\upsilon_1(t,\omega)} \upsilon_\emptyset(\tau_1(k), \omega) + \sum_{k=1}^{\upsilon_\emptyset(t,\omega)} \upsilon_1(\tau_\emptyset(k), \omega) \Bigr). \tag{9.2.2}
\end{align}
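As a sanity check on (9.2.2), one can simulate the two component processes of the complex case, evaluate the Stieltjes sum $\sum_k \Upsilon(\tau_k - 0)\,\Delta\Upsilon(\tau_k)$ directly using $e_1^2 = -1$, and compare with the closed form. A Monte-Carlo sketch (all function and variable names are illustrative, not the author's):

```python
import random

random.seed(0)

def jump_times(rate, t):
    """Jump times of a rate-`rate` Poisson process on [0, t]."""
    s, out = 0.0, []
    while True:
        s += random.expovariate(rate)
        if s > t:
            return out
        out.append(s)

t = 5.0
j0, j1 = jump_times(1.0, t), jump_times(1.0, t)   # upsilon_empty, upsilon_1

def val(jumps, s):
    """Left limit upsilon(s - 0) of the counting process."""
    return sum(1 for tau in jumps if tau < s)

# Direct evaluation: sum over all jumps of Upsilon(tau - 0) * (jump increment)
events = [(tau, 0) for tau in j0] + [(tau, 1) for tau in j1]
direct = [0.0, 0.0]           # scalar part, e1 part
for tau, comp in sorted(events):
    u0, u1 = val(j0, tau), val(j1, tau)
    if comp == 0:             # increment dUpsilon = d(upsilon_empty), a scalar
        direct[0] += u0
        direct[1] += u1
    else:                     # increment dUpsilon = d(upsilon_1) e1; e1*e1 = -1
        direct[0] -= u1
        direct[1] += u0

# Closed form (9.2.2)
L0, L1 = len(j0), len(j1)
closed_scalar = L0 * (L0 - 1) / 2 - L1 * (L1 - 1) / 2
closed_e1 = sum(val(j0, tau) for tau in j1) + sum(val(j1, tau) for tau in j0)
assert abs(direct[0] - closed_scalar) < 1e-9
assert abs(direct[1] - closed_e1) < 1e-9
```

Since the jump times of the two components are almost surely distinct, evaluating $\upsilon_\emptyset$ and $\upsilon_1$ at left limits agrees with (9.2.2).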
Example 9.2.2 (The Quaternionic Poisson Process). Let $\Upsilon(t, \omega)$ denote the
Clifford-algebraic Poisson process in $L^2(\Omega) \otimes C\ell_{0,2}$. We compute the integral
\begin{align}
\int_0^t \Upsilon(s, \omega)\, d\Upsilon(s, \omega) &= \lim_{\substack{0=t_0<\cdots<t_M=t\\ |t_i-t_{i-1}|\to 0}} \sum_{k=1}^{M} \Upsilon(t_k - 0, \omega)\, \bigl(\Upsilon(t_k, \omega) - \Upsilon(t_{k-1}, \omega)\bigr) \notag\\
&= \lim \sum_{k=1}^{M} \Bigl( \sum_{i\subset\{1,2\}} \upsilon_i(t_k - 0, \omega)\, e_i \Bigr) \Bigl( \sum_{i\subset\{1,2\}} \bigl( \upsilon_i(t_k, \omega) - \upsilon_i(t_{k-1}, \omega) \bigr) e_i \Bigr) \notag\\
&= \frac{1}{2} \bigl( \upsilon_\emptyset(t, \omega)^{(2)} - \upsilon_1(t, \omega)^{(2)} - \upsilon_2(t, \omega)^{(2)} - \upsilon_{12}(t, \omega)^{(2)} \bigr) \notag\\
&\quad + e_1 \Bigl( \int_0^t \upsilon_\emptyset\, d\upsilon_1 + \int_0^t \upsilon_1\, d\upsilon_\emptyset - \int_0^t \upsilon_{12}\, d\upsilon_2 + \int_0^t \upsilon_2\, d\upsilon_{12} \Bigr) \notag\\
&\quad + e_2 \Bigl( \int_0^t \upsilon_\emptyset\, d\upsilon_2 + \int_0^t \upsilon_2\, d\upsilon_\emptyset + \int_0^t \upsilon_{12}\, d\upsilon_1 - \int_0^t \upsilon_1\, d\upsilon_{12} \Bigr) \notag\\
&\quad + e_{12} \Bigl( \int_0^t \upsilon_\emptyset\, d\upsilon_{12} + \int_0^t \upsilon_{12}\, d\upsilon_\emptyset + \int_0^t \upsilon_1\, d\upsilon_2 - \int_0^t \upsilon_2\, d\upsilon_1 \Bigr). \tag{9.2.3}
\end{align}
Letting $\tau_i(k) \equiv$ time of the $k$th jump in $\upsilon_i(t, \omega)$, we see
\begin{align}
\int_0^t \Upsilon(s, \omega)\, d\Upsilon(s, \omega) &= \frac{1}{2} \bigl( \upsilon_\emptyset(t, \omega)^{(2)} - \upsilon_1(t, \omega)^{(2)} - \upsilon_2(t, \omega)^{(2)} - \upsilon_{12}(t, \omega)^{(2)} \bigr) \notag\\
&\quad + e_1 \Bigl( \sum_{k=1}^{\upsilon_1(t,\omega)} \upsilon_\emptyset(\tau_1(k), \omega) + \sum_{k=1}^{\upsilon_\emptyset(t,\omega)} \upsilon_1(\tau_\emptyset(k), \omega) + \sum_{k=1}^{\upsilon_{12}(t,\omega)} \upsilon_2(\tau_{12}(k), \omega) - \sum_{k=1}^{\upsilon_2(t,\omega)} \upsilon_{12}(\tau_2(k), \omega) \Bigr) \notag\\
&\quad + e_2 \Bigl( \sum_{k=1}^{\upsilon_2(t,\omega)} \upsilon_\emptyset(\tau_2(k), \omega) + \sum_{k=1}^{\upsilon_\emptyset(t,\omega)} \upsilon_2(\tau_\emptyset(k), \omega) + \sum_{k=1}^{\upsilon_1(t,\omega)} \upsilon_{12}(\tau_1(k), \omega) - \sum_{k=1}^{\upsilon_{12}(t,\omega)} \upsilon_1(\tau_{12}(k), \omega) \Bigr) \notag\\
&\quad + e_{12} \Bigl( \sum_{k=1}^{\upsilon_{12}(t,\omega)} \upsilon_\emptyset(\tau_{12}(k), \omega) + \sum_{k=1}^{\upsilon_\emptyset(t,\omega)} \upsilon_{12}(\tau_\emptyset(k), \omega) + \sum_{k=1}^{\upsilon_2(t,\omega)} \upsilon_1(\tau_2(k), \omega) - \sum_{k=1}^{\upsilon_1(t,\omega)} \upsilon_2(\tau_1(k), \omega) \Bigr). \tag{9.2.4}
\end{align}
CHAPTER 10
CLIFFORD-ALGEBRAIC MULTIPLE STOCHASTIC
INTEGRALS
Letting $Y(t, \omega)$ be an $L^2$-stochastic process with independent increments, one
obtains a finitely additive $L^2$-valued measure on the field of elementary subsets
of $\mathbb{R}^+$ via $Y([s, t), \omega) = Y(t, \omega) - Y(s, \omega)$. Engel [10] showed that under specific
regularity conditions this measure can be extended to a norm countably additive
$L^2$-valued measure on the $\sigma$-field of Borel subsets of $\mathbb{R}^+$. Engel further showed that
this measure defines a finitely additive $L^2$-valued measure on the product space
$(\mathbb{R}^+)^m$, $m \geq 2$, and that this product measure can be extended uniquely to a countably
additive $L^2$-valued product measure defined on the $\sigma$-field of all Borel subsets
of $(\mathbb{R}^+)^m$, $m \geq 2$.
In this chapter we extend Engel's results to obtain a countably additive
$L^2(\Omega) \otimes C\ell_{p,q}$-valued measure defined on the $\sigma$-field of Borel subsets of $(\mathbb{R}^+)^m$, $m \geq 2$.
Moreover, we provide a Clifford-algebraic graph-theoretic construction that unifies
the two parts of the current work by expressing the multiple stochastic integral as
the limit in mean of a sequence of Berezin integrals of traces of Clifford adjacency
matrices. This approach is distinct from the combinatorial approach employed by
Rota and Wallstrom [19] and Anshelevich [1] while achieving similar results.
10.1 THE CLIFFORD-ALGEBRAIC STOCHASTIC INTEGRAL
OVER [0, T]
Throughout this chapter, we shall assume we are given a probability space
$(\Omega, \mathcal{F}, \Pr)$. Let us denote by $\Xi(t, \omega) = \sum_{i\subset[n]} \xi_i(t, \omega)\, e_i$ a Clifford-algebraic stochastic
process, i.e. a process on $L^2(\Omega) \otimes C\ell_{p,q}$.
Further, we assume there exists an increasing family of $\sigma$-fields $\mathcal{F}_t \subset \mathcal{F}$ with
respect to which each component of $\Xi(t, \omega)$ is adapted; that is, for all $t \geq 0$, $\xi_i(t, \omega)$
is $\mathcal{F}_t$-measurable for every $i \subset [n]$.
Definition 10.1.1. We define the $L^2$-norm of $\Xi$ as $E(\|\Xi\|^2)^{1/2}$, where $\|\Xi\|$ denotes
the Clifford-algebraic inner-product norm defined in Lemma 7.1.10.
Definition 10.1.2. We say a sequence of Clifford-algebraic random variables
$\Xi_k(\omega)$, $k = 1, 2, \ldots$, converges to $\Xi(\omega)$ in $L^2(\Omega)$-mean if
\[
\lim_{k\to\infty} E(\|\Xi_k(\omega) - \Xi(\omega)\|^2) = 0. \tag{10.1.1}
\]
This convergence is denoted by $\operatorname{L.I.M.}_{k\to\infty} \Xi_k(\omega) = \Xi(\omega)$.
We further observe that linearity of expectation allows us to write
\[
E(\|\Xi(t, \omega)\|^2) = E\Bigl( \sum_{i\subset[n]} |\xi_i(t, \omega)|^2 \Bigr) = \sum_{i\subset[n]} E(|\xi_i(t, \omega)|^2) \tag{10.1.2}
\]
for any $\Xi(t, \omega) = \sum_{i\subset[n]} \xi_i(t, \omega)\, e_i \in L^2(\Omega) \otimes C\ell_{p,q}$.
Definition 10.1.3. Given a Clifford-algebraic stochastic process $\Xi(t, \omega)$, we define
the Clifford-algebraic stochastic integral of $\Xi$ as
\[
\int_0^T d\Xi(s) = \operatorname{L.I.M.}_{M\to\infty} \sum_{k=1}^{M} \Xi(I_k), \tag{10.1.3}
\]
provided the limit in mean on the right-hand side exists, where $\bigcup_{k=1}^{M} I_k = [0, T]$ and
$I_j \cap I_k = \emptyset$ for $j \neq k$.
Thus,
\begin{align}
\int_0^T d\Xi(s) = \operatorname{L.I.M.}_{M\to\infty} \sum_{k=1}^{M} \sum_{i\subset[n]} \xi_i(I_k)\, e_i
&= \operatorname{L.I.M.}_{M\to\infty} \sum_{i\subset[n]} \sum_{k=1}^{M} \xi_i(I_k)\, e_i \notag\\
&= \sum_{i\subset[n]} \Bigl( \operatorname{L.I.M.}_{M\to\infty} \sum_{k=1}^{M} \xi_i(I_k) \Bigr) e_i = \sum_{i\subset[n]} \int_0^T d\xi_i(s)\, e_i. \tag{10.1.4}
\end{align}
10.2 $L^2(\Omega) \otimes C\ell_{p,q}$-VALUED MEASURES ON THE $m$-DIMENSIONAL
SIMPLEX
Given a Clifford-algebraic stochastic process $\Xi(t, \omega)$, we wish to express
\[
\int \cdots \int_{0 \leq t_1 < t_2 < \cdots < t_m \leq t} d\Xi(t_1, \omega) \cdots d\Xi(t_m, \omega) \tag{10.2.1}
\]
as the limit in mean of sums of the form
\[
\sum_{1 \leq i_1 < \cdots < i_m \leq q} \Xi(I_{i_1})\, \Xi(I_{i_2}) \cdots \Xi(I_{i_m}). \tag{10.2.2}
\]
Given an interval $I = [s, t)$ and a stochastic process $X(t)$, we shall adhere to the
convention $X(I) \equiv X(t) - X(s)$.
Let $\{e_i\}_{0 \leq i \leq m-1}$ denote the standard orthonormal basis for $(\mathbb{R}^+)^m$. Given fixed
$t \in \mathbb{R}^+$, let us consider the set of points
\[
S_0 = \Bigl\{ x_i \in (\mathbb{R}^+)^m : x_i = \sum_{k=i}^{m-1} t\, e_k \Bigr\}, \tag{10.2.3}
\]
where $0 \leq i \leq m$, so that $x_m = 0$. We then obtain the $m$-dimensional simplex
$S = \{(t_1, t_2, \ldots, t_m) : 0 \leq t_1 < t_2 < \cdots < t_m \leq t\}$ as the convex hull of the set $S_0$.
Let $\mathcal{E}(S)$ denote the Borel $\sigma$-field of the set
\[
S = \{(t_1, t_2, \ldots, t_m) : 0 \leq t_1 < t_2 < \cdots < t_m \leq t\}. \tag{10.2.4}
\]
$\mathcal{E}(S)$ is the smallest $\sigma$-field containing all elementary sets of the form
\[
E = \bigcup_{1 \leq i_1 < \cdots < i_m \leq q} \chi_{i_1\cdots i_m}\, I_{i_1} \times I_{i_2} \times \cdots \times I_{i_m}, \tag{10.2.5}
\]
where $\{I_1, \ldots, I_q\}$ is some partition of $[0, t]$ into disjoint intervals depending on $E$,
for which $I_k < I_{k+1}$, $k = 1, 2, \ldots, q - 1$, and
\[
\chi_{i_1\cdots i_m} =
\begin{cases}
0 & \text{if } I_{i_1} \times \cdots \times I_{i_m} \text{ is not included in the union},\\
1 & \text{if } I_{i_1} \times \cdots \times I_{i_m} \text{ is included in the union}.
\end{cases} \tag{10.2.6}
\]
We associate to each elementary set $E$ of the form (10.2.5) an element of
$L^2(\Omega) \otimes C\ell_{p,q}$ denoted $\Xi(E)$:
\[
\Xi(E) = \sum_{j\subset[n]} \sum_{1 \leq i_1 < \cdots < i_m \leq q} \chi_{i_1\cdots i_m} \sum_{\substack{j_1,\ldots,j_m\subset[n]\\ j_1\triangle\cdots\triangle j_m = j}} \Bigl( (-1)^{\varphi(j_1,\ldots,j_m)}\, \xi_{1,j_1}(I_{i_1}) \cdots \xi_{m,j_m}(I_{i_m}) \Bigr) e_j, \tag{10.2.7}
\]
where the $\xi_{k,j}(I_{i_k})$ are $L^2(\Omega)$-valued random variables. Simultaneously we have associated
to $E$ an element of $L^2(\Omega)$ of the form
\[
Y(E) = \sum_{1 \leq i_1 < \cdots < i_m \leq q} \chi_{i_1\cdots i_m} \sum_{\substack{j_1,\ldots,j_m\subset[n]\\ j_1\triangle\cdots\triangle j_m = j}} \Bigl( (-1)^{\varphi(j_1,\ldots,j_m)}\, \xi_{1,j_1}(I_{i_1}) \cdots \xi_{m,j_m}(I_{i_m}) \Bigr). \tag{10.2.8}
\]
The following results appear in [10]. We extend them to the Clifford algebra
$C\ell_{p,q}$.
Let $X_1(t), \ldots, X_m(t)$ be a system of $m$ $L^2$-valued stochastic processes. We
say that the system satisfies the regularity conditions (R) if the following conditions
hold:
(R1) The function $E(X_k(t)) = m_k(t)$ is a continuous function of bounded variation
on $[0, t]$ for $k = 1, 2, \ldots, m$.
(R2) Setting $\bar{X}_k(t) = X_k(t) - m_k(t)$, $E(|\bar{X}_k(t)|^2) = \mu_k(t)$ is a continuous monotone
increasing function on $[0, t]$.
(R3) If $I_1, \ldots, I_q$ is any set of disjoint intervals contained in $[0, t]$ and $j_1, \ldots, j_q$
is any sequence of integers where each $j_k$ is between 1 and $m$ inclusively, then
$\{X_{j_1}(I_1), \ldots, X_{j_q}(I_q)\}$ forms an independent system. Note this implies $X_k(t)$ is a
L\'evy process for each $k$.
(R4) If $I \subset [0, t]$ is any interval and $1 \leq j_1 < \cdots < j_k \leq m$ is any sequence of
integers between 1 and $m$ inclusively, then $E(|X_{j_1}(I)\, X_{j_2}(I) \cdots X_{j_k}(I)|^2) < \infty$ and
$E(|X_{j_1}(I) \cdots X_{j_k}(I)|^2) \to 0$ as $m(I) \to 0$.
Theorem 10.2.1 (Engel 1). If the system $X_1(t), \ldots, X_m(t)$ satisfies conditions
(R1)--(R3) and $Y$ is defined as in (10.2.8), then there exists a countably additive
positive real-valued measure $\lambda$ defined on $\mathcal{E}(S)$ for which
\[
\|Y(E)\|^2 \leq \lambda(E). \tag{10.2.9}
\]
Remark 10.2.2. The norm used here is the $L^2$ norm; i.e.,
\[
\|Y(E)\|^2 = E\Bigl( \Bigl| \sum_{1 \leq i_1 < \cdots < i_m \leq q} \chi_{i_1\cdots i_m}\, X_1(I_{i_1}) \cdots X_m(I_{i_m}) \Bigr|^2 \Bigr). \tag{10.2.10}
\]
Theorem 10.2.3 (Engel 2). Let $X_1(t), \ldots, X_m(t)$ and $Y$ be defined as in Theorem
10.2.1. Then $Y$ can be extended to a countably additive $L^2(\Omega)$-valued measure defined
on $\mathcal{E}(S)$, the Borel $\sigma$-field of $S$.
For any permutation $\pi \in S_m$, this is also possible on the set
\[
S^\pi = \{(t_1, \ldots, t_m) : 0 \leq t_{\pi(1)} < \cdots < t_{\pi(m)} \leq t\}. \tag{10.2.11}
\]
Thus, in light of Theorem 10.2.3, $Y$ can be defined on the field $\mathcal{F}^m_0$ of all elementary
subsets of $[0, t]^m$ by
\[
Y(E) = \sum_{(i_1,\ldots,i_m)} \chi_{i_1\cdots i_m}\, X_1(I_{i_1}) \cdots X_m(I_{i_m}) \tag{10.2.12}
\]
whenever
\[
E = \bigcup_{(i_1,\ldots,i_m)} \chi_{i_1\cdots i_m}\, I_{i_1} \times \cdots \times I_{i_m}. \tag{10.2.13}
\]
Theorem 10.2.4 (Engel 3). Let $X_1(t), \ldots, X_m(t)$ be a system of $m \geq 1$ stochastic
processes satisfying the regularity conditions (R1)--(R4). Then the finitely additive
$L^2(\Omega)$-valued measure $Y$ defined by (10.2.12) on the field $\mathcal{F}^m_0$ of elementary subsets
of $[0, t]^m$ can be extended to a countably additive $L^2(\Omega)$-valued measure $Y$ defined on
the Borel $\sigma$-field $\mathcal{F}^m$ generated by $\mathcal{F}^m_0$.
We now extend Engel's results to stochastic processes on $L^2(\Omega) \otimes C\ell_{p,q}$. We
begin with an essential lemma.
Lemma 10.2.5. Fix an arbitrary set $i \subset [n]$ and consider the set
\[
J = \{ (j_1, j_2, \ldots, j_m) : j_k \subset [n],\ j_1 \triangle \cdots \triangle j_m = i \}. \tag{10.2.14}
\]
Then
\[
|J| = 2^{n(m-1)}. \tag{10.2.15}
\]
Proof. We begin by noting that
\[
|\{ (j_1, j_2, \ldots, j_m) : j_k \subset [n] \}| = (2^n)^m = 2^{nm}. \tag{10.2.16}
\]
To satisfy the condition $j_1 \triangle \cdots \triangle j_m = i$, we need only restrict one of the sets $j_k$.
Without loss of generality, let us assume the first $m - 1$ sets are chosen arbitrarily;
then
\[
J = \{ (j_1, \ldots, j_m) : j_k \subset [n],\ j_m = i \triangle (j_1 \triangle \cdots \triangle j_{m-1}) \}. \tag{10.2.17}
\]
Thus $|J| = (2^n)^{m-1} = 2^{n(m-1)}$.
Let $\Xi_1(t), \ldots, \Xi_m(t)$ be a system of $m$ Clifford-algebraic stochastic processes,
where each $\Xi_k(t)$ can be written as
\[
\Xi_k(t) = \sum_{i\subset[n]} \xi_{k,i}(t)\, e_i, \tag{10.2.18}
\]
where for each $1 \leq k \leq m$ and for each $i \subset [n]$, $\xi_{k,i}(t)$ is a real-valued stochastic
process. We say the system $\{\Xi_\ell(t)\}_{1 \leq \ell \leq m}$ satisfies the regularity conditions (CR) if
the system $\{\xi_{\ell,i}(t) : 1 \leq \ell \leq m,\ i \subset [n]\}$ of $m 2^n$ real-valued stochastic processes
satisfies the following conditions:
(CR1) The function $E(\xi_{k,i}(t)) = m_{k,i}(t)$ is a continuous function of bounded variation
on $[0, t]$ for $k = 1, 2, \ldots, m$ and $i \subset [n]$.
(CR2) Setting $\bar{\xi}_{k,i}(t) = \xi_{k,i}(t) - m_{k,i}(t)$, $E(|\bar{\xi}_{k,i}(t)|^2) = \mu_{k,i}(t)$ is a continuous
monotone increasing function on $[0, t]$ for each $i \subset [n]$.
(CR3) If $I_1, \ldots, I_q$ is any set of disjoint intervals contained in $[0, t]$ and $j_1, \ldots, j_q$
is any sequence of integers where each $j_k$ is between 1 and $m$ inclusively, then
$\{\xi_{j_1,i_1}(I_1), \ldots, \xi_{j_q,i_q}(I_q)\}$ forms an independent system. Note this implies $\xi_{k,i_k}(t)$
is a L\'evy process for each $k$.
(CR4) If $I \subset [0, t]$ is any interval and $1 \leq j_1 < \cdots < j_k \leq m$ is any sequence of
integers between 1 and $m$ inclusively, then $E(|\xi_{j_1,i_1}(I)\, \xi_{j_2,i_2}(I) \cdots \xi_{j_k,i_k}(I)|^2) < \infty$
and $E(|\xi_{j_1,i_1}(I) \cdots \xi_{j_k,i_k}(I)|^2) \to 0$ as $m(I) \to 0$.
Remark 10.2.6. We observe that the regularity conditions (CR1)-(CR4) are equiva-
lent to the regularity conditions (R1)-(R4) when n = 0. We therefore recover Engel’s
results in the case of real-valued stochastic processes.
Theorem 10.2.7. Let $\Xi_1(t), \ldots, \Xi_m(t)$ be a system of $m$ Clifford-algebraic
stochastic processes satisfying the regularity conditions (CR1)--(CR3), and let $\Psi(E)$
be defined as
\[
\Psi(E) = \sum_{1 \leq i_1 < \cdots < i_m \leq q} \chi_{i_1\cdots i_m}\, \Xi_1(I_{i_1}) \cdots \Xi_m(I_{i_m}). \tag{10.2.19}
\]
Then there exists a countably additive positive real-valued measure $\nu$ defined on
$\mathcal{E}(S)$ for which
\[
E(\|\Psi(E)\|^2) \leq \nu(E) \tag{10.2.20}
\]
for all elementary subsets $E \subset S$, where $\|\Psi(E)\|$ denotes the inner-product norm of
$\Psi(E)$.
Proof. Given $\Psi(E) \in L^2(\Omega) \otimes C\ell_{p,q}$ as defined in the theorem, we note that $\Psi(E)$
can be written as
\[
\Psi(E) = \sum_{i\subset[n]} \psi_i(E)\, e_i, \tag{10.2.21}
\]
where $\psi_i(E) \in L^2(\Omega)$ is a real-valued random variable. It then follows that
\[
\Psi(E) = \sum_{1 \leq i_1 < \cdots < i_m \leq q} \chi_{i_1 i_2\cdots i_m} \Bigl( \sum_{j_1\subset[n]} \xi_{1,j_1}(I_{i_1})\, e_{j_1} \Bigr) \Bigl( \sum_{j_2\subset[n]} \xi_{2,j_2}(I_{i_2})\, e_{j_2} \Bigr) \cdots \Bigl( \sum_{j_m\subset[n]} \xi_{m,j_m}(I_{i_m})\, e_{j_m} \Bigr). \tag{10.2.22}
\]
From this we see
\begin{align}
\psi_i(E) &= \sum_{1 \leq i_1 < \cdots < i_m \leq q} \chi_{i_1 i_2\cdots i_m} \sum_{\substack{j_1,\ldots,j_m\subset[n]\\ j_1\triangle\cdots\triangle j_m = i}} (-1)^{\varphi(j_1,\ldots,j_m)}\, \xi_{1,j_1}(I_{i_1}) \cdots \xi_{m,j_m}(I_{i_m}) \notag\\
&= \sum_{\substack{j_1,\ldots,j_m\subset[n]\\ j_1\triangle\cdots\triangle j_m = i}} (-1)^{\varphi(j_1,\ldots,j_m)} \sum_{1 \leq i_1 < \cdots < i_m \leq q} \chi_{i_1 i_2\cdots i_m}\, \xi_{1,j_1}(I_{i_1}) \cdots \xi_{m,j_m}(I_{i_m}) \notag\\
&= \sum_{\substack{j_1,\ldots,j_m\subset[n]\\ j_1\triangle\cdots\triangle j_m = i}} (-1)^{\varphi(j_1,\ldots,j_m)}\, Y_{j_1,\ldots,j_m}(E), \tag{10.2.23}
\end{align}
where $Y_{j_1,\ldots,j_m}(E)$ is a product of $m$ $L^2(\Omega)$-valued stochastic processes satisfying the
regularity conditions (R1)--(R3). Thus there exists a real-valued positive measure
$\lambda_{j_1,\ldots,j_m}$ defined on $\mathcal{E}(S)$ such that
\[
\|Y_{j_1,\ldots,j_m}(E)\|^2 \leq \lambda_{j_1,\ldots,j_m}(E). \tag{10.2.24}
\]
Now applying Lemma 10.2.5, we have
\begin{align}
E(|\psi_i(E)|^2) &= E\Bigl( \Bigl| \sum_{\substack{j_1,\ldots,j_m\subset[n]\\ j_1\triangle\cdots\triangle j_m = i}} (-1)^{\varphi(j_1,\ldots,j_m)}\, Y_{j_1,\ldots,j_m}(E) \Bigr|^2 \Bigr) \notag\\
&\leq E\Bigl( \Bigl( \sum_{\substack{j_1,\ldots,j_m\subset[n]\\ j_1\triangle\cdots\triangle j_m = i}} |Y_{j_1,\ldots,j_m}(E)| \Bigr)^2 \Bigr) \notag\\
&\leq 2^{2n(m-1)} \max_{j_1\triangle\cdots\triangle j_m = i} E\bigl( |Y_{j_1,\ldots,j_m}(E)|^2 \bigr) \leq 2^{2n(m-1)} \max_{j_1\triangle\cdots\triangle j_m = i} \lambda_{j_1,\ldots,j_m}(E). \tag{10.2.25}
\end{align}
This holds for each $i \subset [n]$, so taking
\[
\nu_i(E) = 2^{2n(m-1)} \max_{j_1\triangle\cdots\triangle j_m = i} \lambda_{j_1,\ldots,j_m}(E) \tag{10.2.26}
\]
and summing over $i \subset [n]$, we obtain the desired real-valued positive measure:
\[
E(\|\Psi(E)\|^2) = E\Bigl( \sum_{i\subset[n]} |\psi_i(E)|^2 \Bigr) = \sum_{i\subset[n]} E(|\psi_i(E)|^2) \leq \sum_{i\subset[n]} \nu_i(E) \equiv \nu(E). \tag{10.2.27}
\]
Theorem 10.2.8. Let $\Xi_1(t), \ldots, \Xi_m(t)$ be a system of Clifford-algebraic stochastic
processes satisfying (CR1)--(CR3), and let $\Psi$ be defined as in (10.2.19). Then $\Psi$ can
be extended to a countably additive $L^2(\Omega) \otimes C\ell_{p,q}$-valued measure defined on $\mathcal{E}(S)$,
the Borel $\sigma$-field of $S$.
Proof. As in the proof of Theorem 10.2.7, we may write for each $i \subset [n]$
\[
\psi_i(E) = \sum_{\substack{j_1,\ldots,j_m\subset[n]\\ j_1\triangle\cdots\triangle j_m = i}} (-1)^{\varphi(j_1,\ldots,j_m)}\, Y_{j_1,\ldots,j_m}(E), \tag{10.2.28}
\]
where $Y_{j_1,\ldots,j_m}(E)$ is a product of $m$ real-valued stochastic processes satisfying the
regularity conditions (R1)--(R3), which can then be extended to a countably additive
$L^2(\Omega)$-valued measure on $\mathcal{E}(S)$. Summing over $i \subset [n]$, the result is obtained.
Given a system $\Xi_1(t), \ldots, \Xi_m(t)$ of $m$ Clifford-algebraic stochastic processes
satisfying regularity conditions (CR1)--(CR3), we can construct a countably additive
$L^2(\Omega) \otimes C\ell_{p,q}$-valued measure $\Psi$ defined on Borel subsets of $S$. Let us denote this
measure by
\[
\Psi(B) = \int \cdots \int_B d\Xi_1(t_1)\, d\Xi_2(t_2) \cdots d\Xi_m(t_m). \tag{10.2.29}
\]
For any permutation $\pi \in S_m$, this construction is also possible on the set $S^\pi =
\{(t_1, \ldots, t_m) : 0 \leq t_{\pi(1)} < \cdots < t_{\pi(m)} \leq t\}$. We are now able to define the finitely
additive $L^2(\Omega) \otimes C\ell_{p,q}$-valued measure $\Psi$ on the field $\mathcal{F}^m_0$ of all elementary subsets
of $[0, t]^m$ by
\[
\Psi(E) = \sum_{(i_1,\ldots,i_m)} \chi_{i_1\cdots i_m}\, \Xi_1(I_{i_1}) \cdots \Xi_m(I_{i_m}) \tag{10.2.30}
\]
whenever
\[
E = \bigcup_{(i_1,\ldots,i_m)} \chi_{i_1\cdots i_m}\, I_{i_1} \times \cdots \times I_{i_m}. \tag{10.2.31}
\]
Regularity condition (CR4) is needed to guarantee $\Psi(E) \in L^2(\Omega) \otimes C\ell_{p,q}$. It
is not obvious, however, that $\Psi$ can be extended to $\mathcal{F}^m$, the $\sigma$-field of Borel subsets
of $[0, t]^m$ generated by $\mathcal{F}^m_0$, in a countably additive way. Engel proved (cf. Theorem
10.2.4) that in the case of an $L^2(\Omega)$-valued stochastic process, this extension can
be accomplished uniquely. We can therefore apply Engel's result to extend the
Clifford-algebraic stochastic measure.
Theorem 10.2.9. Let $\Xi_1(t), \ldots, \Xi_m(t)$ be a system of $m \geq 1$ Clifford-algebraic
stochastic processes satisfying the regularity conditions (CR1)--(CR4). Then the
finitely additive $L^2(\Omega) \otimes C\ell_{p,q}$-valued measure $\Psi$ defined by (10.2.30) on the field $\mathcal{F}^m_0$
of elementary subsets of $[0, t]^m$ can be extended to a countably additive $L^2(\Omega) \otimes C\ell_{p,q}$-valued
measure $\Psi$ defined on the Borel $\sigma$-field $\mathcal{F}^m$ generated by $\mathcal{F}^m_0$.
Proof. Writing each $\Xi_k(t)$ in the system as
\[
\Xi_k(t) = \sum_{i\subset[n]} \xi_{k,i}(t)\, e_i, \tag{10.2.32}
\]
we have the system of $m 2^n$ real-valued stochastic processes
\[
\{ \xi_{k,i} : 1 \leq k \leq m,\ i \subset [n] \} \tag{10.2.33}
\]
which satisfy the regularity conditions (R1)--(R4). We then have
\[
\Psi(E) = \sum_{(i_1,\ldots,i_m)} \chi_{i_1\cdots i_m}\, \Xi_1(I_{i_1}) \cdots \Xi_m(I_{i_m})
= \sum_{(i_1,\ldots,i_m)} \chi_{i_1\cdots i_m} \Bigl( \sum_{j_1\subset[n]} \xi_{1,j_1}(I_{i_1})\, e_{j_1} \Bigr) \cdots \Bigl( \sum_{j_m\subset[n]} \xi_{m,j_m}(I_{i_m})\, e_{j_m} \Bigr). \tag{10.2.34}
\]
This implies that for each $\ell \subset [n]$ we have
\begin{align}
\psi_\ell &= \sum_{(i_1,\ldots,i_m)} \chi_{i_1\cdots i_m} \sum_{\substack{j_1,\ldots,j_m\subset[n]\\ j_1\triangle\cdots\triangle j_m = \ell}} (-1)^{\varphi(j_1,\ldots,j_m)}\, \xi_{1,j_1}(I_{i_1}) \cdots \xi_{m,j_m}(I_{i_m}) \notag\\
&= \sum_{\substack{j_1,\ldots,j_m\subset[n]\\ j_1\triangle\cdots\triangle j_m = \ell}} (-1)^{\varphi(j_1,\ldots,j_m)} \sum_{(i_1,\ldots,i_m)} \chi_{i_1\cdots i_m}\, \xi_{1,j_1}(I_{i_1}) \cdots \xi_{m,j_m}(I_{i_m}) \notag\\
&= \sum_{\substack{j_1,\ldots,j_m\subset[n]\\ j_1\triangle\cdots\triangle j_m = \ell}} (-1)^{\varphi(j_1,\ldots,j_m)}\, Y_{j_1,\ldots,j_m}(E), \tag{10.2.35}
\end{align}
where $Y_{j_1,\ldots,j_m}(E)$ is a finitely additive $L^2(\Omega)$-valued measure defined on the field
of elementary subsets of $[0, t]^m$, which can then be extended to a countably additive
$L^2(\Omega)$-valued measure $Y$ defined on the Borel $\sigma$-field $\mathcal{F}^m$ generated by $\mathcal{F}^m_0$. Summing
over $\ell \subset [n]$, we obtain the desired $L^2(\Omega) \otimes C\ell_{p,q}$-valued measure
\[
\Psi(E) = \sum_{\ell\subset[n]} \psi_\ell(E)\, e_\ell. \tag{10.2.36}
\]
Notation. From this point forward, we shall denote the multiple stochastic integral
of $\Xi(t, \omega)$ defined on the $m$-dimensional simplex by $\Xi^{(m)}(t, \omega)_s$.
10.3 THE MULTIPLE STOCHASTIC INTEGRAL ON THE SQUARE
[0, T]$^2$
Let $\Xi(t, \omega)$ be a Clifford-algebraic stochastic process satisfying regularity conditions
(CR1)--(CR4). We wish to compute the iterated stochastic integral $\Xi^{(2)}(t, \omega)$.
From the definition of the multiple stochastic integral we see
\[
\Xi^{(2)}(t, \omega) = \operatorname{L.I.M.}_{N\to\infty} \sum_{\substack{0=t_0<t_1<t_2=t\\ t_1\in\{t/N,\, 2t/N,\, \ldots\}}} \bigl[ \Xi([0, t_1), \omega)\, \Xi([t_1, t), \omega) + \Xi([t_1, t), \omega)\, \Xi([0, t_1), \omega) \bigr]. \tag{10.3.1}
\]
If the process $\Xi(t, \omega)$ is commutative, we can actually write
\[
\Xi^{(2)}(t, \omega) = 2 \times \operatorname{L.I.M.}_{N\to\infty} \sum_{\substack{0=t_0<t_1<t_2=t\\ t_1\in\{t/N,\, 2t/N,\, \ldots\}}} \Xi([0, t_1), \omega)\, \Xi([t_1, t), \omega) = 2 \times \Xi^{(2)}(t, \omega)_s. \tag{10.3.2}
\]
Theorem 10.3.1. Let $\Xi(t, \omega)$ be an $L^2(\Omega) \otimes C\ell_{p,q}$-valued stochastic process satisfying
regularity conditions (CR1)--(CR4). Then
\[
\Xi^{(2)}(t, \omega) = \sum_{i,j\subset[n]} \delta_2(i, j)\, \xi_{i,j}(t, \omega)\, e_{i\triangle j}, \tag{10.3.3}
\]
where $\xi_{i,j}(t, \omega)$ is the $L^2(\Omega)$-valued stochastic integral $\int_0^t\!\int_0^t d\xi_i(t_1, \omega)\, d\xi_j(t_2, \omega)$ on the
square $[0, t]^2$ and
\[
\delta_2(i, j) =
\begin{cases}
2 & \text{if } \varphi(i, j) \equiv \varphi(j, i) \equiv 0 \pmod 2,\\
-2 & \text{if } \varphi(i, j) \equiv \varphi(j, i) \equiv 1 \pmod 2,\\
0 & \text{otherwise}.
\end{cases} \tag{10.3.4}
\]
Proof. Expanding (10.3.1), we obtain
\begin{align}
\Xi^{(2)}(t, \omega) &= \operatorname{L.I.M.}_{N\to\infty} \sum_{\substack{0=t_0<t_1<t_2=t\\ t_1\in\{t/N,\ldots,\, t(N-1)/N\}}} \Bigl[ \Bigl( \sum_{i\subset[n]} \xi_i([0, t_1), \omega)\, e_i \Bigr) \Bigl( \sum_{j\subset[n]} \xi_j([t_1, t), \omega)\, e_j \Bigr) \notag\\
&\qquad\qquad + \Bigl( \sum_{i\subset[n]} \xi_i([t_1, t), \omega)\, e_i \Bigr) \Bigl( \sum_{j\subset[n]} \xi_j([0, t_1), \omega)\, e_j \Bigr) \Bigr] \notag\\
&= \operatorname{L.I.M.}_{N\to\infty} \sum_{\substack{0<t_1<t\\ t_1\in\{t/N,\ldots,\, t(N-1)/N\}}} \Bigl[ \sum_{i,j\subset[n]} (-1)^{\varphi(i,j)}\, \xi_i([0, t_1), \omega)\, \xi_j([t_1, t), \omega)\, e_{i\triangle j} \notag\\
&\qquad\qquad + \sum_{i,j\subset[n]} (-1)^{\varphi(i,j)}\, \xi_i([t_1, t), \omega)\, \xi_j([0, t_1), \omega)\, e_{i\triangle j} \Bigr] \notag\\
&= \operatorname{L.I.M.}_{N\to\infty} \sum_{\substack{0<t_1<t\\ t_1\in\{t/N,\ldots,\, t(N-1)/N\}}} \sum_{i,j\subset[n]} \Bigl[ (-1)^{\varphi(i,j)}\, \xi_i([0, t_1), \omega)\, \xi_j([t_1, t), \omega) \notag\\
&\qquad\qquad + (-1)^{\varphi(j,i)}\, \xi_i([0, t_1), \omega)\, \xi_j([t_1, t), \omega) \Bigr] e_{i\triangle j}
\quad \text{(relabeling $i \leftrightarrow j$ in the second sum)} \notag\\
&= \sum_{i,j\subset[n]} \bigl( (-1)^{\varphi(i,j)}\, \xi_{i,j}(t, \omega)_s + (-1)^{\varphi(j,i)}\, \xi_{i,j}(t, \omega)_s \bigr)\, e_{i\triangle j} \notag\\
&= \sum_{i,j\subset[n]} \delta_2(i, j)\, \xi_{i,j}(t, \omega)\, e_{i\triangle j}. \tag{10.3.5}
\end{align}
Example 10.3.2. Let $\Xi(t, \omega) \in L^2(\Omega) \otimes C\ell_{0,2}$ be a quaternionic stochastic process
satisfying regularity conditions (CR1)--(CR4). Then the 2nd iterated stochastic
integral over the square $[0, t]^2$ is given by the following:
\[
\Xi^{(2)}(t, \omega) = 2 \bigl( \xi^{(2)}_\emptyset(t, \omega) - \xi^{(2)}_1(t, \omega) - \xi^{(2)}_2(t, \omega) - \xi^{(2)}_{12}(t, \omega) \bigr)
+ 2 \bigl( \xi_{\emptyset,1}(t, \omega)\, e_1 + \xi_{\emptyset,2}(t, \omega)\, e_2 + \xi_{\emptyset,12}(t, \omega)\, e_{12} \bigr). \tag{10.3.6}
\]
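The coefficients $\delta_2(i, j)$ are determined by the signs of the blade products $e_i e_j$ and $e_j e_i$. The sketch below computes these signs for $C\ell_{p,q}$ by counting transpositions and squared generators, and tabulates the quaternionic case of Example 10.3.2; the sign convention implemented here is our reading of $\varphi$, not code quoted from the text.

```python
# Sign of a blade product in Cl_{p,q}: e_i e_j = (-1)^{phi(i,j)} e_{i xor j}.
def blade_sign(i, j, p):
    """Sign of e_i e_j; i, j are sorted index tuples; e_k^2 = +1 iff k <= p."""
    sign = 1
    for a in i:
        for b in j:
            if b < a:
                sign = -sign        # one transposition per passed index
        if a in j:
            sign *= 1 if a <= p else -1   # squared generator e_a^2
    return sign

def delta2(i, j, p):
    """delta_2(i, j) of Theorem 10.3.1: sum of the two product signs."""
    return blade_sign(i, j, p) + blade_sign(j, i, p)   # 2, -2, or 0

# Cl_{0,2} (quaternions): scalar-part coefficients of Xi^(2)
assert delta2((), (), 0) == 2
assert delta2((1,), (1,), 0) == -2
assert delta2((2,), (2,), 0) == -2
assert delta2((1, 2), (1, 2), 0) == -2     # e_12^2 = -1 in Cl_{0,2}
# cross terms contributing to e_1: only the pair (emptyset, {1}) survives
assert delta2((), (1,), 0) == 2
assert delta2((2,), (1, 2), 0) == 0
```

In particular $\delta_2(12, 12) = -2$ in $C\ell_{0,2}$, matching the minus sign on $\xi^{(2)}_{12}$ in (10.3.6) and on $\upsilon_{12}(t,\omega)^{(2)}$ in (9.2.3).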
10.4 A GRAPH-THEORETIC APPROACH USING CLIFFORD AL-
GEBRAS
Combinatorial approaches to the theory of stochastic integrals are not new
[19], [1]. However, the approach presented here, based on graph theory and the
Grassmann algebra, is original with the author.
Let $\Xi(t, \omega) = \sum_{i\subset[n]} \xi_i(t, \omega)\, e_i \in L^2(\Omega) \otimes C\ell_{p,q}$ be a Clifford-algebraic stochastic
process satisfying the regularity conditions (CR1)--(CR4) of Section 10.2. Recalling
the definition of the multiple stochastic integral, let us construct a graph on $\binom{N+1}{2} - 1$
vertices weighted with elements of the Grassmann bivector algebra $\mathbb{R} \otimes \mathcal{G}_N$.
Let $t > 0$ be fixed and consider a partition of the interval $[0, t)$ into $N$ subintervals
of length $t/N$: $0 = t_0 < t/N < 2t/N < \cdots < Nt/N = t$. Let us label these subintervals
as $T_k = [t_{k-1}, t_k)$ for $1 \leq k \leq N$, and let us associate with each of these subintervals
a Grassmann bivector $\gamma_k \in \mathbb{R} \otimes \mathcal{G}_N$. That is, $\gamma_k \sim [t_{k-1}, t_k)$.
Let us now construct a graph $G_N$ whose vertices are labelled with Grassmann
bivectors $\gamma_k$, where $k \subset [N]$, $k \notin \{\emptyset, [N]\}$, satisfies
\[
\bigcup_{\kappa\in k} [t_{\kappa-1}, t_\kappa) = [t_\ell, t_r) \tag{10.4.1}
\]
for some $0 \leq t_\ell < t_r \leq t$. The weight of each vertex is then chosen to be $\Xi([t_\ell, t_r), \omega) \equiv
\Xi(t_r, \omega) - \Xi(t_\ell, \omega)$. We refer to $G_N$ as the Grassmann evolution graph associated
with the process $\Xi(t, \omega)$.
Let us denote by $\Gamma_N$ the weighted adjacency matrix of $G_N$. We refer to $\Gamma_N$ as
the Grassmann evolution matrix associated with $\Xi(t, \omega)$.
Lemma 10.4.1. The graph $G_N$ contains $\binom{N+1}{2} - 1$ vertices.
Proof. Given a partition $0 = t_0 < t_1 < \cdots < t_N = t$, we are interested in the number
of subintervals of the form $[t_j, t_k) \subset [0, t)$, $0 \leq j < k \leq N$, other than $[0, t)$ itself.
When $N = 1$, the only such subinterval is $[0, t)$ itself, which is excluded, and indeed
$\binom{1+1}{2} - 1 = 0$. Hence we assume the lemma is true for $N$ and proceed by induction.
Beginning with the initial set of partition points $\{t_0, t_1, \ldots, t_N\}$, we append
an additional point $t_\alpha$. In addition to the original $\binom{N+1}{2} - 1$ subintervals, we obtain
$N + 1$ new intervals of the form $[t_j, t_\alpha)$ and $[t_\alpha, t_k)$ for all points $t_j$ less than $t_\alpha$ and
all points $t_k$ greater than $t_\alpha$, respectively. Hence we have
\[
\binom{N+1}{2} - 1 + (N + 1) = \frac{N(N+1)}{2} + \frac{2N + 2}{2} - 1 = \frac{(N+1)(N+2)}{2} - 1 = \binom{N+2}{2} - 1 \tag{10.4.2}
\]
subintervals, and the proof is complete.
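The vertex count of Lemma 10.4.1 can also be confirmed directly by enumerating the subintervals determined by $N + 1$ partition points and discarding $[0, t)$ itself:

```python
from math import comb

# Brute-force check of Lemma 10.4.1: with partition points t_0 < ... < t_N,
# the number of subintervals [t_j, t_k), j < k, other than the full
# interval [t_0, t_N), is C(N+1, 2) - 1.
def count_subintervals(N):
    pts = range(N + 1)
    ivs = {(j, k) for j in pts for k in pts if j < k}
    ivs.discard((0, N))            # exclude [0, t) itself
    return len(ivs)

for N in range(1, 8):
    assert count_subintervals(N) == comb(N + 1, 2) - 1
```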
Example 10.4.2. As an example, we consider a 4-partition of the interval $[0, t)$.
For $i \subset [4]$ we define the notation
\[
I_i = \bigcup_{k\in i} [t_{k-1}, t_k). \tag{10.4.3}
\]
[Figure 10.1 depicts the Grassmann evolution graph on the nine vertices
$\Xi(I_1)\gamma_1$, $\Xi(I_2)\gamma_2$, $\Xi(I_3)\gamma_3$, $\Xi(I_4)\gamma_4$, $\Xi(I_{12})\gamma_{12}$, $\Xi(I_{23})\gamma_{23}$, $\Xi(I_{34})\gamma_{34}$, $\Xi(I_{123})\gamma_{123}$, $\Xi(I_{234})\gamma_{234}$.]
Figure 10.1. Graph construction for a 4-partition of [0, t).
For compactness of notation, we write Ξ_i for the increment in Ξ(t, ω) over the interval I_i in the Grassmann evolution matrix.
\[
\Gamma_4 =
\begin{pmatrix}
0 & \Xi_2\gamma_2 & \Xi_3\gamma_3 & \Xi_4\gamma_4 & 0 & \Xi_{23}\gamma_{23} & \Xi_{34}\gamma_{34} & 0 & \Xi_{234}\gamma_{234} \\
\Xi_1\gamma_1 & 0 & \Xi_3\gamma_3 & \Xi_4\gamma_4 & 0 & 0 & \Xi_{34}\gamma_{34} & 0 & 0 \\
\Xi_1\gamma_1 & \Xi_2\gamma_2 & 0 & \Xi_4\gamma_4 & \Xi_{12}\gamma_{12} & 0 & 0 & 0 & 0 \\
\Xi_1\gamma_1 & \Xi_2\gamma_2 & \Xi_3\gamma_3 & 0 & \Xi_{12}\gamma_{12} & \Xi_{23}\gamma_{23} & 0 & \Xi_{123}\gamma_{123} & 0 \\
0 & 0 & \Xi_3\gamma_3 & \Xi_4\gamma_4 & 0 & 0 & \Xi_{34}\gamma_{34} & 0 & 0 \\
\Xi_1\gamma_1 & 0 & 0 & \Xi_4\gamma_4 & 0 & 0 & 0 & 0 & 0 \\
\Xi_1\gamma_1 & \Xi_2\gamma_2 & 0 & 0 & \Xi_{12}\gamma_{12} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \Xi_4\gamma_4 & 0 & 0 & 0 & 0 & 0 \\
\Xi_1\gamma_1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}.
\]
As partitions of [0, t) become infinitely fine, the graphs in our construction
become infinitely large. We now redefine our construction in the language of linear
operators. Fixing N > 0 and t ≥ 0, let \(\mathcal{P}_N\) denote the collection
\[
\mathcal{P}_N = \left\{\, [t_k, t_\ell) \subset [0, t) : t_k = \frac{kt}{N},\ t_\ell = \frac{\ell t}{N},\ 0 \le k < \ell \le N \,\right\}. \tag{10.4.4}
\]
We associate with each interval of the form [t_k, t_ℓ) a Grassmann bivector γ_{[t_k, t_ℓ)} as described previously. Let \(|V_N| = \binom{N+1}{2} - 1\) denote the cardinality of the vertex set of the Grassmann evolution matrix Γ_N.
Let {v_k} denote the standard orthonormal basis of \(\mathbb{R}^{|V_N|}\). Since this collection of vectors is in one-to-one correspondence with the collection of intervals [t_k, t_ℓ), we choose to index them with intervals.
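As a sanity check on the vertex count, the subintervals [t_k, t_ℓ) with 0 ≤ k < ℓ ≤ N are in bijection with pairs (k, ℓ), giving \(\binom{N+1}{2}\) intervals in all; removing the full interval [0, t) leaves \(\binom{N+1}{2} - 1\) vertices. A minimal sketch (the enumeration is mine, not from the dissertation):

```python
from math import comb

def vertex_count(N):
    """Count subintervals [t_k, t_l) of the N-point partition of [0, t),
    excluding the full interval [0, t) itself."""
    intervals = [(k, l) for k in range(N + 1) for l in range(k + 1, N + 1)]
    proper = [(k, l) for (k, l) in intervals if not (k == 0 and l == N)]
    return len(proper)

for N in (2, 3, 4, 10):
    assert vertex_count(N) == comb(N + 1, 2) - 1

# N = 4 gives 9 vertices, matching the 9 x 9 matrix Gamma_4 displayed above.
print(vertex_count(4))  # -> 9
```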
We define \(\Gamma_N \in L^2(\Omega) \otimes C\ell_{p,q} \otimes \mathcal{G}_N \otimes \mathbb{R}^{|V_N|} \otimes \mathbb{R}^{|V_N|*}\) by
\[
\Gamma_N = \sum_{\substack{[t_j,t_k),\,[t_\ell,t_m)\in \mathcal{P}_N \\ [t_j,t_k)\cap[t_\ell,t_m)=\emptyset}} \bigl(\Xi(t_m,\omega)-\Xi(t_\ell,\omega)\bigr) \otimes \gamma_{[t_\ell,t_m)} \otimes v_{[t_\ell,t_m)} \otimes v_{[t_j,t_k)}^{\top}
= \sum_{\substack{[t_j,t_k),\,[t_\ell,t_m)\in \mathcal{P}_N \\ [t_j,t_k)\cap[t_\ell,t_m)=\emptyset}} \sum_{i\subset[n]} \bigl(\xi_i(t_m,\omega)-\xi_i(t_\ell,\omega)\bigr)\, e_i \otimes \gamma_{[t_\ell,t_m)} \otimes v_{[t_\ell,t_m)} \otimes v_{[t_j,t_k)}^{\top}
\]
or, using the Dirac notation,
\[
\Gamma_N = \sum_{\substack{[t_j,t_k),\,[t_\ell,t_m)\in \mathcal{P}_N \\ [t_j,t_k)\cap[t_\ell,t_m)=\emptyset}} \sum_{i\subset[n]} \bigl(\xi_i(t_m,\omega)-\xi_i(t_\ell,\omega)\bigr)\, e_i \otimes \gamma_{[t_\ell,t_m)} \otimes |v_{[t_\ell,t_m)}\rangle\langle v_{[t_j,t_k)}|. \tag{10.4.5}
\]
We define \(\mathrm{tr} : \bigl(L^2(\Omega)\otimes C\ell_{p,q}\bigr)\otimes \mathcal{G}_N \otimes \mathbb{R}^{|V_N|} \otimes \mathbb{R}^{|V_N|*} \to \bigl(L^2(\Omega)\otimes C\ell_{p,q}\bigr)\otimes \mathcal{G}_N\) by
\[
\mathrm{tr}\Bigl(\sum_{\ell,m} u_{\ell,m}\otimes g_{\ell,m}\otimes v_\ell\otimes v_m^{\top}\Bigr) = \sum_{\ell} u_{\ell,\ell}\otimes g_{\ell,\ell}. \tag{10.4.6}
\]
We define the operator \(I\otimes\int_B : \bigl(L^2(\Omega)\otimes C\ell_{p,q}\bigr)\otimes\mathcal{G}_N \to L^2(\Omega)\otimes C\ell_{p,q}\) by considering its action on the generators of \(\mathcal{G}_N\),
\[
u\otimes\gamma_{i} \;\mapsto\; u\otimes\int_B \gamma_{i} =
\begin{cases}
u, & \text{if } i=[N] \\
0, & \text{otherwise}
\end{cases} \tag{10.4.7}
\]
and extending by linearity.
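On a commuting family of nilpotent generators, the operator in (10.4.7) simply extracts the coefficient of the top element γ_{[N]}. A hedged illustration, representing an element of \(\mathcal{G}_N\) as a dict mapping frozensets of generator indices to coefficients (this encoding is mine, not the dissertation's):

```python
def berezin(element, N):
    """Berezin evaluation as in (10.4.7): return the coefficient of
    gamma_[N] = gamma_1 ... gamma_N; every other term maps to 0."""
    return element.get(frozenset(range(1, N + 1)), 0.0)

# Example element of G_3: 2*gamma_{12} + 5*gamma_{123}
u = {frozenset({1, 2}): 2.0, frozenset({1, 2, 3}): 5.0}
assert berezin(u, 3) == 5.0                       # only the top term survives
assert berezin({frozenset({1, 2}): 2.0}, 3) == 0.0  # no top term present
```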
We now have the operator
\[
\Bigl(I\otimes\int_B\Bigr)\circ \mathrm{tr} : \bigl(L^2(\Omega)\otimes C\ell_{p,q}\bigr)\otimes\mathcal{G}_N\otimes\mathbb{R}^{|V_N|}\otimes\mathbb{R}^{|V_N|*} \to L^2(\Omega)\otimes C\ell_{p,q}. \tag{10.4.8}
\]
Theorem 10.4.3. If Ξ(t, ω) ∈ L²(Ω) ⊗ Cℓ_{p,q} is a Clifford-algebraic stochastic process satisfying regularity conditions (CR1)-(CR4) of section 10.2, then
\[
\underset{N\to\infty}{\mathrm{L.I.M.}}\ \Bigl(I\otimes\int_B\Bigr)\circ\mathrm{tr}\bigl((\Gamma_N)^m\bigr) = \Xi^{(m)}(t,\omega), \tag{10.4.9}
\]
where Γ_N is the Grassmann evolution matrix associated with Ξ(t, ω) and Ξ^{(m)}(t, ω) is the iterated stochastic integral of Ξ(t, ω) defined on the Borel σ-algebra of elementary subsets of [0, t]^m.
Proof. Let m > 0 be fixed. For each N > 0, we have by construction of the adjacency matrix Γ_N and Theorem 5.3.3
\[
\Bigl(I\otimes\int_B\Bigr)\circ\mathrm{tr}\bigl((\Gamma_N)^m\bigr)
= \sum_{\substack{0=t_0<t_1<\cdots<t_m=t \\ t_i\in\{0,\,t/N,\,2t/N,\,\ldots,\,(N-1)t/N,\,t\}}} \ \sum_{\pi\in S_m}\ \prod_{j=1}^{m} \Xi\bigl([t_{\pi(j)-1}, t_{\pi(j)}),\,\omega\bigr). \tag{10.4.10}
\]
We observe that the sets \(\{[t_{i_1-1},t_{i_1})\times[t_{i_2-1},t_{i_2})\times\cdots\times[t_{i_m-1},t_{i_m})\}_{i_1<i_2<\cdots<i_m}\) are elementary subsets of the simplex S. By taking the union over all such choices of indices and all permutations of the set \(\{i_1,\ldots,i_m\}\), we obtain the product space \([0,t)^m\setminus\Delta\), where Δ is the union of all lower-dimensional subsets of \([0,t)^m\); i.e.,
\[
\Delta = \{(x_1,\ldots,x_m)\in[0,t)^m : x_i = x_j \text{ for some } i\neq j\}, \tag{10.4.11}
\]
which then implies
\[
[0,t)^m\setminus\Delta = \bigcup_{i_1<\cdots<i_m}\ \bigcup_{\pi\in S_m} [t_{\pi(i_1)-1},t_{\pi(i_1)})\times\cdots\times[t_{\pi(i_m)-1},t_{\pi(i_m)}). \tag{10.4.12}
\]
Hence, for each N > 0 we obtain a sum over elementary sets in the product space [0, t)^m with mesh size t/N. By hypothesis Ξ(t, ω) satisfies regularity conditions (CR1)-(CR4), so by Theorem 10.2.9 these sums converge in mean to a countably-additive L²(Ω) ⊗ Cℓ_{p,q}-valued measure on [0, t]^m as N → ∞.
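In the scalar (commutative) case the mechanism of the theorem can be checked by brute force: build Γ_N with entries Ξ_{I′}γ_{I′} for disjoint interval pairs, take the trace of Γ_N², and apply the Berezin evaluation; the result should equal 2! times the Riemann sum over the 2-simplex. A small numerical sketch under these assumptions (the dict-of-frozensets encoding of the commuting nilpotents γ_I is mine):

```python
import random

N = 4
random.seed(0)
dx = [random.random() for _ in range(N)]          # increments over the N atoms

# Vertices: proper subintervals [k, l), 0 <= k < l <= N, excluding [0, N)
intervals = [(k, l) for k in range(N + 1) for l in range(k + 1, N + 1)
             if not (k == 0 and l == N)]

def atoms(I):
    k, l = I
    return frozenset(range(k + 1, l + 1))         # atom labels 1..N

def xi(I):
    k, l = I
    return sum(dx[k:l])                           # increment of Xi over I

def gmul(a, b):
    """Product of Grassmann-coefficient dicts: overlapping atom sets vanish."""
    out = {}
    for sa, ca in a.items():
        for sb, cb in b.items():
            if not (sa & sb):
                out[sa | sb] = out.get(sa | sb, 0.0) + ca * cb
    return out

def gadd(a, b):
    out = dict(a)
    for s, c in b.items():
        out[s] = out.get(s, 0.0) + c
    return out

# Entry (row I', col I) is xi(I') * gamma_{I'} whenever I and I' are disjoint
n = len(intervals)
G = [[({atoms(Ip): xi(Ip)} if not (atoms(Ip) & atoms(I)) else {})
      for I in intervals] for Ip in intervals]

# Trace of Gamma^2, then Berezin evaluation: coefficient of the full atom set
tr = {}
for a in range(n):
    for b in range(n):
        tr = gadd(tr, gmul(G[a][b], G[b][a]))
lhs = tr.get(frozenset(range(1, N + 1)), 0.0)

# 2! times the simplex sum: ordered pairs [0,k), [k,N) for 0 < k < N
rhs = 2 * sum(xi((0, k)) * xi((k, N)) for k in range(1, N))

assert abs(lhs - rhs) < 1e-12
```

Only closed walks visiting two complementary intervals survive the Berezin step, which is exactly the sum over S₂ in (10.4.10) for m = 2.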
Corollary 10.4.4. If Φ(t, ω) is a stochastic process defined on a commutative subalgebra of Cℓ_{p,q} such that Φ(t, ω) satisfies regularity conditions (CR1)-(CR4) of section 10.2, then
\[
\Phi^{(m)}(t,\omega)_s = \underset{N\to\infty}{\mathrm{L.I.M.}}\ \frac{1}{m!}\Bigl(I\otimes\int_B\Bigr)\circ\mathrm{tr}\bigl((\Gamma_N)^m\bigr), \tag{10.4.13}
\]
where Γ_N is the Grassmann evolution matrix associated with Φ(t, ω) and Φ^{(m)}(t, ω)_s is the iterated stochastic integral of Φ(t, ω) defined on the Borel σ-algebra of elementary subsets of the m-dimensional simplex S = \(\{(t_1, t_2, \ldots, t_m)\in[0,t]^m : 0\le t_1\le t_2\le\cdots\le t_m\le t\}\).
10.5 THE ENVELOPING ALGEBRA
We have seen how the multiple stochastic integral can be constructed from
the limit of an increasing sequence of finite graphs by considering the algebra tensor
product Cℓp,q ⊗ GN as N → ∞. We now reformulate our results within a single
algebra.
Let us define the 2^n-dimensional Clifford algebra Cℓ_{p,q,r}, where p + q + r = n, as a Clifford algebra generated by k-vectors, 1 ≤ k ≤ n, whose constituent vectors satisfy
\[
e_i^2 =
\begin{cases}
1, & 1\le i\le p \\
-1, & p+1\le i\le p+q \\
0, & p+q+1\le i\le n.
\end{cases} \tag{10.5.1}
\]
Let us also define the involution ⋆ : Cℓ_{p,q,r} → Cℓ_{p,q,r} and the evaluation map ϵ_B : Cℓ_{p,q,r} → Cℓ_{p,q} by
\[
\star(u) = \sum_{i\subset[n]} u_i\, e_{[n]\setminus i}, \tag{10.5.2}
\]
\[
\epsilon_B(u) = \star\bigl(\star(u)\, e_{([n]\setminus[p+q])}\bigr). \tag{10.5.3}
\]
Proposition 10.5.1. The evaluation map ϵ_B : Cℓ_{p,q,r} → Cℓ_{p,q} is equivalent to
\[
I\otimes\int_B : C\ell_{p,q,0}\otimes C\ell_{0,0,r} \to C\ell_{p,q}
\]
in the sense that the following diagram is commutative:
\[
\begin{array}{ccc}
C\ell_{p,q,r} & \xrightarrow{\ \epsilon_B\ } & C\ell_{p,q} \\[2pt]
{\scriptstyle \jmath}\big\downarrow & & \big\downarrow{\scriptstyle \iota} \\[2pt]
C\ell_{p,q,0}\otimes C\ell_{0,0,r} & \xrightarrow[\ I\otimes\int_B\ ]{} & C\ell_{p,q}
\end{array} \tag{10.5.4}
\]
where ι is the inclusion mapping and ȷ : Cℓ_{p,q,r} → Cℓ_{p,q,0} ⊗ Cℓ_{0,0,r} is defined by
\[
\alpha_i e_i \mapsto \alpha_i\, e_{(i\cap[p+q])}\otimes e_{(i\setminus[p+q])} \tag{10.5.5}
\]
for i ⊂ [n] and α_i ∈ R.
Proof. For any u ∈ Cℓ_{p,q,r}, we have
\[
\epsilon_B(u)
= \star\Bigl(\star\Bigl(\sum_{i\subset[n]} u_i e_i\Bigr)\, e_{([n]\setminus[p+q])}\Bigr)
= \star\Bigl(\sum_{i\subset[n]} u_i\, e_{([n]\setminus i)}\, e_{([n]\setminus[p+q])}\Bigr)
= \star\Bigl(\sum_{([n]\setminus i)\cap([n]\setminus[p+q])=\emptyset} u_i\, e_{([n]\setminus i)}\Bigr)
= \sum_{([n]\setminus i)\cap([n]\setminus[p+q])=\emptyset} u_i\, e_i. \tag{10.5.6}
\]
On the other hand, we find
\[
\Bigl(I\otimes\int_B\Bigr)\circ\jmath\,(u)
= \Bigl(I\otimes\int_B\Bigr)\Bigl(\sum_{i\subset[n]} u_i\, e_{(i\cap[p+q])}\otimes e_{(i\setminus[p+q])}\Bigr)
= \sum_{i\cap([n]\setminus[p+q])=[n]\setminus[p+q]} u_i\, e_{(i\cap[p+q])}
= \sum_{([n]\setminus i)\cap([n]\setminus[p+q])=\emptyset} u_i\, e_i. \tag{10.5.7}
\]
The last equality follows from the fact that
\[
i\cap([n]\setminus[p+q]) = [n]\setminus[p+q] \;\Rightarrow\; [n]\setminus[p+q]\subset i.
\]
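The action of ⋆ and ϵ_B can be checked at the level of coefficients: ⋆ moves the coefficient of e_i onto e_{[n]\i}, right-multiplication by the nilpotent top element kills any term overlapping it, and the composite keeps exactly the terms whose multi-index contains all of [n]\[p+q]. A coefficient-level sketch, with signs suppressed as in (10.5.6) (the encoding is mine):

```python
import random
from itertools import chain, combinations

n, pq = 3, 2                                    # generators 1..n; 1..pq non-nilpotent
full = frozenset(range(1, n + 1))
nil = full - frozenset(range(1, pq + 1))        # nilpotent indices [n] \ [p+q]

def star(u):
    """Coefficient-level involution (10.5.2): u_i e_i -> u_i e_([n] \ i)."""
    return {full - i: c for i, c in u.items()}

def mul_e_nil(u):
    """Right-multiply by e_([n]\[p+q]): terms overlapping nil vanish."""
    return {i | nil: c for i, c in u.items() if not (i & nil)}

def eps_B(u):
    """Direct form of (10.5.3): keep terms whose index contains nil,
    projecting the surviving index into [p+q]."""
    out = {}
    for i, c in u.items():
        if nil <= i:
            out[i & (full - nil)] = out.get(i & (full - nil), 0.0) + c
    return out

subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(range(1, n + 1), r) for r in range(n + 1))]
random.seed(4)
u = {i: random.uniform(-1, 1) for i in subsets}

# the two-step composition star(star(u) e_nil) agrees with the direct form
assert star(mul_e_nil(star(u))) == eps_B(u)
```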
Theorem 10.5.2. Let Ξ(t, ω) be a Clifford-algebraic stochastic process satisfying regularity conditions (CR1)-(CR4), and let n′ = max{p, q}. Let Γ_N be the Grassmann evolution matrix associated with Ξ(t, ω) for some N > 0, written as in (10.4.5). Then the following diagram is commutative:
\[
\begin{array}{ccc}
(L^2(\Omega)\otimes C\ell_{p,q})\otimes\bigl(\mathcal{G}_N\otimes\mathbb{R}^{|V_N|}\otimes\mathbb{R}^{|V_N|*}\bigr) & \xrightarrow{\ (I\otimes\int_B)\circ\mathrm{tr}\ } & L^2(\Omega)\otimes C\ell_{p,q} \\[2pt]
{\scriptstyle \iota'}\big\downarrow & & \big\downarrow{\scriptstyle \iota} \\[2pt]
L^2(\Omega)\otimes C\ell_{n',n',2N}\otimes\mathbb{R}^{|V_N|}\otimes\mathbb{R}^{|V_N|*} & \xrightarrow{\ \epsilon_B\circ\mathrm{tr}\ } & L^2(\Omega)\otimes C\ell_{n',n'}
\end{array} \tag{10.5.8}
\]
where ι, ι′ are defined by linear extension of
\[
\iota\bigl(\alpha\, e_{(i\cap[p])}\, e_{(i\setminus[p])}\bigr) = \alpha\, e_{(i\cap[p])} \prod_{\ell\in i\setminus[p]} e_{(n'+\ell-p)}, \tag{10.5.9}
\]
\[
\iota'\bigl(\alpha\, e_i\otimes\gamma_\ell\otimes v_j\otimes v_k^{\top}\bigr)
= \alpha\, e_{(i\cap[p])} \prod_{k\in([p+q]\setminus[p])} e_{(k-p+n')} \prod_{k\in\ell} e_{2n'+2k-1}\, e_{2n'+2k} \otimes v_j\otimes v_k^{\top}. \tag{10.5.10}
\]
Proof. We prove the theorem by expanding the diagram and proving commutativity of two sub-diagrams. Let \(\mathcal{H}\) denote \(\mathbb{R}^{|V_N|}\) and consider
\[
\begin{array}{ccccc}
(L^2(\Omega)\otimes C\ell_{p,q})\otimes(\mathcal{G}_N\otimes\mathcal{H}\otimes\mathcal{H}^*) & \xrightarrow{\ \mathrm{tr}\ } & L^2(\Omega)\otimes C\ell_{p,q}\otimes\mathcal{G}_N & \xrightarrow{\ I\otimes\int_B\ } & L^2(\Omega)\otimes C\ell_{p,q} \\[2pt]
{\scriptstyle \iota'}\big\downarrow & & \big\downarrow{\scriptstyle \kappa} & & \big\downarrow{\scriptstyle \iota} \\[2pt]
(L^2(\Omega)\otimes C\ell_{n',n',\infty})\otimes\mathcal{H}\otimes\mathcal{H}^* & \xrightarrow{\ \mathrm{tr}\ } & L^2(\Omega)\otimes C\ell_{n',n',\infty} & \xrightarrow{\ \epsilon_B\ } & L^2(\Omega)\otimes C\ell_{n',n'}
\end{array} \tag{10.5.11}
\]
where κ is defined by
\[
\kappa(\alpha\, e_i\otimes\gamma_\ell) = \alpha\, e_{(i\cap[p])} \prod_{k\in([p+q]\setminus[p])} e_{(k-p+n')} \prod_{k\in\ell} e_{2n'+2k-1}\, e_{2n'+2k} \tag{10.5.12}
\]
and \(\mathrm{tr} : \mathcal{A}\otimes\mathcal{H}\otimes\mathcal{H}^* \to \mathcal{A}\) is defined by
\[
\mathrm{tr}(\xi\otimes v_j\otimes v_k^{\top}) =
\begin{cases}
\xi, & \text{if } j=k \\
0, & \text{otherwise}
\end{cases} \tag{10.5.13}
\]
for any algebra \(\mathcal{A}\ni\xi\). We observe that ι′ can be defined in terms of κ:
\[
\iota'(\alpha\, e_i\otimes\gamma_\ell\otimes v_j\otimes v_k^{\top}) = \kappa(\alpha\, e_i\otimes\gamma_\ell)\otimes v_j\otimes v_k^{\top}. \tag{10.5.14}
\]
The left half of the diagram is commutative because by definition of κ and ι′ we have κ ∘ tr = tr ∘ ι′. The right half of the diagram is commutative by noting that Cℓ_{p,q} ⊗ \(\mathcal{G}_N\) is canonically isomorphic to Cℓ_{p,q,2N} and applying Proposition 10.5.1.
Corollary 10.5.3. Let Ξ(t, ω) be a Clifford-algebraic stochastic process satisfying regularity conditions (CR1)-(CR4). For each N > 0, let Γ_N denote the Nth Grassmann evolution matrix associated with Ξ(t, ω), as defined by (10.4.5). Then
\[
\Xi^{(k)}(t,\omega) = \underset{N\to\infty}{\mathrm{L.I.M.}}\ \epsilon_B\circ\mathrm{tr}\bigl((\Gamma_N)^k\bigr). \tag{10.5.15}
\]
10.6 ORTHOGONAL POLYNOMIALS
This section will show how orthogonal polynomials are recovered using
the Grassmann-algebraic graph-theoretic approach to Clifford-algebraic multiple
stochastic integrals. We also present evidence of the difficulty involved in extending
the results to Clifford-algebraic stochastic processes of non-trivial signature.
For now we assume we are working with L2(Ω)-valued stochastic processes;
i.e., our Clifford algebra has signature p = q = 0. In this way, we are restricting
to the commutative case. Let us assume a priori that all stochastic processes sat-
isfy regularity conditions (CR1)-(CR4), noting that in the (0,0) signature these are
equivalent to (R1)-(R4).
Given a real-valued regular Poisson process P(t, ω), we define the compensated process
\[
D(t,\omega) = P(t,\omega) - \mathbb{E}\bigl(P(t,\omega)\bigr). \tag{10.6.1}
\]
This process has mean zero and independent increments, and hence orthogonal increments.
Definition 10.6.1. For m ∈ N, let
\[
K_m(u,t) = \frac{1}{m!}\sum_{q=0}^{m} \binom{m}{q} (-1)^q\, t^q\, u^{(m-q)}, \tag{10.6.2}
\]
where u^{(m−q)} = u(u − 1)(u − 2) · · · (u − m + q + 1) is the falling factorial. Then K_m(u, t) is the mth Poisson-Charlier polynomial.
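As a sanity check on Definition 10.6.1, the first two polynomials reduce to K₁(u, t) = u − t and K₂(u, t) = ((u − t)² − u)/2. A short sketch (the numerical check is mine):

```python
from math import comb, factorial

def falling(u, k):
    """Falling factorial u^(k) = u (u - 1) ... (u - k + 1)."""
    out = 1.0
    for j in range(k):
        out *= (u - j)
    return out

def K(m, u, t):
    """Poisson-Charlier polynomial K_m(u, t) as in (10.6.2)."""
    return sum(comb(m, q) * (-1) ** q * t ** q * falling(u, m - q)
               for q in range(m + 1)) / factorial(m)

for u in (0.0, 1.0, 3.0, 7.0):
    for t in (0.5, 1.0, 2.0):
        assert abs(K(1, u, t) - (u - t)) < 1e-12
        assert abs(K(2, u, t) - ((u - t) ** 2 - u) / 2) < 1e-12
```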
Definition 10.6.2. We define the nth generalized Hermite polynomial by
\[
H_n(u,t) = \frac{(-t)^n}{n!}\, e^{u^2/2t}\, \frac{d^n}{du^n}\Bigl(e^{-u^2/2t}\Bigr). \tag{10.6.3}
\]
The following two results appear in [10].
Theorem 10.6.3 (Engel 4).
\[
\int\cdots\int_{0\le t_1<t_2<\cdots<t_m\le t} dD(t_1,\omega)\cdots dD(t_m,\omega) = K_m\bigl(P(t,\omega),\,t\bigr), \tag{10.6.4}
\]
where P(t, ω) is the Poisson process and D(t, ω) = P(t, ω) − t.
Theorem 10.6.4 (Engel 5). If X(t, ω) is standard Brownian motion, then
\[
\int\cdots\int_{0\le t_1<t_2<\cdots<t_m\le t} dX(t_1,\omega)\cdots dX(t_m,\omega) = H_m\bigl(X(t,\omega),\,t\bigr), \tag{10.6.5}
\]
where H_m(X(t, ω), t) is the mth Hermite polynomial.
Utilizing the Grassmann graph-theoretic construction of Section 10.4, we ob-
tain the following pair of corollaries.
Corollary 10.6.5. Let D(t, ω) be the compensated Poisson process of (10.6.1). For each N ≥ 1, constructing the Nth Grassmann evolution matrix Γ_N associated with D(t, ω), we obtain
\[
\underset{N\to\infty}{\mathrm{L.I.M.}}\ \Bigl(I\otimes\int_B\Bigr)\circ\mathrm{tr}\bigl((\Gamma_N)^m\bigr) = m!\; K_m\bigl(P(t,\omega),\,t\bigr). \tag{10.6.6}
\]
Corollary 10.6.6. Let X(t, ω) be standard Brownian motion. For each N ≥ 1, constructing the Nth Grassmann evolution matrix Γ_N associated with X(t, ω), we obtain
\[
\underset{N\to\infty}{\mathrm{L.I.M.}}\ \Bigl(I\otimes\int_B\Bigr)\circ\mathrm{tr}\bigl((\Gamma_N)^m\bigr) = m!\; H_m\bigl(X(t,\omega),\,t\bigr). \tag{10.6.7}
\]
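For m = 2 the Brownian case can be made concrete: for any discrete increments Δx_k, the ordered double sum satisfies ∑_{i<j} Δx_i Δx_j = ((∑ Δx_k)² − ∑ Δx_k²)/2 exactly, and since ∑ Δx_k² → t along Brownian partitions, the simplex sum converges to (X(t)² − t)/2 = H₂(X(t), t). A quick numerical sketch of the identity (mine, not from the text):

```python
import random

random.seed(1)
t = 1.0
M = 1000
# simulated Brownian increments over a uniform M-point partition of [0, t]
dx = [random.gauss(0.0, (t / M) ** 0.5) for _ in range(M)]

double_sum = sum(dx[i] * dx[j] for i in range(M) for j in range(i + 1, M))
X_t = sum(dx)
quad = sum(d * d for d in dx)

# exact algebraic identity, valid for arbitrary increments
assert abs(double_sum - (X_t ** 2 - quad) / 2) < 1e-9

# as M grows, quad -> t, so double_sum approximates H_2(X_t, t) = (X_t^2 - t)/2
print(double_sum, (X_t ** 2 - t) / 2)
```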
A goal of future work is to understand the relationship of orthogonal poly-
nomials to Clifford-algebraic stochastic processes in non-trivial signatures. The
preliminary results here give an indication of the complexity of the problem.
For real-valued regular Poisson processes υ_i(t, ω) we define, for each i ⊂ [n],
\[
\delta_i(t,\omega) = \upsilon_i(t,\omega) - \mathbb{E}\bigl(\upsilon_i(t,\omega)\bigr). \tag{10.6.8}
\]
We then define
\[
\Delta(t,\omega) = \Upsilon(t,\omega) - \mathbb{E}\bigl(\Upsilon(t,\omega)\bigr) = \sum_{i\subset[n]} \bigl(\upsilon_i(t,\omega) - \mathbb{E}(\upsilon_i(t,\omega))\bigr)\, e_i. \tag{10.6.9}
\]
Let us define the notation
\[
\Delta^{(m)}(t,\omega) = \int\cdots\int_{0\le t_1<\cdots<t_m\le t} d\Delta(t_1,\omega)\cdots d\Delta(t_m,\omega). \tag{10.6.10}
\]
Lemma 10.6.7.
\[
\bigl\langle \Delta^{(2)}(t,\omega)\bigr\rangle_0 = \sum_{i\subset[n]} (-1)^{\varphi(i,i)}\, K_2\bigl(\upsilon_i(t,\omega),\,t\bigr). \tag{10.6.11}
\]
Proof. The proof is a straightforward calculation and follows from (10.6.4). Writing \(\lim\) for the limit over partitions \(0=t_0<t_1<\cdots<t_M=t\) with \(|t_i-t_{i-1}|\to 0\),
\begin{align*}
\bigl\langle\Delta^{(2)}(t,\omega)\bigr\rangle_0
&= \Bigl\langle \int_0^t \Delta(s,\omega)\,d\Delta(s,\omega) \Bigr\rangle_0 \\
&= \Bigl\langle \lim \sum_{k=1}^{M} \bigl(\Upsilon(t_k-0,\omega)-\mathbb{E}\Upsilon(t_k-0,\omega)\bigr)
\bigl(\Upsilon(t_k,\omega)-\Upsilon(t_{k-1},\omega)-\mathbb{E}\Upsilon(t_k,\omega)+\mathbb{E}\Upsilon(t_{k-1},\omega)\bigr) \Bigr\rangle_0 \\
&= \Bigl\langle \lim \sum_{k=1}^{M} \sum_{i\subset[n]} \bigl(\upsilon_i(t_k-0,\omega)-\mathbb{E}\upsilon_i(t_k-0,\omega)\bigr)\, e_i
\sum_{j\subset[n]} \bigl(\upsilon_j(t_k,\omega)-\upsilon_j(t_{k-1},\omega)-\mathbb{E}\upsilon_j(t_k,\omega)+\mathbb{E}\upsilon_j(t_{k-1},\omega)\bigr)\, e_j \Bigr\rangle_0 \\
&= \lim \sum_{k=1}^{M} \Bigl\langle \sum_{i,j\subset[n]} (-1)^{\varphi(j,\,j\triangle i)} \bigl(\upsilon_j(t_k-0,\omega)-\mathbb{E}\upsilon_j(t_k-0,\omega)\bigr)
\bigl(\upsilon_{j\triangle i}(t_k,\omega)-\upsilon_{j\triangle i}(t_{k-1},\omega)-\mathbb{E}\upsilon_{j\triangle i}(t_k,\omega)+\mathbb{E}\upsilon_{j\triangle i}(t_{k-1},\omega)\bigr)\, e_i \Bigr\rangle_0 \\
&= \Bigl\langle \sum_{i,j\subset[n]} (-1)^{\varphi(j,\,j\triangle i)} \lim \sum_{k=1}^{M} \bigl(\upsilon_j(t_k-0,\omega)-\mathbb{E}\upsilon_j(t_k-0,\omega)\bigr)
\bigl(\upsilon_{j\triangle i}(t_k,\omega)-\upsilon_{j\triangle i}(t_{k-1},\omega)-\mathbb{E}\upsilon_{j\triangle i}(t_k,\omega)+\mathbb{E}\upsilon_{j\triangle i}(t_{k-1},\omega)\bigr)\, e_i \Bigr\rangle_0 \\
&= \sum_{i\subset[n]} (-1)^{\varphi(i,i)} \lim \sum_{k=1}^{M} \bigl(\upsilon_i(t_k-0,\omega)-\mathbb{E}\upsilon_i(t_k-0,\omega)\bigr)
\bigl(\upsilon_i(t_k,\omega)-\upsilon_i(t_{k-1},\omega)-\mathbb{E}\upsilon_i(t_k,\omega)+\mathbb{E}\upsilon_i(t_{k-1},\omega)\bigr) \\
&= \sum_{i\subset[n]} (-1)^{\varphi(i,i)} \int_0^t \delta_i(s,\omega)\,d\delta_i(s,\omega)
= \sum_{i\subset[n]} (-1)^{\varphi(i,i)}\, K_2\bigl(\upsilon_i(t,\omega),\,t\bigr). \tag{10.6.12}
\end{align*}
Definition 10.6.8. Given u = (u_1, u_2, . . . , u_k) ∈ (Cℓ_{p,q})^k, we define the multi-linear inner product ⟨u⟩_k = ⟨u_1, u_2, . . . , u_k⟩ ∈ R by
\[
\langle u_1, u_2, \ldots, u_k\rangle = \sum_{i\subset[n]} (u_1)_i\,(u_2)_i\cdots(u_k)_i. \tag{10.6.13}
\]
Lemma 10.6.9. ⟨·, . . . , ·⟩ : (Cℓ_{p,q})^k → R defines a k-linear functional on (Cℓ_{p,q})^k.
Proof. Let u ∈ (Cℓ_{p,q})^k, v ∈ Cℓ_{p,q}, and 1 ≤ j ≤ k. Then
\begin{align*}
\langle u_1,\ldots,u_{j-1},\, u_j+v,\, u_{j+1},\ldots,u_k\rangle
&= \sum_{i\subset[n]} (u_1)_i\cdots(u_{j-1})_i\,(u_j+v)_i\,(u_{j+1})_i\cdots(u_k)_i \\
&= \sum_{i\subset[n]} (u_1)_i\cdots(u_j)_i\cdots(u_k)_i + (u_1)_i\cdots(v)_i\cdots(u_k)_i \\
&= \langle u_1,\ldots,u_j,\ldots,u_k\rangle + \langle u_1,\ldots,v,\ldots,u_k\rangle. \tag{10.6.14}
\end{align*}
Thus we have additivity in each of the k arguments. Further, given α ∈ R, we have
\[
\langle u_1,\ldots,\alpha u_j,\ldots,u_k\rangle = \sum_{i\subset[n]} (u_1)_i\cdots\alpha(u_j)_i\cdots(u_k)_i = \alpha \sum_{i\subset[n]} (u_1)_i\cdots(u_k)_i = \alpha\,\langle u_1,\ldots,u_k\rangle, \tag{10.6.15}
\]
establishing homogeneity.
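Definition 10.6.8 and Lemma 10.6.9 can be exercised numerically by representing Clifford elements as coefficient dictionaries over multi-indices. The sketch below (my encoding, not the dissertation's) checks additivity and homogeneity in one slot:

```python
import random
from itertools import chain, combinations
from math import prod

n = 3
indices = [frozenset(s) for s in chain.from_iterable(
    combinations(range(1, n + 1), r) for r in range(n + 1))]

def bracket(*us):
    """Multi-linear inner product <u_1, ..., u_k> of (10.6.13):
    sum over multi-indices of the product of coefficients."""
    return sum(prod(u.get(i, 0.0) for u in us) for i in indices)

random.seed(2)
def rand_elt():
    return {i: random.uniform(-1, 1) for i in indices}

u1, u2, u3, v = rand_elt(), rand_elt(), rand_elt(), rand_elt()

# additivity in the middle slot
u2v = {i: u2[i] + v[i] for i in indices}
assert abs(bracket(u1, u2v, u3)
           - (bracket(u1, u2, u3) + bracket(u1, v, u3))) < 1e-12

# homogeneity
au2 = {i: 2.5 * u2[i] for i in indices}
assert abs(bracket(u1, au2, u3) - 2.5 * bracket(u1, u2, u3)) < 1e-12
```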
Proposition 10.6.10.
\[
\int\cdots\int_{0\le t_1<\cdots<t_m\le t} \langle d\Delta(t_1,\omega),\ldots,d\Delta(t_m,\omega)\rangle = \sum_{i\subset[n]} (-1)^{\phi(i,\ldots,i)}\, K_m\bigl(\upsilon_i(t,\omega),\,t\bigr). \tag{10.6.16}
\]
Proof.
\begin{align*}
\int\cdots\int_{0\le t_1<t_2<\cdots<t_m\le t} \langle d\Delta(t_1,\omega),\ldots,d\Delta(t_m,\omega)\rangle
&= \int\cdots\int_{0\le t_1<t_2<\cdots<t_m\le t} \sum_{i\subset[n]} d\delta_i(t_1,\omega)\cdots d\delta_i(t_m,\omega) \\
&= \sum_{i\subset[n]} \int\cdots\int_{0\le t_1<t_2<\cdots<t_m\le t} d\delta_i(t_1,\omega)\cdots d\delta_i(t_m,\omega)
= \sum_{i\subset[n]} K_m\bigl(\upsilon_i(t,\omega),\,t\bigr). \tag{10.6.17}
\end{align*}
Again the last equality follows from (10.6.4).
10.7 STOCHASTIC PROCESSES ON SPIN+(N)
It is well known that the group Spin(n) forms a double covering of the group
of rotations SO(n). In this section, we construct a stochastic process on the normal
subgroup Spin+(n) ∼= SO(n).
We begin by constructing a Clifford-algebraic stochastic process Ξ(t, ω) ∈
L2(Ω) ⊗ Cℓn,0.
Let us define Σ ⊂ 2^{[n]} to be any collection of subsets of [n] with the following properties:
\[
|i| \equiv 0 \pmod{2}, \quad \forall\, i\in\Sigma, \tag{10.7.1}
\]
\[
e_i e_j + e_j e_i = 0, \quad \forall\, i\neq j\in\Sigma. \tag{10.7.2}
\]
By restricting the multi-indices in the expansion of Ξ(t, ω) to the set Σ, we can guarantee that Ξ(t, ω)Ξ(t, ω) ∈ R. We thus define the process
\[
\Xi(t,\omega)^+ = \sum_{i\in\Sigma} \xi_i(t,\omega)\, e_i, \tag{10.7.3}
\]
where the ξ_i(t, ω) are real-valued stochastic processes satisfying regularity conditions (R1)-(R4) as previously defined. By construction of Σ, Ξ(t, ω)^+ is restricted to the even sub-algebra Cℓ^+_{n,0} and satisfies Ξ(t, ω)^+ Ξ(t, ω)^+ ∈ R.
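To see conditions (10.7.1)-(10.7.2) in action, take n = 3 and Σ = {{1, 2}, {1, 3}}: the bivectors e₁₂ and e₁₃ share exactly one index and hence anticommute, and each squares to −1 in Cℓ_{3,0}, so the cross terms in Ξ(t, ω)⁺Ξ(t, ω)⁺ cancel and the square is −(ξ₁₂² + ξ₁₃²) ∈ R. A small sketch with a multi-index blade product (the sign-counting implementation is mine):

```python
def cprod(A, B):
    """Multiply basis blades e_A e_B in Cl(n, 0): returns (sign, index set).
    Each generator of B is moved left past the larger generators already
    accumulated, contributing -1 per transposition; e_i e_i = +1."""
    sign, acc = 1, sorted(A)
    for b in sorted(B):
        sign *= (-1) ** sum(1 for a in acc if a > b)
        if b in acc:
            acc.remove(b)          # e_b e_b = +1 in Cl(n, 0)
        else:
            acc.append(b)
            acc.sort()
    return sign, frozenset(acc)

e12, e13 = frozenset({1, 2}), frozenset({1, 3})

# (10.7.2): e_{12} and e_{13} anticommute
s1, i1 = cprod(e12, e13)
s2, i2 = cprod(e13, e12)
assert i1 == i2 and s1 == -s2

# each squares to -1, so (xi12 e12 + xi13 e13)^2 = -(xi12^2 + xi13^2)
assert cprod(e12, e12) == (-1, frozenset())
assert cprod(e13, e13) == (-1, frozenset())
```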
Finally, we define the spin stochastic process
\[
\Psi(t,\omega) = \frac{\Xi(t,\omega)^+}{\|\Xi(t,\omega)^+\|}, \tag{10.7.4}
\]
where ∥·∥ denotes the inner product norm. This process exists as long as Ξ(t, ω)^+ ≠ 0. It should be clear from the construction that \(\{\Psi(t,\omega)\}_{t\ge 0} \subset \mathrm{Spin}^+(n)\).
Thus, given a family \(\{I_k\}_{1\le k\le M}\) partitioning [0, T] for each M > 0, we define
\begin{align*}
\int_0^T d\Psi(s)
&= \underset{M\to\infty}{\mathrm{L.I.M.}} \sum_{k=1}^{M}\sum_{i\in\Sigma} \psi_i(I_k)\, e_i
= \underset{M\to\infty}{\mathrm{L.I.M.}} \sum_{i\in\Sigma}\sum_{k=1}^{M} \psi_i(I_k)\, e_i \\
&= \sum_{i\in\Sigma} \Bigl(\underset{M\to\infty}{\mathrm{L.I.M.}} \sum_{k=1}^{M} \psi_i(I_k)\Bigr) e_i
= \sum_{i\in\Sigma} \int_0^T d\psi_i(s)\, e_i, \tag{10.7.5}
\end{align*}
where the ψ_i(t, ω) are L²(Ω)-valued stochastic processes satisfying (R1)-(R4) and the additional condition
\[
\sum_{i\in\Sigma} \bigl(\psi_i(t,\omega)\bigr)^2 = 1. \tag{10.7.6}
\]
Example 10.7.1 (A Process on Spin+(2) Integrated over the 2-Simplex). Let us define the stochastic process Ψ(t, ω) = ψ_∅(t, ω) + ψ_{12}(t, ω) e_{12} ∈ L²(Ω) ⊗ Cℓ^+_{2,0}. After normalization, we assume (ψ_∅(t, ω))² + (ψ_{12}(t, ω))² = 1 to get a stochastic process on L²(Ω) ⊗ Spin^+(2, 0).
Integrating over the 2-simplex, we obtain
\begin{align*}
\Psi^{(2)}(t,\omega)_s
&= \underset{N\to\infty}{\mathrm{L.I.M.}} \sum_{\substack{0<t_1<t \\ t_1\in\{t/N,\,2t/N,\,\ldots,\,t\}}} \Psi([0,t_1),\omega)\,\Psi([t_1,t),\omega) \\
&= \underset{N\to\infty}{\mathrm{L.I.M.}} \sum_{\substack{0<t_1<t \\ t_1\in\{t/N,\,\ldots,\,t\}}} \bigl(\psi_\emptyset([0,t_1),\omega)+\psi_{12}([0,t_1),\omega)\,e_{12}\bigr)\bigl(\psi_\emptyset([t_1,t),\omega)+\psi_{12}([t_1,t),\omega)\,e_{12}\bigr) \\
&= \underset{N\to\infty}{\mathrm{L.I.M.}} \sum_{\substack{0<t_1<t \\ t_1\in\{t/N,\,\ldots,\,t\}}} \bigl(\psi_\emptyset([0,t_1),\omega)\,\psi_\emptyset([t_1,t),\omega) - \psi_{12}([0,t_1),\omega)\,\psi_{12}([t_1,t),\omega)\bigr) \\
&\quad + \underset{N\to\infty}{\mathrm{L.I.M.}} \sum_{\substack{0<t_1<t \\ t_1\in\{t/N,\,\ldots,\,t\}}} \bigl(\psi_\emptyset([0,t_1),\omega)\,\psi_{12}([t_1,t),\omega) + \psi_{12}([0,t_1),\omega)\,\psi_\emptyset([t_1,t),\omega)\bigr)\,e_{12} \\
&= \int_0^t \psi_\emptyset(s,\omega)\,d\psi_\emptyset(s,\omega) - \int_0^t \psi_{12}(s,\omega)\,d\psi_{12}(s,\omega) \\
&\quad + e_{12}\Bigl(\int_0^t \psi_\emptyset(s,\omega)\,d\psi_{12}(s,\omega) + \int_0^t \psi_{12}(s,\omega)\,d\psi_\emptyset(s,\omega)\Bigr) \\
&= \psi_\emptyset^{(2)}(t,\omega)_s - \psi_{12}^{(2)}(t,\omega)_s + 2\,\psi_{\emptyset,12}(t,\omega)_s\, e_{12}. \tag{10.7.7}
\end{align*}
Thus, letting X(t, ω) be a real-valued random variable such that the family \(\{\sin(X(s,\omega)),\,\cos(X(s,\omega))\}\) satisfies (R1)-(R4), we can take ψ_∅(t, ω) = sin(X(t, ω)) and ψ_{12}(t, ω) = cos(X(t, ω)) to obtain
\begin{align*}
\Psi^{(2)}(t,\omega)_s
&= \int_0^t \sin(X(s,\omega))\,d\sin(X(s,\omega)) - \int_0^t \cos(X(s,\omega))\,d\cos(X(s,\omega)) \\
&\quad + 2\int_0^t \cos(X(s,\omega))\,d\sin(X(s,\omega))\; e_{12}. \tag{10.7.8}
\end{align*}
CHAPTER 11
DIRECTIONS FOR FUTURE WORK
We conclude with a list of avenues for continuing study.
1. Define Clifford-algebraic Brownian motion and investigate properties of the associated multiple stochastic integral, extending existing work in complex white noise analysis [17] to Clifford-algebraic white noise analysis.
2. Further investigate the relationship between the Clifford-algebraic Poisson process and the Poisson-Charlier polynomials, developing a Clifford-algebraic analog of Wiener's polynomial chaos [22].
3. Extend the multiple stochastic integral to infinite-dimensional Clifford algebras of arbitrary signature and develop a Clifford-algebraic Ito formula, generalizing existing work using fermions [2], [3], [4].
4. Study stochastic processes on quantum Clifford algebras (QCAs), otherwise known as Clifford-Hopf gebras [15], by applying the methods of Schurmann [20].
5. Study quantum stochastic processes on finite-dimensional Clifford algebras by considering the algebra O(H) ⊗ Cℓ_{p,q}, where O(H) denotes the collection of Hermitian operators on the Hilbert space H.
6. Using the combinatorial spin algebra, study self-avoiding quantum random walks by forming the algebra O(H) ⊗ (R ⊗ S_n); in particular, self-avoiding quantum random walks on the hypercube.
REFERENCES
[1] M. Anshelevich, Free stochastic measures via noncrossing partitions, Adv. Math., 155 (2000), 154-179.
[2] D. Applebaum, Fermion stochastic calculus in Dirac-Fock space, J. Phys. A, 28 (1995), 257-270.
[3] D. Applebaum, R. Hudson, Fermion Ito's formula and stochastic evolutions, Commun. Math. Phys., 96 (1984), 473-496.
[4] C. Barnett, R. Streater, I. Wilde, The Ito-Clifford integral I, J. Functional Analysis, 48 (1982), 172-212.
[5] F.A. Berezin, The Method of Second Quantization, Academic Press, New York, 1966.
[6] R. Brauer, H. Weyl, Spinors in n dimensions, Amer. J. Math., 57 (1935), 425-449.
[7] E. Cartan, The Theory of Spinors, Hermann, Paris, 1966.
[8] P.A.M. Dirac, Spinors in Hilbert Space, Plenum Press, New York, 1974.
[9] J.L. Doob, Stochastic Processes, John Wiley and Sons, Inc., New York, 1953.
[10] D. Engel, The Multiple Stochastic Integral, Mem. Amer. Math. Soc. No. 265, Providence, 1982.
[11] B. Fauser, Hecke algebra representations within Clifford geometric algebras of multivectors, J. Phys. A, 32 (1999), 1919-1936.
[12] P. Feinsilver, R. Schott, Algebraic Structures and Operator Calculus, Vol. III, Kluwer, Dordrecht, 1996.
[13] W. Feller, An Introduction to Probability Theory and its Applications, Vol. II, Wiley, New York, 1966.
[14] P. Lounesto, Clifford Algebras and Spinors, Cambridge University Press, Cambridge, 2001.
[15] Z. Oziewicz, Clifford Hopf-gebra and bi-universal Hopf-gebra, Czech. J. Phys., 47 (1997), 1267-1274.
[16] I. Porteous, Clifford Algebras and the Classical Groups, Cambridge Studies in Advanced Mathematics 50, Cambridge University Press, Cambridge, 1995.
[17] M. Redfern, Complex white noise, Infinite Dimensional Analysis, Quantum Probability and Related Topics, 4 (2001), 347-375.
[18] A. Renyi, Foundations of Probability, Holden-Day, San Francisco, 1970.
[19] G.-C. Rota, T. Wallstrom, Stochastic integrals: a combinatorial approach, Ann. Prob., 25 (1997), 1257-1283.
[20] M. Schurmann, White Noise on Bialgebras, Springer-Verlag, Berlin, 1993.
[21] D. West, Introduction to Graph Theory, Second Ed., Prentice Hall, Upper Saddle River, 2001.
[22] N. Wiener, The homogeneous chaos, Amer. J. Math., 60 (1938), 897-936.