University of Potsdam
Department of Mathematics and Science
Institute of Mathematics
Master’s thesis
The Fractional Derivatives
of the Riemann Zeta and
Dirichlet Eta Function
By: Jens Fehlau
Date of birth: 18.06.1994
Supervisors: Dr. Andreas Braunß
Prof. Dr. Matthias Keller
Published: 02.03.2020
ABSTRACT
This thesis is mainly concerned with exploring the fractional derivatives of functions and results from analytic number theory, such as the Riemann zeta function ζ(s) and its corresponding functional equation. It builds on the results provided by Emanuel Guariglia's paper "Riemann zeta fractional derivative - functional equation and link with primes" [5] and on similar papers dealing with this topic.
PREFACE
In recent years fractional calculus has become a well-established branch of mathematics, gaining importance in many fields such as theoretical physics and quantum mechanics. The motivation for introducing concepts such as the non-integer derivative of a function lies in the nature of iterating the derivative via composition. Consider for this the well-known property of the differential operator
$$D_x^n D_x^m = D_x^{n+m}.$$
From this simple property one could ask: what could the half-derivative of a function possibly be? One valid definition would be the following:
The half-derivative of a function is defined in such a way that taking the half-derivative twice yields the regular first derivative of the function, i.e.
$$D_x^{1/2} D_x^{1/2} = D_x^{1}.$$
From such a simple idea many different definitions of $D^\alpha$, α ∈ R, arose over the course of the last couple of hundred years, and the notion has even been generalized to α ∈ C. In this paper we are interested in exploring one of those many definitions, motivated by the limit definition of D, and in applying it to analytic number theoretical results such as the functional equation for the Riemann zeta function, Dirichlet series and the Hurwitz zeta function.
Note. Whenever results have been derived through my own investigations and
research, I will indicate it accordingly.
Contents

PREFACE
1 Prerequisites
2 Extending the factorial
2.1 The Gamma function
2.2 Euler's Reflection Formula
3 The Riemann zeta function
3.1 ζ(s, a) and various integral representations
3.2 The analytic continuation of ζ(s, a)
3.3 Hurwitz's formula for ζ(s, a)
3.4 The functional equation for ζ(s)
3.5 The Dirichlet eta function η(s)
4 The Grünwald-Letnikov fractional derivative
5 The fractional derivatives of ζ(s) and η(s)
5.1 Differentiating ζ(s) and η(s)
5.2 Convergence of ζ^(α)(s) and η^(α)(s)
5.3 The functional equations of ζ^(α)(s) and η^(α)(s)
6 Further investigations of D^α_s
6.1 Differentiating D(s) and L(χ, s)
6.2 Differentiating F(a, s) and ζ(s, a)
7 Conclusion
REFERENCES
1 Prerequisites
This introductory chapter will have the purpose of clarifying some of the notation
used in this mathematical paper as well as introducing basic concepts which we will
use throughout the thesis. Whenever we talk about the set of natural numbers
we mean the set N = {1, 2, 3, ...}. The integers form the set Z = {..., −2, −1, 0, 1, 2, ...}, composed of Z⁻ = {..., −2, −1}, Z⁺ = N and {0}. Over the course of the paper we will make use of the natural logarithm, which we denote by log(z). Note that we only consider the principal branch of the logarithm.
Throughout the chapters we are interested in investigating a special class of functions that is one of the quintessential tools of analytic number theory. We begin by introducing the notion of Dirichlet series, Dirichlet characters and related definitions.
Definition 1.1 Dirichlet series [1]
Let s ∈ C and let {z_k}_{k∈N} be a complex sequence. Then
$$D(s) \equiv \sum_{k=1}^{\infty} \frac{z_k}{k^s}$$
is called a Dirichlet series.
The type of function that we just introduced is an extremely important tool that finds its uses in almost all branches of mathematics. Next to analytic number theory, mathematicians are also interested in expressing so-called generating functions by Dirichlet series, which are often used in the study of combinatorics and asymptotic analysis.
Definition 1.2 Dirichlet character [1]
Let $(\mathbb{Z}/n\mathbb{Z})^{\times}$ be the group of units of the ring of integers modulo n and let $\mathbb{C}^{\times}$ denote the group of units of C. A group homomorphism $\chi : (\mathbb{Z}/n\mathbb{Z})^{\times} \to \mathbb{C}^{\times}$ is called a Dirichlet character (mod n).
The way we just introduced "Dirichlet characters" is not the way most of the literature would do it. For Dirichlet L-functions, which we will define soon, to be well-defined, we are interested in extending χ to the integers. At first we want to extend our definition and after that introduce a special type of character that will find its uses over the course of the thesis.
Consider Definition 1.2. We now define a mapping that extends our current definition of a Dirichlet character to the integers:
$$\chi : \mathbb{Z} \to \mathbb{C}, \quad r \mapsto \chi(r) = \begin{cases} \chi(r \ (\mathrm{mod}\ n)) & , \ \gcd(n, r) = 1 \\ 0 & , \ \gcd(n, r) > 1 \end{cases}$$
By extending the definition we have obtained what most of the mathematical community defines as a Dirichlet character. So whenever we use the term in this thesis, we invite the reader to take the extension of χ to the integers into account.
Definition 1.3 Principal character [1]
A Dirichlet character (mod n) is called a principal or trivial character modulo n if χ ≡ 1 on $(\mathbb{Z}/n\mathbb{Z})^{\times}$.
Note. Dirichlet characters are a special class of so-called arithmetic functions, be-
cause they allow one to use certain arithmetic operations on them. One of their main
properties is that they are completely multiplicative, meaning χ(ab) = χ(a)χ(b) for
all a, b ∈ Z.
After introducing the notion of characters, we are able to combine the Dirichlet series and χ into a so-called Dirichlet L-function.
Definition 1.4 Dirichlet L-function [1]
Let χ be a Dirichlet character. Then we call the Dirichlet series
$$L(\chi, s) \equiv \sum_{k=1}^{\infty} \frac{\chi(k)}{k^s}$$
a Dirichlet L-function.
One of the most prominent Dirichlet L-functions in mathematics is the Riemann zeta function, which we will introduce in Chapter 3. It features a principal character and is most famously known for being the function of interest in one of the seven Millennium Prize Problems, the Riemann Hypothesis, which claims that all non-trivial zeros of the Riemann zeta function have real part 1/2.
The next few definitions and theorems will find their uses throughout the paper and will complete this chapter after being stated.
Theorem 1.5 Generalized Binomial theorem
Let α, x, y ∈ R such that |x/y| < 1. Then
$$(x+y)^{\alpha} = \sum_{m=0}^{\infty} \binom{\alpha}{m} x^m y^{\alpha - m}.$$
Note. A non-trivial proof of this theorem can be found in Einführung in die höhere Mathematik Vol. 2 by Knopp/Mangoldt [7, pp. 130-132].
Definition 1.6 Fourier Series [2]
Let f(t) be a 2T-periodic function, with f(t + 2T) = f(t). Then the Fourier series of f(t) on an interval [−T, T] is given by
$$f(t) \equiv \frac{a_0}{2} + \sum_{k=1}^{\infty}\left[a_k\cos\!\left(\frac{k\pi}{T}t\right) + b_k\sin\!\left(\frac{k\pi}{T}t\right)\right].$$
We call a_0/2, a_k and b_k the Fourier coefficients of f(t), which are defined by
$$\frac{a_0}{2} \equiv \frac{1}{2T}\int_{-T}^{T} f(t)\,dt, \qquad a_k \equiv \frac{1}{T}\int_{-T}^{T} f(t)\cos\!\left(\frac{k\pi}{T}t\right)dt \qquad \text{and} \qquad b_k \equiv \frac{1}{T}\int_{-T}^{T} f(t)\sin\!\left(\frac{k\pi}{T}t\right)dt.$$
Note. Depending on the symmetry of f(t) it can happen that one or more of the Fourier coefficients become zero, due to the integrand becoming an odd function which is integrated over a symmetric interval.
Theorem 1.7 Generalized Leibniz rule [8]
Let f and g be functions that are k-times differentiable such that fg is k-times differentiable as well. Then the k-th derivative of fg is given by
$$(fg)^{(k)} \equiv \sum_{n=0}^{k}\binom{k}{n} f^{(n)} g^{(k-n)}.$$
Note. This fact can be proven using the principle of mathematical induction and the product rule for differentiable functions.
Theorem 1.8 Measure theoretical Leibniz rule for integrals [10]
Let f : U × Ω → C, where U ⊆ C and (Ω, F, µ) is a measure space. We have that
$$\partial_u\int_{\Omega} f(u,\omega)\,\mu(d\omega) = \int_{\Omega}\partial_u f(u,\omega)\,\mu(d\omega)$$
if the following conditions are all satisfied:
• For all u ∈ U: $\int_{\Omega}|f(u,\omega)|\,\mu(d\omega) < \infty$
• f is continuously differentiable in U µ-almost-surely
• There exists some g : Ω → R with $|\partial_u f(u,\omega)| \le g(\omega)$
• For all (u, ω) ∈ U × Ω: $\int_{\Omega}|g(\omega)|\,\mu(d\omega) < \infty$
Note. If f is in fact a holomorphic function on U, then it is also continuously differentiable and thus the second condition would already be satisfied. The same fact holds if f is meromorphic on U; then it is Lebesgue-almost surely continuously differentiable.
2 Extending the factorial
This first mathematically focused chapter is going to deal with the derivation of Eu-
ler’s reflection formula and finding the simple poles of the so-called gamma function.
2.1 The Gamma function
One very important function in analytic number theory is the so-called Gamma func-
tion, which acts as one of the ways of extending the factorial to non-integer values.
We are going to introduce its integral definition, which is valid for all s ∈ C, where
Re(s) > 0, and show, that it satisfies the main properties of the factorial.
Let s ∈ C with Re(s) > 1. We claim that the following expression satisfies the functional equation of the factorial x! = x(x − 1)! and that 0! = 1:
$$\Gamma(s) \equiv \int_{0}^{\infty} t^{s-1}e^{-t}\,dt,$$
where Γ(s) = (s − 1)!.
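As a quick sanity check (not part of the original derivation), the integral definition can be evaluated numerically; the sketch below, assuming Python with the mpmath library is available, compares Γ(s) computed from the integral with the factorial for a few integer arguments.

```python
# Hedged sketch: numerically verify Gamma(s) = integral_0^inf t^(s-1) e^(-t) dt
# against (s-1)! for small integer s.  Assumes the mpmath library is available.
from mpmath import mp, quad, exp, factorial, inf

mp.dps = 30  # working precision (decimal places)

for s in range(1, 6):
    integral = quad(lambda t: t**(s - 1) * exp(-t), [0, inf])
    print(s, integral, factorial(s - 1))  # the two columns should agree
```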
Proof of Claim I: x! = x(x − 1)!
Let s be chosen as above. Integration by parts gives us
$$\Gamma(s) = -t^{s-1}e^{-t}\Big|_{0}^{\infty} + (s-1)\int_{0}^{\infty} t^{s-2}e^{-t}\,dt.$$
Now for the evaluation of $-t^{s-1}e^{-t}$ under the boundary conditions. If we let the limit go to 0, then clearly the term goes to zero as well. But what about $\lim_{t\to\infty} -t^{s-1}e^{-t}$? Since exp(−t) outgrows polynomials of any degree for large values of t, this limit also approaches 0. In conclusion we arrive at
$$\Gamma(s) = (s-1)\int_{0}^{\infty} t^{(s-1)-1}e^{-t}\,dt = (s-1)\Gamma(s-1). \qquad \square$$
Proof of Claim II: 0! = 1
Let s = 1. Hence, we get
$$\Gamma(1) = \int_{0}^{\infty} t^{1-1}e^{-t}\,dt = \int_{0}^{\infty} e^{-t}\,dt = -e^{-t}\Big|_{0}^{\infty} = 1 = 0!. \qquad \square$$
Thus we have established the gamma function as a natural extension of the factorial to non-integer values and have proven our claims to hold.
Even though this function is an integral part of analysis, it is often inconvenient to
use. For that reason we are actually interested in making use of two more convenient
formulas for Γ(s).
Theorem 2.1 Euler's Definition for Γ(s)
For s ∈ C with Re(s) > 1, $\Gamma(s) = \int_0^{\infty} t^{s-1}e^{-t}\,dt$ can be expressed equivalently by
$$\Gamma(s) \equiv \frac{1}{s}\prod_{k=1}^{\infty}\frac{\left(1+\frac{1}{k}\right)^{s}}{1+\frac{s}{k}}.$$
Proof. We begin by considering a slight variant of our integral representation for Γ(s):
$$\Gamma_n(s) = \int_{0}^{n} t^{s-1}\left(1-\frac{t}{n}\right)^{n} dt.$$
Note that when we take $\lim_{n\to\infty}\Gamma_n(s)$ we arrive yet again at Γ(s), since
$$\lim_{n\to\infty}\Gamma_n(s) = \lim_{n\to\infty}\int_{0}^{n} t^{s-1}\left(1-\frac{t}{n}\right)^{n} dt = \int_{0}^{\infty} t^{s-1}\lim_{n\to\infty}\left(1-\frac{t}{n}\right)^{n} dt = \int_{0}^{\infty} t^{s-1}e^{-t}\,dt.$$
Interchanging the limits and distributing it into the upper bound is not a trivial matter and consequently we are interested in further justifying it.
Let us consider the sequence of functions
$$f_n : [0,\infty)\to\mathbb{C},\quad t\mapsto \chi_{[0,n]}(t)\cdot t^{s-1}\left(1-\frac{t}{n}\right)^{n}$$
on [0, ∞), where χ_{[0,n]} is the characteristic function. Then for all t ∈ [0, ∞) we have that $f_n(t)\xrightarrow{n\to\infty} t^{s-1}e^{-t}$ and
$$\left|\mathrm{Re}(f_n(t))\right| \le t^{\mathrm{Re}(s)-1}e^{-t}, \qquad \left|\mathrm{Im}(f_n(t))\right| \le t^{\mathrm{Re}(s)-1}e^{-t}.$$
This also means that both the real and imaginary parts of f_n are dominated by an integrable function on [0, ∞), namely
$$\int_{0}^{\infty} t^{\mathrm{Re}(s)-1}e^{-t}\,dt \overset{\mathrm{Re}(s)>1}{=} \Gamma(\mathrm{Re}(s)) < \infty.$$
Furthermore we have that
$$\mathrm{Re}(f_n(t))\xrightarrow{n\to\infty}\mathrm{Re}(f(t)) \quad\text{and}\quad \mathrm{Im}(f_n(t))\xrightarrow{n\to\infty}\mathrm{Im}(f(t))$$
and thus, by applying the dominated convergence theorem to the real and imaginary parts, we overall get
$$\lim_{n\to\infty}\int_{0}^{\infty} f_n(t)\,dt = \int_{0}^{\infty} f(t)\,dt$$
as a consequence of this fact also holding for Re(f_n(t)) and Im(f_n(t)).
With the motivation for where Γ_n(s) originated from and the limit interchange in question out of the way, we can start with the main proof.
By introducing the natural substitution $\tau = \frac{t}{n}$ in Γ_n(s) we arrive at
$$\Gamma_n(s) = \int_{0}^{n} t^{s-1}\left(1-\frac{t}{n}\right)^{n} dt = \int_{0}^{1}(\tau n)^{s-1}(1-\tau)^{n}\,n\,d\tau = n^{s}\int_{0}^{1}\tau^{s-1}(1-\tau)^{n}\,d\tau.$$
The next step consists of integrating Γ_n(s) by parts exactly n times. We will do this by an iterating process, where we index our integral accordingly. Letting $\Gamma_n(s) := n^{s}J_{n,s}$ we get
$$n^{s}J_{n,s} = n^{s}\int_{0}^{1}(1-\tau)^{n}\tau^{s-1}\,d\tau = n^{s}\Bigg[\underbrace{\frac{(1-\tau)^{n}\tau^{s}}{s}\Big|_{0}^{1}}_{=0} + \frac{n}{s}\int_{0}^{1}(1-\tau)^{n-1}\tau^{s}\,d\tau\Bigg] = n^{s}\frac{n}{s}J_{n-1,s+1} = n^{s}\frac{n}{s}\cdot\frac{n-1}{s+1}J_{n-2,s+2}$$
$$\vdots$$
$$= n^{s}J_{0,s+n}\prod_{k=0}^{n-1}\frac{n-k}{s+k} = n^{s}\int_{0}^{1}\tau^{s+n-1}\,d\tau\prod_{k=0}^{n-1}\frac{n-k}{s+k} = \frac{n^{s}}{s+n}\prod_{k=0}^{n-1}\frac{n-k}{s+k} = \frac{n!\,n^{s}}{s(s+1)\cdots(s+n-1)(s+n)}.$$
We now wish to proceed by taking the limit as n → ∞. In the process we are going to manipulate the terms in our product, under the condition that everything converges uniformly for Re(s) > 1, resulting in the desired infinite product expansion for Γ(s):
$$\lim_{n\to\infty}\Gamma_n(s) = \lim_{n\to\infty}\frac{n!\,n^{s}}{s(s+1)\cdots(s+n-1)(s+n)} = \lim_{n\to\infty}\frac{n!}{s(s+1)\cdots(s+n)}\left[\frac{2}{1}\cdot\frac{3}{2}\cdots\frac{n-1}{n-2}\cdot\frac{n}{n-1}\right]^{s}$$
$$= \lim_{n\to\infty}\frac{n!}{s(s+1)\cdots(s+n)}\left(\frac{2}{1}\right)^{s}\left(\frac{3}{2}\right)^{s}\cdots\left(\frac{n}{n-1}\right)^{s}\left(\frac{n+1}{n}\right)^{s} = \lim_{n\to\infty}\frac{1}{s}\prod_{k=1}^{n}\frac{k}{k+s}\prod_{k=1}^{n}\left(\frac{k+1}{k}\right)^{s} = \frac{1}{s}\prod_{k=1}^{\infty}\left[\left(1+\frac{1}{k}\right)^{s}\left(1+\frac{s}{k}\right)^{-1}\right],$$
where the third equality followed from the fact that $\left(\frac{n+1}{n}\right)^{s}\xrightarrow{n\to\infty}1$.
We have thus successfully derived Euler's definition for Γ(s) and can consider this proof to be complete. □
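As an illustration (not part of the thesis), the partial products of Euler's definition can be compared numerically against a library implementation of Γ; the sketch below, assuming Python with mpmath, shows the slow but steady convergence.

```python
# Hedged sketch: compare the partial product from Euler's definition of Gamma
# with mpmath's gamma at a sample point.  Assumes mpmath is available.
from mpmath import mp, mpf, gamma

mp.dps = 25
s = mpf('2.5')

def euler_partial(s, n):
    # (1/s) * prod_{k=1}^{n} (1 + 1/k)^s / (1 + s/k)
    prod = mpf(1)
    for k in range(1, n + 1):
        prod *= (1 + mpf(1) / k) ** s / (1 + s / k)
    return prod / s

for n in (10, 100, 1000, 10000):
    print(n, euler_partial(s, n), gamma(s))
```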
The next expression for Γ(s) involves the use of Euler’s ubiquitous constant γ, also
called the Euler-Mascheroni constant. Ubiquitous in the sense that it even appears
regularly in other scientific fields, such as the study of regularization for divergent
series and integrals in quantum field theory. This infinite product expansion is con-
sidered to be one of the most beautiful results in analytic number theory, due to the
fact that it combines both the extension of the factorial and the highly important
constant γ in one formula.
Theorem 2.2 Weierstraß' Definition for Γ(s)
For s ∈ C with Re(s) > 1, Γ(s) can be expressed equivalently by
$$\frac{1}{\Gamma(s)} \equiv s\,e^{\gamma s}\prod_{k=1}^{\infty}\left(1+\frac{s}{k}\right)e^{-\frac{s}{k}},$$
where $\gamma \equiv \lim_{n\to\infty}\left[H_n - \log(n)\right]$ is the Euler-Mascheroni constant and $H_n = \sum_{r=1}^{n}\frac{1}{r}$ is the n-th harmonic number.
Proof. Since we have already proven the equivalence between the integral definition and Euler's infinite product expansion, it is enough to show that the latter is equivalent to the reciprocal of Weierstraß' definition.
$$\Gamma(s) = \lim_{n\to\infty}\frac{1}{s}\prod_{k=1}^{n}\left[\left(1+\frac{1}{k}\right)^{s}\left(1+\frac{s}{k}\right)^{-1}\right] = \frac{1}{s}\lim_{n\to\infty}\left[n^{s}\prod_{k=1}^{n}\left(1+\frac{s}{k}\right)^{-1}\right] = \frac{1}{s}\lim_{n\to\infty}\left[\exp(\log(n^{s}))\prod_{k=1}^{n}\left(1+\frac{s}{k}\right)^{-1}\right]$$
$$= \frac{1}{s}\lim_{n\to\infty}\left[\exp(-H_n s+\log(n)s)\exp(H_n s)\prod_{k=1}^{n}\left(1+\frac{s}{k}\right)^{-1}\right] = \frac{1}{s}\lim_{n\to\infty}\left[\exp(-H_n s+\log(n)s)\exp\Big(s\sum_{1\le k\le n}\frac{1}{k}\Big)\prod_{k=1}^{n}\left(1+\frac{s}{k}\right)^{-1}\right]$$
$$= \frac{1}{s}\lim_{n\to\infty}\left[\exp(-H_n s+\log(n)s)\prod_{k=1}^{n}\exp\Big(\frac{s}{k}\Big)\prod_{k=1}^{n}\left(1+\frac{s}{k}\right)^{-1}\right] = \frac{1}{s}\lim_{n\to\infty}\left[\exp(-[H_n-\log(n)]\,s)\prod_{k=1}^{n}\left(1+\frac{s}{k}\right)^{-1}e^{\frac{s}{k}}\right]$$
$$\overset{(*)}{=} \frac{1}{s}\exp\Big(-\lim_{n\to\infty}[H_n-\log(n)]\,s\Big)\cdot\lim_{n\to\infty}\prod_{k=1}^{n}\left(1+\frac{s}{k}\right)^{-1}e^{\frac{s}{k}} = \frac{1}{s}e^{-\gamma s}\prod_{k=1}^{\infty}\left(1+\frac{s}{k}\right)^{-1}e^{\frac{s}{k}},$$
where the second equality followed from our derivation of Euler's product expansion and (*) from the fact that the product converges, since the left-hand side is independent of n, and that exp(z) is continuous. By taking the reciprocal on both sides, which is justified since the gamma function is never equal to zero, we arrive at our desired result. □
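For comparison (again an illustration, not from the thesis), the Weierstraß product can also be evaluated numerically; thanks to the exponential correction factors its partial products settle noticeably faster than Euler's form. A sketch assuming Python with mpmath:

```python
# Hedged sketch: partial products of the Weierstrass form of 1/Gamma(s),
# compared with 1/gamma(s) from mpmath.  Assumes mpmath is available.
from mpmath import mp, mpf, exp, gamma, euler

mp.dps = 25
s = mpf('2.5')

def weierstrass_partial(s, n):
    # s * e^(gamma*s) * prod_{k=1}^{n} (1 + s/k) e^(-s/k)  ~  1/Gamma(s)
    prod = s * exp(euler * s)
    for k in range(1, n + 1):
        prod *= (1 + s / k) * exp(-s / k)
    return prod

for n in (10, 100, 1000):
    print(n, weierstrass_partial(s, n), 1 / gamma(s))
```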
Weierstraß’ product expansion for Γ(s) gives us the big advantage of seeing the sim-
ple poles of the function at a glance. Up until now we have only considered the idea
of extending the factorial to s ∈ C, where Re(s) > 1. With Weierstraß’ definition
now in our hands, we have actually analytically continued Γ(s) to the whole complex
plane except for where the poles are. The following Lemma gives an idea for where
they lie in the complex plane.
Lemma 2.3 The gamma function Γ(s) has simple poles at {0,−1,−2, ...}.
Proof. Due to Weierstraß' definition we have expressed the gamma function as an infinite product. To find the poles, consider when Γ(s) → ∞, or equivalently 1/Γ(s) → 0. Examining its factors we can notice that none of them tend to zero. This implies that our product only goes to 0 iff one of its factors is exactly zero. Since exp(z) ≠ 0 for all z ∈ C, our only choice is given by
$$1 + \frac{s}{k} = 0.$$
This happens exactly whenever s ∈ {−1, −2, ...}. Examining the factors outside of the product we can immediately see, since $e^{\gamma s} \neq 0$ for all s ∈ C, that s = 0 is yet another pole. □
2.2 Euler’s Reflection Formula
This next chapter is going to deal with deriving Euler’s reflection formula. It is going
to be split into two main parts. At first we derive Euler’s infinite product expansion
for sin(z) in a unique fashion and after that we prove our main result. The way we are going to derive the infinite product expansion can, as far as I am aware, not be found in any standard literature; it was derived by me a few years prior in an effort to prove the reflection formula without the use of the Weierstraß factorization theorem.
Theorem 2.4 Mittag-Leffler series expansion for cot(s)
Let s ∈ C \ πZ. Then the following formula for cot(s) holds:
$$\cot(s) \equiv \frac{1}{s} - 2s\sum_{k=1}^{\infty}\frac{1}{(k\pi)^{2} - s^{2}}.$$
Proof. Let t ∈ R \ Z and consider the function f(z) = cos(tz). We proceed by
expanding f(z) = cos(tz) into its respective Fourier series in z on [−π, π] using
Definition 1.6.
At first note, that f(z) is an even function. An even function multiplied by an odd
function makes it odd overall. This makes the odd Fourier coefficient bk of f(z) vanish
since we are integrating over a symmetric interval. For the other Fourier coefficients
we get
$$\frac{a_0}{2} = \frac{1}{2\pi}\int_{-\pi}^{\pi}\cos(tz)\,dz = \frac{1}{\pi}\int_{0}^{\pi}\cos(tz)\,dz = \frac{\sin(tz)}{t\pi}\Big|_{0}^{\pi} = \frac{\sin(t\pi)}{t\pi}$$
and, by doing integration by parts twice,
$$a_k = \frac{2}{\pi}\int_{0}^{\pi}\cos(tz)\cos(kz)\,dz = \frac{2}{\pi}\cdot\frac{\cos(tz)\sin(kz)}{k}\Big|_{0}^{\pi} - \frac{2}{\pi}\cdot\frac{t\sin(tz)\cos(kz)}{k^{2}}\Big|_{0}^{\pi} + \frac{t^{2}}{k^{2}}a_k = -\frac{2}{\pi}\cdot\frac{t\sin(t\pi)\cos(k\pi)}{k^{2}} + \frac{t^{2}}{k^{2}}a_k = (-1)^{k+1}\frac{2}{\pi}\cdot\frac{t\sin(t\pi)}{k^{2}} + \frac{t^{2}}{k^{2}}a_k.$$
By now subtracting $\frac{t^{2}}{k^{2}}a_k$ on both sides and dividing by $\left(1-\frac{t^{2}}{k^{2}}\right)$ respectively, we get that
$$a_k = (-1)^{k+1}\frac{2}{\pi}\cdot\frac{t}{k^{2}-t^{2}}\sin(t\pi).$$
Plugging our newly acquired Fourier coefficients into the Fourier series expansion for cos(tz) gives us
$$\cos(tz) = \frac{\sin(t\pi)}{t\pi} + \frac{2}{\pi}\sum_{k=1}^{\infty}(-1)^{k+1}\frac{t}{k^{2}-t^{2}}\sin(t\pi)\cos(kz).$$
Since we have chosen t ∈ R \ Z initially, we can divide both sides by the common factor sin(tπ), since it will never equal zero. Also, by letting z = π we get
$$\frac{\cos(t\pi)}{\sin(t\pi)} = \cot(t\pi) = \frac{1}{t\pi} + \frac{2}{\pi}\sum_{k=1}^{\infty}(-1)^{k+1}\frac{t}{k^{2}-t^{2}}\cos(k\pi) = \frac{1}{t\pi} + \frac{2}{\pi}\sum_{k=1}^{\infty}(-1)^{k+1}\frac{t}{k^{2}-t^{2}}(-1)^{k} = \frac{1}{t\pi} - \frac{2}{\pi}\sum_{k=1}^{\infty}\frac{t}{k^{2}-t^{2}}.$$
By now substituting s = tπ we arrive at the desired series expansion. □
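A quick numerical illustration of the Mittag-Leffler expansion (not part of the thesis, assuming Python with mpmath); the series is compared with cot(s) at a sample point.

```python
# Hedged sketch: evaluate the Mittag-Leffler expansion of cot(s) and compare
# with mpmath's cot.  Assumes mpmath is available.
from mpmath import mp, mpf, pi, cot, nsum, inf

mp.dps = 20
s = mpf('1.3')  # any s that is not an integer multiple of pi

series = 1 / s - 2 * s * nsum(lambda k: 1 / ((k * pi)**2 - s**2), [1, inf])
print(series, cot(s))  # should agree
```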
We have now gathered our main tool for deriving Euler’s infinite product representa-
tion for sin(z).
Theorem 2.5 Euler's infinite product expansion for sin(z)
Let z ∈ R. Then the following representation for sin(z) is valid:
$$\sin(z) \equiv z\prod_{k=1}^{\infty}\left(1-\frac{z^{2}}{(k\pi)^{2}}\right).$$
Note. One can show that the product expansion actually holds for all z ∈ C.
Proof. Our proof focuses on integrating the Mittag-Leffler expansion for cot(s) on [0, z]. We first subtract 1/s on both sides and then integrate with respect to s:
$$\int_{0}^{z}\cot(s)-\frac{1}{s}\,ds = \int_{0}^{z}-2s\sum_{k=1}^{\infty}\frac{1}{(k\pi)^{2}-s^{2}}\,ds$$
$$\overset{(*)}{\Longleftrightarrow}\quad \log(\sin(s))-\log(s)\Big|_{0}^{z} = \sum_{k=1}^{\infty}\int_{0}^{z}\frac{-2s}{(k\pi)^{2}-s^{2}}\,ds$$
$$\overset{(**)}{\Longleftrightarrow}\quad \log\!\left(\frac{\sin(s)}{s}\right)\Big|_{0}^{z} = \sum_{k=1}^{\infty}\log(k^{2}\pi^{2}-s^{2})\Big|_{0}^{z}$$
$$\overset{(***)}{\Longleftrightarrow}\quad \log\!\left(\frac{\sin(z)}{z}\right) = \log\!\left(\prod_{k=1}^{\infty}\frac{k^{2}\pi^{2}-z^{2}}{k^{2}\pi^{2}}\right),$$
where in (*) the interchange of limits follows from Tonelli's theorem: no matter what z is, starting at some value of k ∈ N the partial sums of the series will be monotone and positive, hence we can interchange limits.
In (**) we used the functional equation of the logarithm and, on the integral, the substitution t = (kπ)² − s². For (***) we make use of the continuity of the logarithm to interchange limits. On the other hand we make use of the fact that
$$\lim_{s\to 0}\frac{\sin(s)}{s} = \lim_{s\to 0}\frac{s+O(s^{3})}{s} = \lim_{s\to 0}1+O(s^{2}) = 1$$
and consequently log(1) = 0, making the term vanish under the boundary condition. The principal branch of the natural logarithm is injective and thus we can conclude that
$$\frac{\sin(z)}{z} = \prod_{k=1}^{\infty}\frac{k^{2}\pi^{2}-z^{2}}{k^{2}\pi^{2}}. \tag{1}$$
Multiplying by z whenever z ≠ 0 we arrive at our desired expression. □
Note. The infinite product in (1) is the product representation for the cardinal sine function sinc(z). When Euler first derived the product representation he made use of the Maclaurin series for sinc(z) and continued from this point onwards by decomposing it into its linear factors.
We have now derived all the necessary theorems to prove the climax of this chapter:
Theorem 2.6 Euler's Reflection formula
For s ∈ C \ Z we have
$$\Gamma(s)\Gamma(1-s) \equiv \frac{\pi}{\sin(\pi s)}.$$
Proof. We proceed by using Weierstraß' definition of Γ(s). Also note that, due to the recursive nature of Gamma, we have Γ(1 − s) = (−s)Γ(−s) and thus
$$\Gamma(s)\Gamma(1-s) = \left[\frac{1}{s}e^{-\gamma s}\prod_{k=1}^{\infty}\left(1+\frac{s}{k}\right)^{-1}e^{\frac{s}{k}}\right]\left[e^{\gamma s}\prod_{k=1}^{\infty}\left(1-\frac{s}{k}\right)^{-1}e^{-\frac{s}{k}}\right] = \frac{1}{s}\prod_{k=1}^{\infty}\left(1+\frac{s}{k}\right)^{-1}\left(1-\frac{s}{k}\right)^{-1} = \frac{1}{s}\prod_{k=1}^{\infty}\left(1-\frac{s^{2}}{k^{2}}\right)^{-1} = \frac{\pi}{\sin(\pi s)},$$
where the last equality followed from factoring out π and identifying the corresponding infinite product as sin(πs). □
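As a quick numerical cross-check (illustration only, assuming Python with mpmath), the reflection formula can be verified at a non-integer complex point:

```python
# Hedged sketch: verify Gamma(s)*Gamma(1-s) = pi/sin(pi*s) at a sample point.
# Assumes mpmath is available.
from mpmath import mp, mpc, gamma, pi, sin

mp.dps = 25
s = mpc('0.3', '1.7')  # any s in C \ Z

lhs = gamma(s) * gamma(1 - s)
rhs = pi / sin(pi * s)
print(lhs)
print(rhs)  # should agree to working precision
```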
3 The Riemann zeta function
The following chapter, excluding 3.5, has been directly inspired by chapter 12 of
Apostol’s Introduction to Analytic Number Theory [1]. We are going to go into way
more detail on the proofs since Apostol provided many of those only with an outline.
Thus, most of this chapter except for the main definitions, theorems and ideas for the
proofs have been filled in by myself in an effort to make everything as clear as it can
get to the reader.
3.1 ζ(s, a) and various integral representations
Our main Dirichlet L-function that we are going to deal with is the Riemann zeta function ζ(s), which we have already mentioned in the prerequisite chapter. ζ(s) converges uniformly for every ε > 0 in the half-plane Re(s) ≥ 1 + ε and absolutely for all s ∈ C with Re(s) > 1. Its series representation is given by the Dirichlet L-function with principal Dirichlet character modulo 1, such that χ ≡ 1 on Z:
$$\zeta(s) \equiv \sum_{k=1}^{\infty}\frac{1}{k^{s}}.$$
A more generalized version of ζ(s) is given by the Hurwitz zeta function ζ(s, a), which also converges for all a ∈ (0, 1] and s ∈ C with Re(s) > 1:
$$\zeta(s, a) \equiv \sum_{k=0}^{\infty}\frac{1}{(k+a)^{s}}.$$
Note that whenever we have a = 1, we simply end up with the Riemann zeta function after a shift of index in k, meaning ζ(s, 1) = ζ(s).
Before we can continue, we first need to prove that both ζ(s) and ζ(s, a) converge just as we claimed.
Theorem 3.1 ζ(s, a) converges absolutely for Re(s) > 1. Whenever we consider a half-plane Re(s) ≥ 1 + ε, the convergence is also uniform for all ε > 0.
Proof. At first notice that since k < k + a for a ∈ (0, 1], we have $\frac{1}{k+a} \le \frac{1}{k}$ (*). Now consider
$$|\zeta(s,a)| = \Big|\sum_{k=0}^{\infty}\frac{1}{(k+a)^{s}}\Big| \le \sum_{k=0}^{\infty}\frac{1}{|(k+a)^{s}|} = \Big|\frac{1}{a^{s}}\Big| + \sum_{k=1}^{\infty}\frac{1}{|(k+a)^{s}|} = \frac{1}{a^{\mathrm{Re}(s)}} + \sum_{k=1}^{\infty}\frac{1}{(k+a)^{\mathrm{Re}(s)}} \overset{(*)}{\le} \frac{1}{a^{\mathrm{Re}(s)}} + \sum_{k=1}^{\infty}\frac{1}{k^{\mathrm{Re}(s)}} \le \frac{1}{a^{1+\varepsilon}} + \sum_{k=1}^{\infty}\frac{1}{k^{1+\varepsilon}} < \infty.$$
Since 1 + ε > 1 we end up with a p-series that converges absolutely for Re(s) > 1.
Now for the uniform convergence. Since ζ(1 + ε, a) converges absolutely, for all ε′ > 0 there exists some N ∈ N such that for all n ≥ N:
$$\Big|\sum_{k=0}^{\infty}\Big|\frac{1}{(k+a)^{1+\varepsilon}}\Big| - \sum_{k=0}^{n}\Big|\frac{1}{(k+a)^{1+\varepsilon}}\Big|\Big| < \varepsilon'.$$
Hence,
$$\Big|\sum_{k=0}^{\infty}\Big|\frac{1}{(k+a)^{1+\varepsilon}}\Big| - \sum_{k=0}^{n}\Big|\frac{1}{(k+a)^{1+\varepsilon}}\Big|\Big| = \sum_{k=0}^{\infty}\Big|\frac{1}{(k+a)^{1+\varepsilon}}\Big| - \sum_{k=0}^{n}\Big|\frac{1}{(k+a)^{1+\varepsilon}}\Big| = \sum_{k=n+1}^{\infty}\Big|\frac{1}{(k+a)^{1+\varepsilon}}\Big| = \sum_{k=n+1}^{\infty}\frac{1}{(k+a)^{1+\varepsilon}} < \varepsilon'.$$
Thus for all s ∈ C with Re(s) ≥ 1 + ε we have
$$\Big|\zeta(s,a) - \sum_{k=0}^{n}\frac{1}{(k+a)^{s}}\Big| = \Big|\sum_{k=n+1}^{\infty}\frac{1}{(k+a)^{s}}\Big| \le \sum_{k=n+1}^{\infty}\Big|\frac{1}{(k+a)^{s}}\Big| = \sum_{k=n+1}^{\infty}\frac{1}{(k+a)^{\mathrm{Re}(s)}} \le \sum_{k=n+1}^{\infty}\frac{1}{(k+a)^{1+\varepsilon}} < \varepsilon'.$$
Furthermore, since ε′ does not depend on the choice of s, we have ζ(s, a) converging uniformly on the half-plane Re(s) ≥ 1 + ε. □
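To make the convergence statement concrete (illustration only, assuming Python with mpmath, whose zeta(s, a) implements the Hurwitz zeta function), one can watch the partial sums approach the library value:

```python
# Hedged sketch: partial sums of the Hurwitz zeta series versus mpmath.zeta(s, a).
# Assumes mpmath is available; mpmath.zeta(s, a) is the Hurwitz zeta function.
from mpmath import mp, mpf, mpc, zeta

mp.dps = 20
s, a = mpc(2, 3), mpf('0.5')   # Re(s) > 1, a in (0, 1]

def partial(n):
    return sum(1 / (k + a)**s for k in range(0, n + 1))

for n in (10, 100, 1000):
    print(n, partial(n))
print('mpmath:', zeta(s, a))
```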
For our further investigations we need to derive an integral representation for ζ(s, a).
Once we have done that, we can immediately get our hands on a similar expression
for ζ(s) by letting a = 1. Let us consider the following integral, where x ∈ R
$$I = \int_{0}^{\infty}\frac{t^{x-1}}{e^{at}-e^{(a-1)t}}\,dt.$$
We are going to manipulate I accordingly, by using the geometric series, limit interchanging arguments and substitutions.
$$I = \int_{0}^{\infty}\frac{t^{x-1}}{e^{at}-e^{(a-1)t}}\cdot\frac{e^{-at}}{e^{-at}}\,dt = \int_{0}^{\infty}e^{-at}\frac{t^{x-1}}{1-e^{-t}}\,dt = \int_{0}^{\infty}e^{-at}t^{x-1}\sum_{k=0}^{\infty}e^{-kt}\,dt,$$
where the last line follows from the convergence of the geometric series on (0, ∞), since exp(−t) < 1 on said interval. Next we want to interchange the infinite series and the integral, which needs to be justified further.
Let
$$I_{\varepsilon} = \int_{\varepsilon}^{\infty}e^{-at}t^{x-1}\sum_{k=0}^{\infty}e^{-kt}\,dt$$
for ε > 0. We now notice that $I_{\varepsilon}\xrightarrow{\varepsilon\to 0^{+}}I$. The geometric series inside the integral converges uniformly on (ε, ∞). Hence, by interchanging limits and introducing the substitution τ = (k + a)t, we get
$$I_{\varepsilon} = \sum_{k=0}^{\infty}\int_{\varepsilon}^{\infty}e^{-(k+a)t}t^{x-1}\,dt = \sum_{k=0}^{\infty}\int_{(k+a)\varepsilon}^{\infty}\left(\frac{\tau}{k+a}\right)^{x-1}e^{-\tau}\frac{d\tau}{k+a} = \sum_{k=0}^{\infty}\frac{1}{(k+a)^{x}}\int_{(k+a)\varepsilon}^{\infty}\tau^{x-1}e^{-\tau}\,d\tau.$$
Consequently, I_ε is monotonically increasing for ε → 0 and thus bounded by
$$\sum_{k=0}^{\infty}\frac{1}{(k+a)^{x}}\int_{0}^{+\infty}\tau^{x-1}e^{-\tau}\,d\tau.$$
In conclusion we arrive at
$$\lim_{\varepsilon\to 0}I_{\varepsilon} = I = \sum_{k=0}^{\infty}\frac{1}{(k+a)^{x}}\int_{0}^{\infty}\tau^{x-1}e^{-\tau}\,d\tau,$$
but also notice that
$$I = \int_{0}^{\infty}e^{-at}t^{x-1}\sum_{k=0}^{\infty}e^{-kt}\,dt.$$
Thus the interchange of limits has been justified. By now noticing that our integral is exactly Γ(x) and the series ζ(x, a), we arrive at the solution
$$\zeta(x,a)\Gamma(x) = \int_{0}^{\infty}\frac{t^{x-1}}{e^{at}-e^{(a-1)t}}\,dt.$$
Note that by setting a = 1, we get
$$\zeta(x)\Gamma(x) = \int_{0}^{\infty}\frac{t^{x-1}}{e^{t}-1}\,dt.$$
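Numerically this identity is easy to spot-check (illustration only, assuming Python with mpmath):

```python
# Hedged sketch: verify zeta(x, a) * Gamma(x) = integral of
# t^(x-1) / (e^(a*t) - e^((a-1)*t)) over (0, inf) for a real x > 1.
# Assumes mpmath is available.
from mpmath import mp, mpf, quad, exp, gamma, zeta, inf

mp.dps = 20
x, a = mpf(3), mpf('0.7')

integral = quad(lambda t: t**(x - 1) / (exp(a * t) - exp((a - 1) * t)), [0, inf])
print(integral)
print(zeta(x, a) * gamma(x))  # should agree
```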
We established the fact for real x only. We are now going to proceed by extending I to all s ∈ C with Re(s) > 1. Now, both ζ(s, a) and Γ(s) are analytic for Re(s) > 1. For the right-hand side we make use of Theorem 1.8 to prove the analyticity of I. Now
$$f(s,t) = \frac{t^{s-1}}{e^{at}-e^{(a-1)t}}$$
has no poles, since for Re(s) > 1 and a ∈ (0, 1] our denominator never tends to zero; thus f is holomorphic in s for all t ∈ [0, ∞) and consequently continuously differentiable in s on the half-plane Re(s) > 1. Also we note that
$$\int_{0}^{\infty}\Big|\frac{t^{s-1}e^{-at}}{1-e^{-t}}\Big|\,dt = \int_{0}^{\infty}\frac{t^{\mathrm{Re}(s)-1}e^{-at}}{1-e^{-t}}\,dt = \zeta(\mathrm{Re}(s),a)\,\Gamma(\mathrm{Re}(s)) < \infty,$$
since both ζ(Re(s), a) and Γ(Re(s)) converge for Re(s) > 1. Now
$$\Big|\partial_{s}f(s,t)\Big| = \Big|\frac{t^{s-1}e^{-at}}{1-e^{-t}}\log(t)\Big| \le \frac{t^{\mathrm{Re}(s)-1}e^{-at}}{1-e^{-t}}\,t = \frac{t^{\mathrm{Re}(s)}e^{-at}}{1-e^{-t}} = f(\mathrm{Re}(s)+1,t),$$
where the inequality follows from the fact that t ≥ log(t) for all t ∈ R⁺. From these results we can also conclude that
$$\int_{0}^{\infty}\Big|f(\mathrm{Re}(s)+1,t)\Big|\,dt = \int_{0}^{\infty}\frac{t^{\mathrm{Re}(s)}e^{-at}}{1-e^{-t}}\,dt = \zeta(\mathrm{Re}(s)+1,a)\,\Gamma(\mathrm{Re}(s)+1) < \infty$$
and hence all conditions of Theorem 1.8 are satisfied, meaning
$$\partial_{s}I = \partial_{s}\int_{0}^{\infty}f(s,t)\,dt = \int_{0}^{\infty}\partial_{s}f(s,t)\,dt.$$
What we have thus shown is that I is complex differentiable for all s ∈ C with Re(s) > 1, making it analytic in every strip including the half-plane Re(s) > 1. Hence by analytic continuation we have obtained an integral representation that is valid for all s ∈ C, Re(s) > 1.
For further extensions of ζ(s, a) we are interested in deriving yet another integral representation. This time we are going to consider a contour integral representation that uses a contour γ around the negative real axis with a loop avoiding the origin. [See Figure 1]
Figure 1: Contour Plot of γ
The contour integral is inspired directly by the structure of I and with it we can establish a similar relationship between it, the gamma function and ζ(s, a).
Theorem 3.2 For a ∈ (0, 1] we have that
$$I(s,a) = \frac{1}{2\pi i}\int_{\gamma}\frac{z^{s-1}}{e^{-az}-e^{(1-a)z}}\,dz =: \frac{1}{2\pi i}\int_{\gamma}z^{s-1}f(z)\,dz$$
is an entire function of s, where ζ(s, a) = I(s, a)Γ(1 − s) for Re(s) > 1.
Note. Whenever we talk about f(z) throughout this chapter, we use the definition provided in this theorem.
Proof. Considering our contour γ, let us choose the parameterizations $z(R) = Re^{-i\pi}$ for γ₁ and $z(R) = Re^{i\pi}$ for γ₃, R ∈ [ε, ∞), where 0 < ε < 2π is the radius of the circle around the origin.
Since $z^{s-1}f(z)$ is holomorphic in s for all z ∈ γ₂[−π, π], γ₂[−π, π] has finite measure and both $z^{s-1}f(z)$ and $\partial_{s}z^{s-1}f(z)$ are continuous in z on B_ε(0) for all s ∈ C, we have that the integral over γ₂ is an entire function in s.
To completely prove our integral representation to be entire, we still need to show that our integrals over γ₁ and γ₃ are also entire. Let us consider some arbitrary compact disk |s| < M with radius M. We have that $z^{s-1}f(z)$ is holomorphic in B_M(0) for all z ∈ γ₁,₃[ε, ∞).
Now for γ₁, when R ≥ 1:
$$|z^{s-1}| \overset{R>0}{=} R^{\mathrm{Re}(s)-1}\left|\exp(-i\pi(\mathrm{Re}(s)-1+i\,\mathrm{Im}(s)))\right| = R^{\mathrm{Re}(s)-1}e^{\pi\,\mathrm{Im}(s)} \overset{|s|<M}{\le} R^{M-1}e^{M\pi}.$$
For γ₃ on the other hand, whenever R ≥ 1:
$$|z^{s-1}| \overset{R>0}{=} R^{\mathrm{Re}(s)-1}\left|\exp(i\pi(\mathrm{Re}(s)-1+i\,\mathrm{Im}(s)))\right| = R^{\mathrm{Re}(s)-1}e^{-\pi\,\mathrm{Im}(s)} \overset{|s|<M}{\le} R^{M-1}e^{M\pi}.$$
From both inequalities resulting in the same bound, we can get a final estimate
$$\Big|z^{s-1}f(z)\Big| = \Big|\frac{z^{s-1}}{e^{-az}-e^{(1-a)z}}\Big| \le \frac{R^{M-1}e^{M\pi}}{e^{aR}-e^{(a-1)R}} = \frac{e^{R+M\pi}}{e^{R}-1}R^{M-1}e^{-aR} < 2e^{M\pi}R^{M-1}e^{-aR},$$
where the first inequality follows from our estimates along γ₁ and γ₃. But $e^{R}-1 > \frac{e^{R}}{2}$ when R > log(2), and thus the integrand is bounded by $CR^{M-1}e^{-aR}$. Hence,
$$\int_{\gamma_{1,3}}\Big|z^{s-1}f(z)\Big|\,dz < C\int_{\varepsilon}^{\infty}R^{M-1}e^{-aR}\,dR < \infty.$$
Also consider
$$\partial_{s}z^{s-1}f(z) = \log(z)\,z^{s-1}f(z)$$
and thus along the different parts of the contour we get
$$\Big|\partial_{s}z^{s-1}f(z)\Big| < \Big|\log(R)-i\pi\Big|\cdot\Big|z^{s-1}f(z)\Big|$$
along γ₁ and
$$\Big|\partial_{s}z^{s-1}f(z)\Big| < \Big|\log(R)+i\pi\Big|\cdot\Big|z^{s-1}f(z)\Big|$$
along γ₃, since
$$\lim_{\varphi\to\pm\pi}\log(Re^{i\varphi}) = \log(R)\pm i\pi.$$
Overall it follows that
$$\int_{\gamma_{1,3}}\Big|\partial_{s}z^{s-1}f(z)\Big|\,dz < C\int_{\varepsilon}^{\infty}\Big|\log(R)\pm i\pi\Big|R^{M-1}e^{-aR}\,dR \le C\int_{\varepsilon}^{\infty}\big(\log(R)+\pi\big)R^{M-1}e^{-aR}\,dR = C\int_{\varepsilon}^{\infty}\log(R)R^{M-1}e^{-aR}\,dR + \pi C\int_{\varepsilon}^{\infty}R^{M-1}e^{-aR}\,dR < \infty.$$
Hence by Theorem 1.8, our integrals over γ₁ and γ₃ are holomorphic functions for all s ∈ B_M(0) and arbitrary M ≥ 0. Furthermore, our integrals over γ₁ and γ₃ are thus entire functions, making our integral over the whole contour entire overall.
Now, for ζ(s, a) = I(s, a)Γ(1 − s), Re(s) > 1, we are going to break the integral over γ up into its separate pieces along γ and show that it results in what we desire:
$$I(s,a) = \frac{1}{2\pi i}\left[\int_{\gamma_1}z^{s-1}f(z)\,dz + \int_{\gamma_2}z^{s-1}f(z)\,dz + \int_{\gamma_3}z^{s-1}f(z)\,dz\right].$$
By using the parameterizations from the start and by introducing $z(\varphi) = \varepsilon e^{i\varphi}$, φ ∈ [−π, π] on the integral over γ₂, we arrive at
$$I(s,a) = \frac{1}{2\pi i}\left[\int_{\infty}^{\varepsilon}R^{s-1}e^{-i\pi s}f(-R)\,dR + \int_{\varepsilon}^{\infty}R^{s-1}e^{i\pi s}f(-R)\,dR + \int_{-\pi}^{\pi}i\varepsilon^{s}e^{si\varphi}f(\varepsilon e^{i\varphi})\,d\varphi\right]$$
$$= \frac{1}{2\pi i}\left[(e^{i\pi s}-e^{-i\pi s})\int_{\varepsilon}^{\infty}R^{s-1}f(-R)\,dR + i\varepsilon^{s}\int_{-\pi}^{\pi}e^{si\varphi}f(\varepsilon e^{i\varphi})\,d\varphi\right] = \frac{1}{\pi}\left[\sin(\pi s)\int_{\varepsilon}^{\infty}R^{s-1}f(-R)\,dR + \frac{\varepsilon^{s}}{2}\int_{-\pi}^{\pi}e^{si\varphi}f(\varepsilon e^{i\varphi})\,d\varphi\right].$$
Now consider the limiting process for the first integral when closing the circle around the origin, for Re(s) > 1:
$$\lim_{\varepsilon\to 0}\left[\frac{\sin(\pi s)}{\pi}\int_{\varepsilon}^{\infty}R^{s-1}f(-R)\,dR\right] = \frac{\sin(\pi s)}{\pi}\int_{0}^{\infty}\frac{R^{s-1}}{e^{aR}-e^{(a-1)R}}\,dR = \frac{\sin(\pi s)}{\pi}\zeta(s,a)\Gamma(s).$$
Next we apply the same limit to the second integral. Now f(z) is analytic for z ∈ (−2π, 2π) except for a simple pole at z = 0. We remove the singularity by considering zf(z) instead, making it analytic on z ∈ (−2π, 2π). Thus |zf(z)| ≤ c, c ∈ R, and for |z| = ε we get
$$\Big|\frac{\varepsilon^{s}}{2\pi}\int_{-\pi}^{\pi}e^{si\varphi}f(\varepsilon e^{i\varphi})\,d\varphi\Big| \le \frac{\varepsilon^{\mathrm{Re}(s)}}{2\pi}\int_{-\pi}^{\pi}\Big|e^{si\varphi}f(\varepsilon e^{i\varphi})\Big|\,d\varphi = \frac{\varepsilon^{\mathrm{Re}(s)}}{2\pi}\int_{-\pi}^{\pi}e^{-\mathrm{Im}(s)\varphi}\frac{c}{\varepsilon}\,d\varphi = c\,\varepsilon^{\mathrm{Re}(s)-1}\frac{\sinh(\mathrm{Im}(s)\pi)}{\mathrm{Im}(s)\pi}.$$
Now that we have found the integral to be bounded, by applying the limit we arrive at
$$\lim_{\varepsilon\to 0}\frac{\varepsilon^{s}}{2\pi}\int_{-\pi}^{\pi}e^{si\varphi}f(\varepsilon e^{i\varphi})\,d\varphi \overset{\mathrm{Re}(s)>1}{\longrightarrow} 0.$$
Concluding everything we have thus gathered, our final result turns out to be
$$I(s,a) = \frac{\sin(\pi s)}{\pi}\Gamma(s)\zeta(s,a)$$
and, by applying Euler's Reflection Formula from Theorem 2.6,
$$I(s,a)\Gamma(1-s) = \zeta(s,a). \qquad \square$$
3.2 The analytic continuation of ζ(s, a)
Up until now we have defined and proven everything to hold for Re(s) > 1 including
I(s, a)Γ(1 − s) = ζ(s, a). In this chapter, we are interested in defining the Hurwitz
zeta function for Re(s) ≤ 1 by an analytic continuation.
Theorem 3.3 ζ(s, a) ≡ I(s, a)Γ(1 − s) is analytic for all s ∈ C except for a simple pole at s = 1 with residue 1.
Proof. For Re(s) > 1 we have that ζ(s, a) = I(s, a)Γ(1 − s). Since ζ(s, a) is analytic for Re(s) > 1, but Γ(1 − s) has poles at all s = 1, 2, 3, ... and I(s, a) was proven to be entire, it follows that I(s, a) has zeros of order (at least) one at all s = 2, 3, ..., cancelling the poles of Γ(1 − s). Since we know of the simple poles of ζ(s, a) and Γ(1 − s) at s = 1, we can conclude that I(s, a) is nonzero at s = 1. Since Γ(1 − s) is analytic on the whole complex plane except for the simple poles at s = 1, 2, 3, ... and I(s, a) is an entire function with zeros at s = 2, 3, ..., we find that I(s, a)Γ(1 − s) is analytic on the whole complex plane except for a simple pole at s = 1. Since ζ(s, a) and I(s, a)Γ(1 − s) coincide on the uncountable set of all complex numbers with Re(s) > 1, we conclude by the identity theorem for holomorphic functions that ζ(s, a) = I(s, a)Γ(1 − s) already holds on all of C \ {1}.
Claim: s = 1 is a simple pole with residue 1.
Proof. For every s ∈ N, by Cauchy's residue theorem applied to I(s, a) we get that
$$I(s,a) = \frac{1}{2\pi i}\Bigg[\underbrace{(e^{i\pi s}-e^{-i\pi s})}_{=0}\int_{\varepsilon}^{\infty}R^{s-1}f(-R)\,dR + \int_{\gamma_2}z^{s-1}f(z)\,dz\Bigg] = \frac{1}{2\pi i}\int_{\gamma_2}z^{s-1}f(z)\,dz = \mathrm{Res}_{z=0}\,z^{s-1}f(z).$$
This formula holds for all natural s and thus also for s = 1:
$$I(1,a) = \mathrm{Res}_{z=0}\,\frac{1}{e^{-az}-e^{(1-a)z}} = \lim_{z\to 0}(z-0)\frac{1}{e^{-az}-e^{(1-a)z}} = \lim_{z\to 0}\frac{z}{1-az+O(z^{2})-1-z+az+O(z^{2})} = \lim_{z\to 0}\frac{1}{-1+O(z)} = -1.$$
All that is left to check is which residue zeta has at s = 1:
$$\mathrm{Res}_{s=1}\,\zeta(s,a) = \lim_{s\to 1}(s-1)I(s,a)\Gamma(1-s) = I(1,a)\lim_{s\to 1}\underbrace{(s-1)\Gamma(1-s)}_{=-\Gamma(2-s)} = \Gamma(1) = 1.$$
And this concludes the proof that ζ(s, a) has a simple pole with residue 1 at s = 1. □
Theorem 3.4 ζ(s) = ζ(s, 1) is analytic everywhere except for a simple pole with
residue 1 at s = 1.
Proof. The claim follows immediately from Theorem 3.3, since we have included the
case where a = 1 in the analytic continuation of ζ(s, a).
□
3.3 Hurwitz’s formula for ζ(s, a)
We are now going to derive a formula, which is a generalization of the functional
equation for the Riemann zeta function. It has been first derived by Hurwitz and
allows for yet another representation of ζ(s, a) but this time for Re(s) < 0.
Theorem 3.5 Let
$$S(R) = \mathbb{C}\setminus\bigcup_{k\in\mathbb{Z}}D(2\pi i k, R),$$
where D(2πik, R) = {z : |z − 2πik| ≤ R, k ∈ Z} denotes the set of closed circular disks of radius R around 2πik. Then
$$f(z) = \frac{1}{e^{-az}-e^{(1-a)z}}$$
is bounded in S(R) for a ∈ (0, 1].
Note. f(z) is again the same function that we have used before as part of the integrand for I(s, a).
Proof. We approach this proof by first splitting S(R) up into three distinct regions and showing that f(z) is bounded in each one of them. Let z ∈ C and let
$$\varrho(R) = \{z : |\mathrm{Re}(z)| \le 1,\ |\mathrm{Im}(z)| \le \pi,\ |z| \ge R\}$$
be a rectangle with a hole avoiding the origin. [See Figure 2]
Figure 2: Punctured Rectangle ϱ(R)
Now ϱ(R) is a closed finite (and thus bounded) rectangle. Thus, by Heine-Borel, it is also compact. Since f is holomorphic, it follows that |f| is continuous on ϱ(R) and therefore attains some maximum M. It follows that f is bounded by M on ϱ(R) and, due to the periodic nature of |f(z)| = |f(z + 2πik)| for all z ∈ C, k ∈ Z, we find that f is bounded on ϱ(R) + 2πik by the same constant M for each k we choose. Consequently f is bounded by M on $\bigcup_{k\in\mathbb{Z}}[\varrho(R)+2\pi i k]$.
We are now interested in showing that f(z) is bounded not only inside, but also outside the punctured strip.
Case I: Let |Re(z)| ≥ 1. Then
$$|f(z)| = \Big|\frac{1}{e^{-az}-e^{(1-a)z}}\Big| = \frac{1}{e^{-a\mathrm{Re}(z)}|1-e^{z}|} \le \frac{1}{e^{-a\mathrm{Re}(z)}|1-e^{\mathrm{Re}(z)}|} \quad (*)$$
and thus, for Re(z) ≥ 1,
$$|f(z)| \le \frac{e^{\mathrm{Re}(z)}}{e^{\mathrm{Re}(z)}-1} \le \frac{e}{e-1},$$
where the inequalities follow from the facts that $|1-e^{\mathrm{Re}(z)}| = e^{\mathrm{Re}(z)}-1$ and $e^{a\mathrm{Re}(z)} \le e^{\mathrm{Re}(z)}$ for a ∈ (0, 1].
Case II: Let Re(z) ≤ −1. Then
$$|f(z)| = \Big|\frac{1}{e^{-az}-e^{(1-a)z}}\Big| \overset{(*)}{\le} \frac{1}{e^{-a\mathrm{Re}(z)}-e^{(1-a)\mathrm{Re}(z)}} \le \frac{1}{1-e^{\mathrm{Re}(z)}} \le \frac{1}{1-e^{-1}} = \frac{e}{e-1},$$
where the inequalities follow from the facts that $|1-e^{\mathrm{Re}(z)}| = 1-e^{\mathrm{Re}(z)}$ and $e^{a\mathrm{Re}(z)} \le 1$ for a ∈ (0, 1].
Overall this proves that $|f(z)| \le \frac{e}{e-1} =: c$ for |Re(z)| ≥ 1, making f(z) bounded in $S(R) = \mathbb{C}\setminus\bigcup_{k\in\mathbb{Z}}D(2\pi i k, R)$. □
We are now ready to approach the main result of this chapter by proving the following
major result in analyticnumber theory:
Theorem 3.6 Hurwitz's formula for ζ(s, a)
For a ∈ (0, 1] and Re(s) > 1 it holds that
$$\zeta(1-s,a) = (2\pi)^{-s}\Gamma(s)\left[e^{-\frac{i\pi s}{2}}F(a,s) + e^{\frac{i\pi s}{2}}F(-a,s)\right].$$
F(a, s) is called the periodic zeta function, which is expressed by the Dirichlet series with the complex sequence $z_k = e^{i2\pi k a}$, a ∈ R, so
$$F(a,s) = \sum_{k=1}^{\infty}\frac{e^{i2\pi k a}}{k^{s}}.$$
Note. Here a is a real number and Re(s) > 1. The function is called periodic since it is a periodic function in a, with F(a + 1, s) = F(a, s), and for the special case a = 1 we have that F(1, s) = ζ(s).
Proof. Yet again we would like to work with a variant of our very first integral representation I of ζ(s, a). Let us consider the function
$$I_k(s,a) = \frac{1}{2\pi i}\oint_{\gamma_k}\frac{z^{s-1}}{e^{-az}-e^{(1-a)z}}\,dz = \frac{1}{2\pi i}\oint_{\gamma_k}z^{s-1}f(z)\,dz$$
along the contour γ_k, k ∈ Z, in the complex plane. [See Figure 3]
Figure 3: Contour Plot of γ_k
Looking at our contour we notice that as k → ∞ we have R → ∞, since the radius of the outer circle scales by a factor of (2k + 1)π. At first we are interested in tracing I_k(s, a) back to I(s, a) by showing that $I_k(s,a)\xrightarrow{k\to\infty}I(s,a)$ for Re(s) < 0.
If we compare γ and γ_k it becomes apparent that it is enough to show that the integral over the outer circle goes to 0 for k → ∞.
Consider the parameterization $z(\varphi) = Re^{i\varphi}$, for φ ∈ [−π, π]. Therefore
$$|z^{s-1}| = |R^{s-1}e^{i(s-1)\varphi}| = R^{\mathrm{Re}(s)-1}e^{-\mathrm{Im}(s)\varphi} \le R^{\mathrm{Re}(s)-1}e^{|\mathrm{Im}(s)\pi|}.$$
By Theorem 3.5 we have that c is an upper bound for |f(z)|, and thus the integrand $|z^{s-1}f(z)| \le cR^{\mathrm{Re}(s)}e^{|\mathrm{Im}(s)\pi|}$ is bounded, since the outer circle of γ_k lies in the region S(R). Meaning overall that our integral is also bounded by this relationship, and if we now take the limit as R → ∞ the integral over the outer circle thus tends to 0 for Re(s) < 0.
By now substituting s → 1 − s, implying that Re(1 − s) > 1, we get
$$I_k(1-s,a)\xrightarrow{k\to\infty}I(1-s,a).$$
With one part of the contour evaluated, let us continue by calculating the other three. By Cauchy's residue theorem for a clockwise oriented contour we get that
$$I_k(1-s,a) = -\sum_{\substack{n=-k\\ n\neq 0}}^{k}\mathrm{Res}_{z=i2\pi n}\,z^{-s}f(z) = -\sum_{n=1}^{k}\left[\mathrm{Res}_{z=i2\pi n}\,z^{-s}f(z) + \mathrm{Res}_{z=-i2\pi n}\,z^{-s}f(z)\right]$$
and since the poles are simple, we can easily compute the residues accordingly:
$$\mathrm{Res}_{z=i2\pi n}\,z^{-s}f(z) = \lim_{z\to i2\pi n}(z-i2\pi n)\frac{z^{-s}}{e^{-az}-e^{(1-a)z}} = \frac{(i2\pi n)^{-s}}{e^{-i2\pi n a}}\lim_{z\to i2\pi n}\frac{z-i2\pi n}{1-e^{z}} = \frac{e^{i2\pi n a}}{(i2\pi n)^{s}}\lim_{z\to i2\pi n}\frac{1}{-e^{z}} = -\frac{e^{i2\pi n a}}{(i2\pi n)^{s}},$$
where the third equality followed from L'Hôpital's rule, since both numerator and denominator tend to zero simultaneously. The other residues $\mathrm{Res}_{z=-i2\pi n}\,z^{-s}f(z)$ are calculated completely analogously, leaving us with
$$\mathrm{Res}_{z=-i2\pi n}\,z^{-s}f(z) = \lim_{z\to -i2\pi n}(z+i2\pi n)\frac{z^{-s}}{e^{-az}-e^{(1-a)z}} = \frac{(-i2\pi n)^{-s}}{e^{i2\pi n a}}\lim_{z\to -i2\pi n}\frac{z+i2\pi n}{1-e^{z}} = \frac{e^{-i2\pi n a}}{(-i2\pi n)^{s}}\lim_{z\to -i2\pi n}\frac{1}{-e^{z}} = -\frac{e^{-i2\pi n a}}{(-i2\pi n)^{s}}$$
and overall
$$I_k(1-s,a) = \sum_{n=1}^{k}\left[\frac{e^{i2\pi n a}}{(i2\pi n)^{s}} + \frac{e^{-i2\pi n a}}{(-i2\pi n)^{s}}\right].$$
Now, by considering only the principal branch, $\frac{1}{i^{s}} = e^{-\frac{i\pi s}{2}}$ and similarly $\frac{1}{(-i)^{s}} = e^{\frac{i\pi s}{2}}$, and thus we arrive at
$$I_k(1-s,a) = \sum_{n=1}^{k}\left[\frac{e^{-\frac{i\pi s}{2}}e^{i2\pi n a}}{(2\pi n)^{s}} + \frac{e^{\frac{i\pi s}{2}}e^{-i2\pi n a}}{(2\pi n)^{s}}\right] = (2\pi)^{-s}\left[e^{-\frac{i\pi s}{2}}\sum_{n=1}^{k}\frac{e^{i2\pi n a}}{n^{s}} + e^{\frac{i\pi s}{2}}\sum_{n=1}^{k}\frac{e^{-i2\pi n a}}{n^{s}}\right] \xrightarrow{k\to\infty} (2\pi)^{-s}\left[e^{-\frac{i\pi s}{2}}F(a,s) + e^{\frac{i\pi s}{2}}F(-a,s)\right].$$
And hence, by employing the facts that $I_k(1-s,a)\xrightarrow{k\to\infty}I(1-s,a)$ and I(s, a)Γ(1 − s) = ζ(s, a), we can finally conclude that
$$\zeta(1-s,a) = (2\pi)^{-s}\Gamma(s)\left[e^{-\frac{i\pi s}{2}}F(a,s) + e^{\frac{i\pi s}{2}}F(-a,s)\right]. \qquad \square$$
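A numerical spot-check of Hurwitz's formula (illustration only, assuming Python with mpmath; mpmath.zeta(s, a) is the Hurwitz zeta function and the periodic zeta function F is summed directly here):

```python
# Hedged sketch: check zeta(1-s, a) = (2*pi)^(-s) * Gamma(s) *
#   ( e^(-i*pi*s/2) * F(a, s) + e^(i*pi*s/2) * F(-a, s) )  for a real s > 1.
# Assumes mpmath is available; F below is the periodic zeta function.
from mpmath import mp, mpf, exp, gamma, zeta, pi, j, nsum, inf

mp.dps = 20
s, a = mpf(3), mpf('0.3')

def F(a, s):
    # periodic zeta function, absolutely convergent for Re(s) > 1
    return nsum(lambda k: exp(2 * j * pi * k * a) / k**s, [1, inf])

rhs = (2 * pi)**(-s) * gamma(s) * (exp(-j * pi * s / 2) * F(a, s)
                                   + exp(j * pi * s / 2) * F(-a, s))
print(zeta(1 - s, a))
print(rhs)  # should agree up to a negligible imaginary part
```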
3.4 The functional equation for ζ(s)
With Hurwitz’s formula for ζ(s, a) being one of the most significant results in analytic
number theory derived, we can go ahead and make use of the fact, that ζ(s, 1) = ζ(s)
to arrive at the functional equation for the Riemann zeta function.
Theorem 3.7 For all s ∈ C it holds that
$$\zeta(1-s) = 2(2\pi)^{-s}\Gamma(s)\cos\!\left(\tfrac{1}{2}\pi s\right)\zeta(s)$$
or equivalently
$$\zeta(s) = 2(2\pi)^{s-1}\Gamma(1-s)\sin\!\left(\tfrac{1}{2}\pi s\right)\zeta(1-s).$$
Proof. By letting a = 1 in Hurwitz's formula for ζ(s, a) and using the fact that for a = 1 we have F(1, s) = F(−1, s) = ζ(s), it is easy to see that
$$\zeta(1-s,1) = (2\pi)^{-s}\Gamma(s)\left[e^{-\frac{i\pi s}{2}}F(1,s) + e^{\frac{i\pi s}{2}}F(-1,s)\right] = (2\pi)^{-s}\Gamma(s)\left[e^{\frac{i\pi s}{2}} + e^{-\frac{i\pi s}{2}}\right]\zeta(s) = 2(2\pi)^{-s}\Gamma(s)\cos\!\left(\tfrac{1}{2}\pi s\right)\zeta(s)$$
for Re(s) > 1. The second equation can be acquired by substituting s → 1 − s, giving us
$$\zeta(s,1) = 2(2\pi)^{s-1}\Gamma(1-s)\cos\!\left(\tfrac{1}{2}\pi s - \tfrac{\pi}{2}\right)\zeta(1-s) = 2(2\pi)^{s-1}\Gamma(1-s)\sin\!\left(\tfrac{1}{2}\pi s\right)\zeta(1-s)$$
for Re(s) > 1. Both results can be extended to all s ∈ C using analytic continuation as shown before. □
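Again as a numerical illustration only (assuming Python with mpmath), the functional equation can be checked at an arbitrary complex point:

```python
# Hedged sketch: verify zeta(1-s) = 2*(2*pi)^(-s)*Gamma(s)*cos(pi*s/2)*zeta(s).
# Assumes mpmath is available.
from mpmath import mp, mpc, gamma, zeta, cos, pi

mp.dps = 25
s = mpc('2.5', '1.0')

lhs = zeta(1 - s)
rhs = 2 * (2 * pi)**(-s) * gamma(s) * cos(pi * s / 2) * zeta(s)
print(lhs)
print(rhs)  # should agree to working precision
```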
3.5 The Dirichlet eta function η(s)
Before we can get into the main part of the thesis, we would like to derive a last but
highly important relationship between the Riemann zeta function and the so-called
Dirichlet eta function.
Theorem 3.8 Integral representation for η(s)
Let s ∈ C with Re(s) > 1. Then
$$\eta(s)\Gamma(s) \equiv \int_{0}^{\infty}\frac{t^{s-1}}{e^{t}+1}\,dt,$$
where η(s) denotes the Dirichlet eta function, which can be expressed by the Dirichlet series with $z_k = (-1)^{k+1}$:
$$\eta(s) \equiv \sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^{s}}.$$
Note. η(s) can be derived as a direct consequence of our periodic zeta function F(a, s) by setting a = 1/2, meaning F(1/2, s) = −η(s).
Proof. The procedure of evaluating this integral is analogous to the evaluation of our original I from before. We begin by turning $(e^{t}+1)^{-1}$ into a geometric series. At this point though it is still problematic to do so, because $e^{t} \ge 1$ for all t ∈ R₀⁺, meaning $e^{t}$ does not lie in our radius of convergence over [0, ∞). To get rid of this problem, we simply expand the integrand by $e^{-t}$ and get:
$$\int_{0}^{\infty}\frac{t^{s-1}}{e^{t}+1}\,dt = \int_{0}^{\infty}\frac{t^{s-1}e^{-t}}{1-(-e^{-t})}\,dt = \int_{0}^{\infty}t^{s-1}e^{-t}\sum_{k=0}^{\infty}(-e^{-t})^{k}\,dt = \sum_{k=0}^{\infty}(-1)^{k}\int_{0}^{\infty}t^{s-1}e^{-(k+1)t}\,dt,$$
where the interchange of limits is justified analogously to the interchange we have done before on I. Now we introduce the substitution τ = (k + 1)t, where our integral's boundaries stay the same since k + 1 > 0 for all k, and get
$$\sum_{k=0}^{\infty}(-1)^{k}\int_{0}^{\infty}t^{s-1}e^{-(k+1)t}\,dt = \sum_{k=0}^{\infty}(-1)^{k}\int_{0}^{\infty}\left(\frac{\tau}{k+1}\right)^{s-1}e^{-\tau}\frac{d\tau}{k+1} = \sum_{k=0}^{\infty}\frac{(-1)^{k}}{(k+1)^{s}}\int_{0}^{\infty}\tau^{s-1}e^{-\tau}\,d\tau = \sum_{k=0}^{\infty}\frac{(-1)^{k}}{(k+1)^{s}}\Gamma(s).$$
After doing a shift of index on the last infinite series, such that it starts from k = 1, we arrive at our desired result. Note that for Re(s) > 1 the series representation for η(s) converges absolutely, since ζ(Re(s)) is an upper bound; as an alternating series it in fact still converges (conditionally) even for Re(s) < 1, beyond the region of convergence of the series for ζ(s). □
Theorem 3.9 Let s ∈ C with Re(s) > 1. Then
$$\eta(s) \equiv \left[1-\frac{1}{2^{s-1}}\right]\zeta(s).$$
Proof. Since ζ(s) converges absolutely and uniformly for every ε > 0 in the half-plane Re(s) ≥ 1 + ε, we can rearrange the terms in ζ(s) and it will still converge to the same limit. Now
$$\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^{s}} = \sum_{k=1}^{\infty}\frac{1}{(2k-1)^{s}} - \sum_{k=1}^{\infty}\frac{1}{(2k)^{s}} = \sum_{k=1}^{\infty}\frac{1}{(2k-1)^{s}} - \frac{1}{2^{s}}\zeta(s) = \sum_{k=1}^{\infty}\frac{1}{(2k-1)^{s}} + \sum_{k=1}^{\infty}\frac{1}{(2k)^{s}} - \sum_{k=1}^{\infty}\frac{1}{(2k)^{s}} - \frac{1}{2^{s}}\zeta(s)$$
$$= \sum_{k=1}^{\infty}\frac{1}{k^{s}} - \frac{1}{2^{s}}\sum_{k=1}^{\infty}\frac{1}{k^{s}} - \frac{1}{2^{s}}\zeta(s) = \zeta(s) - \frac{1}{2^{s}}\zeta(s) - \frac{1}{2^{s}}\zeta(s) = \left[1-\frac{1}{2^{s-1}}\right]\zeta(s),$$
which is exactly what we were seeking. □
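A one-line numerical sanity check of this relation (illustration only, assuming Python with mpmath, whose altzeta is the Dirichlet eta function):

```python
# Hedged sketch: verify eta(s) = (1 - 2^(1-s)) * zeta(s).
# Assumes mpmath; mpmath.altzeta is the Dirichlet eta function.
from mpmath import mp, mpc, zeta, altzeta

mp.dps = 25
s = mpc(3, 2)
print(altzeta(s))
print((1 - 2**(1 - s)) * zeta(s))  # should agree
```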
From this last relationship we are also able to deduce an expression for ζ(s) in terms of η(s) by simply dividing by the factor (1 − 2^{1−s}), which is never equal to zero since we have restricted s to Re(s) > 1. This overall provides us with
$$\zeta(s) = \frac{2^{s-1}}{2^{s-1}-1}\eta(s)$$
and in the process also with the functional equation for η(s):
Theorem 3.10 For all s ∈ C it holds that
$$\eta(s) = 2\pi^{s-1}\Gamma(1-s)\sin\!\left(\tfrac{1}{2}\pi s\right)\left(\frac{2^{s-1}-1}{1-2^{s}}\right)\eta(1-s)$$
or equivalently
$$\eta(s) = 2(2\pi)^{s-1}\Gamma(1-s)\sin\!\left(\tfrac{1}{2}\pi s\right)\left(\frac{1-2^{1-s}}{1-2^{s}}\right)\eta(1-s).$$
Proof. By plugging $\zeta(s) = \frac{2^{s-1}}{2^{s-1}-1}\eta(s)$ into our second functional equation for ζ(s) from Theorem 3.7 we arrive at our desired first expression. We get the second equation by factoring out a $2^{s-1}$ from the coefficient of η(1 − s). □
4 The Grünwald-Letnikov fractional derivative
In this chapter we are interested in motivating why we define the fractional derivative
the way we do in this paper. As we’ve already seen in the preface, there are a variety of
thoughts behind the idea of a fractional derivativeand for our purposes we would like
to trace it back to the most basic definition of the derivative itself, its limit definition.
Sadly, there are not many resources available apart from a paper by Ortigueira [9]
that discuss the derivation of the Grünwald-Letnikov derivative. This is why I had
to do most of the derivation for myself.
From standard calculus we want to consider the derivative of a single-variable, differentiable function f(x), described by the limiting process of the difference quotient
$$D_x^{1}f(x) = \lim_{h\to 0}\frac{f(x+h)-f(x)}{h},$$
where D_x denotes the differential operator. If we consider the substitution h → −h, we can equivalently rewrite the difference quotient as
$$D_x^{1}f(x) = \lim_{h\to 0}\frac{f(x-h)-f(x)}{-h} = \lim_{h\to 0}\frac{f(x)-f(x-h)}{h}.$$
Note, that as h approaches 0 from both directions, −h does so too at the same rate.
Remember that the operation of differentiation can be iterated, meaning $D_x^{n}D_x^{m} = D_x^{n+m}$. So for the second derivative of f we get
$$D_x^{2}f(x) = \lim_{h\to 0}\frac{D_x^{1}f(x)-D_x^{1}f(x-h)}{h} = \lim_{h_1\to 0}\frac{\displaystyle\lim_{h_2\to 0}\frac{f(x)-f(x-h_2)}{h_2} - \lim_{h_2\to 0}\frac{f(x-h_1)-f(x-h_1-h_2)}{h_2}}{h_1}.$$
By assuming that h₁ and h₂ converge to the same value h ≡ h₁ = h₂ at the same rate, and since the limit, if it exists, is unique, we can simplify the second derivative accordingly:
$$D_x^{2}f(x) = \lim_{h\to 0}\frac{f(x)-2f(x-h)+f(x-2h)}{h^{2}}.$$
To make the next step more clear, let us compute the third derivative, resulting in
$$D_x^{3}f(x) = \lim_{h\to 0}\frac{f(x)-3f(x-h)+3f(x-2h)-f(x-3h)}{h^{3}}.$$
If we continue these iterations, we are going to end up with a generalized n-th derivative, namely
$$D_x^{n}f(x) = \lim_{h\to 0}\frac{1}{h^{n}}\sum_{0\le m\le n}(-1)^{m}\binom{n}{m}f(x-mh),$$
where $\binom{n}{m}$ denotes the binomial coefficient. The above fact holds for all n ∈ N and can easily be proven by the principle of mathematical induction.
By the symmetry of the substitution h → −h, the following statement is equivalent:
$$D_x^{n}f(x) = \lim_{h\to 0}\frac{1}{h^{n}}\sum_{0\le m\le n}(-1)^{m}\binom{n}{m}f(x+(n-m)h).$$
We now define the shift operator $\Delta_h^{n}f(x) = f(x+nh)$, where n ∈ N indicates that we apply Δ_h to f exactly n times. We thus have $\Delta_h^{n-m}f(x) = f(x+(n-m)h)$ and we can rewrite the above equation as
$$D_x^{n}f(x) = \lim_{h\to 0}\left[\frac{1}{h^{n}}\sum_{0\le m\le n}\binom{n}{m}(-1)^{m}\Delta_h^{n-m}\right]f(x).$$
The operator in brackets can be interpreted as an application of the binomial theorem, leaving us with the simplification
$$D_x^{n}f(x) = \lim_{h\to 0}\left[\frac{\Delta_h-1}{h}\right]^{n}f(x), \tag{2}$$
where the last line followed immediately from the definition of $\Delta_h^{n-m}$, making f(x) independent of our running index. Up until now we have interpreted the iteration steps of the derivative to be of natural number order n. The above definition of the derivative makes it clear that we can extend it to $D^{\alpha}$, where α ∈ R. For this we simply need to make use of the generalized binomial theorem stated in Theorem 1.5,
$$(x+y)^{\alpha} = \sum_{m=0}^{\infty}\binom{\alpha}{m}x^{m}y^{\alpha-m}.$$
Thus, by substituting α for n in (2) and expanding the operator using the generalized binomial theorem, we get
$$D_x^{\alpha}f(x) = \lim_{h\to 0}\left[\frac{\Delta_h-1}{h}\right]^{\alpha}f(x) = \lim_{h\to 0}\left[\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}\binom{\alpha}{m}(-1)^{m}\Delta_h^{\alpha-m}\right]f(x) = \lim_{h\to 0}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}(-1)^{m}\binom{\alpha}{m}f(x-mh)
$$
and may thus define the Grünwald-Letnikov derivative in two different ways. The
generalized binomial theorem only allows for convergence depending on the direction
we approach h from. For this purpose we may define ourselves a derivative that ap-
proaches from the negative and one that approaches from the positive side to ensure
convergence.
Note that when both the left and the right limit approach the same value we
speak of the general forward Grünwald-Letnikov fractional derivative. We think of it
just like proving a function to be differentiable at a certain point, where we have to
ensure that the limit is the same from both sides.
Definition 4.1 Right forward Grünwald-Letnikov derivative
Let α ∈ R and h ∈ R⁺, with f : C → C. We define the right forward Grünwald-Letnikov fractional derivative of order α by
$${}^{+}D_s^{\alpha}f(s) = \lim_{h\to 0^{+}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}(-1)^{m}\binom{\alpha}{m}f(s-mh).$$
Definition 4.2 Left forward Grünwald-Letnikov derivative
Let α ∈ R and h ∈ R⁻, with f : C → C. We define the left forward Grünwald-Letnikov fractional derivative of order α by
$${}^{-}D_s^{\alpha}f(s) = \lim_{h\to 0^{-}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}(-1)^{m}\binom{\alpha}{m}f(s-mh).$$
Note. The fractional derivative can also be written in terms of Γ, since we are mostly dealing with non-integer factorials. This results in the equivalent definition for the Grünwald-Letnikov fractional derivative
$$D_s^{\alpha}f(s) = \lim_{h\to 0}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}(-1)^{m}\frac{\Gamma(\alpha+1)}{\Gamma(\alpha-m+1)\Gamma(m+1)}f(s-mh).$$
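To get a feel for the definition (illustration only, not from the thesis), one can approximate the right forward Grünwald-Letnikov derivative numerically by fixing a small positive h and truncating the binomial series. The hypothetical helper below, assuming Python with mpmath, roughly recovers the result of Lemma 4.2 below for f(s) = 2^s.

```python
# Hedged sketch: numerically approximate the right forward Grunwald-Letnikov
# derivative by fixing a small h > 0 and truncating the series; 'gl_right' is
# a hypothetical helper, not from the thesis.  Assumes mpmath is available.
from mpmath import mp, mpf, binomial, log

mp.dps = 30

def gl_right(f, s, alpha, h=mpf('0.01'), terms=5000):
    # (1/h^alpha) * sum_{m=0}^{terms-1} (-1)^m * C(alpha, m) * f(s - m*h),
    # with enough terms that the neglected tail is negligible for this h
    total = mpf(0)
    for m in range(terms):
        total += (-1)**m * binomial(alpha, m) * f(s - m * h)
    return total / h**alpha

f = lambda s: mpf(2)**s
s, alpha = mpf(1), mpf('0.5')
print(gl_right(f, s, alpha))     # finite-h approximation of the half-derivative
print(f(s) * log(2)**alpha)      # value predicted by Lemma 4.2: 2^s * log(2)^alpha
```

The two printed values should agree to roughly two or three digits; shrinking h (and increasing the number of terms accordingly) tightens the agreement.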
Guariglia [6] has shown on many occasions that the definition of the fractional deriva-
tive being provided by Grünwald-Letnikov satisfies all properties of the regular deriva-
tive, such as the linearity and the Leibniz rule and that it is consistent with our
definition for α→ n, n ∈ N. We will make use of some of these properties in the next
chapter.
As an introductory example of applying Definition 4.1, we would like to compute the fractional derivative of $f(s) = \kappa^{s}$, κ ∈ R.
Lemma 4.2 Let κ ∈ R, |κ| > 1. Then the Grünwald-Letnikov fractional derivative of $f(s) = \kappa^{s}$ is given by
$$f^{(\alpha)}(s) \equiv \kappa^{s}\log^{\alpha}(\kappa).$$
Note. For κ = e we get the exponential function, which satisfies its most significant property $D_s^{\alpha}e^{s} = e^{s}$ even in the fractional derivative case.
Proof. We proceed by using Definition 4.1 on f(s):
$${}^{+}D_s^{\alpha}f(s) = \lim_{h\to 0^{+}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}(-1)^{m}\binom{\alpha}{m}\kappa^{s-mh} = \kappa^{s}\lim_{h\to 0^{+}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}\binom{\alpha}{m}\left(-\frac{1}{\kappa^{h}}\right)^{m} = \kappa^{s}\lim_{h\to 0^{+}}\left(\frac{1-\left(\frac{1}{\kappa}\right)^{h}}{h}\right)^{\alpha} = \kappa^{s}(-1)^{\alpha}\lim_{h\to 0^{+}}\left(\frac{\left(\frac{1}{\kappa}\right)^{h}-1}{h}\right)^{\alpha},$$
where the third equality follows from using the generalized binomial theorem, which is valid since $\left|\frac{-1}{\kappa^{h}}\right| < 1$ for h ∈ R⁺. By now realizing that $\lim_{h\to 0^{+}}\frac{\left(\frac{1}{\kappa}\right)^{h}-1}{h}$ is the limit definition of $\log\!\left(\frac{1}{\kappa}\right)$ around 0, we get
$${}^{+}D_s^{\alpha}f(s) = \kappa^{s}(-1)^{\alpha}\left[\log\!\left(\kappa^{-1}\right)\right]^{\alpha} = \kappa^{s}(-1)^{2\alpha}\log^{\alpha}(\kappa) = \kappa^{s}\log^{\alpha}(\kappa). \qquad \square$$
Note. We have applied the right forward derivative in this particular case because of the convergence of the generalized binomial theorem. If we had used the left derivative instead, then $\left|\frac{-1}{\kappa^{h}}\right| > 1$ for h ∈ R⁻, which would have grown unbounded in the process. As a consequence, this implies that only the right derivative exists for functions of this kind.
5 The fractional derivatives of ζ(s) and η(s)
The point of this chapter is to use Guariglia’s research [5] as a basis in order for us to
express the fractional derivatives of two very important analytic number theoretical
functions. Sadly, Guariglia’s research papers are lackluster at some points and even
include false derivations, hand-wavy arguments and fallacious results [for example, see Guariglia 2017 [6], where the main result of the paper is wrong due to a false computation of $D_s^{\alpha}((2\pi)^{s})$ in Corollary 4]. We are thus mainly interested in going through
the mathematics more clearly than he did, in an attempt to fix those mistakes. Hence,
everything that will be presented in this chapter has been inspired by Guariglia but
was newly investigated and researched by myself.
5.1 Differentiating ζ(s) and η(s)
Before we can get into the fractional derivatives of ζ(s) we are interested in first
computing its positive integer derivatives. The following Lemma gives an explicit
formula for the n-th order derivative of ζ(s):
Lemma 5.1 Let n ∈ N and s ∈ C, Re(s) > 1. Then the n-th derivative of ζ(s) is given by
$$\zeta^{(n)}(s) \equiv \sum_{k=2}^{\infty}\frac{e^{i\pi n}\log^{n}(k)}{k^{s}}.$$
Proof. We proceed by doing a direct computation of the n-th derivative of the N-th generalized harmonic number $H_{N,s} = \sum_{k=1}^{N}\frac{1}{k^{s}}$, using the expression from (2). Also note that $H_{N,s}\xrightarrow{N\to\infty}\zeta(s)$. Hence we get
$$H_{N,s}^{(n)} = \lim_{h\to 0}\frac{1}{h^{n}}\sum_{m=0}^{n}\binom{n}{m}(-1)^{m}H_{N,s-mh} = \lim_{h\to 0}\frac{1}{h^{n}}\sum_{m=0}^{n}\binom{n}{m}(-1)^{m}\sum_{k=1}^{N}\frac{1}{k^{s-mh}} = \sum_{k=1}^{N}\frac{1}{k^{s}}\left[\lim_{h\to 0}\frac{1}{h^{n}}\sum_{m=0}^{n}\binom{n}{m}(-1)^{m}k^{mh}\right]$$
$$= \sum_{k=1}^{N}\frac{1}{k^{s}}\left[\lim_{h\to 0}\frac{1}{h^{n}}\sum_{m=0}^{n}\binom{n}{m}\left(-k^{h}\right)^{m}\right] \overset{(*)}{=} \sum_{k=1}^{N}\frac{1}{k^{s}}\left[\lim_{h\to 0}\left(\frac{1-k^{h}}{h}\right)^{n}\right] = \sum_{k=1}^{N}\frac{1}{k^{s}}(-1)^{n}\left[\lim_{h\to 0}\left(\frac{k^{h}-1}{h}\right)^{n}\right],$$
where in (*) we have used the binomial theorem. Next we claim that the limit expression is equal to $\log^{n}(k)$. Polynomials are continuous, meaning we can drag the limit to the inside, giving us
$$\left(\lim_{h\to 0}\frac{k^{h}-1}{h}\right)^{n}.$$
Now for k ∈ N we have
$$\frac{d}{dx}k^{x} = \log(k)\,k^{x} = \lim_{h\to 0}\frac{k^{x+h}-k^{x}}{h} = k^{x}\lim_{h\to 0}\frac{k^{h}-1}{h}$$
and by comparing terms our claim follows. By plugging everything in we get
$$H_{N,s}^{(n)} = \sum_{k=1}^{N}\frac{1}{k^{s}}(-1)^{n}\log^{n}(k)$$
and by letting N approach infinity we can consider this lemma to be proved, since the generalized harmonic numbers converge uniformly to ζ(s) for N → ∞. □
After computing the integer order derivative of ζ(s) by the expression in the aforementioned Lemma, one might wonder what the fractional order derivative could possibly be. A guess that seems plausible would be to simply substitute n ∈ N by some α ∈ R, giving us
$$\zeta^{(\alpha)}(s) = \sum_{k=2}^{\infty}\frac{e^{i\pi\alpha}\log^{\alpha}(k)}{k^{s}}.$$
The next theorem is going to prove to us that our intuition turns out to actually be the right choice for ζ^{(α)}(s):
Theorem 5.2 Let s ∈ C, Re(s) > 1. Then the fractional derivative of order α of the Riemann zeta function is given by
$$\zeta^{(\alpha)}(s) \equiv \sum_{k=2}^{\infty}\frac{e^{i\pi\alpha}\log^{\alpha}(k)}{k^{s}}.$$
Proof. We proceed by employing the fact that |·| is a metric in R. Let $\zeta_N(s) = \sum_{k=1}^{N}\frac{1}{k^{s}}$, where $\lim_{N\to\infty}\zeta_N(s) = \zeta(s)$. Then
$$\left|\zeta^{(\alpha)}(s) - {}^{-}D_s^{\alpha}\zeta_N(s)\right| = \Big|\zeta^{(\alpha)}(s) - \lim_{h\to 0^{-}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}(-1)^{m}\binom{\alpha}{m}\zeta_N(s-mh)\Big| = \Big|\zeta^{(\alpha)}(s) - \lim_{h\to 0^{-}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}(-1)^{m}\binom{\alpha}{m}\sum_{k=1}^{N}\frac{1}{k^{s-mh}}\Big|$$
$$= \Big|\zeta^{(\alpha)}(s) - \sum_{k=1}^{N}\frac{1}{k^{s}}\Big[\lim_{h\to 0^{-}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}(-1)^{m}\binom{\alpha}{m}k^{mh}\Big]\Big| = \Big|\zeta^{(\alpha)}(s) - \sum_{k=1}^{N}\frac{1}{k^{s}}\Big[\lim_{h\to 0^{-}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}\binom{\alpha}{m}(-k^{h})^{m}\Big]\Big|.$$
Comparing the series in the last line with the generalized binomial theorem, one can notice that it converges to $(1-k^{h})^{\alpha}$, since $|-k^{h}| < 1$ for h → 0⁻. This is also the reason why we have chosen the left derivative here. Overall ${}^{-}D_s^{\alpha}\zeta_N(s)$ now equates to
$$\zeta_N^{(\alpha)}(s) = \sum_{k=1}^{N}\frac{1}{k^{s}}\lim_{h\to 0^{-}}\left(\frac{1-k^{h}}{h}\right)^{\alpha} = \sum_{k=1}^{N}(-1)^{\alpha}\frac{1}{k^{s}}\left(\lim_{h\to 0^{-}}\frac{k^{h}-1}{h}\right)^{\alpha}.$$
The next step is to realize that the expression $\lim_{h\to 0^{-}}\frac{k^{h}-1}{h}$ is simply the limit definition of the natural logarithm. Thus we overall arrive at
$$\left|\zeta^{(\alpha)}(s) - {}^{-}D_s^{\alpha}\zeta_N(s)\right| = \Big|\sum_{k=1}^{\infty}e^{i\pi\alpha}\frac{\log^{\alpha}(k)}{k^{s}} - \sum_{k=1}^{N}(-1)^{\alpha}\frac{\log^{\alpha}(k)}{k^{s}}\Big| = \Big|\sum_{k=N+1}^{\infty}e^{i\pi\alpha}\frac{\log^{\alpha}(k)}{k^{s}}\Big| \le \sum_{k=N+1}^{\infty}\Big|e^{i\pi\alpha}\frac{\log^{\alpha}(k)}{k^{s}}\Big| = \sum_{k=N+1}^{\infty}\frac{\log^{\alpha}(k)}{k^{\mathrm{Re}(s)}} \le \sum_{k=N+1}^{\infty}\frac{k^{\alpha}}{k^{\mathrm{Re}(s)}} = \zeta(\mathrm{Re}(s)-\alpha) - \zeta_N(\mathrm{Re}(s)-\alpha),$$
where the last inequality followed from the fact that k ≥ log(k) for all k ∈ N. Notice that when we let N → ∞ our upper estimate also goes to zero, meaning
$$\lim_{N\to\infty}\left|\zeta^{(\alpha)}(s) - {}^{-}D_s^{\alpha}\zeta_N(s)\right| \le \lim_{N\to\infty}\zeta(\mathrm{Re}(s)-\alpha) - \zeta_N(\mathrm{Re}(s)-\alpha) = 0.$$
By the properties of a metric this implies that
$$\lim_{N\to\infty}{}^{-}D_s^{\alpha}\zeta_N(s) = \lim_{N\to\infty}\zeta^{(\alpha)}(s) = \zeta^{(\alpha)}(s)$$
and hence the claim follows. □
Note. This way of proving the expression for ζ^(α)(s) is by far the easiest approach. Next we would like to explore a more direct approach, namely applying ^±D^α_s directly to functions that can be expressed as an infinite series. The problem with this more direct approach, though, is that we would have to justify a large number of limit interchanges along the way. To this day I have sadly not found the mathematical tools to justify the limit interchanges presented here. So keep in mind that whenever we use this more direct approach throughout the thesis, we could instead fall back on the proof structure presented in the last theorem.
We now apply our previously motivated left Grünwald-Letnikov fractional derivative directly to ζ(s) and calculate everything intuitively:
\[
\begin{aligned}
{}^{-}D^{\alpha}_{s}\zeta(s)
&=\lim_{h\to 0^{-}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}(-1)^{m}\binom{\alpha}{m}\zeta(s-mh)
=\lim_{h\to 0^{-}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}(-1)^{m}\binom{\alpha}{m}\sum_{k=1}^{\infty}\frac{1}{k^{s-mh}}\\
&=\sum_{k=1}^{\infty}\frac{1}{k^{s}}\left[\lim_{h\to 0^{-}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}(-1)^{m}\binom{\alpha}{m}k^{mh}\right]
=\sum_{k=1}^{\infty}\frac{1}{k^{s}}\left[\lim_{h\to 0^{-}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}\binom{\alpha}{m}\big(-k^{h}\big)^{m}\right].
\end{aligned}
\]
Again we notice the appearance of the generalized binomial theorem and of the corresponding logarithm in its limit form:
\[
\zeta^{(\alpha)}(s)
=\sum_{k=1}^{\infty}\frac{1}{k^{s}}\lim_{h\to 0^{-}}\left(\frac{1-k^{h}}{h}\right)^{\alpha}
=\sum_{k=1}^{\infty}(-1)^{\alpha}\frac{1}{k^{s}}\left(\lim_{h\to 0^{-}}\frac{k^{h}-1}{h}\right)^{\alpha}
=\sum_{k=1}^{\infty}(-1)^{\alpha}\frac{\log^{\alpha}(k)}{k^{s}}
=\sum_{k=2}^{\infty}e^{i\pi\alpha}\frac{\log^{\alpha}(k)}{k^{s}},
\]
and thus the claim from Theorem 5.2 follows yet again.
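For illustration only, the series of Theorem 5.2 is straightforward to evaluate numerically. The sketch below again assumes mpmath and uses an arbitrary truncation K; it also checks that the formula collapses back to the ordinary first derivative when α = 1:

    from mpmath import mp

    mp.dps = 25

    def zeta_frac_series(alpha, s, K=50000):
        # Truncated series of Theorem 5.2: e^{i*pi*alpha} * sum_{k=2}^{K} log(k)^alpha / k^s
        phase = mp.exp(mp.mpc(0, 1) * mp.pi * alpha)
        return phase * mp.fsum(mp.log(k)**alpha / mp.power(k, s) for k in range(2, K + 1))

    s = mp.mpc(3, 1)
    print(zeta_frac_series(0.5, s))                        # a genuinely fractional order
    print(zeta_frac_series(1, s), mp.diff(mp.zeta, s, 1))  # alpha = 1 recovers zeta'(s)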
We have now computed the first fractional derivative of a function important to analytic number theory. By the same means, be it the metric argument or the direct calculation, we can establish an expression for the fractional derivative of order α of the Dirichlet eta function. First, again some motivation.
Lemma 5.3 Let n ∈ N and s ∈ C, Re(s) > 1. Then the n-th derivative of η(s) is given by
\[
\eta^{(n)}(s) \equiv \sum_{k=2}^{\infty} \frac{e^{i\pi n}\,(-1)^{k+1}\log^{n}(k)}{k^{s}} .
\]
Proof. The proof is analogous to the one of Lemma 5.1.
□
From a natural and intuitive point of view, one might yet again suggest substituting n ∈ N by α ∈ R. As it turns out, this is the right guess.
Theorem 5.4 Let s ∈ C, Re(s) > 1. Then the fractional derivative of order α of the Dirichlet eta function is given by
\[
\eta^{(\alpha)}(s) \equiv \sum_{k=2}^{\infty} \frac{e^{i\pi\alpha}\,(-1)^{k+1}\log^{\alpha}(k)}{k^{s}} .
\]
Proof. We proceed by using the left Grünwald-Letnikov derivative:
\[
\begin{aligned}
{}^{-}D^{\alpha}_{s}\eta(s)
&=\lim_{h\to 0^{-}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}(-1)^{m}\binom{\alpha}{m}\eta(s-mh)
=\lim_{h\to 0^{-}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}(-1)^{m}\binom{\alpha}{m}\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^{s-mh}}\\
&=\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^{s}}\left[\lim_{h\to 0^{-}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}(-1)^{m}\binom{\alpha}{m}k^{mh}\right]
=\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^{s}}\left[\lim_{h\to 0^{-}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}\binom{\alpha}{m}\big(-k^{h}\big)^{m}\right].
\end{aligned}
\]
By the same arguments as in the alternative proof of Theorem 5.2 we arrive, after a few more manipulations, at
\[
\eta^{(\alpha)}(s)
=\sum_{k=1}^{\infty}(-1)^{\alpha}\frac{(-1)^{k+1}\log^{\alpha}(k)}{k^{s}}
=\sum_{k=2}^{\infty}e^{i\pi\alpha}\frac{(-1)^{k+1}\log^{\alpha}(k)}{k^{s}}.
\]
□
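A quick numerical cross-check of the integer-order case is again possible. This is only a sketch assuming mpmath, which provides η(s) as altzeta; truncation and sample point are arbitrary:

    from mpmath import mp

    mp.dps = 25

    def eta_frac_series(alpha, s, K=50000):
        # Truncated series of Theorem 5.4: e^{i*pi*alpha} * sum_k (-1)^(k+1) log(k)^alpha / k^s
        phase = mp.exp(mp.mpc(0, 1) * mp.pi * alpha)
        return phase * mp.fsum((-1)**(k + 1) * mp.log(k)**alpha / mp.power(k, s)
                               for k in range(2, K + 1))

    s = mp.mpc(3, 1)
    # At alpha = 1 the series must reproduce eta'(s); mpmath's altzeta is the Dirichlet eta function.
    print(eta_frac_series(1, s), mp.diff(mp.altzeta, s, 1))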
Remember from Theorem 3.10 that η(s) can be expressed through ζ(s) by the relationship η(s) ≡ [1 − 2^{1−s}] ζ(s). This fact allows us to express the fractional derivative of order α of η(s) in terms of the Riemann zeta function. We begin by stating the generalized Leibniz rule for fractional derivatives.
Theorem 5.5 Generalized Leibniz rule for fractional derivatives
Let f and g be two functions of a complex variable s. If f is analytic in some R ⊆ C, then
\[
D^{\alpha}_{s}(fg) \equiv \sum_{k=0}^{\infty} \binom{\alpha}{k} f^{(k)} g^{(\alpha-k)} .
\]
Note. A detailed proof of this theorem can be found in Guariglia's 2017 paper A functional equation for the Riemann zeta fractional derivative [6].
Theorem 5.6 Let s ∈ C, Re(s) > 1. Then the fractional derivative of order α of the Dirichlet eta function can be expressed through ζ(s) by
\[
\eta^{(\alpha)}(s) \equiv \zeta^{(\alpha)}(s) - \frac{e^{i\pi\alpha}}{2^{s-1}} \sum_{k=1}^{\infty} \frac{\log^{\alpha}(2k)}{k^{s}} .
\]
Proof. We approach this proof by manipulating the result from Theorem 5.4 accordingly:
\[
\begin{aligned}
\eta^{(\alpha)}(s) &= \sum_{k=2}^{\infty} \frac{e^{i\pi\alpha}(-1)^{k+1}\log^{\alpha}(k)}{k^{s}} \\
&= e^{i\pi\alpha}\left[\sum_{k=1}^{\infty}\frac{\log^{\alpha}(2k+1)}{(2k+1)^{s}}-\sum_{k=1}^{\infty}\frac{\log^{\alpha}(2k)}{(2k)^{s}}\right]\\
&= e^{i\pi\alpha}\left[\sum_{k=1}^{\infty}\frac{\log^{\alpha}(2k+1)}{(2k+1)^{s}}+\sum_{k=1}^{\infty}\frac{\log^{\alpha}(2k)}{(2k)^{s}}-\sum_{k=1}^{\infty}\frac{\log^{\alpha}(2k)}{(2k)^{s}}-\sum_{k=1}^{\infty}\frac{\log^{\alpha}(2k)}{(2k)^{s}}\right]\\
&= e^{i\pi\alpha}\left[\sum_{k=2}^{\infty}\frac{\log^{\alpha}(k)}{k^{s}}-2\sum_{k=1}^{\infty}\frac{\log^{\alpha}(2k)}{2^{s}k^{s}}\right]\\
&= \zeta^{(\alpha)}(s)-\frac{e^{i\pi\alpha}}{2^{s-1}}\sum_{k=1}^{\infty}\frac{\log^{\alpha}(2k)}{k^{s}} ,
\end{aligned}
\]
where the last equality follows from Theorem 5.2 and from factoring 2^{-s} out of the rightmost infinite series.
□
Note. The rearrangement of an infinite series that we just used is only justified if the series converges absolutely. Both ζ^(α)(s) and η^(α)(s) do in fact converge in this manner, and for this very reason we dedicate the next chapter to investigating their convergence properties.
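The identity of Theorem 5.6 can also be checked numerically at a point well inside the region of absolute convergence. The sketch below is again only an illustration, assuming mpmath; it re-implements the truncated series of Theorems 5.2 and 5.4 and compares the two sides of the identity:

    from mpmath import mp

    mp.dps = 25
    alpha, s, K = 0.5, mp.mpc(4, 0.5), 50000
    phase = mp.exp(mp.mpc(0, 1) * mp.pi * alpha)  # e^{i*pi*alpha}

    # Left-hand side: truncated eta^(alpha)(s) from Theorem 5.4
    lhs = phase * mp.fsum((-1)**(k + 1) * mp.log(k)**alpha / mp.power(k, s)
                          for k in range(2, K + 1))

    # Right-hand side: zeta^(alpha)(s) - e^{i*pi*alpha} / 2^(s-1) * sum_k log(2k)^alpha / k^s
    zeta_frac = phase * mp.fsum(mp.log(k)**alpha / mp.power(k, s) for k in range(2, K + 1))
    rhs = zeta_frac - phase / mp.power(2, s - 1) * mp.fsum(
        mp.log(2 * k)**alpha / mp.power(k, s) for k in range(1, K + 1))

    print(lhs, rhs)  # both truncations agree up to the truncation error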
5.2 Convergence of ζ^(α)(s) and η^(α)(s)
In this chapter we investigate the convergence properties of the fractional derivatives of ζ(s) and η(s). Before we do this, we decompose both ζ^(α)(s) and η^(α)(s) into their respective real and imaginary parts. After that we check whether the real and imaginary parts converge (absolutely) individually, and then make use of the fact that a complex function converges (absolutely) if and only if both its real and imaginary parts converge (absolutely). Later in this paper we will present a more direct approach, but for now let us proceed like this.
We begin by stating and proving two lemmas.
Lemma 5.7 Let α ∈ R and s ∈ C, Re(s) > 1, then
\[
\mathrm{Re}\big(\zeta^{(\alpha)}(s)\big) \equiv \sum_{k=2}^{\infty} k^{-\mathrm{Re}(s)}\log^{\alpha}(k)\cos\big(\pi\alpha-\mathrm{Im}(s)\log(k)\big)
\]
and
\[
\mathrm{Im}\big(\zeta^{(\alpha)}(s)\big) \equiv \sum_{k=2}^{\infty} k^{-\mathrm{Re}(s)}\log^{\alpha}(k)\sin\big(\pi\alpha-\mathrm{Im}(s)\log(k)\big) .
\]
Proof. We start from the expression for ζ^(α)(s) in Theorem 5.2 to arrive at
\[
\begin{aligned}
\zeta^{(\alpha)}(s) &= \sum_{k=2}^{\infty} e^{i\pi\alpha}\log^{\alpha}(k)\,k^{-s}
= \sum_{k=2}^{\infty} e^{i\pi\alpha}\log^{\alpha}(k)\,k^{-\mathrm{Re}(s)}k^{-i\,\mathrm{Im}(s)}\\
&= \sum_{k=2}^{\infty} k^{-\mathrm{Re}(s)}\log^{\alpha}(k)\,e^{i(\pi\alpha-\mathrm{Im}(s)\log(k))}\\
&= \sum_{k=2}^{\infty} k^{-\mathrm{Re}(s)}\log^{\alpha}(k)\Big[\cos\big(\pi\alpha-\mathrm{Im}(s)\log(k)\big)+i\sin\big(\pi\alpha-\mathrm{Im}(s)\log(k)\big)\Big],
\end{aligned}
\]
where the third equality follows from k^{-i Im(s)} = e^{-i Im(s) log(k)} and the last one from Euler's formula. The statement of the lemma now follows immediately by taking real and imaginary parts.
□
Lemma 5.8 Let α ∈ R and s ∈ C, Re(s) > 1, then
\[
\mathrm{Re}\big(\eta^{(\alpha)}(s)\big) \equiv \sum_{k=2}^{\infty} (-1)^{k+1}k^{-\mathrm{Re}(s)}\log^{\alpha}(k)\cos\big(\pi\alpha-\mathrm{Im}(s)\log(k)\big)
\]
and
\[
\mathrm{Im}\big(\eta^{(\alpha)}(s)\big) \equiv \sum_{k=2}^{\infty} (-1)^{k+1}k^{-\mathrm{Re}(s)}\log^{\alpha}(k)\sin\big(\pi\alpha-\mathrm{Im}(s)\log(k)\big) .
\]
Proof. The proof is analogous to the one of Lemma 5.7, the only difference being the factor (−1)^{k+1} in each term.
□
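As a small sanity check of the decomposition (an illustrative sketch assuming mpmath, with arbitrary parameters), the real and imaginary series of Lemma 5.7 can be compared against the complex series of Theorem 5.2:

    from mpmath import mp

    mp.dps = 25
    alpha, s, K = 0.5, mp.mpc(3, 2), 50000

    # Complex series of Theorem 5.2
    full = mp.exp(mp.mpc(0, 1) * mp.pi * alpha) * mp.fsum(
        mp.log(k)**alpha / mp.power(k, s) for k in range(2, K + 1))

    # Real and imaginary series of Lemma 5.7
    re_part = mp.fsum(mp.power(k, -s.real) * mp.log(k)**alpha
                      * mp.cos(mp.pi * alpha - s.imag * mp.log(k)) for k in range(2, K + 1))
    im_part = mp.fsum(mp.power(k, -s.real) * mp.log(k)**alpha
                      * mp.sin(mp.pi * alpha - s.imag * mp.log(k)) for k in range(2, K + 1))

    print(full, mp.mpc(re_part, im_part))  # identical up to rounding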
Now that we have separated both ζ^(α)(s) and η^(α)(s) into their respective real and imaginary parts, we are able to state the next two theorems.
Theorem 5.9 For all α > 0 we have that ζ^(α)(s) converges absolutely in the half-plane Re(s) > 1 + α.
Proof. As stated before, we consider the convergence of Re(ζ^(α)(s)) and Im(ζ^(α)(s)) individually. We make use of the fact that k > log(k) for all k ∈ N and also that the trigonometric functions are bounded, i.e. |cos(t)| ≤ 1 and |sin(t)| ≤ 1 for all t ∈ R.
Now notice that
\[
\sum_{k=2}^{\infty}\big|k^{-\mathrm{Re}(s)}\log^{\alpha}(k)\cos(\pi\alpha-\mathrm{Im}(s)\log(k))\big|
\le \sum_{k=2}^{\infty}k^{-\mathrm{Re}(s)}\log^{\alpha}(k)
< \sum_{k=2}^{\infty}\frac{1}{k^{\mathrm{Re}(s)-\alpha}} .
\]
Moreover, \sum_{k=2}^{\infty}k^{\alpha-\mathrm{Re}(s)} = ζ(Re(s) − α) − 1, and ζ(Re(s) − α) converges absolutely only for Re(s) − α > 1; hence the claim follows for Re(ζ^(α)(s)).
We now go through the same procedure for Im(ζ^(α)(s)):
\[
\sum_{k=2}^{\infty}\big|k^{-\mathrm{Re}(s)}\log^{\alpha}(k)\sin(\pi\alpha-\mathrm{Im}(s)\log(k))\big|
\le \sum_{k=2}^{\infty}k^{-\mathrm{Re}(s)}\log^{\alpha}(k)
< \sum_{k=2}^{\infty}\frac{1}{k^{\mathrm{Re}(s)-\alpha}} .
\]
Hence the claim follows, and overall we have shown that, since both the real and the imaginary part converge absolutely in the half-plane Re(s) > 1 + α, so does ζ^(α)(s) as a complex-valued function.
□
Theorem 5.10 For all α > 0 we have that η^(α)(s) converges absolutely in the half-plane Re(s) > 1 + α.
Proof. The proof is analogous to the one of Theorem 5.9:
\[
\big|\mathrm{Re}(\eta^{(\alpha)}(s))\big| \le \sum_{k=2}^{\infty}\big|(-1)^{k+1}k^{-\mathrm{Re}(s)}\log^{\alpha}(k)\big| < \zeta(\mathrm{Re}(s)-\alpha)-1
\]
and
\[
\big|\mathrm{Im}(\eta^{(\alpha)}(s))\big| \le \sum_{k=2}^{\infty}\big|(-1)^{k+1}k^{-\mathrm{Re}(s)}\log^{\alpha}(k)\big| < \zeta(\mathrm{Re}(s)-\alpha)-1 .
\]
We can thus conclude, since ζ(Re(s) − α) converges absolutely in the half-plane Re(s) > 1 + α, that η^(α)(s) does so too.
□
Note. Since both η^(α)(s) and ζ^(α)(s) converge absolutely for Re(s) > 1 + α, they also converge there, since absolute convergence of a complex series implies convergence.
Overall we can conclude that the region of convergence depends on the order α of differentiation. Furthermore, one can show that both η^(α)(s) and ζ^(α)(s) do in fact converge uniformly in the half-plane Re(s) > 1 + α by using the same methods as in the proof of Theorem 3.1.
5.3 The functional equations of ζ^(α)(s) and η^(α)(s)
We have now reached the climax of the thesis. In this chapter we are going to use the Grünwald-Letnikov fractional derivative and Theorem 5.5, the generalized Leibniz rule for fractional derivatives, on our functional equations for ζ(s) and η(s) from Theorem 3.7 and Theorem 3.10.
Theorem 5.11 The functional equation for ζ^(α)(s)
For all s ∈ C it holds that
\[
\zeta^{(\alpha)}(s) \equiv 2\,(2\pi)^{s-1}\sum_{p=0}^{\infty}\sum_{q=0}^{\infty}\sum_{r=0}^{\infty}
\binom{\alpha}{p}\binom{\alpha-p}{q}\binom{\alpha-p-q}{r}
(-1)^{p+r}\,\zeta^{(p)}(1-s)\,\sin\!\Big(\frac{\pi(s+q)}{2}\Big)\Big(\frac{\pi}{2}\Big)^{q}\,
\frac{\Gamma^{(r)}(1-s)}{\log^{\,p+q+r-\alpha}(2\pi)} .
\]
Proof. We proceed by applying the Grünwald-Letnikov derivative to the second functional equation for ζ(s) from Theorem 3.7 and by applying the generalized Leibniz rule for fractional derivatives multiple times, leaving us with
\[
\begin{aligned}
{}^{+}D^{\alpha}_{s}\zeta(s)
&= {}^{+}D^{\alpha}_{s}\Big[2(2\pi)^{s-1}\Gamma(1-s)\sin\!\Big(\frac{1}{2}\pi s\Big)\zeta(1-s)\Big]
= \pi^{-1}\,{}^{+}D^{\alpha}_{s}\Big[(2\pi)^{s}\Gamma(1-s)\sin\!\Big(\frac{1}{2}\pi s\Big)\zeta(1-s)\Big]\\
&= \pi^{-1}\sum_{p=0}^{\infty}\binom{\alpha}{p}\Big[\frac{d^{p}}{ds^{p}}\zeta(1-s)\Big]\,{}^{+}D^{\alpha-p}_{s}\Big[(2\pi)^{s}\Gamma(1-s)\sin\!\Big(\frac{1}{2}\pi s\Big)\Big]\\
&= \pi^{-1}\sum_{p=0}^{\infty}\binom{\alpha}{p}(-1)^{p}\zeta^{(p)}(1-s)\,{}^{+}D^{\alpha-p}_{s}\Big[(2\pi)^{s}\Gamma(1-s)\sin\!\Big(\frac{1}{2}\pi s\Big)\Big].
\end{aligned}
\]
Now for the fractional derivative inside the series:
\[
\begin{aligned}
{}^{+}D^{\alpha-p}_{s}\Big[(2\pi)^{s}\Gamma(1-s)\sin\!\Big(\frac{1}{2}\pi s\Big)\Big]
&= \sum_{q=0}^{\infty}\binom{\alpha-p}{q}\Big[\frac{d^{q}}{ds^{q}}\sin\!\Big(\frac{1}{2}\pi s\Big)\Big]\,{}^{+}D^{\alpha-p-q}_{s}\Big[(2\pi)^{s}\Gamma(1-s)\Big]\\
&= \sum_{q=0}^{\infty}\binom{\alpha-p}{q}\Big(\frac{\pi}{2}\Big)^{q}\sin^{(q)}\!\Big(\frac{1}{2}\pi s\Big)\,{}^{+}D^{\alpha-p-q}_{s}\Big[(2\pi)^{s}\Gamma(1-s)\Big]\\
&= \sum_{q=0}^{\infty}\binom{\alpha-p}{q}\Big(\frac{\pi}{2}\Big)^{q}\sin\!\Big(\frac{1}{2}\pi s+\frac{\pi}{2}q\Big)\,{}^{+}D^{\alpha-p-q}_{s}\Big[(2\pi)^{s}\Gamma(1-s)\Big].
\end{aligned}
\]
And by iterating once more on the last fractional derivative,
\[
\begin{aligned}
{}^{+}D^{\alpha-p-q}_{s}\Big[(2\pi)^{s}\Gamma(1-s)\Big]
&= \sum_{r=0}^{\infty}\binom{\alpha-p-q}{r}\Big[\frac{d^{r}}{ds^{r}}\Gamma(1-s)\Big]\,{}^{+}D^{\alpha-p-q-r}_{s}\Big[(2\pi)^{s}\Big]\\
&= \sum_{r=0}^{\infty}\binom{\alpha-p-q}{r}(-1)^{r}\Gamma^{(r)}(1-s)\,{}^{+}D^{\alpha-p-q-r}_{s}\Big[(2\pi)^{s}\Big]\\
&= \sum_{r=0}^{\infty}\binom{\alpha-p-q}{r}(-1)^{r}\Gamma^{(r)}(1-s)\,(2\pi)^{s}\log^{\alpha-p-q-r}(2\pi),
\end{aligned}
\]
where the last equality follows from Lemma 4.2 with κ = 2π. By substituting all fractional derivatives back into ^+D^α_s ζ(s) we finally arrive at our desired functional equation.
□
As one might guess, it is nearly impossible to use the functional equation from Theorem 5.11 for computational and numerical purposes. Even a truncated version of ζ^(α) approaches a numerical value quite slowly due to the triple sum that we encounter.
For this reason we are interested in optimizing the formula for ζ^(α) by taking a different approach to computing the derivative of order α.
Theorem 5.12 For all s ∈ C it holds that
\[
\zeta^{(\alpha)}(s) \equiv i\,(2\pi)^{s-1}\sum_{k=0}^{\infty}\binom{\alpha}{k}\Big[\frac{d^{k}}{ds^{k}}\,\zeta(1-s)\Gamma(1-s)\Big]\Big[e^{-i\frac{\pi}{2}s}\,\bar{\tau}^{\,\alpha-k}-e^{i\frac{\pi}{2}s}\,\tau^{\alpha-k}\Big],
\]
where τ = log(2π) + iπ/2 and τ̄ denotes its complex conjugate.
Proof. The idea for this proof stems from rewriting the term sin(πs/2) of the functional equation for ζ(s) from Theorem 3.7 in its Euler form:
\[
\begin{aligned}
\zeta(s) &= 2(2\pi)^{s-1}\Gamma(1-s)\sin\!\Big(\frac{1}{2}\pi s\Big)\zeta(1-s)\\
&= 2e^{(s-1)\log(2\pi)}\cdot\frac{e^{i\frac{\pi}{2}s}-e^{-i\frac{\pi}{2}s}}{2i}\cdot\zeta(1-s)\Gamma(1-s)\\
&= e^{(s-1)\log(2\pi)}\Big[e^{i\frac{\pi}{2}s}-e^{-i\frac{\pi}{2}s}\Big]e^{-i\frac{\pi}{2}}\cdot\zeta(1-s)\Gamma(1-s)\\
&= \Big[e^{(s-1)(\log(2\pi)+i\frac{\pi}{2})}-e^{(s-1)\log(2\pi)-(s+1)i\frac{\pi}{2}}\Big]\zeta(1-s)\Gamma(1-s)\\
&= \Big[e^{(s-1)(\log(2\pi)+i\frac{\pi}{2})}-e^{(s-1)(\log(2\pi)-i\frac{\pi}{2})-i\pi}\Big]\zeta(1-s)\Gamma(1-s)\\
&= \Big[e^{(s-1)(\log(2\pi)+i\frac{\pi}{2})}+e^{(s-1)(\log(2\pi)-i\frac{\pi}{2})}\Big]\zeta(1-s)\Gamma(1-s),
\end{aligned}
\]
where from the second to the third line we employed the fact that 1/i = e^{-iπ/2} on the principal branch. Now we define
\[
\tau = \log(2\pi)+i\frac{\pi}{2}
\qquad\text{and}\qquad
\varphi(s,\tau) = e^{(s-1)\tau}\,\zeta(1-s)\Gamma(1-s),
\]
and thus
\[
\zeta(s) = \varphi(s,\tau)+\varphi(s,\bar{\tau}) .
\]
We now apply the Grünwald-Letnikov derivative to ζ(s) and get
\[
{}^{+}D^{\alpha}_{s}\zeta(s) = {}^{+}D^{\alpha}_{s}\varphi(s,\tau)+{}^{+}D^{\alpha}_{s}\varphi(s,\bar{\tau}) .
\]
Next we compute each fractional derivative of ϕ individually and add them together at the end:
\[
{}^{+}D^{\alpha}_{s}\varphi(s,\tau) = {}^{+}D^{\alpha}_{s}\,e^{(s-1)\tau}\zeta(1-s)\Gamma(1-s)
= \sum_{k=0}^{\infty}\binom{\alpha}{k}\Big[\frac{d^{k}}{ds^{k}}\,\zeta(1-s)\Gamma(1-s)\Big]\,{}^{+}D^{\alpha-k}_{s}\,e^{(s-1)\tau},
\]
where
\[
\begin{aligned}
{}^{+}D^{\alpha}_{s}\,e^{(s-1)\tau}
&= e^{-\tau}\lim_{h\to 0^{+}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}\binom{\alpha}{m}(-1)^{m}e^{(s-mh)\tau}
= e^{s\tau-\tau}\lim_{h\to 0^{+}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}\binom{\alpha}{m}\big(-e^{-\tau h}\big)^{m}\\
&= e^{(s-1)\tau}\lim_{h\to 0^{+}}\left(\frac{1-(e^{-\tau})^{h}}{h}\right)^{\alpha}
= e^{(s-1)\tau}(-1)^{\alpha}\big(\log(e^{-\tau})\big)^{\alpha}
= e^{(s-1)\tau}(-1)^{\alpha}(-1)^{\alpha}\tau^{\alpha}
= e^{(s-1)\tau}\tau^{\alpha}.
\end{aligned}
\]
Consequently we arrive at
\[
{}^{+}D^{\alpha}_{s}\varphi(s,\tau) = \sum_{k=0}^{\infty}\binom{\alpha}{k}\Big[\frac{d^{k}}{ds^{k}}\,\zeta(1-s)\Gamma(1-s)\Big]\,e^{(s-1)\tau}\tau^{\alpha-k}.
\]
The calculations are analogous for ^+D^α_s ϕ(s, τ̄), giving us
\[
{}^{+}D^{\alpha}_{s}\varphi(s,\bar{\tau}) = \sum_{k=0}^{\infty}\binom{\alpha}{k}\Big[\frac{d^{k}}{ds^{k}}\,\zeta(1-s)\Gamma(1-s)\Big]\,e^{(s-1)\bar{\tau}}\,\bar{\tau}^{\,\alpha-k}.
\]
By now adding both terms together and noticing that
\[
e^{-\tau} = -e^{i\frac{\pi}{2}}\frac{1}{2\pi}
\qquad\text{and}\qquad
e^{-\bar{\tau}} = e^{i\frac{\pi}{2}}\frac{1}{2\pi},
\]
we finally get
\[
\begin{aligned}
{}^{+}D^{\alpha}_{s}\zeta(s)
&= \sum_{k=0}^{\infty}\binom{\alpha}{k}\Big[\frac{d^{k}}{ds^{k}}\,\zeta(1-s)\Gamma(1-s)\Big]\Big[e^{(s-1)\tau}\tau^{\alpha-k}+e^{(s-1)\bar{\tau}}\,\bar{\tau}^{\,\alpha-k}\Big]\\
&= \sum_{k=0}^{\infty}\binom{\alpha}{k}\Big[\frac{d^{k}}{ds^{k}}\,\zeta(1-s)\Gamma(1-s)\Big]\Big[\frac{e^{i\frac{\pi}{2}}}{2\pi}\,e^{s\bar{\tau}}\,\bar{\tau}^{\,\alpha-k}-\frac{e^{i\frac{\pi}{2}}}{2\pi}\,e^{s\tau}\tau^{\alpha-k}\Big]\\
&= \frac{i}{2\pi}\sum_{k=0}^{\infty}\binom{\alpha}{k}\Big[\frac{d^{k}}{ds^{k}}\,\zeta(1-s)\Gamma(1-s)\Big]\Big[(2\pi)^{s}e^{-i\frac{\pi}{2}s}\,\bar{\tau}^{\,\alpha-k}-(2\pi)^{s}e^{i\frac{\pi}{2}s}\tau^{\alpha-k}\Big]\\
&= i\,(2\pi)^{s-1}\sum_{k=0}^{\infty}\binom{\alpha}{k}\Big[\frac{d^{k}}{ds^{k}}\,\zeta(1-s)\Gamma(1-s)\Big]\Big[e^{-i\frac{\pi}{2}s}\,\bar{\tau}^{\,\alpha-k}-e^{i\frac{\pi}{2}s}\tau^{\alpha-k}\Big],
\end{aligned}
\]
which is exactly the claimed expression. (Since α − k is real and τ does not lie on the negative real axis, we have τ̄^{α−k} = \overline{τ^{α−k}} on the principal branch, so the first term in the bracket is simply the conjugated counterpart of the second one.)
□
And this concludes the derivation of an easier formula for the functional equation. The equation is still involved, but it has the big advantage that we do not have to deal with three infinite series at once.
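The decomposition at the heart of the proof, ζ(s) = ϕ(s, τ) + ϕ(s, τ̄), is easy to confirm numerically. A minimal sketch, assuming mpmath; the test point is an arbitrary choice away from the poles of Γ(1 − s) and ζ(1 − s):

    from mpmath import mp

    mp.dps = 25
    tau = mp.log(2 * mp.pi) + mp.mpc(0, 1) * mp.pi / 2

    def phi(s, t):
        # phi(s, tau) = e^{(s-1) tau} * zeta(1-s) * Gamma(1-s), as in the proof of Theorem 5.12
        return mp.exp((s - 1) * t) * mp.zeta(1 - s) * mp.gamma(1 - s)

    s = mp.mpc(-2.3, 1.7)
    print(mp.zeta(s), phi(s, tau) + phi(s, mp.conj(tau)))  # the two values agree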
Lastly, we are also interested in the functional equation for η^(α)(s), but this time we immediately go for the methods used in Theorem 5.12 to arrive at a simplified formulation of the functional equation.
Everything from this point up to the end of the thesis is again part of my own research on this topic. I was especially interested in the fractional derivatives of general Dirichlet series and of the other important functions that we used throughout the thesis. All of this can be found in chapter 6.
Theorem 5.13 The functional equation for η^(α)(s)
For all s ∈ C it holds that
\[
\eta^{(\alpha)}(s) \equiv \zeta^{(\alpha)}(s)-2^{1-s}\sum_{k=0}^{\infty}\binom{\alpha}{k}\Big[\frac{d^{k}}{ds^{k}}\Big(\frac{\eta(s)}{1-2^{1-s}}\Big)\Big]\log^{\alpha-k}(2)\,e^{i(\alpha-k)\pi} .
\]
Note. The functional equation can be written exclusively in terms of η(s) by using the substitution given by Theorem 3.9 and applying it to the functional equation for ζ^(α)(s).
Proof. Using Theorem 3.9 and differentiating both sides, we get
\[
\begin{aligned}
D^{\alpha}_{s}\,\eta(s) &= D^{\alpha}_{s}\big[\zeta(s)\big(1-2^{1-s}\big)\big]
= D^{\alpha}_{s}\,\zeta(s)-2\,D^{\alpha}_{s}\big[\zeta(s)\,2^{-s}\big]\\
&= \zeta^{(\alpha)}(s)-2\sum_{k=0}^{\infty}\binom{\alpha}{k}\zeta^{(k)}(s)\,D^{\alpha-k}_{s}2^{-s}
= \zeta^{(\alpha)}(s)-2\sum_{k=0}^{\infty}\binom{\alpha}{k}\zeta^{(k)}(s)\,2^{-s}\log^{\alpha-k}(2)\,e^{i(\alpha-k)\pi},
\end{aligned}
\]
where the last equality follows immediately from Lemma 4.2. By expressing ζ^(k)(s) through Theorem 3.9 again inside the rightmost infinite series, we arrive at our desired expression.
□
And this concludes the chapter about the functional equations for the derivatives of order α of the zeta and eta functions. We were able to reduce the original functional equation to one that comes at a lower computational cost and derived from it an expression for η^(α)(s).
To generalize our results even further, one could investigate the fractional derivatives of ζ(s, a) and the corresponding Hurwitz formula for ζ^(α)(s, a). In the last mathematically focused chapter of the thesis we would like to investigate some more relationships between Dirichlet series and D^α_s.
6 Further investigations of D^α_s
6.1 Differentiating D(s) and L(χ, s)
We begin this chapter by introducing the derivative of order α of any convergent Dirichlet series as introduced in Definition 1.1.
Theorem 6.1 Let s ∈ C, Re(s) > 1. Then the Grünwald-Letnikov fractional derivative of order α of D(s) is given by
\[
D^{(\alpha)}(s) \equiv e^{i\pi\alpha}\sum_{k=1}^{\infty}\frac{z_{k}\,\log^{\alpha}(k)}{k^{s}} .
\]
Proof. We proceed by applying ^−D^α_s to D(s):
\[
\begin{aligned}
{}^{-}D^{\alpha}_{s}D(s)
&=\lim_{h\to 0^{-}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}(-1)^{m}\binom{\alpha}{m}\sum_{k=1}^{\infty}\frac{z_{k}}{k^{s-mh}}\\
&=\sum_{k=1}^{\infty}\frac{z_{k}}{k^{s}}\lim_{h\to 0^{-}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}(-1)^{m}\binom{\alpha}{m}k^{mh}
=\sum_{k=1}^{\infty}\frac{z_{k}}{k^{s}}\log^{\alpha}(k)\,e^{i\pi\alpha},
\end{aligned}
\]
where the last equality follows from the same steps as in the alternative proof of Theorem 5.2, and hence the conclusion follows.
□
From this result we can immediately derive the derivative of order α of any convergent Dirichlet L-function.
Theorem 6.2 Let s ∈ C, Re(s) > 1. Then the Grünwald-Letnikov fractional derivative of order α of L(χ, s) is given by
\[
L^{(\alpha)}(\chi,s) \equiv e^{i\pi\alpha}\sum_{k=1}^{\infty}\frac{\chi(k)\,\log^{\alpha}(k)}{k^{s}} .
\]
Proof. The claim follows from Theorem 6.1 by setting z_k to be a Dirichlet character χ(k).
□
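As an illustration of Theorem 6.2 (a sketch assuming mpmath; the non-principal character mod 4, the truncation K, and the sample point are arbitrary choices), at integer order the formula reduces to the ordinary derivative of the truncated L-series, which can be checked by numerical differentiation:

    from mpmath import mp

    mp.dps = 25

    def chi4(k):
        # Non-principal Dirichlet character mod 4
        return {0: 0, 1: 1, 2: 0, 3: -1}[k % 4]

    def L_frac_series(alpha, s, K=20000):
        # Truncated series of Theorem 6.2: e^{i*pi*alpha} * sum_k chi(k) log(k)^alpha / k^s
        phase = mp.exp(mp.mpc(0, 1) * mp.pi * alpha)
        return phase * mp.fsum(chi4(k) * mp.log(k)**alpha / mp.power(k, s)
                               for k in range(2, K + 1))

    def L_truncated(s, K=20000):
        # Plain truncated L(chi, s), used only as input to the numerical derivative below
        return mp.fsum(chi4(k) / mp.power(k, s) for k in range(1, K + 1))

    s = mp.mpc(3, 1)
    print(L_frac_series(0.5, s))                            # a fractional order
    print(L_frac_series(1, s), mp.diff(L_truncated, s, 1))  # alpha = 1 matches the first derivative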
Lastly we would like to derive an equivalent expression for L^(α)(χ, s) by making use of a classic fact of analytic number theory, which states that every Dirichlet L-function can be expressed as a superposition of Hurwitz zeta functions.
Lemma 6.3 [1] Let s ∈ C, Re(s) > 1 and χ a Dirichlet character (mod n), then
\[
L(\chi,s) \equiv \frac{1}{n^{s}}\sum_{r=1}^{n}\chi(r)\,\zeta\!\Big(s,\frac{r}{n}\Big) .
\]
Proof. Since χ is a Dirichlet character (mod n), we can rearrange the terms of L(χ, s) with respect to the residue classes (mod n). Let k = qn + r with 1 ≤ r ≤ n and q = 0, 1, 2, . . .; then, using the periodicity χ(qn + r) = χ(r),
\[
L(\chi,s) = \sum_{k=1}^{\infty}\frac{\chi(k)}{k^{s}}
= \sum_{r=1}^{n}\sum_{q=0}^{\infty}\frac{\chi(qn+r)}{(qn+r)^{s}}
= \frac{1}{n^{s}}\sum_{r=1}^{n}\chi(r)\sum_{q=0}^{\infty}\frac{1}{\big(q+\frac{r}{n}\big)^{s}}
\]
and hence the claim follows.
□
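Lemma 6.3 is easy to verify numerically with mpmath, whose zeta(s, a) implements the Hurwitz zeta function; the sketch below uses the non-principal character mod 4 and an arbitrary truncation for the direct series:

    from mpmath import mp

    mp.dps = 25

    def chi4(k):
        # Non-principal Dirichlet character mod 4
        return {0: 0, 1: 1, 2: 0, 3: -1}[k % 4]

    s = mp.mpc(3, 1)

    # Direct (truncated) Dirichlet series for L(chi, s)
    direct = mp.fsum(chi4(k) / mp.power(k, s) for k in range(1, 50001))

    # Superposition of Hurwitz zeta functions from Lemma 6.3 with n = 4
    hurwitz = mp.power(4, -s) * mp.fsum(chi4(r) * mp.zeta(s, mp.mpf(r) / 4)
                                        for r in range(1, 5))

    print(direct, hurwitz)  # agreement up to the truncation error of the direct series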
From the last lemma we can conclude the following observation:
Theorem 6.4 Let s ∈ C, Re(s) > 1 and χ a Dirichlet character (mod n), then
\[
L^{(\alpha)}(\chi,s) \equiv e^{i\pi\alpha}\sum_{k=0}^{\infty}\binom{\alpha}{k}\log^{k}(n)\,\frac{1}{n^{s}}\sum_{r=1}^{n}\chi(r)\sum_{q=0}^{\infty}\frac{\log^{\alpha-k}\big(q+\frac{r}{n}\big)}{\big(q+\frac{r}{n}\big)^{s}} .
\]
Proof. We start by applying ^−D^α_s to Lemma 6.3, giving us
\[
\begin{aligned}
{}^{-}D^{\alpha}_{s}L(\chi,s)
&= {}^{-}D^{\alpha}_{s}\,\frac{1}{n^{s}}\sum_{r=1}^{n}\chi(r)\,\zeta\!\Big(s,\frac{r}{n}\Big)
= \sum_{k=0}^{\infty}\binom{\alpha}{k}\Big[\frac{d^{k}}{ds^{k}}n^{-s}\Big]\,{}^{-}D^{\alpha-k}_{s}\sum_{r=1}^{n}\chi(r)\,\zeta\!\Big(s,\frac{r}{n}\Big)\\
&= \sum_{k=0}^{\infty}\binom{\alpha}{k}(-1)^{k}\log^{k}(n)\,n^{-s}\sum_{r=1}^{n}\chi(r)\,{}^{-}D^{\alpha-k}_{s}\,\zeta\!\Big(s,\frac{r}{n}\Big).
\end{aligned}
\]
Next we compute the derivative of order α of the Hurwitz zeta function:
\[
\begin{aligned}
{}^{-}D^{\alpha}_{s}\,\zeta(s,a)
&=\lim_{h\to 0^{-}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}(-1)^{m}\binom{\alpha}{m}\sum_{q=0}^{\infty}\frac{1}{(q+a)^{s-mh}}\\
&=\sum_{q=0}^{\infty}\frac{1}{(q+a)^{s}}\lim_{h\to 0^{-}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}\binom{\alpha}{m}(-1)^{m}(q+a)^{mh}\\
&=\sum_{q=0}^{\infty}\frac{1}{(q+a)^{s}}\lim_{h\to 0^{-}}\frac{1}{h^{\alpha}}\sum_{m=0}^{\infty}\binom{\alpha}{m}\big(-(q+a)^{h}\big)^{m}\\
&=\sum_{q=0}^{\infty}\frac{1}{(q+a)^{s}}\lim_{h\to 0^{-}}\left(\frac{1-(q+a)^{h}}{h}\right)^{\alpha}
=\sum_{q=0}^{\infty}\frac{(-1)^{\alpha}\log^{\alpha}(q+a)}{(q+a)^{s}},
\end{aligned}
\]
and by substituting a → r/n we overall get
\[
\begin{aligned}
{}^{-}D^{\alpha}_{s}L(\chi,s)
&=\sum_{k=0}^{\infty}\binom{\alpha}{k}(-1)^{k}\log^{k}(n)\,n^{-s}\sum_{r=1}^{n}\chi(r)\sum_{q=0}^{\infty}\frac{(-1)^{\alpha-k}\log^{\alpha-k}\big(q+\frac{r}{n}\big)}{\big(q+\frac{r}{n}\big)^{s}}\\
&= e^{i\pi\alpha}\sum_{k=0}^{\infty}\binom{\alpha}{k}\log^{k}(n)\,\frac{1}{n^{s}}\sum_{r=1}^{n}\chi(r)\sum_{q=0}^{\infty}\frac{\log^{\alpha-k}\big(q+\frac{r}{n}\big)}{\big(q+\frac{r}{n}\big)^{s}} .
\end{aligned}
\]
□
6.2 Differentiating F(a, s) and ζ(s, a)
Lastly we are interested in investigating the derivatives of the periodic and Hurwitz zeta functions as well as their convergence properties. We begin by stating two lemmas.
Lemma 6.5 Let s ∈ C, Re(s) > 1, and a ∈ (0, 1]. Then the fractional derivative of order α of the periodic zeta function is given by
\[
F^{(\alpha)}(a,s) \equiv \sum_{k=1}^{\infty}\frac{e^{i(2ak+\alpha)\pi}\log^{\alpha}(k)}{k^{s}} .
\]
Proof. We notice that F(a, s), introduced in Theorem 3.6, is simply a Dirichlet series with z_k = e^{2πiak}. Thus, by Theorem 6.1 we get
\[
F^{(\alpha)}(a,s) = e^{i\pi\alpha}\sum_{k=1}^{\infty}\frac{e^{2\pi i a k}\log^{\alpha}(k)}{k^{s}}
\]
and hence the claim follows.
□
Lemma 6.6 Let s ∈ C, Re(s) > 1, and a ∈ (0, 1]. Then the fractional derivative of order α of the Hurwitz zeta function is given by
\[
\zeta^{(\alpha)}(s,a) \equiv e^{i\pi\alpha}\sum_{k=0}^{\infty}\frac{\log^{\alpha}(k+a)}{(k+a)^{s}} .
\]
Proof. The claim follows immediately from the computation of ^−D^α_s ζ(s, a) in the proof of Theorem 6.4.
□
Now that we have obtained expressions for F^(α)(a, s) and ζ^(α)(s, a), we are ready to explore their convergence properties.
Theorem 6.7 Let a ∈ R. For all α > 0 we have that F^(α)(a, s) converges absolutely in the half-plane Re(s) > 1 + α.
Proof.
\[
\big|F^{(\alpha)}(a,s)\big| \le \sum_{k=1}^{\infty}\Big|\frac{e^{i(2ak+\alpha)\pi}\log^{\alpha}(k)}{k^{s}}\Big|
< \sum_{k=1}^{\infty}\frac{k^{\alpha}}{k^{\mathrm{Re}(s)}} = \zeta(\mathrm{Re}(s)-\alpha),
\]
where the inequalities follow immediately from the proof structure of Theorem 5.9. The claim now follows since ζ(Re(s) − α) converges absolutely for Re(s) − α > 1.
□
Theorem 6.8 Let a ∈ (0, 1]. For all α > 0 we have that ζ^(α)(s, a) converges absolutely in the half-plane Re(s) > 1 + α.
Proof.
\[
\big|\zeta^{(\alpha)}(s,a)\big| \le \big|e^{i\pi\alpha}\big|\cdot\sum_{k=0}^{\infty}\Big|\frac{\log^{\alpha}(k+a)}{(k+a)^{s}}\Big|
< \sum_{k=0}^{\infty}\frac{(k+a)^{\alpha}}{(k+a)^{\mathrm{Re}(s)}} = \zeta(\mathrm{Re}(s)-\alpha,a),
\]
where the inequalities follow just as before. From Theorem 3.1 we know that ζ(Re(s) − α, a) converges absolutely for Re(s) − α > 1 and hence the proof is complete.
□
Note. As discussed before in section 5.2, since both functions converge absolutely they also converge in the ordinary sense. One can also follow the universal proof of Theorem 3.1 to prove the uniform convergence of both of these functions on the half-plane Re(s) > 1 + α.
7 Conclusion
Looking back at the results of the thesis and the research provided by Guariglia, one can say that the Grünwald-Letnikov fractional derivative is a powerful tool for computing fractional derivatives of real order. Despite some apparent computational concerns that we encounter, due to the definition of the derivative in terms of an infinite series, the Grünwald-Letnikov derivative still manages to produce results that are in accordance with our intuitive understanding of the ordinary derivative of positive integer order.
In further research one could investigate, for example, the asymptotic properties of all the differentiated functions that we have discussed over the course of this paper. Also, one of the major questions that arises is whether additional investigations of this topic could bring us a step closer to solving the unsolved problems that have their origins in analytic number theory, such as the (generalized) Riemann hypothesis.