Buscar

Teoria Espectral, Felipe Linares IMPA 2020

Faça como milhares de estudantes: teste grátis o Passei Direto

Esse e outros conteúdos desbloqueados

16 milhões de materiais de várias disciplinas

Impressão de materiais

Agora você pode testar o

Passei Direto grátis

Faça como milhares de estudantes: teste grátis o Passei Direto

Esse e outros conteúdos desbloqueados

16 milhões de materiais de várias disciplinas

Impressão de materiais

Agora você pode testar o

Passei Direto grátis

Faça como milhares de estudantes: teste grátis o Passei Direto

Esse e outros conteúdos desbloqueados

16 milhões de materiais de várias disciplinas

Impressão de materiais

Agora você pode testar o

Passei Direto grátis

Você viu 3, do total de 135 páginas

Faça como milhares de estudantes: teste grátis o Passei Direto

Esse e outros conteúdos desbloqueados

16 milhões de materiais de várias disciplinas

Impressão de materiais

Agora você pode testar o

Passei Direto grátis

Faça como milhares de estudantes: teste grátis o Passei Direto

Esse e outros conteúdos desbloqueados

16 milhões de materiais de várias disciplinas

Impressão de materiais

Agora você pode testar o

Passei Direto grátis

Faça como milhares de estudantes: teste grátis o Passei Direto

Esse e outros conteúdos desbloqueados

16 milhões de materiais de várias disciplinas

Impressão de materiais

Agora você pode testar o

Passei Direto grátis

Você viu 6, do total de 135 páginas

Faça como milhares de estudantes: teste grátis o Passei Direto

Esse e outros conteúdos desbloqueados

16 milhões de materiais de várias disciplinas

Impressão de materiais

Agora você pode testar o

Passei Direto grátis

Faça como milhares de estudantes: teste grátis o Passei Direto

Esse e outros conteúdos desbloqueados

16 milhões de materiais de várias disciplinas

Impressão de materiais

Agora você pode testar o

Passei Direto grátis

Faça como milhares de estudantes: teste grátis o Passei Direto

Esse e outros conteúdos desbloqueados

16 milhões de materiais de várias disciplinas

Impressão de materiais

Agora você pode testar o

Passei Direto grátis

Você viu 9, do total de 135 páginas

Faça como milhares de estudantes: teste grátis o Passei Direto

Esse e outros conteúdos desbloqueados

16 milhões de materiais de várias disciplinas

Impressão de materiais

Agora você pode testar o

Passei Direto grátis

Prévia do material em texto

Teoria Espectral
The idea of these notes is to complement Section 1.3 in the notes regard-
ing Fourier Transform. The material presented here was taking from the
book of Jeffrey Rauch, Partial Differential Equations, Graduate Texts in
Mathematics, Springer-Verlag (see [1]).
1. Distributions
The distribution theory arises in several contexts. One is the treatment
of impulsive forces. Newton’s second law affirms that the rate of change of
momentum is equal to the force applied,
dp
dt
= F . Consider an intense force
which acts over a very short interval of time t0 < t < t0 + ∆t. An example
is the force applied by the strike of a hammer. The impulse, I, is defined as
I :=
∫
F (t) dt thus
p(t0 + ∆t) = p(t0) + I.
In the limit, as ∆t tends to zero, one arrives to an idealized force which acts
instantaneously to produce a jump I in the momentum p. Formally, the
force law satisfies
(1.1) F = 0 for t 6= 0 and
∫
F (t) dt = I.
This idealized force is denoted Iδt0 , and δt0 is called Dirac’s delta function
though no function can satisfy (1.1). The idealized equation of motion is
dp
dt
= δt0 . The solution satisfies p(t+)− p(t−) = I. Such idealizations have
shown to be useful in a variety of problems of mechanics and electricity.
The mathematical framework was developed by Lawrence Schwartz in the
1940’s.
We introduce some notation next. Let Ω ⊂ Rn an open subset. The set
of all infinitely differentiable functions with compact support C∞0 (Ω) will be
denoted by D(Ω) and C∞(Ω) the set of all infinitely differentiable functions
on Ω will be denoted by E(Ω). These sets of functions are referred as test
functions.
Definition 1.1. A distribution on an open Ω ⊂ Rn is a linear map l :
D(Ω)→ C, which is continuous in the sense that {ϕn} ⊂ D(Ω) satisfies
(i) there is a compact K ⊂ Ω such that for all n, supp(ϕn) ⊂ K
and
(ii) there is a ϕ ∈ D(Ω) such that for all α ∈ Nn, ∂αϕn converges
uniformly to ∂αϕ,
then l(ϕn) → l(ϕ). The set of all distributions on Ω is denoted by D′(Ω).
When ϕn, ϕ satisfy (i) and (ii) we say that ϕn converges to ϕ in D(Ω).
1
2
The action of a distribution l ∈ D′(Ω) on a test function ϕ ∈ D(ω) is
usually denoted 〈f, ϕ〉. The set D′(Ω) is a complex vector space.
Example 1.2. If f ∈ L1loc(Ω), then there is a natural distribution lf defined
by
〈l, ϕ〉 =
∫
f(x)ϕ(x) dx.
In this sense, the distributions are generalizations of functions and are some-
times called generalized functions. Two locally integrable functions define
the same distribution if and only if the functions are equal almost every-
where. We say that a distribution l is a locally integrable function and write
l ∈ L1loc(Ω) if l = lf for a f ∈ L1loc(Ω). Similarly, we say that l is continuous
(resp. C∞(Ω)) if l = lf , for a f ∈ C(Ω) (resp. C∞(Ω)).
Example 1.3. If x0 ∈ Ω, then 〈l, ϕ〉 ≡ ϕ(x0) is a distribution denoted δx0
and called the Dirac delta at x0. When x0 is not mentioned it is assumed to
be the origin. More generally, 〈l, ϕ〉 ≡ ∂αϕ(x0) is a distribution.
The following proposition characterizes the distributions. More precisely.
Proposition 1.4. A linear map l : D(Ω)→ C belongs to D′(Ω) if and only
if for every compact subset K ⊂ Ω there is an integer N(K, l) and a constant
c ∈ R such that for all ϕ ∈ D(Ω) with support in K
(1.2) |〈l, ϕ〉| ≤ c‖ϕ‖CN , ‖ϕ‖CN =
∑
|α|≤N
max |∂αϕ|.
Proof. If l ∈ D′(Ω) then is clear that (1.2) holds.
Suppose now that (1.2) does not hold for a compact K. For each integer
n, choose ϕn ∈ D(Ω) with support in K such that
(1.3) |〈l, ϕ〉| > 1 and, ‖ϕ‖CN <
1
n
.
Then ϕn satisfy (i) and (ii) with ϕ = 0, but 〈l, ϕ〉 does not converge to zero
thus l is not a distribution. �
Definition 1.5. A sequence of distributions In ∈ D′(Ω) converges to l ∈
D′(Ω) if and only if for every test function ϕ ∈ D(Ω), ln(ϕ) → l(ϕ). This
convergence is denoted In ⇀ I or In
D′→ I.
Example 1.6. If j ∈ D(Ω) with
∫
j(x) dx = 1, let j�(x) = �
−nj(x/�). Then
j� → δ0.
3
1.1. Operations with distributions. The great utility of distributions
lies on the fact that the standard operations of calculus extend to D′(Ω).
For instance, one can differentiate distributions. This is quite important in
the study of differential equations.
The recipe for defining operations on distributions is basically the same:
pass the operator onto the test function.
Example 1.7. We recall the translation operator τyf = f(x − y), y ∈ Rn.
Let I ∈ D′(Rn), the translate of l by the vector y, τyI, is defined as follows.
If l were equal to the function f , then
〈τyI, ϕ〉 =
∫
f(x− y)ϕ(x) dx =
∫
f(z)ϕ(z+ y) dz = 〈I, τ−yϕ〉, ϕ ∈ D(Rn).
This motivates the definition, 〈τhI, ϕ〉 = 〈I, τ−hϕ〉. It is easy to check that
τhI defined as above is a distribution and that definition agrees with τhf
when I = If .
Example 1.8. To differentiate a distribution I on Rn, we form the differ-
ence quotients which could converge to
∂l
∂xj
. Let ej be the vector with jth
coordinate equals 1 and 0 in the others. The difference quotients are given
by
(1.4)
〈τ−hej l − l
h
, ϕ
〉
≡
〈
l,
τhejϕ− ϕ
h
〉
, ϕ ∈ D(Rn).
The test functions on the right converge to − ∂ϕ
∂xj
, so the continuity of l im-
plies that the right hand side of (1.4) converges to 〈l,− ∂ϕ
∂xj
〉. This suggests
that
(1.5)
〈 ∂l
∂xj
, ϕ
〉
≡ 〈l,− ∂ϕ
∂xj
〉.
This defines a distribution and if f ∈ C1(Ω) and l = lf , the derivatives of
l are equal to the distributions I ∂f
∂xj
. Thus the operator ∂/∂xj on D
′ is an
extension of ∂/∂xj on D.
Let us apply the above procedure to find the derivative in distributions
sense of the Heaviside function H(x) = χ[0,∞)(x) defined on R. The differ-
ence quotient
τ−hH −H
h
= h−1χ[0,h]
converges to δ in the sense of distributions. Thus dHdx = δ. Observe that
the difference quotient converge to zero almost everywhere. Since H is not
constant, zero should not be the desired derivative. The pointwise limit gives
the wrong answer and the distribution derivative is the right answer.
4
The operations on distributions discussed so far are particular cases of a
general algorithm.
Proposition 1.9 (P.D. Lax). Suppose that L is a linear map from D(Ω1)
to D(Ω2), which is sequentially continuous in the sense that ϕn → ϕ implies
L(ϕn) → L(ϕ). Suppose, in addition, that there is an operator L′, sequen-
tially continuous from D(Ω2) to D(Ω1), which is the transpose of l in the
sense that
〈L(ϕ), ψ〉 = 〈ϕ,L′(ψ)〉 for all ϕ ∈ D(Ω1), ψ ∈ D(Ω2).
Then the operator L extends to a sequentially continuous map of D′(Ω2) to
D′(Ω1) given by
(1.6) 〈L(l), ψ〉 = 〈l, L′(ψ)〉 for all l ∈ D′(Ω1), and ψ ∈ D(Ω2).
Proof. The sequential continuity of L′ shows that L(l) defined in (1.6) is a
distribution. If I = Iϕ for some ϕ ∈ D(Ω1), then
(1.7) 〈L(l), ψ〉 ≡ 〈l, L′(ψ)〉 =
∫
Ω1
ϕ(x)L′(ψ)(x) dx =
∫
Ω2
L(ϕ)(x)ψ(x) dx,
the last equality from the hypothesis that L′ is the transpose of L. Thus
L(l) is the distribution associated to L(ϕ) which proves that L defined by
(1.6) extends L
∣∣
D′
.
Finally, if In ⇀ I in D
′(Ω1), it follows immediately from (1.6) that
L(In) ⇀ L(I) proving the sequentially continuity of L.
�
Remark 1.10. The proof of the uniqueness of this extension can be seen in
[1] Appendix Proposition 8.
Example 1.11. If a(x) ∈ C∞(Ω), (≡ E(Ω)), then the map L(ϕ) ≡ aϕ is
equal to its own transpose. That is,
〈L(ϕ), ψ〉 =
∫
(a(x)ϕ(x))(ψ) dx =
∫
(ϕ(x))(a(x)ψ) dx = 〈ϕ,L(ψ)〉.
Thus for l ∈ D′(Ω), al is a well-defined distribution given by 〈al, ϕ〉 ≡ 〈l, aϕ〉.
Example 1.12. If Ω2 = y + Ω and Lτ is translation by y, then L
′ = τ−y
is sequentially continuous. Therefore for l ∈ D′(Ω1) the translates of l are
well defined by 〈τyI, ϕ〉 ≡ 〈I, τ−yϕ〉. The reflection operator ϕ̃(x) = ϕ(−x)
is its own transpose, thus Ĩ is a well-defined distribution on the reflection of
Ω.
Example 1.13. If L = ∂α (∂α ≡ ∂α1x1 . . . ∂
αn
xn , α ∈ Z
n and |α| = α1 +
· · · + αn). Integration by parts gives L′ = (−1)|α|∂α which is sequentially
continuous on D. Thus the derivatives of distributions are defined by
〈∂αl, ϕ〉 ≡ 〈l, (−1)|α|∂αϕ〉.
5
Once we have defined multiplication and derivatives we can computea
product rule as being
〈∂(al), ψ〉 ≡ 〈l,−a∂ψ〉 = 〈l,−∂(aψ)〉+ 〈l, (∂a)ψ〉 = 〈a∂l + (∂a)l, ψ〉.
Following this procedure inductively we can define the usual Leibniz for
∂α(al).
If P (x,D) =
∑
aα(x)∂
α is a linear partial differential operator with co-
efficients in E(Ω), then P maps D′(Ω) to itself with 〈Pl, ϕ′ ≡ 〈l, P ′ϕ〉 where
the transpose of P is given by
P ′ψ =
∑
(−1)|α|∂α(aαψ).
1.2. Convolution. Suppose that Ω = Rn and ϕ ∈ D′(Rn). Ler L be the
operator L(ψ) = ϕ∗ψ. The Leibniz rule for differentiating under the integral
implies that L maps D′(Rn) continuously to itself. The Fubini theorem
shows that the transpose of L is convolution with ϕ̃. Thus ϕ∗I makes sense
for any l ∈ D′(Rn) and it is given by
〈ϕ ∗ l, ψ〉 ≡ 〈l, ϕ̃ψ〉.
Example 1.14. We compute ϕ ∗ δ
〈ϕ ∗ δ, ψ〉 ≡ 〈δ, ϕ̃ψ〉 = (ϕ̃ ∗ ψ)(0) =
∫
ϕ(y)ψ(y) dy = 〈ϕ,ψ〉.
Therefore ϕ ∗ δ = ϕ.
It is not difficult to show that for l ∈ D′(Rn),
∂α(ϕ ∗ l) = ϕ ∗ ∂αl = (∂αϕ) ∗ l.
We end this section with the following result whose proof can be see in
[1] Appendix Proposition 3.
Proposition 1.15. If l ∈ D′(Rn) and ϕ ∈ D(Rn), then l ∗ ϕ is equal to the
C∞ function whose value at x is 〈l, τx(ϕ̃)〉.
1.3. Tempered distributions. Recall that S(Rn) denotes the Schwartz
space, the space of the C∞ functions decaying at infinity, that is,
S(Rn) = {ϕ ∈ C∞ : |||ϕ|||α,β ≡ ‖xα∂βϕ‖L∞(Rn) <∞, for any α, β ∈ (Z+)n}.
Definition 1.16. A tempered distribution is a continuous linear func-
tional on S(Rn). The set of all tempered distribution is denoted by S′(Rn).
Proposition 1.17. A linear map T : S(Rn)→ C is continuous if and only
if there exist N ∈ N and c ∈ R such that for all ϕ ∈ S(Rn)
(1.8) |〈T, ϕ〉| ≤ c
∑
|α|≤N, |β|≤N
‖xβ∂αϕ‖L∞(Rn).
6
Corollary 1.18. A distribution T ∈ S′(Rn) extends uniquely to an element
of S′(Rn) if and only if there exist N ∈ N and c ∈ R such that (1.8) holds
for all ϕ ∈ D(Rd).
In particular, we have
D(Rn) ⊂ E′(Rn) ⊂ S′(Rn) ⊂ D′(Rn).
Example 1.19. If f is a Lebesgue measurable function on Rn such that
for some M , (1 + |x|2)−Mf ∈ L1(Rn), then the distribution defined by f is
tempered since
〈f, ϕ〉 = 〈(1 + |x|2)−Mf, (1 + |x|2)Mϕ〉
≤ ‖(1 + |x|2)−Mf‖L1‖(1 + |x|2)Mϕ‖L∞ ≤ cf,M |||ϕ|||2M,0.
Example 1.20. If f ∈ Lp(Rn), 1 ≤ p ≤ ∞, then f ∈ S′ since these
functions satisfy the condition of Example 1.19, if one chooses M so large
that (1 + |x|2)−M ∈ Lq(Rn) and then uses Hölder’s inequality.
Definition 1.21. A sequence Tn ∈ S′(|rn) converges to S′(Rn) if and only
if for all ϕ ∈ S(Rn)
〈Tn, ϕ〉 → 〈T, ϕ〉 as n→∞.
We write Tn
S′→ T .
In the next we mainly interested in extending to S′ the basic linear oper-
ators of analysis, for instance ∂α and F.
Given a continuous linear operator L : S → S, the transpose L′ maps
S′ → S′. For T ∈ S′(Rn), L′T ∈ S′ is defined by
(1.9) 〈L′T, ϕ〉 ≡ 〈T, Lϕ〉 for all ϕ ∈ S.
The next proposition shows that the identity can sometimes be used to
extend L.
Proposition 1.22. Suppose that L : S(Rn) → S(Rn) is a continuous lin-
ear map and that the restriction of the transpose operator to S, L′
∣∣
S
, is a
continuous map of S to itself. Then L has a unique sequentially continuous
extension to a linear map L : S′(Rn)→ S′(Rn) defined by
〈LT, ϕ〉 ≡ 〈T, L′ϕ〉, for all T ∈ S′, ϕ ∈ S.
Proof. See Proposition 4 page 77 in [1]. �
Remark 1.23. This proposition identifies when passing the operator to the
test function yields a good extension.
For a general L, one will not even have L′ϕ ∈ S for ϕ ∈ S. The hy-
pothesis on L′ is very restrictive. However, the translation, the dilation,
multiplication by a convenient function M (see Exercise 1.28 below), differ-
entiation ∂α and Fourier transform F are operators which are included in
this proposition.
7
For T ∈ S′ and ϕ ∈ S, we have
〈(∂α)′T, ϕ〉 ≡ 〈T, (∂α)ϕ〉.
If T ∈ S, the right-hand side is equal to∫
T (x)∂αxϕ(x) dx =
∫
(−∂x)αT (x)ϕ(x) dx = 〈(−∂x)αT, ϕ〉.
Thus, for such T , (∂α)′T = (−∂)αT .
Similarly, for T ∈ S′ and ϕ ∈ S,
〈FT, ϕ〉 ≡ 〈T,Fϕ〉.
For T ∈ S, the duality identity
〈Fϕ,ψ〉 = 〈ϕ,Fψ〉, for all ϕ,ψ ∈ S,
shows that this is equal to 〈FT, ϕ〉, whence F′
∣∣
S
= F.
1.4. Applications. First consider the solvability of the equation
(1.10) (1−∆)u = f
For u, f in S′ this is equivalent to
(1 + |ξ|2)û = Ff,
hence
(1.11) û = (1 + |ξ|2)−1Ff.
Proposition 1.24. For any f ∈ S′(Rn) there exists exactly one solution
u ∈ S′(Rn) to (1.10). The solution is given by formula (1.11). In particular,
if f ∈ S, then u ∈ S. If f ∈ L2, then for all |α| ≤ 2, Dαu ∈ L2(Rn).
The second application is to a Liouville-type theorem. More precisely.
Theorem 1.25 (Generalized Liouville Theorem). Suppose that P (D) is a
constant coefficient partial differential operator such that P (ξ) 6= 0 for ξ 6= 0.
If u ∈ S′(Rn) satisfies Pu = 0, then u is a polynomial in x.
Proof. Taking Fourier transform of the equation we obtain
F(P (D))u = P (ξ)û = 0.
Since P (ξ) 6= 0 if ξ 6= 0 it follows that supp û ⊂ {0}.
Thus Fu has to be a finite linear combination of derivatives of the delta
function
û =
∑
cαD
αδ.
Applying the inverse Fourier transform we get
u =
∑
cαFD
αδ =
∑
cα(−x)αFδ =
∑
cα(−x)α(2π)−n/2,
a polynomial in x. �
Corollary 1.26. The only bounded harmonic (resp. holomorphic) functions
on Rn (resp. C) are the constants.
8
Next we consider the wave equation,
(1.12)
∂2u
∂t2
(x, t) =
∂2u
∂x2
(x, t).
Both sides of the equation make sense if u is a distribution. If the equality
holds we say that u is weak solution. Recall that u is said a classical solution
if u ∈ C2(R2) and the identity is satisfied.
Consider a traveling wave u(x, t) = f(x− t), f ∈ L1loc(R). It is clear that
u ∈ L1loc(R2) and so it defines a distribution. Is it a weak solution?
Using the differentiation operator definition we find that〈 ∂2
∂t2
u, ϕ
〉
=
〈
u,
∂2
∂t2
ϕ
〉
=
∫∫
f(x− t) ∂
2
∂t2
ϕ(x, t) dxdt〈 ∂2
∂x2
u, ϕ
〉
=
〈
u,
∂2
∂x2
ϕ
〉
=
∫∫
f(x− t) ∂
2
∂x2
ϕ(x, t) dxdt.
Hence 〈 ∂2
∂t2
u− ∂
2
∂x2
u, ϕ
〉
=
∫∫
f(x− t)
( ∂2
∂t2
ϕ− ∂
2
∂x2
ϕ
)
(x, t) dxdt.
We would to like to show that this is zero. To do so, we make the change of
variables y = x− t, z = x+ t, dxdt = 12 dydz. and
∂2
∂t2
− ∂
2
∂x2
= −4 ∂
∂y
∂
∂z
.
Thus,∫∫
f(x− t)
( ∂2
∂t2
− ∂
2
∂x2
)
ϕ(x, t) dxdt = −2
∫∫
f(y)
∂2ϕ
∂y∂z
(y, z) dzdy
We claim that integration in z yields zero. Indeed, we observe that∫ b
a
∂2ϕ
∂y∂z
(y, z) dz =
∂ϕ
∂y
(y, b)− ∂ϕ
∂y
(y, a).
Thus ∫ b
a
∂2ϕ
∂y∂z
(y, z) dz = 0
since ϕ and ∂ϕ∂y vanish in a bounded set. Therefore u(x, t) = f(x − t) is a
weak solution of (1.12).
Next we investigate whether log(x2 +y2) is a weak solution of the Laplace
equation
(1.13) ∆u(x, y) =
( ∂2
∂x2
+
∂2
∂y2
)
u(x, y) = 0.
We have to check that〈( ∂2
∂x2
+
∂2
∂y2
)
u, ϕ
〉
=
〈
u,
( ∂2
∂x2
+
∂2
∂y2
)
ϕ
〉
= 0 for all ϕ ∈ D(R2).
9
Employing polar coordinates (r, θ) we have that
∂2
∂x2
+
∂2
∂y2
=
∂2
∂r2
+
1
r
∂
∂r
+
1
r2
∂2
∂θ2
and dxdy = r drdθ.
Then for u(x, y) = log(x2 + y2) we would like to know whether∫ 2π
0
∫ ∞
0
log r2
( ∂2
∂r2
+
1
r
∂
∂r
+
1
r2
∂2
∂θ2
)
ϕ(r, θ) rdrdθ = 0
is true. To avoid the singularity of u at the origin we will integrate in r in
(�,∞) and then we make � tends to 0.
We first note
(1.14)
∫ 2π
0
log r2
1
r2
∂2
∂θ2
ϕ(r, θ) rθ =
1
r
log r2
∂ϕ
∂θ
(r, θ)
∣∣∣2π
0
= 0
since
∂ϕ
∂θ
is periodic. Therefore this term is always 0.
On the other hand we have
(1.15)
∫ ∞
�
log r2
∂
∂r
ϕ(r, θ) dr = −
∫ ∞
�
∂
∂r
(log r2)ϕ(r, θ) dr− log(�2)ϕ(�, θ).
and ∫ ∞
�
r log r2
∂2
∂r2
ϕ(r, θ) dr = −
∫ ∞
�
∂
∂r
(r log r2)
∂
∂r
ϕ(r, θ) dr
− � log(�2) ∂
∂r
ϕ(�, θ)
=
∫ ∞
�
∂2
∂r2
(r log r2)ϕ(r, θ) dr
− � log(�2) ∂
∂r
ϕ(�, θ)
+
∂
∂r
(r log r2)
∣∣∣
�
ϕ(�, θ).
(1.16)
Now
∂
∂r
(r log r2) = log r2 + 2,
∂2
∂r2
(r log r2) =
2
r
and
∂
∂r
(log r2) =
2
r
Gathering together the information in (1.14), (1.15) and (1.16) we obtain∫ 2π
0
∫ ∞
�
log r2
( ∂2
∂r2
+
1
r
∂
∂r
+
1
r2
∂2
∂θ2
)
ϕ(r, θ) rdrdθ
=
∫ 2π
0
∫ ∞
�
(
− 2
r
+
2
r
)
ϕ(r, θ) drdθ
+
∫ 2π
0
(− log �2 + log �2 + 2)ϕ(�, θ) dθ
+
∫ 2π
0
(−� log �2)∂ϕ
∂r
(�, θ) dθ.
10
Thus(1.17) 〈∆u, ϕ〉 = lim
�→0
2
∫ 2π
0
ϕ(�, θ) dθ −
∫ 2π
0
� log �2
∂ϕ
∂r
(�, θ) dθ.
Since ϕ is continuous ϕ(�, θ) → ϕ(0, θ) as � → 0 and so the first term in
(1.17) approaches to 4π〈δ, ϕ〉.
In the second term in (1.17),
∂ϕ
∂r
(�, θ) remains bounded while � log �2 → 0
as �→ 0. Hence
∆ log(x2 + y2) = 4πδ.
Therefore log(x2 + y2) is not a weak solution of ∆u = 0.
The previous computations allow us to solve the Poisson equation
(1.18) ∆u = f for any f.
A final remark.
Remark 1.27. It is clear that S′(Rn) ⊂ D′(Rn). What is not true is that any
distribution in D′(Rn) is a tempered distribution. For example the function
f(x) = ex
2
in R defines the distribution
〈f, ϕ〉 =
∫ ∞
−∞
ex
2
ϕ(x) dx.
Observe that e−x
2/2 ∈ S(R) and so we have
〈f, ϕ〉 =
∫ ∞
−∞
ex
2
e−x
2/2 dx =
∫ ∞
−∞
ex
2/2 = +∞
which does not define a tempered distribution.
Exercise 1.28. Prove
(i) If M ∈ C∞(Rn) and ∀α ∈ (Z+)n, there exist N, c such that
|∂αM | ≤ c(1 + |x|)N ,
then the map f → Mf is a continuous linear transformation of
S(Rn) into itself.
(ii) If in addition, there exist γ, c > 0 such that
|M(x)| ≥ c(1 + |x|)−γ ,
then the mapping is one-to-one and onto with continuous inverse.
Exercise 1.29. Verify that if f satisfies∫
|x|≤A
|f(x)| dx ≤ cAN as A→∞
for some constants c and N , then∫
Rn
|f(x)ϕ(x)| dx <∞ ∀ϕ ∈ S(Rn).
11
Therefore ∫
Rn
f(x)ϕ(x) dx
defines a tempered distribution.
References
[1] J. Rauch, Partial Differential Equations, Graduate Texts in Mathematics, 128.
Springer-Verlag, New York, 1991. x+263 pp.
Teoria Espectral
1. Unbounded Operators
These notes are intend to introduce the unbounded operators and
several notions and properties related to them. The notes are sketchy
and you might consult some additional textbooks.
– M. Reed and B. Simon, Methods of Modern Mathematical Physics,
Volumes 1, 2
– E. Hille, Methods in Classical and Functional Analysis
– T. Kato, Perturbation Theory
We will use the following notation. We will denote X, Y to be Banach
spaces. We will use B(z,R) to denote an open ball with center z and
radius R.
1.1. Closed operators.
Definition 1.1. A linear operator T : D(T ) ⊂ X → Y is closed
if and only if for all sequence {φn} ⊂ D(T ) such that
φn
X→ φ and Tφn
Y→ ψ
then
φ ∈ D(T ) and Tφ = ψ,
if and only if the graph
G(T ) = {(φ, Tφ) : φ ∈ D(T )}
is a closed set in X × Y .
Remark 1.2. A linear closed operator is the best we can have after a
linear continuous operator.
Example 1.3. The operator H0 defined by{
D(H0) = H
2(Rn)
H0f = −∆f
is a closed operator.
It is not difficult to show that H0 = F
−1M0F where{
D(M0) = {φ ∈ L2(Rn) : |ξ|2φ ∈ L2(Rn)}
M0φ = |ξ|2φ.
Affirmation: M0 is closed.
1
2
Indeed, let {φn} ⊂ D(M0) such that φn → φ in L2 and M0φn → ψ
in L2. Then there exists a subsequence {φnk} of {φn} such that{
φnk(x)→ φ(x)
|x|2φnk(x)→ ψ(x)
almost every x ∈ Rn.
This implies that |x|2φ(x) = ψ(x) a.e. Hence | · |2φ ∈ L2(Rn). Thus
φ ∈ D(M0) and ψ = M0φ. It follows that H0 is closed.
Exercise 1.4. If A : D(A) ⊂ X → Y is bounded, show that
A is closed ⇐⇒ D(A) is closed in X.
Exercise 1.5. Let{
T : D(T ) ⊂ X → Y be a closed operator,
A : D(A) ⊂ X → Y be a bounded operator and D(T ) ⊂ D(A).
Show that T + A : D(T ) ⊂ X → Y is a closed operator and
(T + A)φ = Tφ+ Aφ.
Remark 1.6. The perturbation of a closed operator by a bounded op-
erator is a closed operator.
Definition 1.7. Let T : D(T ) ⊂ X → Y and S : D(S) ⊂ X → Y be
linear operators. The sum of T and S is given by{
D(T + S) = D(T ) ∩D(S)
(T + S)φ = Tφ+ Sφ ∀φ ∈ D(T + S).
Definition 1.8.
(1) Let T : D(T ) ⊂ X → Y be a linear operator. The kernel of
the operator T is defined by
N(T ) = ker T = {φ ∈ D(T ) : Tφ = 0} which a subspace of D(T ).
The image of the operator T is defined by
Im(T ) = R(T ) = {Tφ : φ ∈ D(T )} which a subspace of Y.
(2) Let T : D(T ) ⊂ X → Y be an injective linear operator, we
define T−1 by{
D(T−1) = R(T )
T−1Tφ = φ, ∀φ ∈ D(T ).
3
Thus T−1 : R(T ) ⊂ Y → X.
(3) If T : D(T ) ⊂ X → Y , S : D(S) ⊂ Y → Z are two linear
operators, we define S ◦ T by{
D(S ◦ T ) = {φ ∈ D(T ) : Tφ ∈ D(S)}
S ◦ T (φ) = S(Tφ).
Some remarks on the graph of a linear operator T : D(T ) ⊂ X → Y .
(1) T is closed ⇐⇒ G(T ) is closed.
(2) G(T ) closed ; D(T ) is closed.
Example 1.9. H0 is a closed linear operator but D(H0) = H
2(Rn) is
not closed in L2(Rn). Since H2(Rn) = L2(Rn) this would imply that
H2(Rn) = L2(Rn) which is false.
Theorem 1.10 (Closed Graph Theorem). Let X, Y be Banach spaces.
If T : X → Y is a closed linear operator, then T ∈ B(X, Y ).
Remark 1.11. Note that the operator T is required to be everywhere-
defined, i.e., the domain D(T ) of T is X.
Example 1.12. If T : D(T ) ⊂ X → Y is a closed operator and
S : X → X is a bounded operator. R(S) = ImS ⊂ D(T ). Then
T ◦ S ∈ B(X, Y ).
T ◦ S is closed. Let {φn} ⊂ X = D(T ◦ S) such that{
φn
X→ φ
(T ◦ S)φn
Y→ ψ
Since S is continuous we have that{
Sφn
X→ Sφ
T (Sφn)
Y→ ψ.
On the other hand, since T is closed Sφ ∈ D(T ) and ψ = T ◦ Sφ.
This implies that T ◦ S is closed. Thus T ◦ S : X → Y is closed.
Therefore the Closed Graph Theorem implies T ◦ S ∈ B(X, Y ).
Exercise 1.13. Let T : D(T ) ⊂ X → Y be a linear operator. If T is
closed and injective, show that T−1 is closed.
4
1.2. Closure of an operator. Closable operators.
Definition 1.14. Let A : D(A) ⊂ X → Y and B : D(B) ⊂ X → Y be
two linear operators. We say that B extends A if and only if
D(A) ⊆ D(B)
Bφ = Aφ, ∀φ ∈ D(A).
We use the following notation A ⊆ B or B
∣∣
D(A)
= A.
Example 1.15. Define the operator
Ḣ0 : S(Rn) ⊆ L2(R2)→ L2(Rn)
f 7→ −∆f.
It is clear that Ḣ0 ⊆ H0.
Definition 1.16. The linear operator T : D(T ) ⊂ X → Y is closable
if and only if there exists a closed linear operator S with T ⊆ S. That
is, there exists a closed extension of T .
Lemma 1.17. Let M be a subspace of X × Y , then M is the graph of
a linear operator if and only if M does not contain points of the form
(0, v), v 6= 0.
Proof. Exercise. �
Proposition 1.18. Let T : D(T ) ⊂ X → Y be a linear operator. The
following affirmations are equivalent:
(i) T is closable.
(ii) G(T ) is the graph of a linear operator (closed).
(iii) If {φn} ⊆ D(T ) such that φn
X→ 0 and Tφn
Y→ v, then v ≡ 0.
Proof.
(i) =⇒ (ii) Let T : D(T ) ⊂ X → Y be a closable operator, then
there exists S : D(S) ⊂ X → Y closed such that S ⊆ T , that is,
G(T ) ⊂ G(S). This implies that
G(T ) ⊂ G(S) = G(S)
does not contain points (0, v), v 6= 0 by Lemma 1.17. Therefore G(T )
is the graph of a linear operator which is closed since G(T ) is closed.
5
(ii) =⇒ (iii) If {φn} ⊆ D(T ) is such that φn
X→ 0 and Tφn
Y→ v,
then
(φn, Tφn)︸ ︷︷ ︸
∈ G(T )
X×Y→ (0, v)︸ ︷︷ ︸
∈ G(T )
This implies that v ≡ 0 since G(T ) is the graph of a linear operator.
(iii) =⇒ (ii) If M = (0, v) ∈ G(T ), then v = 0 which implies that
G(T ) is the graph of a linear operator.
(ii) =⇒ (i) Let S : D(S) ⊆ X → Y be a closed linear operator such
that G(S) = G(T ). Hence
G(T ) ⊆ G(T ) = G(S)
implies that T ⊆ S is closed and thus T is closable. �
Definition 1.19. If T is a closable operator, the operator T defined by
G(T ) = G(T ) is called the closure of T .
Exercise 1.20. If T : D(T ) ⊂ X → Y is closable, show that
D(T ) = {φ ∈ X : φj∈D(T )
X→ φ and {Tφj} is a Cauchy sequence in Y }.
Example 1.21 (A no closable operator). Let X = Y = L2([0, 1]), and
φ ∈ X different from 0. Let
T : D(T ) = C0([0, 1]) ⊆ L2([0, 1])→ L2([0, 1])
f 7→ f(1)φ.
Then T is not closable.
Indeed, suppose that T is closable. Let fj(x) = x
j, then Tfj = φ for
all j ∈ N.
On the other hand,
‖fj‖L2 =
(∫ 1
0
x2j dx
)1/2
=
( 1
2j + 1
)1/2
→
j→∞
0
Since T is closable then φ ≡ 0 which is a contradiction.
We will see that all differential operator is closable.
Definition 1.22. Let T be a closed operator, a subspace N ⊂ D(T ) is
a core if and only if T
∣∣
N
= T , that is, if it is possible to recover T
from N.
Exercise 1.23. Show that C∞0 (Rn) and S(Rn) are core of H0.
6
1.3. Resolvent, spectrum of an operator.
Definition 1.24. Let T : D(T ) ⊆ X → X a linearoperator. The
resolvent set of T denoted by ρ(T ) is defined by
ρ(T ) = {z ∈ C : (T − z)−1 exists and (T − z)−1 ∈ B(X)}.
Remark 1.25. If T is a closed operator we have that
z ∈ ρ(T ) ⇐⇒
{
T − z : D(T ) ⊆ X → X is injective
T − z : D(T ) ⊆ X → X is surjective
⇐⇒ For all ψ ∈ X, there exists a unique φ ∈ D(T )
such that (T − z)φ = ψ.
Indeed,
=⇒ is easy.
⇐= If T −z is 1−1 and surjective, then (T −z)−1 : X → X is closed
(exercise). Then applying the closed graph Theorem (T −z)−1 ∈ B(X).
Definition 1.26. The spectrum of a linear operator T is the set
σ(T ) = C\ρ(T ).
The set of the eigenvalues of T is given by
ev(T ) = {z ∈ C : T − z is not 1− 1},
i.e.
ev(T ) = {z ∈ C : N(T − z) 6= {0}}.
Remark 1.27. We observe that ev(T ) ⊆ σ(T ), but in general the
inclusion is strict.
Example 1.28. Consider the following operator
T : `1(N)→ `1(N)
{xj} = (x0, x1, x2, . . . ) 7→ (0, x0, x1, . . . ).
Notice that T is 1 − 1 but T is not surjective. This in particular
implies that
ev(T ) ( σ(T )
since 0 /∈ ev(T ) and 0 ∈ σ(T ).
Remark 1.29. There are two possible reasons for z ∈ σ(T ).
(i) T − z is not 1− 1.
(ii) (T − z)−1 is not defined in the whole X.
7
Definition 1.30. If z ∈ ρ(T ) we define the resolvent operator by
RT (z) = (T − z)−1.
Remark 1.31. We observe that
(T − z)RT (z)φ = φ, ∀φ ∈ X
RT (z)(T − z)ψ = ψ, ∀ψ ∈ D(T ).
Exercise 1.32 (An operator without eigenvalues).
Let D(M) = L2([−π, π]) = L2per.
M : D(M)→ L2per
f 7→Mf(x) = x f(x) a.e. x ∈ [−π, π].
Prove that
(i) M ∈ B(L2([−π, π]));
(ii) Mφ = λφ =⇒ φ = 0;
(iii) σ(M) = [−π, π].
Exercise 1.33 (Spectrum of H0 and M0). We recall that M0 = F
−1 ◦
H0 ◦ F. Show that
(i) H0 and M0 do not have eigenvalues;
(ii) σ(H0) = σ(M0) = R+ = [0,∞).
Remark 1.34. Two linear operators unitarily equivalent have the same
spectrum.
Exercise 1.35. Consider the operators Aj, j = 0, 1, 2, defined by
D(A0) = H
1([−π, π]),
D(A1) = {φ ∈ D(A0) /φ(−π) = φ(π)},
D(A2) = {φ ∈ D(A1) / φ(−π) = φ(π) = 0},
and
Aj =
1
i
d
dx
, j = 0, 1, 2.
(i) Prove that Aj is closed for j = 0, 1, 2.
(ii) Show that σ(A0) = σ(A2) = C and σ(A1) = Z.
Exercise 1.36 (Operator with empty spectrum). We Define A± by
D(A±) = {φ ∈ D(A0) : φ(±π) = 0},
A±φ = A0φ =
1
i
φ′.
Show that σ(A±) = ∅.
8
Next we recall the following property of the spectrum for bounded
operator.
Proposition 1.37. If A ∈ B(X), then the spectrum σ(A) 6= 0 and
σ(A) is a compact in C.
In the case of unbounded operators we only know that σ(T ) is closed!
As a consequence we need the next properties:
Theorem 1.38 (First equation of the resolvent). Let T : D(T ) ⊆ X →
X be a closed linear operator. Suppose that z, z′ ∈ ρ(T ), then
RT (z)−RT (z′) = (z − z′)RT (z) ◦RT (z′).
Proof. We have that
(T − z′)− (T − z) = z − z′.
So applying RT (z) in the above identity, we obtain
RT (z) ◦ (T − z′)− IdD(T ) = (z − z′)RT (z).
Now applying RT (z
′) on the right, we get the desired equality
RT (z)−RT (z′) = (z − z′)RT (z) ◦RT (z′).
�
Corollary 1.39. It holds that
RT (z) ◦RT (z′) = RT (z) ◦RT (z′).
Proof. In fact, using
RT (z)−RT (z′) = (z − z′)RT (z) ◦RT (z′)
and
RT (z
′)−RT (z) = (z′ − z)RT (z′) ◦RT (z)
the result follows. �
Theorem 1.40 (Neumann series). Let X be a Banach space and A ∈
B(X) such that ‖A‖ < 1, then Id− A is invertible and
(1.1) (Id− A)−1 =
∞∑
j=0
Aj.
In addition, it holds that
(1.2) ‖(Id− A)−1‖ ≤ 1
1− ‖A‖
.
9
Proof. Let B =
∞∑
j=0
Aj.
Since
∞∑
j=0
‖Aj‖ ≤
∞∑
j=0
‖A‖j <∞,
we deduce that the series B is convergent in norm in B(X) which
implies that B =
∞∑
j=0
Aj ∈ B(X) and for all n ∈ N we have that
(Id− A)
n∑
j=0
Aj =
n∑
j=0
Aj −
n+1∑
j=1
Aj. = Id− An+1
Making n→∞ we deduce that (Id− A)B = Id.
Similarly we prove that B(Id− A) = Id.
Thus B = (Id− A)−1 and
‖(Id− A)−1‖ = ‖
∞∑
j=0
Aj‖ ≤
n∑
j=0
‖A‖j = 1
1− ‖A‖
.
�
Corollary 1.41. If T ∈ G(X) = {A ∈ B(X);A is invertible, A−1B(X)}.
Then
B(T,
1
‖T−1‖
) ⊂ G(X).
In particular, this implies that G(X) is open. In other words, for all
S ∈ B(X) such that ‖S‖ ≤ 1‖T−1‖ we have that T + S ∈ G(X).
Moreover,
‖(T + S)−1‖ ≤ ‖T
−1‖
1− ‖S‖‖T−1‖
.
Proof. We first notice that
T + S = T ◦ (Id+ T−1 ◦ S)
and
‖T−1 ◦ S‖ ≤ ‖S‖‖T−1‖ < 1.
This implies that (Id+ T−1S) ∈ G(X). Hence
T + S = T ◦ (Id+ T−1 ◦ S) ∈ G(X). (In particular, G(X) is a group).
In addition,
(T + S)−1 = (Id+ T−1S)−1 ◦ T−1
10
which implies that
‖(T + S)−1‖ ≤ ‖(Id+ T−1S)−1‖‖T−1‖
≤ ‖T
−1‖
1− ‖T−1 ◦ S‖
≤ ‖T
−1‖
1− ‖S‖‖T−1‖
.
�
Theorem 1.42. Let T : D(T ) ⊆ X → X be a linear operator not
necessarily closed. Then ρ(T ) is open in C and for all ζ ∈ ρ(T ) and
z ∈ B(ζ, ‖RT (ζ)‖−1) we have z ∈ ρ(T ) and
RT (z) = RT (ζ)
∑
j=0
RT (ζ)
j(z−ζ)j, ∀z ∈ C such that |z−ζ| < ‖RT (ζ)‖−1.
Proof. (Idea of the proof) If we know that z and ζ are in ρ(T ) then the
first equation of the resolvent would imply that
RT (z)−RT (ζ) = (z − ζ)RT (z)RT (ζ)
or
RT (z)(Id− (z − ζ)RT (ζ)) = RT (ζ).
We can see that
(Id− (z − ζ)RT (ζ)) ∈ G(X) whenever |z − ζ| < ‖RT (ζ)−1‖.
Thus, in this case, it holds that
RT (z) = RT (ζ) ◦ (Id− (zζ)RT (ζ))−1
= RT (ζ) ◦
∞∑
j=0
(z − ζ)jRT (ζ)j.
To prove the theorem we let ζ ∈ ρ(T ) and z ∈ B(ζ, ‖RT (ζ)‖−1).
Define
F (z) = RT (ζ) ◦
∞∑
j=0
(z − ζ)jRT (ζ)j.
We notice that F (z) ∈ B(X) since the series
∞∑
j=0
(z−ζ)jRT (ζ)j converges
in norm.
We will show then that F (z) = RT (z).
11
For all φ ∈ X, we have
(T − z)F (z)φ = (T − ζ) ◦ F (z)φ+ (ζ − z)F (z)φ
=
{ ∞∑
j=0
(z − ζ)jRT (ζ)j −
∞∑
j=0
(z − ζ)j+1RT (ζ)j+1
}
φ
= φ
Thus
(1.3) (T − z) ◦ F (z) = Id.
Similarly, we get that F (z)◦(T −z) = Id. This implies that z ∈ ρ(z)
and thus
RT (z) = F (z) = RT (ζ) ◦
∞∑
j=0
(z − ζ)jRT (ζ)j.
�
Remark 1.43. Theorem 1.42 tell us that if T : D(T ) ⊂ X → X is
closed, then the map
RT : ρ(T ) ⊂ C→ B(X)
z 7→ RT (z)
is a holomorphic function.
Notice that there are several notions to define a holomorphic function
G : Θ(open) ⊂ C→ B(X).
(i) G(z) has a power series expansion in terms of each z0 ∈ Θ;
(ii) z 7→ G(z)φ is holomorphic for all φ ∈ X;
(iii) z ∈ Θ 7→ 〈ψ,G(z)φ〉 is holomorphic for all ψ ∈ X∗ and for all
φ ∈ X (G is weakly holomorphic).
In a Hilbert space, these three notions are equivalent.
Teoria Espectral
1. Adjoint. Symmetric and Self-Adjoint Operators
We assume that H is a Hilbert space. In this notes we will define
self-adjoint unbounded operators and study its properties.
Definition 1.1. Let A : D(A) ⊂ H → H be a linear operator densely
defined (d.d.) i.e. D(A) = H.
We define the adjoint A∗ of the operator A by{
D(A∗) = {η ∈ H : ∃ψ ∈ H such that (Aφ, η) = (φ, ψ) ∀φ ∈ D(A)},
A∗η = ψ.
Remarks 1.2.
(1) A∗ is well defined.
If there exists ψ̃ ∈ H such that
(Aφ, η) = (φ, ψ) = (φ, ψ̃)
Then
(φ, ψ − ψ̃) = 0 for all φ ∈ D(A)
which implies that ψ ≡ ψ̃ since D(A) is dense in H.
(2) We have that
(1.1) D(A∗) = {η ∈ H : Lη : φ ∈ D(A) 7→ (Aφ, η) is continuous}.
Indeed, we can extend Lη : D(A) = H → C continuous. i.e.
Lη ∈ H∗ implies there exists ψ ∈ H such that
Lη(φ) = (φ, ψ) ∀φ ∈ H
by the Riesz Theorem.
In particular, it holds that
(Aφ, η) = (φ, ψ) ∀φ ∈ D(A).
(3) It holds that
(1.2) (Aφ, η) = (φ,A∗η), ∀φ ∈ D(A), ∀η ∈ D(A∗).
Verify that A∗ : D(A∗ ⊂ H→ H defines a linear operator.
1
2
Exercise 1.3. If A ∈ B(H), show that A∗ ∈ B(H) and it holds that
(Af, g) = (f, A∗g) ∀f, g ∈ H
and
‖A‖ = ‖A∗‖.
Properties. Let A : D(A) ⊆ H → H and B : D(B) ⊆ H → H be
two linear operators d.d. Then it holds that
(i) A∗ is closed.
(ii) (λA)∗ = λ̄A∗ for all λ ∈ C.
(iii) A ⊆ B implies that B∗ ⊆ A∗.
(iv) A∗ +B∗ ⊆ (A+B)∗.
(v) B∗A∗ ⊆ (AB)∗.
(vi) A ⊆ A∗∗ where A∗∗ = (A∗)∗.
(vii) (A+ λ)∗ = A∗ + λ̄.
Theorem 1.4. Let A : D(A) ⊆ H→ H a linear operator d.d., then it
holds that
(i) A∗ is closed;
(ii) A is closable if and only if A∗ is d.d. in this case Ā = A∗∗;
(iii) If A is closable, then (Ā)∗ = A∗.
Proof. To prove this theorem we will need some definitions and lemmas.
We start by considering the Hilbert space H×H equipped with the
inner product
〈(φ1, ψ1), (φ2, ψ2)〉 = (φ1, φ2)H + (ψ1, ψ2)H.
We define the operator
V : H ×H→ H ×H
(φ, ψ) 7→ V (φ, ψ) = (−ψ, φ).Notice that V is a unitary operator. In fact,
〈V (φ1, ψ1), V (φ2, ψ2)〉 = 〈(−ψ1, φ1), (−ψ2, φ2)〉
= (ψ1, ψ2) + (φ1, φ2)
= 〈(φ1, ψ1), (φ2, ψ2)〉.
Lemma 1.5. If V : H→ H is a unitary operator, then
V (E⊥) = V (E)⊥, ∀E ⊆ H,
where E⊥ denotes the orthogonal set to E which is defined as
E⊥ = {φ ∈ H : (φ, η) = 0, ∀η ∈ E}.
3
Proof. Let x ∈ E⊥ and y = V (e) ∈ V (E), then
〈V (x), y〉 = 〈V (x), V (e)〉 = 〈x, e〉 = 0.
Thus V (x) ∈ V (E)⊥ and so V (E⊥) ⊂ V (E)⊥.
Reciprocally, let y ∈ V (E)⊥, since V is a bijection it implies there
exists a unique x ∈ H such that y = V (x).
If for all e ∈ E,
〈x, e〉 = 〈V (x), V (e)〉 = 0
this implies that x ∈ E⊥ and so y ∈ V (E⊥). Thus V (E)⊥ ⊂ V (E⊥).
This concludes the proof of the lemma. �
Proof of (i) We denote by
G(A) = {(x,Ax) : x ∈ D(A)} ⊆ H ×H.
the graph of the operator A.
We can see that
(φ, η) ∈ V (G(A))⊥ ⇐⇒ 〈(φ, η), (−Aψ,ψ)〉 = 0 ∀ψ ∈ D(A)
⇐⇒ −(φ,Aψ) + (η, ψ) = 0 ∀ψ ∈ D(A)
⇐⇒ (Aψ) = (η, ψ) ∀ψ ∈ D(A)
⇐⇒ (Aψ, φ) = (ψ, η) ∀ψ ∈ D(A)
⇐⇒ (φ, η) ∈ G(A∗).
This shows that G(A∗) = {V (G(A))}⊥ is closed. (since the orthogo-
nal of a set is always a closed subspace). Thus A∗ is a closed operator.
Lemma 1.6. If E ⊆ H, then E = (E⊥)⊥.
Proof. If e ∈ E and x ∈ E⊥, then (e, x) = 0 and so e ∈ (E⊥)⊥. Hence
E ⊂ (E⊥)⊥. Therefore E ⊂ (E⊥)⊥.
Now we know that H = E⊕ (E)⊥ because of the orthogonal projec-
tion Theorem.
On the other hand, y ∈ (E)⊥ implies that (y, x) = 0 for all x ∈ E
and this means that y ∈ E⊥. Similarly, y ∈ E⊥ implies that (y, x) = 0
for all x ∈ E and so we have (y, x)=0 for all x ∈ (E)⊥. We deduce
from this that (E)⊥ = E⊥.
Thus
H = E ⊕ E⊥ = E⊥ ⊕ (E⊥)⊥.
We conclude that E = (E⊥)⊥. �
4
Remark 1.7. From the orthogonal projection Theorem we deduce that
for any F ⊂ H closed and for all x ∈ H, there exists a unique y =
PF (x) ∈ F such that (x−PF (x), z) = 0 for all z ∈ F , then x−PF (x) ∈
F⊥, where PF (x) denotes the projection on F at x.
Since G(A) ⊂ H×H, we have from Lemma 1.6, Lemma 1.5 and the
fact that V is unitary that
G(A) = {G(A)⊥}⊥ = {V 2(G(A)⊥)}⊥
= {V [V (G(A)⊥)]}⊥ = {V [G(A∗)]}⊥.
Thus,
(1.3) G(A) = {V [G(A∗)]}⊥
and
(1.4) G(A∗) = {V [G(A)]}⊥
This means that A∗ d.d. implies that A is closable.
Proof of (ii). If A∗ is d.d then we deduce from (1.4) and (1.3) that
{V [G(A∗)]}⊥ = G(A∗∗) = G(A).
Then A is closable and A = A∗∗.
Reciprocally, if D(A∗) were not dense in H, let ψ ∈ D(A∗)⊥, ψ 6= 0,
then (ψ, 0) ∈ G(A∗)⊥ which implies that (0, ψ) ∈ {V G(A∗)}⊥ = G(A)
but by Lemma 1.17 in [4] we have that G(A) is not the graph of any
linear operator, that is, A is not a closable operator. This implies (ii).
Proof of (iii). Let A be a closable operator, since A∗ is closed and (ii)
holds we deduce that
A∗ = A∗ = A∗∗∗ = (A∗∗)∗ = (A)∗.
This completes the proof of Theorem 1.4. �
Example 1.8. We consider once again the operator A defined by{
D(A) = C0([0, 1]) ⊂ L2[(0, 1)]→ L2([0, 1])
Af = φ f(1) where φ ∈ L2([0, 1]), φ 6= 0.
In Example 1.21 in [4] we saw that the operator A is not closable.
We shall show now that A∗ is not densely defined.
Let η ∈ H = L2([0, 1]), then
(Af, η) =
∫ 1
0
f(1)φ(x)η(x) dx
5
Observe that the map
f ∈ C([0, 1]) ⊂ L2([0, 1]) 7→ f(1) ∈ C
is not continuous in the L2 topology. (To see this, take for instance
fn(x) = x
n. It is easy to check that fn
L2→ 0 and fn(1) = 1.) As
a consequence the only choice in order to have the map f 7→ (Af, η)
continuous is taking η ∈ L2([0, 1]) such that∫ 1
0
φ(x)η(x) dx = 0.
But this implies that {
D(A∗) = {φ}⊥
A∗η = 0.
In particular {φ}⊥ is not dense in L2([0, 1]).
1.1. Application. Differentiable operators are closable. Con-
sider a differentiable operator of order m,
P (x,D) =
∑
|α|≤m
α∈Nn
aα(x)D
α
x
where aα ∈ C∞(Ω) and Ω ⊆ Rn is an open set.
We can define Pmin by{
D(Pmin) = C
∞
0 (Ω) ⊆ L2(Ω)
Pminφ = P (x,D)φ.
Taking φ, ψ ∈ C∞0 (Ω), then
(Pminφ, ψ)0 = (
∑
|α|≤m
aα(x)∂
α
xφ, ψ)0
=
∫
Ω
∑
|α|≤m
aα(x)∂
α
xφ(x)ψ̄(x) dx
=
∑
|α|≤m
∫
Ω
aα(x)∂
α
xφ(x)ψ̄(x) dx
=
∑
|α|≤m
(−1)|α|
∫
Ω
φ(x)∂αx (aα(x)ψ(x)) dx.
This gives us
(Pminφ, ψ)0 = (φ, ζ)0
6
where ζ =
∑
|α|≤m
(−1)|α|∂αx (aα ψ). Hence defining P ∗min asD(P
∗
min) = C
∞
0 (Ω),
P ∗minψ =
∑
|α|≤m
(−1)|α|∂αx (aα ψ).
This implies that P ∗min is densely defined and from Theorem 1.4 it
follows that Pmin is a closable operator.
1.2. Symmetric and Self-Adjoint Operators.
Definition 1.9. Let A : D(A) ⊂ H→ H be a linear operator.
(i) We say that A is a symmetric operator if
(Aφ, ψ) = (φ,Aψ), for all φ, ψ ∈ D(A).
This is equivalent to say that A ⊆ A∗.
(ii) We say that A is a self-adoint operator if A = A∗.
Exercise 1.10. We say that a linear operator A is a maximal sym-
metric operator if
A ⊆ A∗
and
A ⊆ B, B ⊆ A∗ then A = B.
Prove that if A = A∗ then A is maximal symmetric.
Example 1.11. The operator H0 is symmetric, i.e. H0 ⊆ H∗0 . We
recall that {
D(H0) = H
2(Rn) ⊂ L2(Rn)
H0f = −∆f
For all f, g ∈ C∞0 (Rn),
(H0f, g)0 = −
∫
Rn
∆f(x) g(x) dx
= −
n∑
j=1
∫
Rn
∂2xjf(x) g(x) dx
=
n∑
j=1
∫
Rn
f(x) ∂2xjg(x) dx.
7
Then
(H0f, g)0 = (f,H0)0 ∀f, g ∈ S(Rn).
Now let f, g ∈ H2(Rn), there exist {fj}, {gj} ⊆ S(Rn) such that
fj
H2→ f and gj
H2→ g.
Notice that C∞0 (Rn)
H2
= L2(Rn) (prove it!). Thus for any j ∈ N it
holds that
(H0fj, gj)0 = (fj, H0gj)0
and making j →∞ it follows that
(H0f, g)0 = (f,H0g)0.
From the last identity we deduce that H2(R2) = D(H0) ⊆ D(H∗0 ) and
H0 ⊆ H∗0 .
Is H0 a self-adjoint operator?
We need the following definition
Definition 1.12. Let k ∈ N, 1 ≤ p ≤ ∞. Given a domain Ω ⊂ Rn,
the Sobolev space W k,p(Ω) is defined as,
W k,p(Ω) = {u ∈ Lp(Ω) : Dαu ∈ Lp(Ω), ∀|α| ≤ k}.
We equipped the Sobolev space with the norm
‖u‖Wk,p(Ω) =

( ∑
|α|≤k
‖Dαu‖pLp(Ω)
) 1
p
, 1 ≤ p <∞
max
|α|≤k
‖Dαu‖pLp(Ω), p =∞.
It is usual to denote W k,2(Ω) by Hk(Ω) for it is a Hilbert space with
the norm W k,2(Ω).
Example 1.13. Consider the operators Aj =
1
i
d
dx
, j = 0, 1, 2 with
D(A0) = H
1([−π, π]) ⊆ L2([−π, π]);
D(A1) = {φ ∈ H1([−π, π]) : φ(−π) = φ(π)};
D(A2) = {φ ∈ H1([−π, π]) : φ(−π) = φ(π) = 0}.
We will see that
(i) A1 and A2 are symmetric operators.
(ii) A0 is not a symmetric operator.
(iii) A2 = A
∗
0 (⊆ A0) this implies that A∗2 = A∗∗0 = A0 = A0 but
A0 ! A2 and so these operators are not self-adjoints.
8
(iv) A∗1 = A1.
Proof of (i). Let φ, ψ ∈ D(A0) then
(A0 φ, ψ) = (
1
i
φ′, ψ)L2 =
1
i
∫ π
−π
φ′(x)ψ(x) dx
=
1
i
φ(x)ψ(x)
∣∣π
−π −
1
i
∫ π
−π
φ(x)ψ′(x) dx
=
1
i
[
φ(π)ψ(π)− φ(−π)ψ(−π)
]
− 1
i
∫ π
−π
φ(x)ψ′(x) dx
This holds for absolutely continuous functions. Then
(1.5) (A0 φ, ψ)=
1
i
[
φ(π)ψ(π)− φ(−π)ψ(−π)
]
+(φ,A0 ψ)L2
for all φ, ψ ∈ H1([−π, π]).
Therefore if φ, ψ ∈ D(Aj), j = 1, 2, we deduce that
(Aj φ, ψ)L2 = (φ,Ajψ)L2 j = 1, 2,
which implies that A1 ⊆ A∗1 and A2 ⊆ A∗2.
Proof of (ii). In addition, there exist φ, ψ ∈ H1([−π, π]) such that
φ(π)ψ(π) 6= φ(−π)ψ(−π)
it yields
(A0 φ, ψ)L2 6= (φ,A0 ψ)
which implies that A0 ( A∗0.
Proof of (iii). Now let φ ∈ D(A0) and ψ ∈ D(A2), then from (1.5) it
follows that
(A0φ, ψ) = (φ,A2ψ)
which implies that A2 ⊆ A∗0.
To prove that A∗0 ⊆ A2 it is enough to verify D(A∗0) ⊆ D(A2).
Claim. If η ∈ D(A∗0), then
(1.6)
∫ π
−π
A∗0η(y) dy = 0.
Indeed, since 1 ∈ H1([−π, π]), it follows that∫ π
−π
A∗0η(y) dy = (1, A
∗
0η) = (A0(1), η) = 0.
Let η ∈ D(A∗0), we define w(x) = i
∫ x
−π
A∗0η(y) dy. Then w(−π) =
w(π) = 0 and so w ∈ D(A2) (prove it!).
9
Moreover,
A2w =
1
i
w′(x) = A∗0η(x)
But since A2 ⊆ A∗0 we have A∗0w = A2w.Thus
0 = (φ,A∗0(w − η))
= (A0φ,w − η) ∀φ ∈ D(A0).
(1.7)
This implies that w = η ∈ D(A2) and so D(A∗0) ⊆ D(A2).
To deduce the last affimation we have used that
C∞0 ([−π, π]) ⊆ (R(A0)) = L2([−π, π])
then taking u ∈ C∞0 ([−π, π]) we have that v(x) =
∫ x
−π
u(y) dy ∈ D(A0)
and A0v = u.
Proof of (iv). Notice that A2 = A
∗
0 ⊆ A1 ⊆ A0. Then A∗0 ⊆ A∗1 ⊆
A∗∗0 = A0 = A0. Thus for all ψ ∈ D(A∗1),
A∗1ψ =
1
i
ψ′.
From (ii) we already know that A1 ⊆ A∗1, we need to show now that
D(A∗1) ⊆ D(A1).
By the identity (1.5) it follows that for all φ ∈ D(A1), and for all
ψ ∈ D(A∗1),
(A1φ, ψ)L2 = (φ,A
∗
1ψ)since A∗1 ⊆ A0 and
φ(π)ψ(π)− φ(−π)ψ(−π) = φ(π)(ψ(π)− ψ(−π)).
This implies that
φ(π)(ψ(π)− ψ(−π)) = 0.
Choosing φ ≡ 1 ∈ D(A1) it follows that ψ(π) = ψ(−π), that is,
ψ ∈ D(A1 that concludes the proof.
Remark 1.14. From this example we can see that it is not easy at
all to establish when a linear symmetric operator is self-adjoint just
by using the definition. In what follow we will establish a criteria to
determine when a symmetric operator is self-adjoint.
10
1.3. Basic Criteria. The next result is an effective tool to determine
when a symmetric operator is sefl-adjoint.
Theorem 1.15. Let A : D(A) ⊂ H → H a linear operator d.d. such
that A ⊆ A∗, then the following assertions are equivalent:
(i) A = A∗;
(ii) A is closed and Ker(A∗ ± i) = {0};
(iii) R(A± i) = H.
Remarks 1.16.
(1) The linear operator A does have any special feature, i.e. the
criteria holds for ±λi, λ > 0.
(2) To prove (iii) =⇒ (ii) it is necessary to have R(A + i) = H
and R(A− i) = H.
Proof.
(i) =⇒ (ii). If A = A∗, then A is closed.
Let now φ ∈ D(A∗) = D(A) such that A∗φ = i φ. Then
i‖φ‖2 = (iφ, φ) = (A∗φ, φ) = (Aφ, φ)
= (φ,A∗φ) = (φ,−iφ) = −i‖φ‖2.
This implies that φ = 0, that is, Ker(A∗ − i) = {0}.
Similarly, we will have Ker(A∗ + i) = {0}.
(ii) =⇒ (iii). We will follow the following strategy:
(1) We first prove that R(A± i)⊥ = {0}.
(2) Then we show that R(A± i) = {0} is closed.
Thus we can conclude that R(A± i) = H. The latter follows from the
orthogonal projection Theorem.
Affirmation. Let B : D(B) ⊂ H→ H be a linear operator d.d., then
R(B)⊥ = KerB∗.
Proof of the Affirmation. (⊆) Let φ ∈ R(B)⊥, then for all ψ ∈ D(B)
we have that (Bψ, φ) = 0. It follows that φ ∈ D(B∗) and B∗φ = 0
which implies that φ ∈ KerB∗.
(⊇) Let φ ∈ KerB∗ then for all ψ ∈ D(B)
(Bψ, φ) = (ψ,B∗φ) = 0
we conclude that φ ∈ R(B)⊥. This completes the proof of the affirma-
tion. �
11
Now we suppose (ii), we deduce from the affirmation that
R(A± i)⊥ = Ker(A± i)∗ = Ker(A∗ ∓ i) = {0}
where we use property (vii) in page 2. The identity above gives us (1).
Next we shall show (2), i.e. R(A± i) is closed.
Let {fj} ⊂ R(A± i) such that fj
H→ f . For all j there exist φj such
that fj = (A± i)φj.
On the other hand, for all φ ∈ D(A) we have that
‖(A± i)φ‖2 =
(
(A± i)φ, (A± i)φ
)
= (Aφ,Aφ)± i(φ,Aφ)∓ i(φ,Aφ)− i2(φ, φ).
Using that A ⊆ A∗ we conclude that
(1.8) ‖(A± i)φ‖2 = ‖Aφ‖2 + ‖φ‖2.
From (1.8) we have that
(1.9) ‖fj − fl‖2 = ‖A(φj − φl)‖2 + ‖φj − φl‖2 ∀j, l.
Then we deduce from (1.9) that‖φj − φl‖ ≤ ‖fj − fl‖ →j,l→∞ 0,‖A(φj − φl)‖ ≤ ‖fj − fl‖ →
j,l→∞
0.
Thus {φj} and {Aφj} are Cauchy sequences in H. Since H is complete
it follows that there exist φ, ψ ∈ H such that{
φj
H→ φ
Aφj
H→ ψ
as j →∞.
Since A is closed it follows that φ ∈ D(A) and ψ = Aφ.
Thus
(A± i)φj →
j→∞
(A± i)φ ∈ R(A± i).
Therefore R(A± i) is closed.
(iii) =⇒ (i). We already know that A ⊆ A∗. Left to show that
D(A∗) ⊆ D(A).
Let f ∈ D(A∗), by hypothesis R(A − i) = H. Hence there exists
φ ∈ D(A) such that
(A∗ − i)f = (A− i)φ =
A⊆A∗
(A∗ − i)φ.
From this we conclude that
f − φ ∈ Ker(A∗ − i) = Ker(A+ i)∗ = R(A+ i)⊥ = {0}
12
Thus f = φ ∈ D(A). Above we used the affirmation and the fact that
R(A+ i) = H. �
Corollary 1.17 (Spectrum of a self-adjoint operator). If A = A∗, then
σ(A) ⊆ R.
Proof. The basic criteria implies that for all λ > 0,{
Ker(A± iλ) = {0}
R(A± iλ) = H.
From this we conclude that ±iλ ∈ ρ(A) for all λ > 0. If we denote
R∗ = R\{0}, then iR∗ ⊆ ρ(A).
Moreover, for all η > 0 we obtain
(A+ η)∗ = A∗ + η = A∗ + η.
From this it follows then that{
Ker(A+ η ± iλ) = {0},
R(A+ η ± iλ) = H.
Hence η ± iλ ∈ ρ(A), for all η ∈ R and λ > 0. Therefore C\R ⊂ ρ(A).
In other words, σ(A) ⊂ R. �
Definition 1.18. A linear symmetric operator (A ⊆ A∗) is called an
essentially self-adjoint operator, if and only if, A is self-adjoint.
That is, A
∗
= A∗ = A.
We can state another version of the Basic Criteria with the same
proof as follows.
Theorem 1.19. Let A be a symmetric operator (A ⊆ A∗), then the
following statements are equivalent.
(i) A = A∗ (A is essentially self-adjoint);
(ii) Ker(A∗ ± i) = {0};
(iii) (A∗ ± i) is dense in H.
Example 1.20. The operator H0min is is essentially self-adjoint (prove
it!).
13
References
[1] M. Reed and B. Simon, Methods of Modern Mathematical Physics, Volumes
1, 2
[2] E. Hille, Methods in Classical and Functional Analysis
[3] T. Kato, Perturbation Theory
[4] Notes on Unbounded Operators.
Teoria Espectral
1. More examples of closed, closable, adjoint,
self-adjoint operators
In these notes are presented some examples and remarks concerning
closed, closable, adjoint, self-adjoint unbounded linear operators.
Example 1.1. It is easy to construct, using an algebraic basis, a lin-
ear operator whose domain is the entire Hilbert space, but which is
unbounded. (We are of course assuming that the Hilbert space is infi-
nite dimensional.) By the closed graph theorem, this operator cannot
be closed. So it provides an extreme example of an operator which is
not closable.
Example 1.2. It is also possible for an operator to have many closed
extensions. Here is an example. The Hilbert space is H = L2(R) and
the operator is
D(A) = {f ∈ C∞0 (R) :
∫ ∞
−∞
f(x) dx =
∫ ∞
−∞
x f(x) dx = 0}
A(f)(x) = (1 + x2)f(x).
If one takes Fourier transform, this operator becomes the differential
operator − d
2
dξ2
+ 1 with “initial conditions” f̂(0) =
df̂
dξ
(0) = 0.
Set
p0(ξ) =
1
1 + ξ2
, p1(ξ) =
ξ
1 + ξ2
.
Then the closure of A is{
D(A) = {f ∈ L2(R) : (1 + ξ2)f ∈ L2(R), (1 + ξ2)f ⊥ p0, p1}
(Af)(ξ) = (1 + ξ2)f(ξ).
Choose any nonzero p, q ∈ span{p0, p1} and a nonzero p⊥ ∈ span{p0, p1}⊥
which is perpendicular to p. The following are all closed extensions of
1
2
A.{
D(A1) = {f ∈ L2(R) : (1 + ξ2)f ∈ L2(R), (1 + ξ2)f ⊥ p}
(A1f)(ξ) = (1 + ξ
2)f(ξ),{
D(A2) = {f ∈ L2(R) : (1 + ξ2)f ∈ L2(R)}
(A2f)(ξ) = (1 + ξ
2)f(ξ),{
D(A3) = D(A1) = {α p
⊥
1+ξ2
+ f : α ∈ C, (1 + ξ2)f ∈ {p0, p1}⊥}
A3(α
p⊥
1+ξ2
+ f) = α q + (1 + ξ2)f.
Example 1.3. The following example shows that it is possible to have
D(T ∗) = {0}. Let
(i) H = L2(R),
(ii) {en}n∈N be an orthonormal basis for H and
(iii) for each k ∈ N, fk(x) = eikx. Note that fk /∈ H.
We define the domain of T to be D(T ) = B0(R), the set of all
bounded Borel functions on R that are compact support. This domain
is dense in L2(R). This domain is dense in L2(R). For ϕ ∈ D(T ),
Tϕ =
∞∑
n=1
[ ∫ ∞
−∞
fn(x)ϕ(x) dx
]
en.
1. We first check that T is well-defined. Let ϕ ∈ D(T ). Then there is
some integer m such that ϕ(x) vanishes outside of [−mπ,mπ]. Then,
for each k ∈ Z,
∫ ∞
−∞
fk(x)ϕ(x) dx =
∫ mπ
−mπ
fk(x)ϕ(x) dx =
∫ π
−π
eikmtmϕ(mt) dt
is the km-th Fourier coefficient of the function mϕ(mt). Since the sum
of the square of all Fourier coefficients is, up to a factor of 2π, the L2
norm of mϕ(mt), which is finite, so T is well-defined.
2. We now check that D(T ∗) = {0}. Let ψ ∈ T ∗ and ϕ ∈ D(T ) =
B0(R). Choose an m ∈ N with ϕ(x) vanishing except for x in [−mπ,mπ].
Then
3
〈ϕ, T ∗ψ〉 = 〈Tϕ, ψ〉
=
∞∑
n=1
[ ∫ ∞
−∞
fn(x)ϕ(x) dx
]
〈en, ψ〉
=
∞∑
n=1
〈ϕ, 〈en, ψ〉fn〉.
Since∫ mπ
−mπ
fk(x) fl(x) dx =
∫ mπ
−mπ
ei(l−k)x dx =
{
2mπ if k = l,
0. if k 6= l.
The series
∞∑
n=1
〈en, ψ〉fn converges in H and
〈ϕ, T ∗ψ〉 =
〈
ϕ,
∞∑
n=1
〈en, ψ〉fn
〉
This is true for all bounded Borel functions ϕ supported in [−mπ,mπ]
so that
T ∗ψ =
∞∑
n=1
〈en, ψ〉fn a.e. on [−mπ,mπ]
and
∞ > ‖T ∗ψ‖L2 ≥ ‖T ∗ψ‖L2([−mπ,mπ]) =
∞∑
n=1
|〈en, ψ〉|2(2mπ).
Since this is true for all m, we must have 〈en, ψ〉 = 0 for all n ∈ N
and hence ψ ≡ 0.
We recall the following notion.
Definition 1.4. Let f be a complex valued function defined on [α, β]
where −∞ < α < β < ∞. f is said absolutely continuous if there
exists an integrable function g on [α, β] such that
f(x) =
∫ x
α
g(t) dt+ f(α).
Observe that f is continuous on [α, β] and differentiable a.e. with
f ′(x) = g(x) a.e on [α, β]. But it is not necessarily in L2[(α, β)]. This
can be seen by taking the function f(x) = x1/2 in L2([0, 1]). We denote
the set of all absolutelycontinuous functions on [α, β] by AC([α, β]).
4
Example 1.5. We consider three different Hilbert spaces:
H1 = L
2([α, β]), −∞ < α < β <∞,
H2 = L
2([α,∞)), −∞ < α <∞,
H3 = L
2((−∞,∞)).
We consider the operators Tj : Dj ⊂ Hj → Hj, j = 1, 2, 3 defined as
D1 = {g ∈ H1 : g = f a.e. for f ∈ AC([α, β]), f(α) = 0 = f(β), and
f ′ ∈ L2([α, β])}
D2 = {g ∈ H1 : g = f a.e. for f ∈ AC([α, β)), for each β > α, f(α) = 0,
and f ′ ∈ L2([α, β])}
D3 = {g ∈ H1 : g = f a.e. for f ∈ AC([α, β]), for each −∞ < α < β <∞
and f ′ ∈ H2}
with
Tjg = f
′, j = 1, 2, 3.
We will show that
(1) The linear operators Tj are unbounded symmetric operators in
Hj, j = 1, 2, 3.
(2) T3 is a self-adjoint operator.
(3) T1 and T2 are not self-adjoint operators.
(4) Tj = T
∗∗
j , j = 1, 2, 3.
First we notice that Dj = Hj, for each j = 1, 2, 3. To show this
for D1 we recall that the linear subspace spanned by the set {xk : k =
0, 1, 2, . . . } is dense in L2([α, β]) because of the class of all complex
polynomials is dense in L2([α, β]). On the other hand, xk ∈ D1 since
each xk can be approximated in L2([α, β]) by a function f in D1. For
instance, for � > 0 suitable, we can take f as being
f(x) =

(α + �)k �−1(x− α), α < x < α + �,
xk, α+ � < x < β − �,
−(β − �)k �−1(x− β), β − � < x < β.
This means D1 = H1.
To show that D2 = H2 and D3 = H3. We notice that the linear
subspace spanned by the set {xk e−x2/2 : k = 0, 1, 2, . . . } is dense in
L2((−∞,∞)) and in consequence their restrictions to [α,∞) are dense
in L2([α,∞)). We then can approximate each xk e−x2/2 by a function
in D2 or D3, respectively.
5
The operators Tj, j = 1, 2, 3 are unbounded. We consider for α < β
and k ≥ 2
β − α
the functions fk defined by
fk(x) =

k (x− α), if x ∈ [α, α + 1
k
],
2− k(x− α), if x ∈ [α + 1
k
, α+ 2
k
],
0, if x ∈ [α + 2
k
,∞).
Next we observe that,
‖fk‖2 =
∫ α+2/k
α
|fk(x)|2 dx =
2
3k
.
and
‖if ′k‖2 =
∫ α+2/k
α
k2 dx = 2k.
From this we deduce that
‖Tjfk‖
‖fk‖
=
(2k)1/2
( 2
3k
)1/2
≥ k.
Hence the operators are unbounded.
We show next that Tj are symmetric. It is done by using integration
by parts. Let f, g ∈ D1, then
(T1f, g) = (if
′, g) = i
∫ β
α
f ′(y) g(y) dy
= i f(y) g(y)
∣∣∣β
α
− i
∫ β
α
f(y) g(y) dy
= (f, ig′) = (f, T1g).
(1.1)
One can verify the same for the operators T2 and T3 by noticing that if
f ∈ D2 then lim
x→∞
f(x) = 0 and for f ∈ D3 we have that lim
x→±∞
f(x) = 0.
Next we will compute de adjoint T ∗1 de T1. Let D
∗
1 be the set
D∗1 = {g ∈ H1 : g = f a.e. where f ∈ AC([α, β]), f ′ ∈ H1}
Notice that (1.1) also holds for g ∈ D∗1, so the domain of T ∗1 contains
D∗1 and T
∗
1 g = if
′ for g ∈ D∗1. We shall show that D(T ∗1 ) = D∗1. This
can be done if we prove that T1 ⊂ T ∗1 but T1 6= T ∗1 . Let f ∈ D∗1 and let
h be the absolutely continuous function given by
h(x) =
∫ x
α
T ∗1 f(s) ds+ C
6
where C is a constant selected so that∫ x
α
[f(s) + ih(s)] ds = 0.
For every g ∈ D1, integration by parts yields∫ β
α
ig′(s) f(s) ds = (T1g, f) = (g, T
∗
1 f) =
∫ β
α
g(s)T ∗1 f(s) ds
= g(s)h(s)
∣∣∣β
α
−
∫ β
α
g′(s)h(s) = i
∫ β
α
ig′(s)h(s).
Then ∫ β
α
g′(s) [f(s) + ih(s)] ds = 0.
In particular, taking g ∈ D1 given by
g(x) =
∫ x
α
[f(s) + ih(s)] ds
we get that ∫ β
α
|f(s) + ih(s)|2 ds = 0.
That is,
f(x) = −ih(x) = −i
∫ x
α
T ∗1 f(s) ds− i C, a.e.
and h is absolutely continuous with h′(x) = T ∗1 f(x). Thus f ∈ D∗1.
Using the previous analysis we can prove that T ∗2 g = if
′ on the do-
main
D∗2 = {g ∈ H1 : g = f a.e. where f ∈ AC([α, β]), β > α, f ′ ∈ H2}
and T ∗3 g = if
′ on the domain D∗3 = D3.
Since D∗1 ( D1, D∗2 ( D2 and D∗3 = D3. We have that T ∗3 is self-
adjoint and that T ∗∗1 , T
∗∗
2 and T
∗∗
3 are well-defined.
It remains to prove that Tj = T
∗∗
j for j = 1, 2. In either case we have
Tj ⊂ T ∗∗j ⊂ T ∗j
since Tj ⊂ T ∗j . It suffices then to show that DT ∗∗j ⊂ Dj.
Let f ∈ DT ∗∗j , then for all g ∈ D
∗
j we have
(T ∗∗j f, g) = (f, T
∗
j g).
Since T ∗∗j ⊂ T ∗j , we have T ∗∗j f = if ′ and then
0 = (if ′, g)− (f, ig′).
7
If j = 1, this means
0 = i
∫ β
α
f ′(s) g(s) ds+ i
∫ β
α
f(s) g′(s) ds
= if(s) g(s)
∣∣∣β
α
= i[f(β) g(β)− f(α) g(α)].
Taking first g(x) =
(x− α)
(β − α)
∈ D∗1 and then g(x) =
(β − x)
(β − α)
∈ D∗1,
we obtain f(α) = 0 = f(β) which implies that f ∈ D1.
If j = 2, let g(x) = e−(x−α) to yield f(α) = 0. Thus f ∈ D2.
Remark 1.6. We observe that T1 has uncountably many different self-
adjoint extensions. Indeed, let γ ∈ C with |γ| = 1 and define Tγ in H1
on
DTγ = {g ∈ L2((α, β)) : g = f a.e. where f ∈ AC([α, β]), f ′ ∈ H1,
and f(β) = γ f(α)}
by Tγg = if
′. Each Tγ is self-adjoint and extends T1. For each γ, we
have T1 ⊂ Tγ ⊂ T ∗1 .
Exercise 1.7. Consider the symmetric unbounded operators Tj, j =
1, 2, 3 defined in Example 1.5. Use the Basic Criteria to show that
(a) T3 is a self-adjoint operator.
(b) T1 and T2 are not self-adjoint operators.
Remark 1.8. The sprectrum of a linear operator of A is union of the
three disjoint following sets:
(i) σp(A) the point spectrum: the set of all eigenvalues.
(ii) σr(A) the residue spectrum: the set of all λ that are not eigen-
values and such that the image of λ− T is not dense in X.
(iii) σc(A) the continuous spectrum: the complementary of σp(A)
and σr(A) it is also the set of λ such that λ−A is injective with
dense image, but (λ− A)−1 is not continuous.
Example 1.9. Here is an example which shows, firstly, that an un-
bounded operator T may have σ(T ) = ∅ and secondly that “just chang-
ing the domain of an operator” can change its spectrum. The Hilbert
space H = L2([0, 1]).
8
(i) If D(T ) = AC[0, 1] with Tf = if ′, then σ(T ) = C.
In fact σp(T ) = C, since, for any λ ∈ C, e−iλx is an eigen-
function for T with eigenvalue λ.
(ii) If D(T ) = {f ∈ AC[0, 1] : f(0) = 0} with Tf = if ′, then
σ(T ) = ∅.
Indeed, for any λ ∈ C, the resolvent operator (λ− T )−1 is
(Rλ(T )ψ)(x) = i
∫ x
0
e−iλ(x−t)ψ(t) dt.
(iii) Let α ∈ C be nonzero. If D(T )={f ∈ AC[0, 1] : f(0) = αf(1)}
with Tf = if ′, then σ(T ) = {−i lnα+ 2kπ : k ∈ Z}. Again the
spectrum consists solely of eigenvalues. If λ = −i lnα+2kπ for
some k ∈ Z, then eiλx is an eigenvalue for T with eigenvalue λ.
For λ not of the form −i lnα+2kπ for all k ∈ Z, the resolvent
operator (λ− T )−1 is
(Rλ(T )ψ)(x) = i
∫ x
0
Gλ(x, t)ψ(t) dt
with
Gλ(x, t) =

iλ eiλ(t−x−1)
1− αe−iλ
, if x < t,
i eiλ(t−x)
1− αe−iλ
, if x > t.
Teoria Espectral
1. Spectral Theorem
Here we are interested in extending the spectral theorem from some
bounded linear operators to self-adjoint unbounded linear operators.
We are going to give most of the main details to establish the Spectral
Measure Version of the Spectral Theorem. We will also give some details
of the Multiplication Operator Form of the Spectral Theorem. We follow
the notes by Bernard Helffer [1], the books by Reed and Simon [3, 4]
and class notes.
We will start by recalling the spectral theorem for compact operators.
Theorem 1.1. Let H be a separable Hilbert space and T a compact
self-adjoint operator. Then H admits a Hilbertian basis consisting of
the eigenfunctions of T .
More precisely, we can obtain a decomposition of H in the form
H = ⊕
k∈N
Vk
such that
Tuk = λk uk, if uk ∈ Vk
Thus H has been decomposed into a direct sum of orthogonal sub-
spaces Vk in which the self-adjoint operator T is reduced to multipli-
cation by λk.
We recall that an operator P ∈ B(H) is called an orthogonal projection if P = P* and P² = P.
If P_k denotes the orthogonal projection operator onto V_k, we can write
I = Σ_k P_k (the limit is in the strong convergence sense)
and
T u = Σ_k λ_k P_k u, ∀u ∈ D(T).
This decomposition is the inspiration for extending the spectral theorem to self-adjoint unbounded operators, as we will see below.
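In finite dimensions this is simply the eigendecomposition of a Hermitian matrix, and the identities I = Σ_k P_k and T = Σ_k λ_k P_k can be checked directly. A minimal numerical sketch (illustrative only, not from the notes; numpy assumed, and the matrix is randomly generated):

import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T = (M + M.conj().T) / 2                        # a Hermitian ("self-adjoint") matrix

lam, U = np.linalg.eigh(T)                      # eigenvalues lam[k], orthonormal eigenvectors U[:, k]
P = [np.outer(U[:, k], U[:, k].conj()) for k in range(n)]   # rank-one orthogonal projections

print(np.allclose(sum(P), np.eye(n)))                          # I = sum_k P_k
print(np.allclose(sum(l * Pk for l, Pk in zip(lam, P)), T))    # T = sum_k lam_k P_k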
1.1. Spectral family and resolution of the identity.
Definition 1.2. A family of orthogonal projections E(λ) (or E_λ), −∞ < λ < ∞, in a Hilbert space H is called a resolution of the identity (or spectral family) if it satisfies the following conditions:
(i)
(1.1) E(λ)E(µ) = E(min(λ, µ)),
(ii)
(1.2) E(−∞) = 0, E(+∞) = I,
where E(±∞) is defined by
(1.3) E(±∞)x = lim_{λ→±∞} E(λ)x for all x ∈ H,
(iii)
(1.4) E(λ + 0) = E(λ),
where E(λ + 0) is defined by
(1.5) E(λ + 0)x = lim_{µ→λ, µ>λ} E(µ)x.
Remark 1.3. Observe that (1.1) guarantees the existence of the limits in (1.3) and (1.5). The limit in (1.3) is taken in H. We also notice that λ ↦ 〈E(λ)x, x〉 = ‖E(λ)x‖² is monotonically increasing.
Example 1.4 (Spectral family associated to H_0). Let
G_λ(|ξ|²) = 0 if λ < 0, and G_λ(|ξ|²) = χ_{|ξ|² < λ} if λ ≥ 0,
and define
E_0(λ)f = G_λ(H_0)f = F^{-1}(G_λ(|ξ|²) F f) = (G_λ(|ξ|²) f̂)^∨.
Then:
• lim_{λ→0} E_0(λ)f = 0 (and lim_{λ→−∞} E_0(λ)f = 0) for all f ∈ L²(dξ);
• lim_{λ→∞} E_0(λ)f = f for all f ∈ L²(dξ);
• E_0(λ) is an orthogonal projection for any λ;
• E_0(λ)² = E_0(λ);
• E_0(λ) = E_0(λ)*;
• ‖E_0(λ)f‖² = (E_0(λ)f, E_0(λ)f) = (E_0²(λ)f, f) = (E_0(λ)f, f).
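On a periodic grid the projections E_0(λ) of Example 1.4 can be imitated with the discrete Fourier transform: E_0(λ)f keeps only the frequencies with |ξ|² < λ. The following rough sketch (an informal discretization, not from the notes; grid size and test function are arbitrary choices) checks the projection property and the behaviour for λ < 0 and λ → ∞.

import numpy as np

N, L = 256, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=L / N)      # discrete frequencies
f = np.exp(np.cos(3 * x))                        # a smooth periodic test function

def E0(lam, g):
    # discrete analogue of E0(lambda): cut off the frequencies with |xi|^2 >= lambda
    mask = (np.abs(xi) ** 2 < lam) if lam >= 0 else np.zeros_like(xi, dtype=bool)
    return np.fft.ifft(mask * np.fft.fft(g))

print(np.allclose(E0(10.0, E0(10.0, f)), E0(10.0, f)))   # E0(lam)^2 = E0(lam)
print(np.allclose(E0(-1.0, f), 0))                       # E0(lam) = 0 for lam < 0
print(np.allclose(E0(1e6, f), f))                        # E0(lam)f -> f as lam -> infinity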
Proposition 1.5. Let E(λ) be a resolution of the identity; then for all x, y ∈ H, the function
(1.6) λ ↦ 〈E(λ)x, y〉
is a function of bounded variation whose total variation satisfies
(1.7) V(x, y) ≤ ‖x‖ · ‖y‖, ∀x, y ∈ H,
where
(1.8) V(x, y) = sup_{λ_1,...,λ_n} Σ_{j=2}^n |〈E_{(λ_{j−1},λ_j]} x, y〉|.
Proof. Let λ_1 < λ_2 < · · · < λ_n. From the assumption (1.1) we deduce that
E_{(α,β]} = E_β − E_α
is an orthogonal projection. The Cauchy–Schwarz inequality yields
Σ_{j=2}^n |〈E_{(λ_{j−1},λ_j]}x, y〉| = Σ_{j=2}^n |〈E_{(λ_{j−1},λ_j]}x, E_{(λ_{j−1},λ_j]}y〉|
≤ Σ_{j=2}^n ‖E_{(λ_{j−1},λ_j]}x‖ ‖E_{(λ_{j−1},λ_j]}y‖
≤ (Σ_{j=2}^n ‖E_{(λ_{j−1},λ_j]}x‖²)^{1/2} (Σ_{j=2}^n ‖E_{(λ_{j−1},λ_j]}y‖²)^{1/2}
= (‖E_{(λ_1,λ_n]}x‖²)^{1/2} (‖E_{(λ_1,λ_n]}y‖²)^{1/2}.
Moreover, for m > n we obtain
‖x‖² ≥ ‖E_{(λ_n,λ_m]}x‖² = Σ_{i=n}^{m−1} ‖E_{(λ_i,λ_{i+1}]}x‖².
Thus, for any finite sequence λ_1 < λ_2 < · · · < λ_n, we have
Σ_{j=2}^n |〈E_{(λ_{j−1},λ_j]}x, y〉| ≤ ‖x‖ ‖y‖.
Using (1.8), the estimate (1.7) follows. □
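For a Hermitian matrix the spectral family is E(λ) = Σ_{λ_k ≤ λ} P_k, and the bound V(x, y) ≤ ‖x‖·‖y‖ can be tested directly. A small illustrative sketch (not from the notes; numpy assumed, and the matrix, vectors and partition are arbitrary choices):

import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))
T = (M + M.T) / 2                                   # real symmetric stand-in for a self-adjoint operator
lam, U = np.linalg.eigh(T)

def E(l):
    # spectral projection onto the eigenspaces with eigenvalue <= l
    cols = U[:, lam <= l]
    return cols @ cols.T

x, y = rng.standard_normal(n), rng.standard_normal(n)

pts = np.linspace(lam.min() - 1, lam.max() + 1, 25)          # a partition lambda_1 < ... < lambda_n
tv = sum(abs((E(b) - E(a)) @ x @ y) for a, b in zip(pts[:-1], pts[1:]))
print(tv <= np.linalg.norm(x) * np.linalg.norm(y) + 1e-12)   # True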
We have proved that, for all x and y in H, the function λ ↦ 〈E(λ)x, y〉 has bounded variation; we can then show the existence of E(λ + 0) and E(λ − 0). The following lemma addresses this.
Lemma 1.6. If E(λ) is a family of projections satisfying (1.1) and (1.2), then for all λ ∈ R, the operators
(1.9) E_{λ+0} = lim_{µ→λ, µ>λ} E(µ) and E_{λ−0} = lim_{µ→λ, µ<λ} E(µ)
are well defined when the limits are taken in the strong convergence topology.
Proof. We prove the existence of the left limit. Using (1.1), we deduce that for any ε > 0 there exists λ_0 < λ such that, for all λ′, λ″ ∈ [λ_0, λ) with λ′ < λ″,
‖E_{(λ′,λ″]}x‖² ≤ ε.
It is not difficult to prove that E_{λ−1/n} x is a Cauchy sequence converging to a limit, and that the limit does not depend on the sequence going to λ.
A similar argument shows the existence of the limit from the right. □
It is then classical (Stieltjes integrals) that one can define, for any continuous complex-valued function λ ↦ f(λ), the integrals
∫_a^b f(λ) d〈E(λ)x, y〉
as limits of Riemann sums.
Proposition 1.7. Let f be a continuous complex-valued function on R and let x ∈ H. Then it is possible to define, for α < β, the integral
∫_α^β f(λ) dE_λ x
as the strong limit in H of the Riemann sums
(1.10) Σ_j f(λ′_j) (E_{λ_{j+1}} − E_{λ_j})x,
where α = λ_1 < λ_2 < · · · < λ_n = β and λ′_j ∈ (λ_j, λ_{j+1}], as max_j |λ_{j+1} − λ_j| → 0.
Proof. The proof uses the uniform continuity of f. □
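In the matrix case the Riemann sums (1.10) can be compared with f(T)x computed from the eigendecomposition; the difference shrinks as the partition is refined. A hedged sketch (not from the notes; numpy assumed, and f, the matrix and the partition are arbitrary choices):

import numpy as np

rng = np.random.default_rng(2)
n = 6
M = rng.standard_normal((n, n))
T = (M + M.T) / 2
lam, U = np.linalg.eigh(T)

def E(l):
    cols = U[:, lam <= l]       # spectral projection E(l)
    return cols @ cols.T

f = np.cos
x = rng.standard_normal(n)

a, b = lam.min() - 1, lam.max() + 1
pts = np.linspace(a, b, 2000)                          # fine partition a = lambda_1 < ... < lambda_n = b
riemann = sum(f(r) * (E(r) - E(l)) @ x for l, r in zip(pts[:-1], pts[1:]))
fT_x = U @ (f(lam) * (U.T @ x))                        # f(T)x from the eigendecomposition

print(np.linalg.norm(riemann - fT_x))                  # small (order 1e-2 here); decreases as the mesh is refined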
Definition 1.8. For any given x ∈ H and any continuous function f on R, the integral
∫_{−∞}^{∞} f(λ) dE_λ x
is defined as the strong limit in H, if it exists, of ∫_α^β f(λ) dE(λ)x as α → −∞ and β → ∞.
Remark 1.9. The theory works more generally for any Borel function; see [3]. This is important because we are in particular interested in the case f = χ_{(−∞,λ]}.
One possibility for the reader who wants to understand how this can be done is to look at [5], which gives the following theorem:
Theorem 1.10 ([5], Theorem 8.14, p. 173).
(1) If µ is a complex Borel measure on R and if
(1.11) f(x) = µ((−∞, x]), ∀x ∈ R,
then f is a normalized function of bounded variation (NBV), i.e. a function of bounded variation which is continuous from the right and such that lim_{x→−∞} f(x) = 0.
(2) Conversely, to every f ∈ NBV there corresponds a unique complex Borel measure µ such that (1.11) is satisfied.
Theorem 1.11. For x given in H and f a complex-valued function on R, the following conditions are equivalent:
(i)
(1.12) ∫_{−∞}^{∞} f(λ) dE_λ x exists;
(ii)
(1.13) ∫_{−∞}^{∞} |f(λ)|² d‖E_λ x‖² < ∞;
(iii) the map
(1.14) y ↦ ∫_{−∞}^{∞} f(λ) d〈E_λ y, x〉_H
is a continuous linear form.
Sketch of the Proof.
(i) =⇒ (iii): It follows by using repeatedly the Banach–Steinhaus Theorem and the definition of the integral.
(iii) =⇒ (ii): Let F be the linear form defined in (1.14). Introducing
y = ∫_α^β f(λ) dE_λ x,
we notice that
y = E_{(α,β]} y,
by using the Riemann sums. It is not difficult to show that
F(y) = ∫_{−∞}^{∞} f(λ) d〈E_λ x, y〉 = ∫_{−∞}^{∞} f(λ) d〈E_λ x, E_{(α,β]}y〉 = ∫_{−∞}^{∞} f(λ) d〈E_{(α,β]}E_λ x, y〉 = ∫_α^β f(λ) d〈E_λ x, y〉 = ‖y‖².
Since F is a continuous linear form, it follows that
‖y‖² = |F(y)| ≤ ‖F‖ ‖y‖,
and thus
‖y‖ ≤ ‖F‖.
Observe that the right-hand side is independent of α and β.
On the other hand, using once more the Riemann sums, we obtain
‖y‖² = ∫_α^β |f(λ)|² d‖E_λ x‖².
Therefore
∫_α^β |f(λ)|² d‖E_λ x‖² ≤ ‖F‖².
Thus, letting α → −∞ and β → ∞ yields (1.13).
(ii) =⇒ (i): Notice that for α′ < α < β < β′ we have
‖∫_{α′}^{β′} f(λ) dE_λ x − ∫_α^β f(λ) dE_λ x‖² = ∫_{α′}^α |f(λ)|² d‖E_λ x‖² + ∫_β^{β′} |f(λ)|² d‖E_λ x‖².
By (1.13) the right-hand side tends to zero as α, α′ → −∞ and β, β′ → ∞, so the Cauchy criterion gives the existence of the integral in (1.12). □
Theorem 1.12. Let λ ↦ f(λ) be a real-valued continuous function and let
D_f = {x ∈ H : ∫_{−∞}^{∞} |f(λ)|² d〈E(λ)x, x〉 < ∞}.
Then D_f is dense in H, and we define the operator T_f with domain
D(T_f) = D_f
by
〈T_f x, y〉 = ∫_{−∞}^{∞} f(λ) d〈E(λ)x, y〉
for all x ∈ D(T_f) and y ∈ H.
The operator T_f is self-adjoint. In addition, T_f E_λ is an extension of E_λ T_f.
Proof of Theorem. Property (1.2) gives us that, for any y ∈ H, there exists a sequence (α_n, β_n) such that E_{(α_n,β_n]} y → y as n → ∞. Observe that E_{(α,β]} y ∈ D_f for any α, β; this yields the density of D_f in H.
Since f is real-valued and E_λ is symmetric, it follows that T_f is symmetric. That T_f is self-adjoint is deduced by using Theorem 1.11.
We notice that, for f_0 = 1, we get T_{f_0} = I, and for f_1(λ) = λ, we obtain a self-adjoint operator T_{f_1} = T.
In this case, it is said that
T = ∫_{−∞}^{∞} λ dE(λ)
is a spectral decomposition of T, and we observe that
‖Tx‖² = ∫_{−∞}^{∞} λ² d〈E(λ)x, x〉 = ∫_{−∞}^{∞} λ² d‖E(λ)x‖²
for x ∈ D(T). More generally,
‖T_f x‖² = ∫_{−∞}^{∞} |f(λ)|² d〈E(λ)x, x〉 = ∫_{−∞}^{∞} |f(λ)|² d‖E(λ)x‖²
for x ∈ D(T_f). □
We have seen so far how one can associate a self-adjoint operator to a spectral family of projections.
The Spectral Decomposition Theorem makes explicit that the preceding situation is actually the general one.
Theorem 1.13. Any self-adjoint operator T in a Hilbert space H admits a spectral decomposition such that
(1.15) 〈Tx, y〉 = ∫_R λ d〈E_λ x, y〉_H
and
(1.16) Tx = ∫_R λ d(E_λ x).
Sketch of the Proof.
Step 1. It is rather natural to imagine that it is essentially enough to treat the case when T is a bounded self-adjoint operator (or at least a bounded normal operator, that is, one satisfying T*T = TT*). If A is a general semibounded self-adjoint operator, one can come back to the bounded case by considering (A + λ_0)^{-1}, with λ_0 real chosen large enough, which is bounded and self-adjoint. In the general case, one can consider (A + i)^{-1}.
Step 2. We first analyze the spectrum of P(T), where P is a polynomial.
Lemma 1.14. If P is a polynomial, then
σ(P(T)) = {P(λ) : λ ∈ σ(T)}.
Proof of Lemma 1.14. From the identity P(x) − P(λ) = (x − λ)Q_λ(x) we obtain, for bounded operators, the identity
P(T) − P(λ) = (T − λ)Q_λ(T).
This allows us to construct the inverse of T − λ if one knows the inverse of P(T) − P(λ).
Conversely, notice that if z ∈ C and λ_j(z) are the roots of λ ↦ P(λ) − z, then
P(T) − z = c ∏_j (T − λ_j(z)).
This allows us to construct the inverse of P(T) − z if one has the inverse of T − λ_j(z) for all j. □
Lemma 1.15. Let T be a bounded self-adjoint operator. Then, using Exercise 1.18, we have
(1.17) ‖P(T)‖ = sup_{λ∈σ(T)} |P(λ)|.
Proof. We first notice that
‖P(T)‖² = ‖P(T)*P(T)‖.
From Exercise 1.19 and Lemma 1.14 we deduce that
‖P(T)‖² = ‖(P̄P)(T)‖ = sup_{µ∈σ((P̄P)(T))} |µ| = sup_{λ∈σ(T)} |(P̄P)(λ)| = sup_{λ∈σ(T)} |P(λ)|². □
Step 3. We have thus defined a map Φ from the set of polynomials into B(H) by
(1.18) P ↦ Φ(P) = P(T),
which is continuous since
(1.19) ‖Φ(P)‖_{B(H)} = sup_{λ∈σ(T)} |P(λ)|.
The set σ(T) is a compact subset of R and, using the Stone–Weierstrass theorem (which guarantees the density of the polynomials in C(σ(T))), the map Φ can be uniquely extended to C(σ(T)). We again denote this extension by Φ.
Theorem 1.16 (Properties of Φ). Let T be a bounded self-adjoint operator on H. Then there exists a unique map Φ,
Φ : C(σ(T)) → B(H),
satisfying the following properties:
(i)
Φ(f + g) = Φ(f) + Φ(g);
Φ(λf) = λΦ(f);
Φ(1) = Id;
Φ(f̄) = Φ(f)*;
Φ(fg) = Φ(f) ◦ Φ(g).
(ii)
‖Φ(f)‖_{B(H)} = sup_{λ∈σ(T)} |f(λ)|.
(iii) If f is defined by f(λ) = λ, then Φ(f) = T.
(iv) σ(Φ(f)) = {f(λ) : λ ∈ σ(T)}.
(v) If ϕ satisfies Tϕ = λϕ, then Φ(f)ϕ = f(λ)ϕ.
(vi) If f ≥ 0, then Φ(f) ≥ 0.
Proof. The proof of the properties above follows by first showing the properties for polynomials P and then extending them by continuity to continuous functions. To establish the last item we observe that
Φ(f) = Φ(√f) · Φ(√f) = Φ(√f)* · Φ(√f). □
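For a Hermitian matrix, Φ(f) = f(T) is computed from the eigendecomposition, and several of the properties above can be checked numerically, e.g. Φ(fg) = Φ(f)Φ(g) and ‖Φ(f)‖ = sup_{λ∈σ(T)} |f(λ)|. An illustrative sketch (not from the notes; numpy assumed, and the matrix and the functions f, g are arbitrary choices):

import numpy as np

rng = np.random.default_rng(4)
n = 6
M = rng.standard_normal((n, n))
T = (M + M.T) / 2
lam, U = np.linalg.eigh(T)

def Phi(f):
    # functional calculus f(T) for the real symmetric matrix T
    return U @ np.diag(f(lam)) @ U.T

f, g = np.sin, np.exp

print(np.allclose(Phi(lambda t: f(t) * g(t)), Phi(f) @ Phi(g)))      # Phi(fg) = Phi(f) Phi(g)
print(np.allclose(Phi(lambda t: t), T))                              # Phi(identity) = T
print(np.isclose(np.linalg.norm(Phi(f), 2), np.abs(f(lam)).max()))   # ||Phi(f)|| = sup over sigma(T) of |f|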
Step 4. Now we introduce the measures.
Let ψ ∈ H and define the functional
(1.20) f ↦ 〈ψ, f(T)ψ〉_H = 〈ψ, Φ(f)ψ〉_H.
We observe that this is a positive linear functional on C(σ(T)). From the Riesz Theorem (Theorem 1.27 below), there exists a unique measure µ_ψ on σ(T) such that
(1.21) 〈ψ, Φ(f)ψ〉_H = ∫_{σ(T)} f(λ) dµ_ψ.
This measure is called the spectral measure associated with the vector ψ ∈ H. It is a Borel measure, which means that we can extend the map Φ and (1.21) to Borel functions.
Using the standard Hilbert space calculus (that is, the link between sesquilinear forms and quadratic forms) we can also construct, for any x and y in H, a complex measure dµ_{x,y} such that
(1.22) 〈x, Φ(f)y〉_H = ∫_{σ(T)} f(λ) dµ_{x,y}(λ).
Using the Riesz representation Theorem (Theorem 1.28 below), this gives us, when f is bounded, an operator f(T). If f = χ_{(−∞,µ]}, we recover the operator E_µ = f(T), which indeed permits us to construct the spectral family announced in Theorem 1.13.
□
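For a Hermitian matrix the spectral measure µ_ψ is a finite sum of point masses at the eigenvalues, with weights |〈u_k, ψ〉|², and (1.21) becomes a finite sum. A small illustrative sketch (not from the notes; matrix, vector and f are arbitrary choices):

import numpy as np

rng = np.random.default_rng(3)
n = 5
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T = (M + M.conj().T) / 2
lam, U = np.linalg.eigh(T)

psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
weights = np.abs(U.conj().T @ psi) ** 2          # mu_psi({lam_k}) = |<u_k, psi>|^2

f = lambda t: np.exp(-t ** 2)
f_T = U @ np.diag(f(lam)) @ U.conj().T           # f(T) = Phi(f)

lhs = np.vdot(psi, f_T @ psi)                    # <psi, Phi(f) psi>
rhs = np.sum(f(lam) * weights)                   # integral of f against mu_psi
print(np.allclose(lhs, rhs))                     # True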
Remarks 1.17. For any measurable (real- or complex-valued) function f on R, the unique operator f(T) satisfying (1.21) is defined. Its domain is the set {h : ∫_{−∞}^{∞} |f|² dµ_h < ∞}, which is dense in H.
For any h ∈ D(f(T)),
(1.23) ‖f(T)h‖² = ∫_{−∞}^{∞} |f(λ)|² dµ_h(λ).
The equation (1.23) can be easily verified in the case when f is a nonnegative measurable function. We have
‖f(T)h‖² = lim_{n→∞} ‖(f ∧ n·χ_{[−n,n]})(T)h‖²
= lim_{n→∞} ([(f ∧ n·χ_{[−n,n]})(T)]² h, h)
= lim_{n→∞} ∫_{−∞}^{∞} [f ∧ n·χ_{[−n,n]}(λ)]² dµ_h(λ)
= ∫_{−∞}^{∞} f² dµ_h(λ),
where f ∧ n·χ_{[−n,n]} = inf{f, n·χ_{[−n,n]}}.
In case f is any measurable function, write f = f_1 − f_2 + i(g_3 − g_4), where f_1, f_2 (respectively g_3, g_4) are the positive and negative parts of the real (respectively imaginary) part of f, so that |f|² = f_1² + f_2² + g_3² + g_4². In this situation equation (1.23) can be seen to hold as well.
Exercise 1.18. Let A be a bounded linear operator in a Hilbert space H. Show that
‖A*A‖ = ‖A‖².
Exercise 1.19. Let A ∈ B(H) be a self-adjoint operator. Show that the spectrum of A is contained in [m, M], where
m = inf_{u≠0} 〈Au, u〉/‖u‖² and M = sup_{u≠0} 〈Au, u〉/‖u‖².
Moreover, m and M belong to the spectrum of A.
1.2. Another version of the Spectral Theorem. In this section
our goal is to present a multiplication form of the Spectral Theorem.
Our plan is to sketch the main points of the proof of the theorem. We
will ask the reader to complete some details by proving some proposed
exercises.
The Spectral Theorem reads as follows.
Theorem 1.20 (Multiplication Operator Form of the Spectral Theorem). Let T be a self-adjoint operator in a Hilbert space H. Then there exist a measure space (X, A, µ), a unitary operator U : H → L²(X, µ), and a measurable function F on X which is real a.e. such that
(i) h ∈ D(T) if and only if F(·)Uh(·) is in L²(X, µ); and
(ii) if f ∈ U(D(T)), then (UTU^{-1}f)(·) = F(·)f(·).
To prove this theorem we need some preparation.
Definition 1.21. Let T : H → H be a continuous linear operator with adjoint T*.
T is called normal if and only if T*T = TT*.
T is called unitary if and only if T*T = TT* = I.
The proof of Theorem 1.20 uses the following Spectral Theorem for bounded normal linear operators. The proof can be found, for instance, in the appendix of [2].
Theorem 1.22. Let T = T_1 + iT_2 be a bounded normal operator on H. Then there exist a family of finite measures (µ_j)_{j∈I} on σ(T_1) × σ(T_2) and a unitary operator
U : H → ⊕_{j∈I} L²(σ(T_1) × σ(T_2), µ_j)
such that
(UTU^{-1}f)_j(x, y) = (x + iy) f_j(x, y) a.e.,
where f = (f_j)_{j∈I} is in ⊕_{j∈I} L²(σ(T_1) × σ(T_2), µ_j) and σ(·) stands for the spectrum of the operator in question.
Proof. See Theorem A.6 in [2]. □
As an immediate consequence of this theorem we have the following.
Corollary 1.23. Let T be a bounded normal operator on a Hilbert space H. Then there exist a measure space (X, A, µ), a bounded complex function G on X, and a unitary map U : H → L²(X, µ) so that
(UTU^{-1}f)(λ) = G(λ)f(λ) a.e.
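In finite dimensions Corollary 1.23 (and Theorem 1.20) is just diagonalization: take X = {1, ..., n} with the counting measure, U the change to the eigenbasis, and G(k) = λ_k. A short illustrative sketch (not from the notes; numpy assumed, and the matrix is randomly generated):

import numpy as np

rng = np.random.default_rng(5)
n = 5
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T = (M + M.conj().T) / 2                       # Hermitian, hence normal
lam, V = np.linalg.eigh(T)

U = V.conj().T                                 # unitary map H -> L^2({1,...,n}, counting measure)
f = U @ (rng.standard_normal(n) + 1j * rng.standard_normal(n))   # an element of L^2(X, mu)

print(np.allclose(U @ T @ U.conj().T @ f, lam * f))   # (U T U^{-1} f)(k) = lam_k f(k)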
Exercise 1.24. Show that if T is a closed linear operator in H densely
defined and λ ∈ ρ(T ), then (T − λI)−1 is a bounded linear operator on
H.
Exercise 1.25. Let T be a self-adjoint operator in H. Prove that ρ(T) contains all complex numbers with nonzero imaginary part. Moreover, if Im λ ≠ 0, then
(1.24) ‖(T − λI)^{-1}‖ ≤ 1/|Im λ|
and
(1.25) Im((T − λI)h, h) = Im(−λ)‖h‖² for all h ∈ D(T).
Proof of Theorem 1.20. We will use the result in Corollary 1.23 for bounded normal operators, applied to the operator (T + i)^{-1}.
We first show that (T + i)^{-1} is a bounded normal operator. From Exercises 1.24 and 1.25 we conclude that (T ± i)^{-1} exist as bounded linear operators on H. In particular, R(T ± i) = H and T ± i are one-to-one operators. Since T is self-adjoint, for any φ and ψ in D(T) we have
((T − i)φ, (T + i)^{-1}(T + i)ψ) = ((T − i)^{-1}(T − i)φ, (T + i)ψ).
This implies that ((T + i)^{-1})* = (T − i)^{-1}. Since (T + i)^{-1} and (T − i)^{-1} commute by the resolvent formula, we have
(T + i)^{-1}((T + i)^{-1})* = (T + i)^{-1}(T − i)^{-1} = ((T + i)^{-1})*(T + i)^{-1},
which tells us that (T + i)^{-1} is a normal operator.
Using Corollary 1.23, there are a measure space (X, A, µ), a unitary operator U : H → L²(X, µ), and a bounded, measurable complex function G on X such that
(1.26) (U(T + i)^{-1}U^{-1}f)(x) = G(x)f(x) a.e.
for all f ∈ L²(X, µ).
Since Ker(T + i)^{-1} = {0}, G(x) ≠ 0 a.e. Therefore, if we define F(x) as G(x)^{-1} − i for each x ∈ X, |F(x)| is finite a.e. Now if f ∈ U(D(T)), then there exists a function g ∈ L²(X, µ) such that f(·) = G(·)g(·) in L². This is true because of
(1.27) U(D(T)) ⊂ U(T + i)^{-1}(H) ⊂ U(T + i)^{-1}U^{-1}(L²(X, µ)).
Noticing that U(T + i)^{-1}U^{-1} is an injection, for any g in the range of U(T + i)^{-1}U^{-1} we have from (1.26) that
[U(T + i)^{-1}U^{-1}]^{-1} g(x) = (1/G(x)) · g(x) ∈ L²(X, µ).
In particular, for f in the set U(D(T)),
[U(T + i)^{-1}U^{-1}]^{-1} f(x) = (1/G(x)) · f(x) ∈ L²(X, µ),
or
U(T + i)U^{-1}f(x) = (1/G(x)) · f(x) ∈ L²(X, µ),
or
UTU^{-1}f(x) = (1/G(x)) · f(x) − i f(x) = F(x)f(x) ∈ L²(X, µ).
This proves (ii) and the necessity of (i), provided F is real-valued, which we show below. For the converse of (i), if F(x)Uh(x) is in L²(X, µ), then there exists k ∈ H so that Uk = [F(x) + i]Uh(x). Thus
G(x)Uk(x) = G(x)[F(x) + i]Uh(x) = Uh(x),
so h = (T + i)^{-1}k, whereby h ∈ D(T).
To finish the proof it must be established that F is real-valued a.e. Observe that the operator in L²(X, µ) defined by multiplication by F is self-adjoint, since by (ii) it is unitarily equivalent to T. Hence for all χ_M, M a measurable subset of X, (χ_M, Fχ_M) is real. However, if Im F > 0 on a set of positive measure, then there exists a bounded set B in the open upper half-plane so that M = F^{-1}(B) has nonzero measure. Clearly Fχ_M is in L²(X, µ), since B is bounded, and Im(χ_M, Fχ_M) > 0. This contradiction, together with the analogous argument for Im F < 0, shows that Im F = 0 a.e. □
Example 1.26 (Examples of functions of a self-adjoint operator). The following are common examples in spectral theory.
(1) f is the characteristic function of (−∞, λ], χ_{(−∞,λ]}; Φ(f) = f(T) is then Φ(f) = E(λ).
(2) f is the characteristic function of (−∞, λ), χ_{(−∞,λ)}; f(T) is then Φ(f) = E(λ − 0).
(3) f is a compactly supported continuous function; f(T) will be an operator whose spectrum is localized in the support of f.
(4) f_t(λ) = exp(itλ) with t real. f_t(T) is then a solution of the functional equation
(∂_t − iT)(f(t, T)) = 0,  f(0, T) = I.
We notice that, for all real t, f_t(T) = exp(itT) is a bounded unitary operator.
(5) g_t(λ) = exp(−tλ) with t real positive. g_t(T) is then a solution of the functional equation
(∂_t + T)(g(t, T)) = 0 for t ≥ 0,  g(0, T) = I.
1.3. Application of the Spectral Theorem to solving the Schrödinger equation. The time-dependent Schrödinger equation arises in quantum mechanics. It is given by
i du/dt = Au(t),
where u(t) is an element of a Hilbert space H, A is a self-adjoint operator in H, and t is a time variable with u(t) ∈ D(A). An initial condition is u(0) = u_0 ∈ D(A). The derivative of u is given as
lim_{∆→0} (u(t + ∆) − u(t))/∆
in the strong topology of H.
The Spectral Theorem allows us to solve the Schrödinger equation. Let e^{−itA} be the bounded operator on H given by
e^{−itA} = ∫_{−∞}^{∞} e^{−itλ} dE(λ),
where A = ∫ λ dE(λ). We would like to prove that
(1.28) d/dt (e^{−itA}h) = −iA(e^{−itA}h)
for every h ∈ D(A).
To show this, we compute the following limit:
lim_{∆t→0} ‖( (e^{−i(t+∆t)A} − e^{−itA})/∆t + ie^{−itA}A ) h‖²
= lim_{∆t→0} ∫_{−∞}^{∞} |(e^{−i(t+∆t)λ} − e^{−itλ})/∆t + ie^{−itλ}λ|² d(E(λ)h, h)
= lim_{∆t→0} ∫_{−∞}^{∞} |(e^{−i∆tλ} − 1)/∆t + iλ|² d(E(λ)h, h).
Letting M = sup_{s≠0} |(e^{−is} − 1)/s + i|², the integrand above is bounded by Mλ², which is integrable since h ∈ D(A). It follows then by using
the Lebesgue Dominated Convergence theorem that the limit is zero.
Hence
(1.29) d/dt (e^{−itA}h) = −i(e^{−itA}Ah)
for every h ∈ D(A).
The identity (1.28) follows from (1.29) since, for h ∈ D(A),
(1.30) e^{−itA}Ah = Ae^{−itA}h. (exercise)
This follows from the fact that if h ∈ D(A), then e^{−itA}h is in D(A), since by equation (1.23) we have, for any Borel set M,
‖E(M)e^{−itA}h‖² = ∫ χ_M |e^{−itλ}|² dµ_h(λ) = ∫ χ_M dµ_h(λ) = ‖E(M)h‖²,
so that µ_{e^{−itA}h} = µ_h and therefore ∫ λ² dµ_{e^{−itA}h}(λ) = ∫ λ² dµ_h(λ) < ∞.
The solution u(t) = e^{−itA}u_0 of the Schrödinger equation is unique. To show this, suppose that v(t) ∈ D(A) is a solution. Then for any φ ∈ H,
d/ds (e^{−i(t−s)A}v(s), φ)
= lim_{∆s→0} [(e^{−i(t−(s+∆s))A} v(s + ∆s), φ) − (e^{−i(t−s)A}v(s), φ)]/∆s
= lim_{∆s→0} ( ((e^{−i(t−(s+∆s))A} − e^{−i(t−s)A})/∆s) v(s + ∆s), φ ) + lim_{∆s→0} ( e^{−i(t−s)A} (v(s + ∆s) − v(s))/∆s, φ )
= ( −(d/dt) e^{−i(t−s)A} v(s), φ ) + ( e^{−i(t−s)A} dv/ds, φ )
= (iA e^{−i(t−s)A} v(s), φ) + (e^{−i(t−s)A}[−iAv(s)], φ) = 0.
Therefore, for all φ ∈ H,
0 = ∫_0^t d/ds (e^{−i(t−s)A} v(s), φ) ds = (e^{−i0·A}v(t), φ) − (e^{−itA}v(0), φ),
and since v(0) = u_0 and e^{−i0·A} = I, we have
v(t) = e^{−itA}u_0.
This yields the uniqueness.
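The construction above can be imitated numerically for a Hermitian matrix A: u(t) = e^{−itA}u_0 is built from the spectral decomposition, conserves the norm, and satisfies i du/dt = Au up to discretization error. A hedged sketch (not from the notes; numpy assumed, and the matrix, initial datum and step size are arbitrary choices):

import numpy as np

rng = np.random.default_rng(6)
n = 6
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M + M.conj().T) / 2
lam, U = np.linalg.eigh(A)

u0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def u(t):
    # u(t) = e^{-itA} u0, built from the spectral decomposition of A
    return U @ (np.exp(-1j * t * lam) * (U.conj().T @ u0))

t, dt = 0.7, 1e-6
du = (u(t + dt) - u(t - dt)) / (2 * dt)                        # central difference in t
print(np.allclose(1j * du, A @ u(t), atol=1e-4))               # i du/dt = A u
print(np.isclose(np.linalg.norm(u(t)), np.linalg.norm(u0)))    # the group e^{-itA} is unitary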
1.4. Riesz representation Theorem. We start by introducing some notation and definitions.
Given a locally compact Hausdorff space X, we denote by C_0(X) the set of continuous functions on X which vanish at infinity.
We say that a Borel measure ν is regular if every Borel set in X is both outer regular and inner regular with respect to ν. We denote by |ν| the total variation of ν, or the total variation measure.
A complex Borel measure µ on X is called regular if |µ| is regular.
If µ is a complex Borel measure on X, it is not difficult to see that the mapping
f ↦ ∫_X f dµ
is a bounded linear functional on C_0(X) whose norm is no larger than |µ|(X). The Riesz theorem guarantees that all bounded linear functionals on C_0(X) are obtained in this way.
Theorem 1.27. If X is a locally compact Hausdorff space, then every bounded linear functional Φ on C_0(X) is represented by a unique regular complex Borel measure µ, in the sense that
Φf = ∫_X f dµ for every f ∈ C_0(X).
Moreover, the norm of Φ is the total variation of µ:
‖Φ‖ = |µ|(X).
Proof. See Theorem 6.19 in [5]. □
In Hilbert spaces we have the well-known Riesz theorem.
Theorem 1.28. Let u ↦ F(u) be a continuous linear form on H. Then there exists a unique w ∈ H such that
(1.31) F(u) = 〈u, w〉_H, ∀u ∈ H.
References
[1] B. Helffer, Spectral theory and applications. An elementary introductory course. https://www.imo.universite-paris-saclay.fr/~helffer/m2bucarest2010.pdf
[2] A. Mukherjea and K. Pothoven, Real and Functional Analysis, Mathematical Concepts and Methods in Science and Engineering, Vol. 10, Springer Science+Business Media, New York (1978).
[3] M. Reed and B. Simon, Methods of Modern Mathematical Physics. I: Functional Analysis. Academic Press (1972).
[4] M. Reed and B. Simon, Methods of Modern Mathematical Physics. II: Fourier Analysis, Self-Adjointness. Academic Press (1975).
[5] W. Rudin, Real and Complex Analysis. McGraw-Hill, New York (1974).
Teoria Espectral
1. Kato-Rellich Theorem
Problem. Let A : D(A) ⊆ H → H and B : D(B) ⊆ H → H be two densely defined linear operators. Suppose that A* = A and B ⊆ B*, and that D(A) ⊆ D(B).
In particular, it makes sense to consider A + B : D(A) ⊆ H → H. Moreover, it was proved that
A + B ⊆ A* + B* ⊆ (A + B)*.
The question that arises is under what conditions it holds that
(A + B)* = A + B.
That problem appears for instance in Quantum Mechanics.
The initial value problem (IVP) for the Schrödinger equation is written as
(1.1) i∂_t u = −∆u + V u = (H_0 + V)u,  u(0) = φ ∈ H²,
where V u denotes multiplication by a real potential V, so that H_0 + V is a symmetric operator.
If the operator H_0 + V were self-adjoint, we could show by means of the Spectral Theorem that the IVP (1.1) is well-posed or, in other words, that the operator H_0 + V generates a unitary group. In addition, we could obtain information on the spectrum of H_0 + V.
In order to do this we need the following notion.
Definition 1.1. Let A : D(A) ⊆ H → H and B : D(B) ⊆ H → H be two linear operators. We say that B is bounded relative to A (or B is A-bounded) if
(i) D(A) ⊆ D(B), and
(ii) there exist α > 0 and β > 0 such that
‖Bφ‖ ≤ α‖φ‖ + β‖Aφ‖, ∀φ ∈ D(A).
The number
β_0 = inf{β > 0 : (ii) holds}
is called the A-bound of B.
Exercise 1.2. If A is a closed linear operator and B is an A-bounded linear operator, show that
(a) H_0 = (D(A), [·,·]) is a Hilbert space with inner product
[φ, ψ] = (φ, ψ) + (Aφ, Aψ);
(b) B ∈ B(H0).
Example 1.3. Let H = L²(R^n), A = H_0, and
B : H¹(R^n) ⊆ L²(R^n) → L²(R^n), φ ↦ (1/i)φ′;
then B is A-bounded with A-bound equal to zero.
Indeed, D(A) = H²(R^n) ⊆ H¹(R^n) = D(B), which gives (i) in Definition 1.1.
Let ε > 0; then |ξ| ≤ ε|ξ|² + 1/(4ε).
Using the Plancherel identity we have
‖Bφ‖² = ‖(Bφ)^∧‖² = ∫_{R^n} |ξ φ̂(ξ)|² dξ
≤ ∫_{R^n} (ε|ξ|² + 1/(4ε))² |φ̂(ξ)|² dξ
≤ cε² ∫_{R^n} ||ξ|²φ̂(ξ)|² dξ + c(ε) ∫_{R^n} |φ̂(ξ)|² dξ.
This implies that
‖Bφ‖ ≤ cε‖Aφ‖ + c(ε)‖φ‖, ∀φ ∈ H²(R^n).
Since this holds for any ε > 0, we deduce that B is A-bounded and
the A-bound of B is equal to zero.
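The pointwise inequality |ξ| ≤ ε|ξ|² + 1/(4ε) behind Example 1.3 can be tested on a periodic grid, where H_0 acts as multiplication by |ξ|² on the Fourier side. A rough one-dimensional sketch (illustrative only, not from the notes; the test function is an arbitrary choice):

import numpy as np

N, L = 512, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
phi = np.exp(np.sin(x)) * np.cos(5 * x)            # a smooth periodic test function
phi_hat = np.fft.fft(phi)

norm = lambda v: np.sqrt(np.sum(np.abs(v) ** 2))   # l^2 norm on the Fourier side (Plancherel, up to a constant)

B_phi = norm(xi * phi_hat)                         # ~ ||phi'||  = ||B phi||
A_phi = norm(xi ** 2 * phi_hat)                    # ~ ||phi''|| = ||A phi||
phi_n = norm(phi_hat)                              # ~ ||phi||

for eps in (1.0, 0.1, 0.01):
    print(B_phi <= eps * A_phi + (1 / (4 * eps)) * phi_n)    # True for every eps > 0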
Proposition 1.4. Let A and B be closed linear operators. Suppose that D(A) ⊆ D(B) and ρ(A) ≠ ∅. Then (ii) in Definition 1.1 holds.
Proof. Take z ∈ ρ(A); then
B(A − z)^{-1} : H → H
is closed. By the Closed Graph Theorem we get B(A − z)^{-1} ∈ B(H).
Next, for all φ ∈ D(A),
‖Bφ‖ = ‖B(A − z)^{-1}(A − z)φ‖ ≤ ‖B(A − z)^{-1}‖ ‖(A − z)φ‖ ≤ c‖Aφ‖ + c|z|‖φ‖. □
Exercise 1.5. If B is a closed linear operator and ρ(A) ≠ ∅, prove that the following statements are equivalent:
(1) B is A-bounded.
(2) B(A− z)−1 ∈ B(H) for some z ∈ ρ(A).
(3) B(A− z)−1 ∈ B(H) for all z ∈ ρ(A).
Definition 1.6. Let A be a linear operator such that A ⊆ A*. We say that A is a positive operator if and only if
(Aφ, φ) ≥ 0 ∀φ ∈ D(A)
(i.e. the sesquilinear form
b : D(A) × D(A) → C, (φ, ψ) ↦ (Aφ, ψ),
is positive).
A is strictly positive if and only if
(Aφ, φ) > 0 ∀φ ∈ D(A), φ ≠ 0.
Remarks 1.7.
(i) A ⊆ A* implies that
(Aφ, φ) = (φ, Aφ) ∀φ ∈ D(A),
which is the complex conjugate of (Aφ, φ), and so (Aφ, φ) ∈ R.
(ii) We can define an order relation on positive symmetric operators by: A ≥ B if and only if A − B ≥ 0.
Definition 1.8. If there exists λ_0 ∈ R such that A ≥ λ_0, we say that A is lower bounded.
Exercise 1.9. Let A : D(A) ⊂ H → H be such that A = A* and let M ∈ R.
Show that A ≥ M if and only if (−∞, M) ⊂ ρ(A).
From this we can see that a self-adjoint operator is lower bounded if
and only if its spectrum is bounded below.
Theorem 1.10 (Kato-Rellich Theorem). Let A : D(A) ⊆ H → H be
a linear
