
Solutions Manual for
Probability and Random Processes for
Electrical and Computer Engineers
John A. Gubner
University of Wisconsin–Madison
File Generated July 13, 2007
CHAPTER 1
Problem Solutions
1. Ω = {1,2,3,4,5,6}.
2. Ω = {0,1,2, . . . ,24,25}.
3. Ω = [0,∞). The event RTT > 10 ms is (10,∞).
4. (a) Ω = {(x,y) ∈ IR2 : x2 + y2 ≤ 100}.
(b) {(x,y) ∈ IR2 : 4≤ x2 + y2 ≤ 25}.
5. (a) [2,3]c = (−∞,2)∪ (3,∞).
(b) (1,3)∪ (2,4) = (1,4).
(c) (1,3)∩ [2,4) = [2,3).
(d) (3,6]\ (5,7) = (3,5].
6. Sketches: [figures omitted — the solution consists of sketches, in the x–y plane, of the sets labeled B, B0, H3, J3, M3, and N3 in the original; the extracted axis labels are not reproducible in text.]
7. (a) [1,4]∩([0,2]∪[3,5]) = ([1,4]∩[0,2])∪([1,4]∩[3,5]) = [1,2]∪[3,4].
(b) ([0,1]∪[2,3])c = [0,1]c∩[2,3]c
= [(−∞,0)∪(1,∞)]∩[(−∞,2)∪(3,∞)]
= ((−∞,0)∩[(−∞,2)∪(3,∞)])∪((1,∞)∩[(−∞,2)∪(3,∞)])
= (−∞,0)∪(1,2)∪(3,∞).
(c) ⋂_{n=1}^∞ (−1/n, 1/n) = {0}.
(d) ⋂_{n=1}^∞ [0, 3 + 1/(2n)) = [0,3].
(e) ⋃_{n=1}^∞ [5, 7 − 1/(3n)] = [5,7).
(f) ⋃_{n=1}^∞ [0,n] = [0,∞).
8. We first let C ⊂ A and show that for all B, (A∩B)∪C = A∩ (B∪C). Write
A∩ (B∪C) = (A∩B)∪ (A∩C), by the distributive law,
= (A∩B)∪C, since C ⊂ A⇒ A∩C =C.
For the second part of the problem, suppose (A∩B)∪C= A∩(B∪C). We must show
that C ⊂ A. Let ω ∈ C. Then ω ∈ (A∩B)∪C. But then ω ∈ A∩ (B∪C), which
implies ω ∈ A.
9. Let I := {ω ∈Ω : ω ∈ A⇒ ω ∈ B}. We must show that A∩ I = A∩B.
⊂: Let ω ∈ A∩ I. Then ω ∈ A and ω ∈ I. Therefore, ω ∈ B, and then ω ∈ A∩B.
⊃: Let ω ∈ A∩B. Then ω ∈ A and ω ∈ B. We must show that ω ∈ I too. In other
words, we must show that ω ∈ A⇒ ω ∈ B. But we already have ω ∈ B.
10. The function f : (−∞,∞) → [0,∞) with f(x) = x³ is not well defined because not all values of f(x) lie in the claimed co-domain [0,∞).
11. (a) The function will be invertible if Y = [−1,1].
(b) {x : f(x) ≤ 1/2} = [−π/2, π/6].
(c) {x : f(x) < 0} = [−π/2, 0).
12. (a) Since f is not one-to-one, no choice of co-domain Y can make f : [0,π] → Y invertible.
(b) {x : f(x) ≤ 1/2} = [0,π/6]∪[5π/6,π].
(c) {x : f(x) < 0} = ∅.
13. For B ⊂ IR,
f−1(B) = X, if 0 ∈ B and 1 ∈ B,
       = A, if 1 ∈ B but 0 ∉ B,
       = Ac, if 0 ∈ B but 1 ∉ B,
       = ∅, if 0 ∉ B and 1 ∉ B.
14. Let f : X → Y be a function such that f takes only n distinct values, say y1, . . . ,yn. Let B ⊂ Y be such that f−1(B) is nonempty. By definition, each x ∈ f−1(B) has the property that f(x) ∈ B. But f(x) must be one of the values y1, . . . ,yn, say yi. Now f(x) = yi if and only if x ∈ Ai := f−1({yi}). Hence,
f−1(B) = ⋃_{i : yi ∈ B} Ai.
15. (a) f(x) ∈ Bc ⇔ f(x) ∉ B ⇔ x ∉ f−1(B) ⇔ x ∈ f−1(B)c.
(b) f(x) ∈ ⋃_{n=1}^∞ Bn if and only if f(x) ∈ Bn for some n; i.e., if and only if x ∈ f−1(Bn) for some n. But this says that x ∈ ⋃_{n=1}^∞ f−1(Bn).
(c) f(x) ∈ ⋂_{n=1}^∞ Bn if and only if f(x) ∈ Bn for all n; i.e., if and only if x ∈ f−1(Bn) for all n. But this says that x ∈ ⋂_{n=1}^∞ f−1(Bn).
16. If B = ⋃_i {bi} and C = ⋃_i {ci}, put a2i := bi and a2i−1 := ci. Then A = ⋃_i {ai} = B∪C is countable.
17. Since each Ci is countable, we can write Ci = ⋃_j {cij}. It then follows that
B := ⋃_{i=1}^∞ Ci = ⋃_{i=1}^∞ ⋃_{j=1}^∞ {cij}
is a doubly indexed sequence and is therefore countable as shown in the text.
18. Let A = ⋃_m {am} be a countable set, and let B ⊂ A. We must show that B is countable. If B = ∅, we’re done by definition. Otherwise, there is at least one element of B in A, say ak. Then put bn := an if an ∈ B, and put bn := ak if an ∉ B. Then ⋃_n {bn} = B and we see that B is countable.
19. Let A ⊂ B where A is uncountable. We must show that B is uncountable. We prove
this by contradiction. Suppose that B is countable. Then by the previous problem, A
is countable, contradicting the assumption that A is uncountable.
20. Suppose A is countable and B is uncountable. We must show that A∪B is uncountable.
We prove this by contradiction. Suppose that A∪B is countable. Then since B ⊂
A∪B, we would have B countable as well, contradicting the assumption that B is
uncountable.
21. MATLAB. OMITTED.
22. MATLAB. Intuitive explanation: Using only the numbers 1,2,3,4,5,6, consider how many ways there are to write the following numbers:
 2 = 1+1                                    1 way,  1/36 = 0.0278
 3 = 1+2 = 2+1                              2 ways, 2/36 = 0.0556
 4 = 1+3 = 2+2 = 3+1                        3 ways, 3/36 = 0.0833
 5 = 1+4 = 2+3 = 3+2 = 4+1                  4 ways, 4/36 = 0.1111
 6 = 1+5 = 2+4 = 3+3 = 4+2 = 5+1            5 ways, 5/36 = 0.1389
 7 = 1+6 = 2+5 = 3+4 = 4+3 = 5+2 = 6+1      6 ways, 6/36 = 0.1667
 8 = 2+6 = 3+5 = 4+4 = 5+3 = 6+2            5 ways, 5/36 = 0.1389
 9 = 3+6 = 4+5 = 5+4 = 6+3                  4 ways, 4/36 = 0.1111
10 = 4+6 = 5+5 = 6+4                        3 ways, 3/36 = 0.0833
11 = 5+6 = 6+5                              2 ways, 2/36 = 0.0556
12 = 6+6                                    1 way,  1/36 = 0.0278
                                    Total: 36 ways, 36/36 = 1
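Since the original MATLAB script is omitted here, a short Python sketch (an illustration, not part of the original solution) reproduces the table above by enumerating all 36 equally likely outcomes:

```python
from itertools import product
from fractions import Fraction

# Count how many of the 36 equally likely ordered outcomes (i, j)
# of two fair dice produce each sum s = 2, ..., 12.
counts = {s: 0 for s in range(2, 13)}
for i, j in product(range(1, 7), repeat=2):
    counts[i + j] += 1

for s in sorted(counts):
    print(s, counts[s], "ways,", Fraction(counts[s], 36))
```

The counts agree with the table: 6 ways for a sum of 7, down to 1 way each for 2 and 12.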
23. Take Ω := {1, . . . ,26} and put
P(A) := |A|/|Ω| = |A|/26.
The event that a vowel is chosen is V = {1,5,9,15,21}, and P(V) = |V|/26 = 5/26.
24. Let Ω := {(i, j) : 1 ≤ i, j ≤ 26 and i ≠ j}. For A ⊂ Ω, put P(A) := |A|/|Ω|. The event that a vowel is chosen followed by a consonant is
Bvc = {(i, j) ∈ Ω : i = 1,5,9,15, or 21 and j ∈ {1, . . . ,26}\{1,5,9,15,21}}.
Similarly, the event that a consonant is followed by a vowel is
Bcv = {(i, j) ∈ Ω : i ∈ {1, . . . ,26}\{1,5,9,15,21} and j = 1,5,9,15, or 21}.
We need to compute
P(Bvc∪Bcv) = (|Bvc| + |Bcv|)/|Ω| = [5·(26−5) + (26−5)·5]/650 = 21/65 ≈ 0.323.
The event that two vowels are chosen is
Bvv = {(i, j) ∈ Ω : i, j ∈ {1,5,9,15,21} with i ≠ j},
and P(Bvv) = |Bvv|/|Ω| = 20/650 = 2/65 ≈ 0.031.
25. MATLAB. The code for simulating the drawing of a face card is
% Simulation of Drawing a Face Card
%
n = 10000;                    % Number of draws.
X = ceil(52*rand(1,n));       % Uniform draws from 1..52.
faces = (41 <= X & X <= 52);  % Cards 41-52 represent the 12 face cards.
nfaces = sum(faces);
fprintf('There were %g face cards in %g draws.\n',nfaces,n)
26. Since 9 pm to 7 am is 10 hours, take Ω := [0,10]. The probability that the baby wakes up during a time interval 0 ≤ t1 < t2 ≤ 10 is
P([t1, t2]) := ∫_{t1}^{t2} (1/10) dω.
Hence, P([2,10]c) = P([0,2]) = ∫_0^2 (1/10) dω = 1/5.
27. Starting with the equations
SN = 1 + z + z² + ··· + z^{N−2} + z^{N−1},
zSN = z + z² + ··· + z^{N−2} + z^{N−1} + z^N,
subtract the second line from the first. Canceling common terms leaves
SN − zSN = 1 − z^N, or SN(1 − z) = 1 − z^N.
If z ≠ 1, we can divide both sides by 1 − z to get SN = (1 − z^N)/(1 − z).
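As a quick numerical check of the closed form (not part of the original solution), one can compare SN = (1 − z^N)/(1 − z) against a direct sum:

```python
# Verify the finite geometric-series formula S_N = (1 - z**N)/(1 - z)
# against a direct sum, for several values of z != 1.
def geometric_sum(z, N):
    return (1 - z**N) / (1 - z)

for z in (0.5, -2.0, 3.0):
    for N in (1, 5, 10):
        direct = sum(z**n for n in range(N))
        assert abs(direct - geometric_sum(z, N)) < 1e-9
print("closed form matches direct sum")
```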
28. Let x = p(1). Then p(2) = 2p(1) = 2x, p(3) = 2p(2) = 2²x, p(4) = 2p(3) = 2³x, p(5) = 2⁴x, and p(6) = 2⁵x. In general, p(ω) = 2^{ω−1}x and we can write
1 = ∑_{ω=1}^6 p(ω) = ∑_{ω=1}^6 2^{ω−1}x = x ∑_{ω=0}^5 2^ω = [(1−2⁶)/(1−2)]x = 63x.
Hence, x = 1/63, and p(ω) = 2^{ω−1}/63 for ω = 1, . . . ,6.
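A two-line check of this pmf with exact rational arithmetic (an illustration, not in the original):

```python
from fractions import Fraction

# p(w) = 2**(w-1)/63 for w = 1, ..., 6.
p = {w: Fraction(2**(w - 1), 63) for w in range(1, 7)}
assert sum(p.values()) == 1                            # probabilities sum to one
assert all(p[w + 1] == 2 * p[w] for w in range(1, 6))  # p(w+1) = 2 p(w)
print(p[6])  # the most likely face, 32/63
```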
29. (a) By inclusion–exclusion, P(A∪B) = P(A) + P(B) − P(A∩B), which can be rearranged as P(A∩B) = P(A) + P(B) − P(A∪B).
(b) Since P(A) = P(A∩B) + P(A∩Bc),
P(A∩Bc) = P(A) − P(A∩B) = P(A∪B) − P(B), by part (a).
(c) Since B and A∩Bc are disjoint,
P(B∪(A∩Bc)) = P(B) + P(A∩Bc) = P(A∪B), by part (b).
(d) By De Morgan’s law, P(Ac∩Bc) = P([A∪B]c) = 1 − P(A∪B).
30. We must check the four axioms of a probability measure. First,
P(∅) = λP1(∅) + (1−λ)P2(∅) = λ·0 + (1−λ)·0 = 0.
Second,
P(A) = λP1(A) + (1−λ)P2(A) ≥ λ·0 + (1−λ)·0 = 0.
Third,
P(⋃_{n=1}^∞ An) = λP1(⋃_{n=1}^∞ An) + (1−λ)P2(⋃_{n=1}^∞ An)
= λ ∑_{n=1}^∞ P1(An) + (1−λ) ∑_{n=1}^∞ P2(An)
= ∑_{n=1}^∞ [λP1(An) + (1−λ)P2(An)]
= ∑_{n=1}^∞ P(An).
Fourth, P(Ω) = λP1(Ω) + (1−λ)P2(Ω) = λ + (1−λ) = 1.
31. First, since ω0 ∉ ∅, µ(∅) = 0. Second, by definition, µ(A) ≥ 0. Third, for disjoint An, suppose ω0 ∈ ⋃_n An. Then ω0 ∈ Am for some m, and ω0 ∉ An for n ≠ m. Then µ(Am) = 1 and µ(An) = 0 for n ≠ m. Hence, µ(⋃_n An) = 1 and ∑_n µ(An) = µ(Am) = 1. A similar analysis shows that if ω0 ∉ ⋃_n An, then µ(⋃_n An) and ∑_n µ(An) are both zero. Finally, since ω0 ∈ Ω, µ(Ω) = 1.
32. Starting with the assumption that for any two disjoint events A and B, P(A∪B) = P(A) + P(B), we have that for N = 2,
P(⋃_{n=1}^N An) = ∑_{n=1}^N P(An).   (∗)
Now we must show that if (∗) holds for any N ≥ 2, then (∗) holds for N+1. Write
P(⋃_{n=1}^{N+1} An) = P([⋃_{n=1}^N An] ∪ AN+1)
= P(⋃_{n=1}^N An) + P(AN+1), additivity for two events,
= ∑_{n=1}^N P(An) + P(AN+1), by (∗),
= ∑_{n=1}^{N+1} P(An).
33. Since An := Fn∩Fcn−1∩···∩Fc1 ⊂ Fn, it is easy to see that
⋃_{n=1}^N An ⊂ ⋃_{n=1}^N Fn.
The hard part is to show the reverse inclusion ⊃. Suppose ω ∈ ⋃_{n=1}^N Fn. Then ω ∈ Fn for some n in the range 1, . . . ,N. However, ω may belong to Fn for several values of n since the Fn may not be disjoint. Let
k := min{n : ω ∈ Fn and 1 ≤ n ≤ N}.
In other words, 1 ≤ k ≤ N and ω ∈ Fk, but ω ∉ Fn for n < k; in symbols,
ω ∈ Fk∩Fck−1∩···∩Fc1 =: Ak.
Hence, ω ∈ Ak ⊂ ⋃_{n=1}^N An. The proof that ⋃_{n=1}^∞ An ⊂ ⋃_{n=1}^∞ Fn is similar except that
k := min{n : ω ∈ Fn and n ≥ 1}.
34. For arbitrary events Fn, let An be as in the preceding problem. We can then write
P(⋃_{n=1}^∞ Fn) = P(⋃_{n=1}^∞ An)
= ∑_{n=1}^∞ P(An), since the An are disjoint,
= lim_{N→∞} ∑_{n=1}^N P(An), by def. of infinite sum,
= lim_{N→∞} P(⋃_{n=1}^N An)
= lim_{N→∞} P(⋃_{n=1}^N Fn).
35. For arbitrary events Gn, put Fn := Gcn. Then
P(⋂_{n=1}^∞ Gn) = 1 − P(⋃_{n=1}^∞ Fn), by De Morgan’s law,
= 1 − lim_{N→∞} P(⋃_{n=1}^N Fn), by the preceding problem,
= 1 − lim_{N→∞} [1 − P(⋂_{n=1}^N Gn)], by De Morgan’s law,
= lim_{N→∞} P(⋂_{n=1}^N Gn).
36. By the inclusion–exclusion formula,
P(A∪B) = P(A) + P(B) − P(A∩B) ≤ P(A) + P(B).
This establishes the union bound for N = 2. Now suppose the union bound holds for some N ≥ 2. We must show it holds for N+1. Write
P(⋃_{n=1}^{N+1} Fn) = P([⋃_{n=1}^N Fn] ∪ FN+1)
≤ P(⋃_{n=1}^N Fn) + P(FN+1), by the union bound for two events,
≤ ∑_{n=1}^N P(Fn) + P(FN+1), by the union bound for N events,
= ∑_{n=1}^{N+1} P(Fn).
37. To establish the union bound for a countable sequence of events, we proceed as follows. Let An := Fn∩Fcn−1∩···∩Fc1 ⊂ Fn be disjoint with ⋃_{n=1}^∞ An = ⋃_{n=1}^∞ Fn. Then
P(⋃_{n=1}^∞ Fn) = P(⋃_{n=1}^∞ An)
= ∑_{n=1}^∞ P(An), since the An are disjoint,
≤ ∑_{n=1}^∞ P(Fn), since An ⊂ Fn.
38. Following the hint, we put Gn := ⋃_{k=n}^∞ Bk so that we can write
P(⋂_{n=1}^∞ ⋃_{k=n}^∞ Bk) = P(⋂_{n=1}^∞ Gn)
= lim_{N→∞} P(⋂_{n=1}^N Gn), limit property of P,
= lim_{N→∞} P(GN), since Gn ⊃ Gn+1,
= lim_{N→∞} P(⋃_{k=N}^∞ Bk), definition of GN,
≤ lim_{N→∞} ∑_{k=N}^∞ P(Bk), union bound.
This last limit must be zero since ∑_{k=1}^∞ P(Bk) < ∞.
39. In this problem, the probability of an interval is its length.
(a) P(A0) = 1, P(A1) = 2/3, P(A2) = 4/9 = (2/3)², and P(A3) = 8/27 = (2/3)³.
(b) P(An) = (2/3)^n.
(c) Write
P(A) = P(⋂_{n=1}^∞ An)
= lim_{N→∞} P(⋂_{n=1}^N An), limit property of P,
= lim_{N→∞} P(AN), since An ⊃ An+1,
= lim_{N→∞} (2/3)^N = 0.
40. Consider the collection consisting of the empty set along with all unions of the form ⋃_i Aki for some finite subsequence of distinct elements from {1, . . . ,n}. We first show that this collection is a σ-field. First, it contains ∅ by definition. Second, since A1, . . . ,An is a partition,
(⋃_i Aki)c = ⋃_i Ami,
where mi is the subsequence {1, . . . ,n}\{ki}. Hence, the collection is closed under complementation. Third,
⋃_{n=1}^∞ (⋃_i Akn,i) = ⋃_j Amj,
where an integer l ∈ {1, . . . ,n} is in {mj} if and only if kn,i = l for some n and some i. This shows that the collection is a σ-field. Finally, since every element in our collection must be contained in every σ-field that contains A1, . . . ,An, our collection must be σ(A1, . . . ,An).
41. We claim that A is not a σ-field. Our proof is by contradiction: We assume A is a σ-field and derive a contradiction. Consider the set
⋂_{n=1}^∞ [0, 1/2^n) = {0}.
Since [0, 1/2^n) ∈ Cn ⊂ An ⊂ A, the intersection must be in A since we are assuming A is a σ-field. Hence, {0} ∈ A. Now, any set in A must belong to some An. By the preceding problem, every set in An must be a finite union of sets from Cn. However, the singleton set {0} cannot be expressed as a finite union of sets from any Cn. Hence, {0} ∉ A.
42. Let Ai := X−1({xi}) for i = 1, . . . ,n. By the problems mentioned in the hint, for any subset B, if X−1(B) ≠ ∅, then
X−1(B) = ⋃_{i : xi ∈ B} Ai ∈ σ(A1, . . . ,An).
It follows that the smallest σ-field containing all the X−1(B) is σ(A1, . . . ,An).
43. (a) F = {∅, A, B, {3}, {1,2}, {4,5}, {1,2,4,5}, Ω}.
(b) The corresponding probabilities are 0, 5/8, 7/8, 1/2, 1/8, 3/8, 1/2, 1.
(c) Since {1} ∉ F, P({1}) is not defined.
44. Suppose that a σ-field A contains an infinite sequence Fn of sets. If the sequence is not disjoint, we can construct a new sequence An that is disjoint with each An ∈ A. Let a = a1,a2, . . . be an infinite sequence of zeros and ones. Then A contains each union of the form
⋃_{i : ai = 1} Ai.
Furthermore, since the Ai are disjoint, each sequence a gives a different union, and we know from the text that the number of infinite sequences a is uncountably infinite.
45. (a) First, since ∅ is in each Aα, ∅ ∈ ⋂_α Aα. Second, if A ∈ ⋂_α Aα, then A ∈ Aα for each α, and so Ac ∈ Aα for each α. Hence, Ac ∈ ⋂_α Aα. Third, if An ∈ A for all n, then for each n and each α, An ∈ Aα. Then ⋃_n An ∈ Aα for each α, and so ⋃_n An ∈ ⋂_α Aα.
(b) We first note that
A1 = {∅,{1},{2},{3,4},{2,3,4},{1,3,4},{1,2},Ω}
and
A2 = {∅,{1},{3},{2,4},{2,3,4},{1,2,4},{1,3},Ω}.
It is then easy to see that
A1∩A2 = {∅,{1},{2,3,4},Ω}.
(c) First note that by part (a), ⋂_{A : C⊂A} A is a σ-field, and since C ⊂ A for each A, the σ-field ⋂_{A : C⊂A} A contains C. Finally, if D is any σ-field that contains C, then D is one of the A’s in the intersection. Hence,
C ⊂ ⋂_{A : C⊂A} A ⊂ D.
Thus ⋂_{A : C⊂A} A is the smallest σ-field that contains C.
46. The union of two σ-fields is not always a σ-field. Here is an example. Let Ω := {1,2,3,4}, and put
F := {∅,{1,2},{3,4},Ω} and G := {∅,{1,3},{2,4},Ω}.
Then
F ∪G = {∅,{1,2},{3,4},{1,3},{2,4},Ω}
is not a σ-field since it does not contain {1,2}∩{1,3} = {1}.
47. Let Ω denote the positive integers, and let A denote the collection of subsets A such that either A or Ac is finite.
(a) Let E denote the subset of even integers. Then E does not belong to A since neither E nor Ec (the odd integers) is a finite set.
(b) To show that A is closed under finite unions, we consider two cases. First suppose that A1, . . . ,An are all finite. Then
|⋃_{i=1}^n Ai| ≤ ∑_{i=1}^n |Ai| < ∞,
and so ⋃_{i=1}^n Ai ∈ A. In the second case, suppose that some Acj is finite. Then
(⋃_{i=1}^n Ai)c = ⋂_{i=1}^n Aci ⊂ Acj.
Hence, the complement of ⋃_{i=1}^n Ai is finite, and so the union belongs to A.
(c) A is not a σ-field. To see this, put Ai := {2i} for i = 1,2, . . . . Then ⋃_{i=1}^∞ Ai = E ∉ A by part (a).
48. Let Ω be an uncountable set. Let A denote the collection of all subsets A such that either A is countable or Ac is countable. We show that A is a σ-field. First, the empty set is countable. Second, if A ∈ A, we must show that Ac ∈ A. There are two cases. If A is countable, then the complement of Ac is A, and so Ac ∈ A. If Ac is countable, then Ac ∈ A. Third, let A1,A2, . . . belong to A. There are two cases to consider. If all An are countable, then ⋃_n An is also countable by an earlier problem. Otherwise, if some Acm is countable, then write
(⋃_{n=1}^∞ An)c = ⋂_{n=1}^∞ Acn ⊂ Acm.
Since a subset of a countable set is countable, we see that the complement of ⋃_{n=1}^∞ An is countable, and thus the union belongs to A.
49. (a) Since (a,b] = ⋂_{n=1}^∞ (a, b + 1/n), and since each (a, b + 1/n) ∈ B, (a,b] ∈ B.
(b) Since {a} = ⋂_{n=1}^∞ (a − 1/n, a + 1/n), and since each (a − 1/n, a + 1/n) ∈ B, the singleton {a} ∈ B.
(c) Since by part (b), singleton sets are Borel sets, and since A is a countable union of Borel sets, A ∈ B; i.e., A is a Borel set.
(d) Using part (a), write
λ((a,b]) = λ(⋂_{n=1}^∞ (a, b + 1/n))
= lim_{N→∞} λ(⋂_{n=1}^N (a, b + 1/n)), limit property of probability,
= lim_{N→∞} λ((a, b + 1/N)), decreasing sets,
= lim_{N→∞} (b + 1/N) − a, characterization of λ,
= b − a.
Similarly, using part (b), we can write
λ({a}) = λ(⋂_{n=1}^∞ (a − 1/n, a + 1/n))
= lim_{N→∞} λ(⋂_{n=1}^N (a − 1/n, a + 1/n)), limit property of probability,
= lim_{N→∞} λ((a − 1/N, a + 1/N)), decreasing sets,
= lim_{N→∞} 2/N, characterization of λ,
= 0.
50. Let I denote the collection of open intervals, and let O denote the collection of open sets. We need to show that σ(I) = σ(O). Since I ⊂ O, every σ-field containing O also contains I. Hence, the smallest σ-field containing O contains I; i.e., I ⊂ σ(O). By the definition of the smallest σ-field containing I, it follows that σ(I) ⊂ σ(O). Now, if we can show that O ⊂ σ(I), then it will similarly follow that σ(O) ⊂ σ(I). Recall that in the problem statement, it was shown that every open set U can be written as a countable union of open intervals. This means U ∈ σ(I). This proves that O ⊂ σ(I) as required.
51. MATLAB. Chips from S1 are 80% reliable; chips from S2 are 70% reliable.
52. Observe that
N(Ow,S1) = N(OS1) − N(Od,S1) = N(OS1)(1 − N(Od,S1)/N(OS1))
and
N(Ow,S2) = N(OS2) − N(Od,S2) = N(OS2)(1 − N(Od,S2)/N(OS2)).
53. First write
P(A|B∩C)P(B|C) = [P(A∩[B∩C])/P(B∩C)] · [P(B∩C)/P(C)] = P([A∩B]∩C)/P(C) = P(A∩B|C).
From this formula, we can isolate the equation
P(A|B∩C)P(B|C) = P([A∩B]∩C)/P(C).
Multiplying through by P(C) yields P(A|B∩C)P(B|C)P(C) = P(A∩B∩C).
54. (a) P(MM) = 140/(140+60) = 140/200 = 14/20 = 7/10 = 0.7. Then P(HT) = 1 − P(MM) = 0.3.
(b) Let D denote the event that a workstation is defective. Then
P(D) = P(D|MM)P(MM) + P(D|HT)P(HT) = (.1)(.7) + (.2)(.3) = .07 + .06 = 0.13.
(c) Write
P(MM|D) = P(D|MM)P(MM)/P(D) = .07/.13 = 7/13.
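Parts (b) and (c) are a standard total-probability/Bayes computation; the arithmetic can be checked exactly (a sketch, not from the original):

```python
from fractions import Fraction

# Conditional defect probabilities and manufacturer priors from the problem.
p_def = {"MM": Fraction(1, 10), "HT": Fraction(2, 10)}
prior = {"MM": Fraction(7, 10), "HT": Fraction(3, 10)}

# Law of total probability: P(D).
p_d = sum(p_def[m] * prior[m] for m in prior)
# Bayes' rule: P(MM | D).
p_mm_d = p_def["MM"] * prior["MM"] / p_d

print(p_d, p_mm_d)  # 13/100 and 7/13
```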
55. Let O denote the event that a cell is overloaded, and let B denote the event that a call is blocked. The problem statement tells us that
P(O) = 1/3, P(B|O) = 3/10, and P(B|Oc) = 1/10.
To find P(O|B), first write
P(O|B) = P(B|O)P(O)/P(B) = (3/10 · 1/3)/P(B).
Next compute
P(B) = P(B|O)P(O) + P(B|Oc)P(Oc) = (3/10)(1/3) + (1/10)(2/3) = 5/30 = 1/6.
We conclude that
P(O|B) = (1/10)/(1/6) = 6/10 = 3/5 = 0.6.
56. The problem statement tells us that P(R1|T0) = ε and P(R0|T1) = δ. We also know that
1 = P(Ω) = P(T0∪T1) = P(T0) + P(T1).
The problem statement tells us that these last two probabilities are the same; hence they are both equal to 1/2. To find P(T1|R1), we begin by writing
P(T1|R1) = P(R1|T1)P(T1)/P(R1).
Next, we note that P(R1|T1) = 1 − P(R0|T1) = 1 − δ. By the law of total probability,
P(R1) = P(R1|T1)P(T1) + P(R1|T0)P(T0) = (1−δ)(1/2) + ε(1/2) = (1−δ+ε)/2.
So,
P(T1|R1) = (1−δ)(1/2) / [(1−δ+ε)/2] = (1−δ)/(1−δ+ε).
57. Let H denote the event that a student does the homework, and let E denote the event that a student passes the exam. Then the problem statement tells us that
P(E|H) = .8, P(E|Hc) = .1, and P(H) = .6.
We need to compute P(E) and P(H|E). To begin, write
P(E) = P(E|H)P(H) + P(E|Hc)P(Hc) = (.8)(.6) + (.1)(1−.6) = .48 + .04 = .52.
Next,
P(H|E) = P(E|H)P(H)/P(E) = .48/.52 = 12/13.
58. The problem statement tells us that
P(AF|CF) = 1/3, P(AF|CFc) = 1/10, and P(CF) = 1/4.
We must compute
P(CF|AF) = P(AF|CF)P(CF)/P(AF) = (1/3)(1/4)/P(AF) = (1/12)/P(AF).
To compute the denominator, write
P(AF) = P(AF|CF)P(CF) + P(AF|CFc)P(CFc) = (1/3)(1/4) + (1/10)(1 − 1/4) = 1/12 + 3/40 = 19/120.
It then follows that
P(CF|AF) = (1/12)·(120/19) = 10/19.
59. Let F denote the event that a patient receives a flu shot. Let S, M, and R denote the events that Sue, Minnie, or Robin sees the patient. The problem tells us that
P(S) = .2, P(M) = .4, P(R) = .4, P(F|S) = .6, P(F|M) = .3, and P(F|R) = .1.
We must compute
P(S|F) = P(F|S)P(S)/P(F) = (.6)(.2)/P(F) = .12/P(F).
Next,
P(F) = P(F|S)P(S) + P(F|M)P(M) + P(F|R)P(R) = (.6)(.2) + (.3)(.4) + (.1)(.4) = .12 + .12 + .04 = 0.28.
Thus,
P(S|F) = (12/100)·(100/28) = 3/7.
60. (a) Let Ω = {1,2,3,4,5} with P(A) := |A|/|Ω|. Without loss of generality, let 1 and 2 correspond to the two defective chips. Then D := {1,2} is the event that a defective chip is tested. Hence, P(D) = |D|/5 = 2/5.
(b) Your friend’s information tells you that of the three chips you may test, one is defective and two are not. Hence, the conditional probability that the chip you test is defective is 1/3.
(c) Yes, your intuition is correct. To prove this, we construct a sample space and probability measure and compute the desired conditional probability. Let
Ω := {(i, j,k) : i < j and k ≠ i, k ≠ j},
where i, j,k ∈ {1,2,3,4,5}. Here i and j are the chips taken by the friend, and k is the chip that you test. We again take 1 and 2 to be the defective chips. The 10 possibilities for i and j are
12 13 14 15
23 24 25
34 35
45
For each pair in the above table, there are three possible values of k:
345 245 235 234
145 135 134
125 124
123
Hence, there are 30 triples in Ω. For the probability measure we take P(A) := |A|/|Ω|. Now let Fij denote the event that the friend takes chips i and j with i < j. For example, if the friend takes chips 1 and 2, then from the second table, k has to be 3 or 4 or 5; i.e.,
F12 = {(1,2,3),(1,2,4),(1,2,5)}.
The event that the friend takes two chips is then
T := F12∪F13∪F14∪F15∪F23∪F24∪F25∪F34∪F35∪F45.
Now the event that you test a defective chip is
D := {(i, j,k) : k = 1 or 2 and i < j with i, j ≠ k}.
We can now compute
P(D|T) = P(D∩T)/P(T).
Since the Fij that make up T are disjoint, |T| = 10·3 = 30 and P(T) = |T|/|Ω| = 1. We next observe that
D∩T = ∅ ∪ [D∩F13] ∪ [D∩F14] ∪ [D∩F15] ∪ [D∩F23] ∪ [D∩F24] ∪ [D∩F25] ∪ [D∩F34] ∪ [D∩F35] ∪ [D∩F45].
Of the above intersections, the first six are singleton sets, and the last three are pairs. Hence, |D∩T| = 6·1 + 3·2 = 12, and so P(D∩T) = 12/30 = 2/5. We conclude that P(D|T) = P(D∩T)/P(T) = (2/5)/1 = 2/5, which is the answer in part (a).
Remark. The model in part (c) can be used to solve part (b) by observing that the probability in part (b) is
P(D|F12∪F13∪F14∪F15∪F23∪F24∪F25),
which can be similarly evaluated.
61. (a) If two sets A and B are disjoint, then by definition, A∩B = ∅.
(b) If two events A and B are independent, then by definition, P(A∩B) = P(A)P(B).
(c) If two events A and B are disjoint, then P(A∩B) = P(∅) = 0. In order for them to be independent, we must have P(A)P(B) = 0; i.e., at least one of the two events must have zero probability. If two disjoint events both have positive probability, then they cannot be independent.
62. Let W denote the event that the decoder outputs the wrong message. Of course, Wc is the event that the decoder outputs the correct message. We must find P(W) = 1 − P(Wc). Now, Wc occurs if only the first bit is flipped, or only the second bit is flipped, or only the third bit is flipped, or if no bits are flipped. Denote these disjoint events by F100, F010, F001, and F000, respectively. Then
P(Wc) = P(F100∪F010∪F001∪F000)
= P(F100) + P(F010) + P(F001) + P(F000)
= p(1−p)² + (1−p)p(1−p) + (1−p)²p + (1−p)³
= 3p(1−p)² + (1−p)³.
Hence,
P(W) = 1 − 3p(1−p)² − (1−p)³ = 3p² − 2p³.
If p = 0.1, then P(W) = 0.03 − 0.002 = 0.028.
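The 3p² − 2p³ expression can be confirmed by brute-force enumeration of the eight flip patterns (illustrative only, not part of the original solution):

```python
from itertools import product

def p_wrong(p):
    # A majority-vote 3-bit repetition decoder errs exactly when
    # 2 or 3 of the bits are flipped.
    total = 0.0
    for flips in product([0, 1], repeat=3):
        prob = 1.0
        for f in flips:
            prob *= p if f else (1 - p)
        if sum(flips) >= 2:
            total += prob
    return total

assert abs(p_wrong(0.1) - 0.028) < 1e-12
print(p_wrong(0.1))
```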
63. Let Ai denote the event that your phone selects channel i, i = 1, . . . ,10. Let Bi denote the event that your neighbor’s phone selects channel i, i = 1, . . . ,10. Let P(Ai) = P(Bi) = 1/10, and assume the Ai and Bi are independent. Then the probability that both phones select the same channel is
P(⋃_{i=1}^{10} [Ai∩Bi]) = ∑_{i=1}^{10} P(Ai∩Bi) = ∑_{i=1}^{10} P(Ai)P(Bi) = ∑_{i=1}^{10} (1/10)·(1/10) = 0.1.
64. Let L denote the event that the left airbag works properly, and let R denote the event that the right airbag works properly. Assume L and R are independent with P(Lc) = P(Rc) = p. The probability that at least one airbag works properly is
P(L∪R) = 1 − P(Lc∩Rc) = 1 − P(Lc)P(Rc) = 1 − p².
65. The probability that the dart never lands within 2 cm of the center is
P(⋂_{n=1}^∞ Acn) = lim_{N→∞} P(⋂_{n=1}^N Acn), limit property of P,
= lim_{N→∞} ∏_{n=1}^N P(Acn), independence,
= lim_{N→∞} ∏_{n=1}^N (1−p)
= lim_{N→∞} (1−p)^N = 0.
66. Let Wi denote the event that you win on your ith play of the lottery. The probability that you win at least once in n plays is
P(⋃_{i=1}^n Wi) = 1 − P(⋂_{i=1}^n Wci)
= 1 − ∏_{i=1}^n P(Wci), by independence,
= 1 − ∏_{i=1}^n (1−p) = 1 − (1−p)^n.
We need to choose n so that 1 − (1−p)^n > 1/2, which happens if and only if
1/2 > (1−p)^n, or −ln 2 > n ln(1−p), or −ln 2 / ln(1−p) < n,
where the last step uses the fact that ln(1−p) is negative. For p = 10⁻⁶, we need n > 693147.
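The cutoff can be computed directly (a sketch; the exact integer depends on whether one uses ln(1−p) or the approximation −p):

```python
import math

def plays_needed(p):
    # Smallest n with 1 - (1-p)**n > 1/2, i.e. n > -ln 2 / ln(1 - p).
    return math.floor(-math.log(2) / math.log1p(-p)) + 1

n = plays_needed(1e-6)
assert 1 - (1 - 1e-6)**n > 0.5        # n plays suffice
assert 1 - (1 - 1e-6)**(n - 1) <= 0.5  # n - 1 plays do not
print(n)  # roughly ln 2 / p, about 6.93e5 plays for p = 1e-6
```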
67. Let A denote the event that Anne catches no fish, and let B denote the event that Betty catches no fish. Assume A and B are independent with P(A) = P(B) = p. We must compute
P(A|A∪B) = P(A∩[A∪B])/P(A∪B) = P(A)/P(A∪B),
where the last step uses the fact that A ⊂ A∪B. To compute the denominator, write
P(A∪B) = 1 − P(Ac∩Bc) = 1 − P(Ac)P(Bc) = 1 − (1−p)² = 2p − p² = p(2−p).
Then
P(A|A∪B) = p/[p(2−p)] = 1/(2−p).
68. We show that A and B\C are independent as follows. First, since C ⊂ B,
P(B) = P(C) + P(B\C).
Next, since A and B are independent and since A and C are independent,
P(A∩B) = P(A)P(B) = P(A)[P(C) + P(B\C)] = P(A∩C) + P(A)P(B\C).
Again using the fact that C ⊂ B, we now write
P(A∩B) = P(A∩[C∪B\C]) = P(A∩C) + P(A∩B\C).
It follows that P(A∩B\C) = P(A)P(B\C), which establishes the claimed independence.
69. We show that A, B, and C are mutually independent. To begin, note that P(A) =
P(B) = P(C) = 1/2. Next, we need to identify the events
A∩B = [0,1/4)
A∩C = [0,1/8)∪ [1/4,3/8)
B∩C = [0,1/8)∪ [1/2,5/8)
A∩B∩C = [0,1/8)
so that we can compute
P(A∩B) = P(A∩C) = P(B∩C) = 1/4 and P(A∩B∩C) = 1/8.
We find that
P(A∩B) = P(A)P(B), P(A∩C) = P(A)P(C), P(B∩C) = P(B)P(C),
and
P(A∩B∩C) = P(A)P(B)P(C).
70. From a previous problem we have that P(A∩C|B) = P(A|B∩C)P(C|B). Hence,
P(A∩C|B) = P(A|B)P(C|B) if and only if P(A|B∩C) = P(A|B).
71. We show that the probability of the complementary event is zero. By the union bound,
P(⋃_{n=1}^∞ ⋂_{k=n}^∞ Bck) ≤ ∑_{n=1}^∞ P(⋂_{k=n}^∞ Bck).
We show that every term on the right is zero. Write
P(⋂_{k=n}^∞ Bck) = lim_{N→∞} P(⋂_{k=n}^N Bck), limit property of P,
= lim_{N→∞} ∏_{k=n}^N P(Bck), independence,
= lim_{N→∞} ∏_{k=n}^N [1 − P(Bk)]
≤ lim_{N→∞} ∏_{k=n}^N exp[−P(Bk)], by the hint,
= lim_{N→∞} exp[−∑_{k=n}^N P(Bk)]
= exp[−lim_{N→∞} ∑_{k=n}^N P(Bk)], since exp is continuous,
= exp[−∑_{k=n}^∞ P(Bk)] = e^{−∞} = 0,
where the second-to-last step uses the fact that ∑_{k=1}^∞ P(Bk) = ∞ ⇒ ∑_{k=n}^∞ P(Bk) = ∞.
72. There are 3·5·7 = 105 possible systems.
73. There are 2^n n-bit numbers.
74. There are 100! different orderings of the 100 message packets. In order that the first header packet to be received is the 10th packet to arrive, the first 9 packets to be received must come from the 96 data packets, the 10th packet must come from the 4 header packets, and the remaining 90 packets can be in any order. More specifically, there are 96 possibilities for the first packet, 95 for the second, . . . , 88 for the ninth, 4 for the tenth, and 90! for the remaining 90 packets. Hence, the desired probability is
(96···88·4·90!)/100! = (96···88·4)/(100···91) = (90·89·88·4)/(100·99·98·97) = 0.02996.
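The fraction in problem 74 can be verified exactly (illustrative, not part of the original):

```python
from math import prod
from fractions import Fraction

# First 9 received packets come from the 96 data packets, the 10th from the
# 4 header packets; the 90! orderings of the rest cancel top and bottom.
num = prod(range(88, 97)) * 4    # 96*95*...*88 * 4
den = prod(range(91, 101))       # 100*99*...*91
p = Fraction(num, den)
assert p == Fraction(90 * 89 * 88 * 4, 100 * 99 * 98 * 97)
print(float(p))  # about 0.02996
```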
75. C(5,2) = 10 pictures are needed.
76. Suppose the player chooses distinct digits wxyz. The player wins if any of the 4! = 24 permutations of wxyz occurs. Since each permutation has probability 1/10000 of occurring, the probability of winning is 24/10000 = 0.0024.
77. There are C(8,3) = 56 8-bit words with 3 ones (and 5 zeros).
78. The probability that a random byte has 4 ones and 4 zeros is C(8,4)/2⁸ = 70/256 = 0.2734.
79. In the first case, since the prizes are different, order is important. Hence, there are 41·40·39 = 63960 outcomes. In the second case, since the prizes are the same, order is not important. Hence, there are C(41,3) = 10660 outcomes.
80. There are C(52,14) possible hands. Since the deck contains 13 spades, 13 hearts, 13 diamonds, and 13 clubs, there are C(13,2)·C(13,3)·C(13,4)·C(13,5) hands with 2 spades, 3 hearts, 4 diamonds, and 5 clubs. The probability of such a hand is
C(13,2)·C(13,3)·C(13,4)·C(13,5)/C(52,14) = 0.0116.
81. All five cards are of the same suit if and only if they are all spades or all hearts or all diamonds or all clubs. These are four disjoint events. Hence, the answer is four times the probability of getting all spades:
4·C(13,5)/C(52,5) = 4·(1287/2598960) = 0.00198.
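Both card-hand probabilities in problems 80 and 81 follow directly from binomial coefficients; a quick check (illustrative only):

```python
from math import comb

# Problem 80: 2 spades, 3 hearts, 4 diamonds, 5 clubs in a 14-card hand.
p80 = comb(13, 2) * comb(13, 3) * comb(13, 4) * comb(13, 5) / comb(52, 14)
# Problem 81: five cards all of one suit.
p81 = 4 * comb(13, 5) / comb(52, 5)

assert comb(13, 5) == 1287 and comb(52, 5) == 2598960
print(round(p80, 4), round(p81, 5))  # 0.0116 and 0.00198
```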
82. There are C(n; k1, . . . ,km) = n!/(k1!···km!) such partitions.
83. The general result is C(n; k0, . . . ,km−1)/m^n. When n = 4 and m = 10 and a player chooses xxyz, we compute C(4; 2,1,1)/10000 = 12/10000 = 0.0012. For xxyy, we compute C(4; 2,2)/10000 = 6/10000 = 0.0006. For xxxy, we compute C(4; 3,1)/10000 = 4/10000 = 0.0004.
84. Two apples and three carrots corresponds to (0,0,1,1,0,0,0). Five apples corre-
sponds to (0,0,0,0,0,1,1).
CHAPTER 2
Problem Solutions
1. (a) {ω : X(ω)≤ 3}= {1,2,3}.
(b) {ω : X(ω) > 4}= {5,6}.
(c) P(X ≤ 3) = P(X = 1)+P(X = 2)+P(X = 3) = 3(2/15) = 2/5, and
P(X > 4) = P(X = 5)+P(X = 6) = 2/15+1/3= 7/15.
2. (a) {ω : X(ω) = 2}= {1,2,3,4}.
(b) {ω : X(ω) = 1}= {41,42, . . . ,52}.
(c) P(X = 1 or X = 2) = P({1,2,3,4}∪{41,42, . . . ,52}). Since these are disjoint
events, the probability of their union is 4/52+12/52 = 16/52= 4/13.
3. (a) {ω ∈ [0,∞) : X(ω) ≤ 1} = [0,1].
(b) {ω ∈ [0,∞) : X(ω) ≤ 3} = [0,3].
(c) P(X ≤ 1) = ∫_0^1 e^{−ω} dω = 1 − e^{−1}. Similarly, P(X ≤ 3) = 1 − e^{−3}, and P(1 < X ≤ 3) = P(X ≤ 3) − P(X ≤ 1) = e^{−1} − e^{−3}.
4. First, since X−1(∅) = ∅, µ(∅) = P(X−1(∅)) = P(∅) = 0. Second, µ(B) = P(X−1(B)) ≥ 0. Third, for disjoint Bn,
µ(⋃_{n=1}^∞ Bn) = P(X−1(⋃_{n=1}^∞ Bn)) = P(⋃_{n=1}^∞ X−1(Bn)) = ∑_{n=1}^∞ P(X−1(Bn)) = ∑_{n=1}^∞ µ(Bn).
Fourth, µ(IR) = P(X−1(IR)) = P(Ω) = 1.
5. Since
P(Y > n−1) = ∑_{k=n}^∞ P(Y = k) = P(Y = n) + ∑_{k=n+1}^∞ P(Y = k) = P(Y = n) + P(Y > n),
it follows that P(Y = n) = P(Y > n−1) − P(Y > n).
6. P(Y = 0) = P({TTT,THH,HTH,HHT}) = 4/8= 1/2, and
P(Y = 1) = P({TTH,THT,HTT,HHH}) = 4/8= 1/2.
7. P(X = 1) = P(X = 2) = P(X = 3) = P(X = 4) = P(X = 5) = 2/15 and P(X = 6) =
1/3.
8. P(X = 2)=P({1,2,3,4})= 4/52= 1/13. P(X = 1)=P({41,42, . . . ,52})= 12/52=
3/13. P(X = 0) = 1−P(X = 2)−P(X = 1) = 9/13.
9. The possible values of X are 0,1,4,9,16. We have P(X = 0) = P({0}) = 1/7, P(X =
1) = P({−1,1}) = 2/7, P(X = 4) = P({−2,2}) = 2/7, P(X = 9) = P({3}) = 1/7,
and P(X = 16) = P({4}) = 1/7.
10. We have
P(X > 1) = 1−P(X ≤ 1) = 1− [P(X = 0)+P(X = 1)]
= 1− [e−λ +λe−λ ] = 1− e−λ (1+λ ).
When λ = 1, P(X > 1) = 1 − e^−1(1+1) = 1 − 2/e = 0.264.
11. The probability that the sensor fails to activate is
P(X < 4) = P(X ≤ 3) = P(X = 0)+ · · ·+P(X = 3) = e−λ (1+λ +λ 2/2!+λ 3/3!).
If λ = 2, P(X < 4) = e−2(1+ 2+ 2+ 4/3) = e−2(19/3) = 0.857. The probability
that the sensor activates is 1−P(X < 4) = 0.143.
12. Let {Xk = 1} correspond to the event that the kth student gets an A. This event has probability P(Xk = 1) = p. Now, the event that only the kth student gets an A is
{Xk = 1 and Xl = 0 for l ≠ k}.
Hence, the probability that exactly one student gets an A is
P(⋃_{k=1}^{15} {Xk = 1 and Xl = 0 for l ≠ k}) = ∑_{k=1}^{15} P({Xk = 1 and Xl = 0 for l ≠ k})
  = ∑_{k=1}^{15} p(1−p)^14
  = 15(.1)(.9)^14 = 0.3432.
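A quick numerical check of the final value (a sketch; n = 15 and p = 0.1 are the values used above):

```python
# Probability that exactly one of n independent Bernoulli(p) trials succeeds.
n, p = 15, 0.1
prob = n * p * (1 - p) ** (n - 1)
print(round(prob, 4))  # → 0.3432
```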
13. Let X1,X2,X3 be the random digits of the drawing. Then P(Xi = k) = 1/10 for k =
0, . . . ,9 since each digit has probability 1/10 of being chosen. Then if the player
chooses d1d2d3, the probability of winning is
P({X1 = d1, X2 = d2, X3 = d3} ∪ {X1 = d1, X2 = d2, X3 ≠ d3}
  ∪ {X1 = d1, X2 ≠ d2, X3 = d3} ∪ {X1 ≠ d1, X2 = d2, X3 = d3}),
which is equal to .1^3 + 3[.1^2(.9)] = 0.028 since the union is disjoint and since X1, X2, X3 are independent.
14. P(⋃_{k=1}^m {Xk < 2}) = 1 − P(⋂_{k=1}^m {Xk ≥ 2}) = 1 − ∏_{k=1}^m P(Xk ≥ 2) = 1 − ∏_{k=1}^m [1 − P(Xk ≤ 1)]
  = 1 − [1 − {e^−λ + λe^−λ}]^m = 1 − [1 − e^−λ(1+λ)]^m.
15. (a) P(⋃_{i=1}^n {Xi ≥ 2}) = 1 − P(⋂_{i=1}^n {Xi ≤ 1}) = 1 − ∏_{i=1}^n P(Xi ≤ 1)
  = 1 − ∏_{i=1}^n [P(Xi = 0) + P(Xi = 1)] = 1 − [e^−λ + λe^−λ]^n = 1 − e^−nλ(1+λ)^n.
(b) P(⋂_{i=1}^n {Xi ≥ 1}) = ∏_{i=1}^n P(Xi ≥ 1) = ∏_{i=1}^n [1 − P(Xi = 0)] = (1 − e^−λ)^n.
(c) P(⋂_{i=1}^n {Xi = 1}) = ∏_{i=1}^n P(Xi = 1) = (λe^−λ)^n = λ^n e^−nλ.
16. For the geometric0 pmf, write
∑_{k=0}^∞ (1−p)p^k = (1−p) ∑_{k=0}^∞ p^k = (1−p) · 1/(1−p) = 1.
For the geometric1 pmf, write
∑_{k=1}^∞ (1−p)p^{k−1} = (1−p) ∑_{n=0}^∞ p^n = (1−p) · 1/(1−p) = 1.
17. Let Xi be the price of stock i, which is a geometric0(p) random variable. Then
P(⋃_{i=1}^{29} {Xi > 10}) = 1 − P(⋂_{i=1}^{29} {Xi ≤ 10}) = 1 − ∏_{i=1}^{29} [(1−p)(1 + p + ··· + p^10)]
  = 1 − [(1−p)(1−p^11)/(1−p)]^29 = 1 − (1 − p^11)^29.
Substituting p = .7, we have 1 − (1 − .7^11)^29 = 1 − (.98)^29 = 1 − .560 = 0.44.
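The substitution p = .7 can be verified numerically; a one-line Python sketch:

```python
# 1 - (1 - p**11)**29 for p = 0.7, as in the solution above.
p = 0.7
prob = 1 - (1 - p ** 11) ** 29
print(round(prob, 2))  # → 0.44
```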
18. For the first problem, we have
P(min(X1,...,Xn) > ℓ) = P(⋂_{k=1}^n {Xk > ℓ}) = ∏_{k=1}^n P(Xk > ℓ) = ∏_{k=1}^n p^ℓ = p^{nℓ}.
Similarly,
P(max(X1,...,Xn) ≤ ℓ) = P(⋂_{k=1}^n {Xk ≤ ℓ}) = ∏_{k=1}^n P(Xk ≤ ℓ) = ∏_{k=1}^n (1 − p^ℓ) = (1 − p^ℓ)^n.
19. Let Xk denote the number of coins in the pocket of the kth student. Then the Xk are independent and uniformly distributed from 0 to 20; i.e., P(Xk = i) = 1/21.
(a) P(⋂_{k=1}^{25} {Xk ≥ 5}) = ∏_{k=1}^{25} 16/21 = (16/21)^25 = 1.12×10^−3.
(b) P(⋃_{k=1}^{25} {Xk ≥ 19}) = 1 − P(⋂_{k=1}^{25} {Xk ≤ 18}) = 1 − (1 − 2/21)^25 = 0.918.
(c) The probability that only student k has 19 coins in his or her pocket is
P({Xk = 19} ∩ ⋂_{l≠k} {Xl ≠ 19}) = (1/21)(20/21)^24 = 0.01477.
Hence, the probability that exactly one student has 19 coins is
P(⋃_{k=1}^{25} ({Xk = 19} ∩ ⋂_{l≠k} {Xl ≠ 19})) = 25(0.01477) = 0.369.
20. Let Xi = 1 if block i is good. Then P(Xi = 1) = p and
P(Y = k) = P({X1 = 1} ∩ ··· ∩ {X_{k−1} = 1} ∩ {Xk = 0}) = p^{k−1}(1−p), k = 1,2,....
Hence, Y ∼ geometric1(p).
21. (a) Write
P(X > n) = ∑_{k=n+1}^∞ (1−p)p^{k−1} = ∑_{ℓ=0}^∞ (1−p)p^{ℓ+n} = (1−p)p^n ∑_{ℓ=0}^∞ p^ℓ = (1−p)p^n · 1/(1−p) = p^n.
(b) Write
P(X > n+k|X > n) = P(X > n+k, X > n)/P(X > n) = P(X > n+k)/P(X > n) = p^{n+k}/p^n = p^k.
22. Since P(Y > k) = P(Y > n+k|Y > n), we can write
P(Y > k) = P(Y > n+k|Y > n) = P(Y > n+k, Y > n)/P(Y > n) = P(Y > n+k)/P(Y > n).
Let p := P(Y > 1). Taking k = 1 above yields P(Y > n+1) = P(Y > n)p. Then with n = 1 we have P(Y > 2) = P(Y > 1)p = p^2. With n = 2 we have P(Y > 3) = P(Y > 2)P(Y > 1) = p^3. In general then P(Y > n) = p^n. Finally, P(Y = n) = P(Y > n−1) − P(Y > n) = p^{n−1} − p^n = p^{n−1}(1−p), which is the geometric1(p) pmf.
23. (a) To compute pX (i), we sum row i of the matrix. This yields pX (1) = pX (3) = 1/4
and pX (2) = 1/2. To compute pY ( j), we sum column j to get pY (1) = pY (3) =
1/4 and pY (2) = 1/2.
(b) To compute P(X < Y ), we sum pXY (i, j) over i and j such that i< j. We have
P(X < Y ) = pXY (1,2)+ pXY (1,3)+ pXY (2,3) = 0+1/8+0 = 1/8.
(c) We claim that X and Y are not independent. For example, pXY (1,2) = 0 is not
equal to pX (1)pY (2) = 1/8.
24. (a) To compute pX (i), we sum row i of the matrix. This yields pX (1) = pX (3) = 1/4
and pX (2) = 1/2. To compute pY ( j), we sum column j to get pY (1) = pY (3) =
1/6 and pY (2) = 2/3.
(b) To compute P(X < Y ), we sum pXY (i, j) over i and j such that i< j. We have
P(X <Y ) = pXY (1,2)+ pXY (1,3)+ pXY (2,3) = 1/6+1/24+1/12 = 7/24.
(c) Using the results of part (a), it is easy to verify that pX (i)pY ( j) = pXY (i, j) for
i, j = 1,2,3. Hence, X and Y are independent.
25. To compute the marginal of X, write
pX(1) = (e^−3/3) ∑_{j=0}^∞ 3^j/j! = (e^−3/3)e^3 = 1/3.
Similarly,
pX(2) = (4e^−6/6) ∑_{j=0}^∞ 6^j/j! = (4e^−6/6)e^6 = 2/3.
Alternatively, pX(2) = 1 − pX(1) = 1 − 1/3 = 2/3. Of course pX(i) = 0 for i ≠ 1,2. We clearly have pY(j) = 0 for j < 0 and
pY(j) = 3^{j−1}e^−3/j! + 4·6^{j−1}e^−6/j!,  j ≥ 0.
Since pX(1)pY(j) ≠ pXY(1,j), X and Y are not independent.
26. (a) For k ≥ 1,
pX(k) = ∑_{n=0}^∞ (1−p)p^{k−1} k^n e^−k/n! = (1−p)p^{k−1}e^−k ∑_{n=0}^∞ k^n/n! = (1−p)p^{k−1}e^−k e^k = (1−p)p^{k−1},
which we recognize as the geometric1(p) pmf.
(b) Next,
pY(0) = ∑_{k=1}^∞ (1−p)p^{k−1}e^−k = ((1−p)/e) ∑_{k=1}^∞ (p/e)^{k−1} = ((1−p)/e) ∑_{m=0}^∞ (p/e)^m
  = ((1−p)/e) · 1/(1 − p/e) = (1−p)/(e−p).
(c) Since pX(1)pY(0) = (1−p)^2/(e−p) is not equal to pXY(1,0) = (1−p)/e, X and Y are not independent.
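The closed form in part (b) can be checked by summing the series directly; a Python sketch (p = 0.3 is an arbitrary test value, not from the problem):

```python
import math

# Compare sum over k of (1-p) p**(k-1) e**(-k) with (1-p)/(e-p).
p = 0.3  # illustrative value
series = sum((1 - p) * p ** (k - 1) * math.exp(-k) for k in range(1, 200))
closed = (1 - p) / (math.e - p)
print(abs(series - closed) < 1e-12)  # → True
```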
27. MATLAB. Here is a script:
p = ones(1,51)/51;      % uniform pmf of X on {0,1,...,50}
k = 0:50;               % possible values of X
i = find(g(k) >= -16);  % indices of the values for which g(k) >= -16
fprintf('The answer is %g\n', sum(p(i)))
where
function y = g(x)
y = 5*x.*(x-10).*(x-20).*(x-30).*(x-40).*(x-50)/1e6;
28. MATLAB. If you modified your program for the preceding problem only by the way
you compute P(X = k), then you may get only 0.5001= P(g(X)≥−16 and X ≤ 50).
Note that g(x) > 0 for x> 50. Hence, you also have to add P(X ≥ 51) = p51 = 0.0731
to 0.5001 to get 0.5732.
29. MATLAB. OMITTED.
30. MATLAB. OMITTED.
31. MATLAB. OMITTED.
32. E[X ] = 2(1/3)+5(2/3) = 12/3= 4.
33. E[I_(2,6)(X)] = ∑_{k=3}^5 P(X = k) = (1−p)[p^3 + p^4 + p^5]. For p = 1/2, we get E[I_(2,6)(X)] = 7/64 = 0.109375.
34. Write
E[1/(X+1)] = ∑_{n=0}^∞ [1/(n+1)] λ^n e^−λ/n! = (e^−λ/λ) ∑_{n=0}^∞ λ^{n+1}/(n+1)! = (e^−λ/λ) ∑_{n=1}^∞ λ^n/n!
  = (e^−λ/λ)[∑_{n=0}^∞ λ^n/n! − 1] = (e^−λ/λ)[e^λ − 1] = (1 − e^−λ)/λ.
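The identity E[1/(X+1)] = (1 − e^−λ)/λ for Poisson(λ) X can be verified by truncating the series; a Python sketch (λ = 2.5 is an arbitrary test value):

```python
import math

# Sum 1/(n+1) * lam**n * e**(-lam)/n! and compare with (1 - e**(-lam))/lam.
lam = 2.5  # illustrative value
series = sum(1 / (n + 1) * lam ** n * math.exp(-lam) / math.factorial(n)
             for n in range(100))
closed = (1 - math.exp(-lam)) / lam
print(abs(series - closed) < 1e-12)  # → True
```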
35. Since var(X) = E[X2]− (E[X ])2, E[X2] = var(X)+(E[X ])2. Hence, E[X2] = 7+22 =
7+4= 11.
36. Since Y = cX , E[Y ] = E[cX ] = cm. Hence,
var(Y ) = E[(Y − cm)2] = E[(cX− cm)2] = E[c2(X−m)2]
= c2E[(X−m)2] = c2 var(X) = c2σ2.
37. We begin with
E[(X+Y)^3] = E[X^3 + 3X^2 Y + 3X Y^2 + Y^3] = E[X^3] + 3E[X^2 Y] + 3E[X Y^2] + E[Y^3]
  = E[X^3] + 3E[X^2]E[Y] + 3E[X]E[Y^2] + E[Y^3], by independence.
Now, as noted in the text, for a Bernoulli(p) random variable, X^n = X, and so E[X^n] = E[X] = p. Similarly E[Y^n] = q. Thus,
E[(X+Y)^3] = p + 3pq + 3pq + q = p + 6pq + q.
38. The straightforward approach is to put
f(c) := E[(X−c)^2] = E[X^2] − 2mc + c^2
and differentiate with respect to c to get f′(c) = −2m + 2c. Solving f′(c) = 0 results in c = m. An alternative approach is to write
E[(X−c)^2] = E[{(X−m) + (m−c)}^2] = σ^2 + (m−c)^2.
From this expression, it is obvious that c = m minimizes the expectation.
39. The two sketches show the indicator I_[a,∞)(x) together with the dominating functions (x/a)^{1/2} and x/a; each dominating curve equals 1 at x = a. (Sketches omitted.)
40. The right-hand side is easy: E[X ]/2 = (3/4)/2 = 3/8 = 0.375. The left-hand side is
more work:
P(X ≥ 2) = 1−P(X ≤ 1) = 1− [P(X = 0)+P(X = 1)] = 1− e−λ (1+λ ).
For λ = 3/4, P(X ≥ 2) = 0.1734. So the bound is a little more than twice the value
of the probability.
41. The Chebyshev bound is (λ + λ 2)/4. For λ = 3/4, the bound is 0.3281, which is
a little better than the Markov inequality bound in the preceding problem. The true
probability is 0.1734.
42. Comparing the definitions of ρXY and cov(X,Y), we find ρXY = cov(X,Y)/(σXσY). Hence, cov(X,Y) = σXσYρXY. Since cov(X,Y) := E[(X − mX)(Y − mY)], if Y = X, we see that cov(X,X) = E[(X − mX)^2] =: var(X).
43. Put
f(a) := E[(X − aY)^2] = E[X^2] − 2aE[XY] + a^2E[Y^2] = σX^2 − 2aρσXσY + a^2σY^2.
Then
f′(a) = −2ρσXσY + 2aσY^2.
Setting this equal to zero and solving for a yields a = ρ(σX/σY).
44. Since P(X =±1) = P(X =±2) = 1/4, E[X ] = 0. Similarly, since P(XY =±1) = 1/4
and P(XY =±4) = 1/4, E[XY ] = 0. Thus, E[XY ] = 0 = E[X ]E[Y ] and we see that X
and Y are uncorrelated. Next, since X = 1 implies Y = 1, P(X = 1,Y = 1) = P(X =
1) = 1/4 while P(Y = 1) = P(X = 1 or X =−1) = 1/2. Thus,
P(X = 1)P(Y = 1) = (1/4)(1/2) = 1/8, but P(X = 1,Y = 1) = 1/4.
45. As discussed in the text, for uncorrelated random variables, the variance of the sum is the sum of the variances. Since independent random variables are uncorrelated, the same result holds for them too. Hence, for Y = X1 + ··· + XM,
var(Y) = ∑_{k=1}^M var(Xk).
We also have E[Y] = ∑_{k=1}^M E[Xk]. Next, since the Xk are i.i.d. geometric1(p), E[Xk] = 1/(1−p) and var(Xk) = p/(1−p)^2. It follows that var(Y) = Mp/(1−p)^2 and E[Y] = M/(1−p). We conclude by writing
E[Y^2] = var(Y) + (E[Y])^2 = Mp/(1−p)^2 + M^2/(1−p)^2 = M(p+M)/(1−p)^2.
46. From E[Y ] = E[dX− s(1−X)] = dp− s(1− p) = 0, we find that d/s= (1− p)/p.
47. (a) p= 1/1000.
(b) Since (1− p)/p= (999/1000)/(1/1000) = 999, the fair odds against are 999 :1.
(c) Since the fair odds of 999 :1 are not equal to the offered odds of 500 :1, the game is not fair. To make the game fair, the lottery should pay $999 instead of $500.
48. First note that
∫_1^∞ (1/t^p) dt = (1/(1−p)) t^{1−p} |_1^∞ for p ≠ 1, and ln t |_1^∞ for p = 1.
For p > 1, the integral is equal to 1/(p−1). For p ≤ 1, the integral is infinite.
For 0 < p ≤ 1, write
∑_{k=1}^∞ 1/k^p ≥ ∑_{k=1}^∞ ∫_k^{k+1} (1/t^p) dt = ∫_1^∞ (1/t^p) dt = ∞.
For p > 1, it suffices to show that ∑_{k=2}^∞ 1/k^p < ∞. To this end, write
∑_{k=2}^∞ 1/k^p = ∑_{k=1}^∞ 1/(k+1)^p ≤ ∑_{k=1}^∞ ∫_k^{k+1} (1/t^p) dt = ∫_1^∞ (1/t^p) dt < ∞.
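The integral-test bound can be illustrated numerically; a Python sketch (p = 2 is an arbitrary convergent case, where the bounding integral equals 1):

```python
# Partial tail sum of 1/k**p from k = 2 should stay below the integral
# of t**(-2) from 1 to infinity, which equals 1.
p = 2  # illustrative value with p > 1
partial = sum(1 / k ** p for k in range(2, 100000))
print(partial < 1.0)  # → True
```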
49. First write
E[X^n] = ∑_{k=1}^∞ k^n · C_p^{−1}/k^p = C_p^{−1} ∑_{k=1}^∞ 1/k^{p−n}.
By the preceding problem this last sum is finite for p−n > 1, or equivalently, n < p−1. Otherwise the sum is infinite; the case 1 ≥ p−n > 0 is handled by the preceding problem, and the case 0 ≥ p−n is obvious.
50. If all outcomes are equally likely,
H(X) = ∑_{i=1}^n pi log(1/pi) = (1/n) ∑_{i=1}^n log n = log n.
If X is a constant random variable with p_j = 1 and pi = 0 for i ≠ j, then
H(X) = ∑_{i=1}^n pi log(1/pi) = p_j log(1/p_j) = 1 · log 1 = 0.
51. Let P(X = xi) = pi for i = 1,...,n. Then
E[g(X)] = ∑_{i=1}^n g(xi)pi and g(E[X]) = g(∑_{i=1}^n xi pi).
For n = 2, Jensen's inequality says that
p1 g(x1) + p2 g(x2) ≥ g(p1 x1 + p2 x2).
If we put λ = p1, then 1−λ = p2 and the above inequality becomes
λ g(x1) + (1−λ) g(x2) ≥ g(λ x1 + (1−λ) x2),
which is just the definition of a convex function. Hence, if g is convex, Jensen's inequality holds for n = 2. Now suppose Jensen's inequality holds for some n ≥ 2. We must show it holds for n+1. The case of n is
∑_{i=1}^n g(xi)pi ≥ g(∑_{i=1}^n xi pi), if p1 + ··· + pn = 1.
Now suppose that p1 + ··· + p_{n+1} = 1, and write
∑_{i=1}^{n+1} g(xi)pi = (1 − p_{n+1})[∑_{i=1}^n g(xi) pi/(1 − p_{n+1})] + p_{n+1} g(x_{n+1}).
Let us focus on the quantity in brackets. Since
∑_{i=1}^n pi/(1 − p_{n+1}) = (p1 + ··· + pn)/(1 − p_{n+1}) = (1 − p_{n+1})/(1 − p_{n+1}) = 1,
Jensen's inequality for n terms yields
∑_{i=1}^n g(xi) pi/(1 − p_{n+1}) ≥ g(∑_{i=1}^n xi pi/(1 − p_{n+1})).
Hence,
∑_{i=1}^{n+1} g(xi)pi ≥ (1 − p_{n+1}) g(∑_{i=1}^n xi pi/(1 − p_{n+1})) + p_{n+1} g(x_{n+1}).
Now apply the two-term Jensen inequality to get
∑_{i=1}^{n+1} g(xi)pi ≥ g((1 − p_{n+1})[∑_{i=1}^n xi pi/(1 − p_{n+1})] + p_{n+1} x_{n+1}) = g(∑_{i=1}^{n+1} pi xi).
52. With X = |Z|^α and g(x) = x^{β/α}, we have
E[g(X)] = E[X^{β/α}] = E[(|Z|^α)^{β/α}] = E[|Z|^β] and E[X] = E[|Z|^α].
Then Jensen's inequality tells us that
E[|Z|^β] ≥ (E[|Z|^α])^{β/α}.
Raising both sides to the 1/β power yields Lyapunov's inequality.
53. (a) For all discrete random variables, we have ∑_i P(X = xi) = 1. For a nonnegative random variable, if xk < 0, we have
1 = ∑_i P(X = xi) ≥ ∑_i I_[0,∞)(xi)P(X = xi) + P(X = xk) = 1 + P(X = xk).
From this it follows that 0 ≥ P(X = xk) ≥ 0, and so P(X = xk) = 0.
(b) Write
E[X] = ∑_i xi P(X = xi) = ∑_{i: xi ≥ 0} xi P(X = xi) + ∑_{k: xk < 0} xk P(X = xk).
By part (a), the last sum is zero. The remaining sum is obviously nonnegative.
(c) By part (b), 0 ≤ E[X − Y] = E[X] − E[Y]. Hence, E[Y] ≤ E[X].
CHAPTER 3
Problem Solutions
1. First, E[X] = G′X(1) = (1/6 + (4/3)z)|_{z=1} = 1/6 + 4/3 = 9/6 = 3/2. Second, E[X(X−1)] = G″X(1) = 4/3. Third, from E[X(X−1)] = E[X^2] − E[X], we have E[X^2] = 4/3 + 3/2 = 17/6. Finally, var(X) = E[X^2] − (E[X])^2 = 17/6 − 9/4 = (34−27)/12 = 7/12.
2. pX(0) = GX(0) = (1/6 + z/6 + (2/3)z^2)|_{z=0} = 1/6, pX(1) = G′X(0) = (1/6 + (4/3)z)|_{z=0} = 1/6, and pX(2) = G″X(0)/2 = (4/3)/2 = 2/3.
3. To begin, note that G′X(z) = 5((2+z)/3)^4/3, and G″X(z) = 20((2+z)/3)^3/9. Hence, E[X] = G′X(1) = 5/3 and E[X(X−1)] = 20/9. Since E[X(X−1)] = E[X^2] − E[X], E[X^2] = 20/9 + 5/3 = 35/9. Finally, var(X) = E[X^2] − (E[X])^2 = 35/9 − 25/9 = 10/9.
4. For X ∼ geometric0(p),
GX(z) = ∑_{n=0}^∞ z^n P(X = n) = ∑_{n=0}^∞ z^n(1−p)p^n = (1−p) ∑_{n=0}^∞ (zp)^n = (1−p)/(1−pz).
Then
E[X] = G′X(1) = (1−p)p/(1−pz)^2 |_{z=1} = (1−p)p/(1−p)^2 = p/(1−p).
Next,
E[X^2] − E[X] = E[X(X−1)] = G″X(1) = 2(1−p)p^2/(1−pz)^3 |_{z=1} = 2(1−p)p^2/(1−p)^3 = 2p^2/(1−p)^2.
This implies that
E[X^2] = 2p^2/(1−p)^2 + p/(1−p) = (2p^2 + p(1−p))/(1−p)^2 = (p + p^2)/(1−p)^2.
Finally,
var(X) = E[X^2] − (E[X])^2 = (p + p^2)/(1−p)^2 − p^2/(1−p)^2 = p/(1−p)^2.
For X ∼ geometric1(p),
GX(z) = ∑_{n=1}^∞ z^n P(X = n) = ∑_{n=1}^∞ z^n(1−p)p^{n−1} = ((1−p)/p) ∑_{n=1}^∞ (zp)^n = ((1−p)/p) · pz/(1−pz) = (1−p)z/(1−pz).
Now
G′X(z) = [(1−pz)(1−p) + (1−p)pz]/(1−pz)^2 = (1−p)/(1−pz)^2.
Hence,
E[X] = G′X(1) = 1/(1−p).
Next,
G″X(z) = 2p(1−p)/(1−pz)^3.
We then have
E[X^2] − E[X] = E[X(X−1)] = G″X(1) = 2p(1−p)/(1−p)^3 = 2p/(1−p)^2,
and
E[X^2] = 2p/(1−p)^2 + 1/(1−p) = (2p + 1 − p)/(1−p)^2 = (1+p)/(1−p)^2.
Finally,
var(X) = E[X^2] − (E[X])^2 = (1+p)/(1−p)^2 − 1/(1−p)^2 = p/(1−p)^2.
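The geometric1 moments derived above can be checked by summing the pmf directly; a Python sketch (p = 0.4 is an arbitrary test value, and the series is truncated where the terms are negligible):

```python
# For X ~ geometric1(p): E[X] = 1/(1-p) and var(X) = p/(1-p)**2.
p = 0.4  # illustrative value
ks = range(1, 2000)
mean = sum(k * (1 - p) * p ** (k - 1) for k in ks)
second = sum(k * k * (1 - p) * p ** (k - 1) for k in ks)
variance = second - mean ** 2
print(abs(mean - 1 / (1 - p)) < 1e-9,
      abs(variance - p / (1 - p) ** 2) < 1e-9)  # → True True
```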
5. Since the Xi are independent Poisson(λi), we use probability generating functions to find GY(z), which turns out to be the pgf of a Poisson(λ) random variable with λ := λ1 + ··· + λn. It then follows that P(Y = 2) = λ^2 e^−λ/2. It remains to write
GY(z) = E[z^Y] = E[z^{X1+···+Xn}] = E[z^{X1} ··· z^{Xn}] = ∏_{i=1}^n E[z^{Xi}] = ∏_{i=1}^n e^{λi(z−1)} = exp[∑_{i=1}^n λi(z−1)] = e^{λ(z−1)},
which is the Poisson(λ) pgf. Hence, Y ∼ Poisson(λ) as claimed.
6. If GX(z) = ∑_{k=0}^∞ z^k P(X = k), then GX(1) = ∑_{k=0}^∞ P(X = k) = 1. For the particular formula in the problem, we must have 1 = GX(1) = (a0 + a1 + a2 + ··· + an)^m/D, or D = (a0 + a1 + a2 + ··· + an)^m.
7. From the table on the inside of the front cover of the text, E[Xi] = 1/(1−p). Thus,
E[Y] = ∑_{i=1}^n E[Xi] = n/(1−p).
Second, since the variance of the sum of uncorrelated random variables is the sum of the variances, and since var(Xi) = p/(1−p)^2, we have
var(Y) = ∑_{i=1}^n var(Xi) = np/(1−p)^2.
It then easily follows that
E[Y^2] = var(Y) + (E[Y])^2 = np/(1−p)^2 + n^2/(1−p)^2 = n(p+n)/(1−p)^2.
Since Y is the sum of i.i.d. geometric1(p) random variables, the pgf of Y is the product of the individual pgfs. Thus,
GY(z) = ∏_{i=1}^n (1−p)z/(1−pz) = [(1−p)z/(1−pz)]^n.
8. Starting with GY(z) = [(1−p) + pz]^n, we have
E[Y] = G′Y(1) = n[(1−p) + pz]^{n−1} p |_{z=1} = np.
Next,
E[Y(Y−1)] = G″Y(1) = n(n−1)[(1−p) + pz]^{n−2} p^2 |_{z=1} = n(n−1)p^2.
It then follows that
E[Y^2] = E[Y(Y−1)] + E[Y] = n(n−1)p^2 + np = n^2 p^2 − np^2 + np.
Finally,
var(Y) = E[Y^2] − (E[Y])^2 = n^2 p^2 − np^2 + np − (np)^2 = np(1−p).
9. For the binomial(n,p) random variable,
GY(z) = ∑_{k=0}^n P(Y = k)z^k = ∑_{k=0}^n C(n,k) p^k(1−p)^{n−k} z^k.
Hence,
1 = GY(1) = ∑_{k=0}^n C(n,k) p^k(1−p)^{n−k}.
10. Starting from
∑_{k=0}^n C(n,k) p^k(1−p)^{n−k} = 1,
we follow the hint and replace p with a/(a+b). Note that 1−p = b/(a+b). With these substitutions, the above equation becomes
∑_{k=0}^n C(n,k) (a/(a+b))^k (b/(a+b))^{n−k} = 1, or ∑_{k=0}^n C(n,k) a^k b^{n−k}/[(a+b)^k (a+b)^{n−k}] = 1.
Multiplying through by (a+b)^n we have
∑_{k=0}^n C(n,k) a^k b^{n−k} = (a+b)^n.
11. Let Xi = 1 if bit i is in error, Xi = 0 otherwise. Then Yn := X1 + ··· + Xn is the number of errors in n bits. Assume the Xi are independent. Then
GYn(z) = E[z^{Yn}] = E[z^{X1+···+Xn}] = E[z^{X1} ··· z^{Xn}]
  = E[z^{X1}] ··· E[z^{Xn}], by independence,
  = [(1−p) + pz]^n, since the Xi are i.i.d. Bernoulli(p).
We recognize this last expression as the binomial(n,p) probability generating function. Thus, P(Yn = k) = C(n,k) p^k(1−p)^{n−k}.
12. Let Xi ∼ binomial(ni,p) denote the number of students in the ith room. Then Y = X1 + ··· + XM is the total number of students in the school. Next,
GY(z) = E[z^Y] = E[z^{X1+···+XM}]
  = E[z^{X1}] ··· E[z^{XM}], by independence,
  = ∏_{i=1}^M [(1−p) + pz]^{ni}, since Xi ∼ binomial(ni,p),
  = [(1−p) + pz]^{n1+···+nM}.
Setting n := n1 + ··· + nM, we see that Y ∼ binomial(n,p). Hence, P(Y = k) = C(n,k) p^k(1−p)^{n−k}.
13. Let Y = X1 + ··· + Xn, where the Xi are i.i.d. with P(Xi = 1) = 1−p and P(Xi = 2) = p. Observe that Xi − 1 ∼ Bernoulli(p). Hence,
Y = n + ∑_{i=1}^n (Xi − 1) =: n + Z.
Since Z is the sum of i.i.d. Bernoulli(p) random variables, Z ∼ binomial(n,p). Hence,
P(Y = k) = P(n + Z = k) = P(Z = k−n) = C(n, k−n) p^{k−n}(1−p)^{2n−k}, k = n,...,2n.
14. Let Xi be i.i.d. Bernoulli(p), where Xi = 1 means bit i is flipped. Then Y := X1 + ··· + Xn (n = 10) is the number of bits flipped. A codeword cannot be decoded if Y > 2. We need to find P(Y > 2). Observe that
GY(z) = E[z^Y] = E[z^{X1+···+Xn}] = E[z^{X1} ··· z^{Xn}] = ∏_{i=1}^n E[z^{Xi}] = [(1−p) + pz]^n.
This is the pgf of a binomial(n,p) random variable. Hence,
P(Y > 2) = 1 − P(Y ≤ 2) = 1 − [P(Y = 0) + P(Y = 1) + P(Y = 2)]
  = 1 − [C(10,0)(1−p)^10 + C(10,1)p(1−p)^9 + C(10,2)p^2(1−p)^8]
  = 1 − (1−p)^8[(1−p)^2 + 10p(1−p) + 45p^2].
15. For n = 150 and p = 1/100, we have

 k   P(Binomial(n,p) = k)   P(Poisson(np) = k)
 0         0.2215                 0.2231
 1         0.3355                 0.3347
 2         0.2525                 0.2510
 3         0.1258                 0.1255
 4         0.0467                 0.0471
 5         0.0138                 0.0141
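The first row of the table can be reproduced directly; a Python sketch (n = 150, p = 1/100 as in the problem):

```python
from math import comb, exp, factorial

# Compare the binomial(150, 0.01) pmf with the Poisson(1.5) pmf at k = 0.
n, p = 150, 0.01
k = 0
binom = comb(n, k) * p ** k * (1 - p) ** (n - k)
poiss = (n * p) ** k * exp(-n * p) / factorial(k)
print(binom, poiss)
```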
16. If the Xi are i.i.d. with mean m, then
E[Mn] = E[(1/n) ∑_{i=1}^n Xi] = (1/n) ∑_{i=1}^n E[Xi] = (1/n) ∑_{i=1}^n m = nm/n = m.
If X is any random variable with mean m, then E[cX] = cE[X] = cm, and
var(cX) = E[(cX − cm)^2] = E[c^2(X − m)^2] = c^2 E[(X − m)^2] = c^2 var(X).
17. In general, we have
P(|Mn − m| < ε) ≥ 0.9 ⇔ P(|Mn − m| ≥ ε) < 0.1.
By Chebyshev's inequality,
P(|Mn − m| ≥ ε) ≤ σ^2/(nε^2) < 0.1
if n > σ^2/(.1 ε^2). For σ^2 = 1 and ε = 0.25, we require n > 1/[(.1)(.25)^2] = 1/.00625 = 160 students. If instead ε = 1, we require n > 1/.1 = 10 students.
18. (a) E[Xi] = E[IB(Zi)] = P(Zi ∈ B). Setting p := P(Zi ∈ B), we see that Xi = IB(Zi)∼
Bernoulli(p). Hence, var(Xi) = p(1− p).
(b) In fact, the Xi are independent. Hence, they are uncorrelated.
19. Mn = 0 if and only if all the Xi are zero. Hence,
P(Mn = 0) = P(⋂_{i=1}^n {Xi = 0}) = ∏_{i=1}^n P(Xi = 0) = (1−p)^n.
In particular, if p = 1/1000, then P(M100 = 0) = (1−p)^100 = 0.999^100 = 0.905. Hence, the chances are more than 90% that when we run a simulation, M100 = 0 and we learn nothing!
20. If Xi = Z ∼ Bernoulli(1/2) for all i, then
Mn = (1/n) ∑_{i=1}^n Xi = (1/n) ∑_{i=1}^n Z = Z,
and m = E[Xi] = E[Z] = 1/2. So,
P(|Mn − m| ≥ 1/4) = P(|Z − 1/2| ≥ 1/4).
Now, Z − 1/2 = ±1/2, and |Z − 1/2| = 1/2 with probability one. Thus,
P(|Z − 1/2| ≥ 1/4) = P(1/2 ≥ 1/4) = 1, which does not converge to 0.
21. From the discussion of the weak law in the text, we have
P(|Mn − m| ≥ εn) ≤ σ^2/(n εn^2).
If n εn^2 → ∞ as n → ∞, then the probability on the left will go to zero as n → ∞.
22. We have from the example that with p := λ/(λ+µ), pX|Z(i|j) = C(j,i) p^i(1−p)^{j−i} for i = 0,...,j. In other words, as a function of i, pX|Z(i|j) is a binomial(j,p) pmf. Hence,
E[X|Z = j] = ∑_{i=0}^j i pX|Z(i|j)
is just the mean of a binomial(j,p) pmf. The mean of such a pmf is jp. Hence, E[X|Z = j] = jp = jλ/(λ+µ).
23. The problem is telling us that P(Y = k|X = i) = C(n,k) pi^k(1−pi)^{n−k}. Hence
P(Y < 2|X = i) = P(Y = 0|X = i) + P(Y = 1|X = i) = (1−pi)^n + n pi(1−pi)^{n−1}.
24. The problem is telling us that P(X = k|Y = j) = λ_j^k e^{−λ_j}/k!. Hence,
P(X > 2|Y = j) = 1 − P(X ≤ 2|Y = j) = 1 − [P(X = 0|Y = j) + P(X = 1|Y = j) + P(X = 2|Y = j)]
  = 1 − [e^{−λ_j} + λ_j e^{−λ_j} + λ_j^2 e^{−λ_j}/2] = 1 − e^{−λ_j}[1 + λ_j + λ_j^2/2].
25. For the first formula, write
pX|Y(xi|yj) := P(X = xi, Y = yj)/P(Y = yj) = P(X = xi)P(Y = yj)/P(Y = yj) = P(X = xi) = pX(xi).
Similarly, for the other formula,
pY|X(yj|xi) := P(Y = yj, X = xi)/P(X = xi) = P(Y = yj)P(X = xi)/P(X = xi) = P(Y = yj) = pY(yj).
Chapter 3 Problem Solutions 37
26. We use the law of total probability to write
P(T = n) = P(X − Y = n) = ∑_{k=0}^∞ P(X − Y = n|Y = k)P(Y = k)
  = ∑_{k=0}^∞ P(X − k = n|Y = k)P(Y = k), by the substitution law,
  = ∑_{k=0}^∞ P(X = n+k|Y = k)P(Y = k)
  = ∑_{k=0}^∞ P(X = n+k)P(Y = k), by independence,
  = ∑_{k=0}^∞ (1−p)p^{n+k} · (1−q)q^k
  = (1−p)(1−q)p^n ∑_{k=0}^∞ (pq)^k = (1−p)(1−q)p^n/(1−pq).
27. The problem is telling us that
P(Y = n|X = 1) = µ^n e^−µ/n! and P(Y = n|X = 2) = ν^n e^−ν/n!.
The problem also tells us that P(X = 1) = P(X = 2) = 1/2. We can now write
P(X = 1|Y = 2) = P(X = 1, Y = 2)/P(Y = 2) = P(Y = 2|X = 1)P(X = 1)/P(Y = 2) = (µ^2 e^−µ/2)(1/2)/P(Y = 2).
It remains to use the law of total probability to compute
P(Y = 2) = ∑_{i=1}^2 P(Y = 2|X = i)P(X = i) = [P(Y = 2|X = 1) + P(Y = 2|X = 2)]/2
  = [µ^2 e^−µ/2 + ν^2 e^−ν/2]/2 = [µ^2 e^−µ + ν^2 e^−ν]/4.
We conclude by writing
P(X = 1|Y = 2) = (µ^2 e^−µ/4)/([µ^2 e^−µ + ν^2 e^−ν]/4) = 1/(1 + (ν/µ)^2 e^{µ−ν}).
28. Let X = 0 or X = 1 according to whether message zero or message one is sent. The problem tells us that P(X = 0) = P(X = 1) = 1/2 and that
P(Y = k|X = 0) = (1−p)p^k and P(Y = k|X = 1) = (1−q)q^k,
where q ≠ p. We need to compute
P(X = 1|Y = k) = P(Y = k|X = 1)P(X = 1)/P(Y = k) = (1−q)q^k(1/2)/P(Y = k).
We next use the law of total probability to compute
P(Y = k) = [P(Y = k|X = 0) + P(Y = k|X = 1)]/2 = [(1−p)p^k + (1−q)q^k]/2.
We can now compute
P(X = 1|Y = k) = (1−q)q^k(1/2)/([(1−p)p^k + (1−q)q^k]/2) = 1/(1 + (1−p)p^k/[(1−q)q^k]).
29. Let R denote the number of red apples in a crate, and let G denote the number of green apples in a crate. The problem is telling us that R ∼ Poisson(ρ) and G ∼ Poisson(γ) are independent. If T = R + G is the total number of apples in the crate, we must compute
P(G = 0|T = k) = P(T = k|G = 0)P(G = 0)/P(T = k).
We first use the law of total probability, substitution, and independence to write
P(T = k|G = 0) = P(R + G = k|G = 0) = P(R = k|G = 0) = P(R = k) = ρ^k e^−ρ/k!.
We also note from the text that the sum of two independent Poisson random variables is a Poisson random variable whose parameter is the sum of the individual parameters. Hence, P(T = k) = (ρ+γ)^k e^{−(ρ+γ)}/k!. We can now write
P(G = 0|T = k) = (ρ^k e^−ρ/k!) · e^−γ/[(ρ+γ)^k e^{−(ρ+γ)}/k!] = (ρ/(ρ+γ))^k.
30. We begin with
P(X = n|Y = 1) = P(Y = 1|X = n)P(X = n)/P(Y = 1) = [1/(n+1)] · λ^n e^−λ/n! / P(Y = 1).
Next, we compute
P(Y = 1) = ∑_{n=0}^∞ P(Y = 1|X = n)P(X = n) = ∑_{n=0}^∞ [1/(n+1)] · λ^n e^−λ/n!
  = (e^−λ/λ) ∑_{n=0}^∞ λ^{n+1}/(n+1)! = (e^−λ/λ) ∑_{k=1}^∞ λ^k/k! = (e^−λ/λ)[∑_{k=0}^∞ λ^k/k! − 1]
  = (e^−λ/λ)[e^λ − 1] = (1 − e^−λ)/λ.
We conclude with
P(X = n|Y = 1) = [1/(n+1)] · λ^n e^−λ/n! / [(1 − e^−λ)/λ] = λ^{n+1}/[(e^λ − 1)(n+1)!].
31. We begin with
P(X = n|Y = k) = P(Y = k|X = n)P(X = n)/P(Y = k) = C(n,k) p^k(1−p)^{n−k} · λ^n e^−λ/n! / P(Y = k).
Next,
P(Y = k) = ∑_{n=0}^∞ P(Y = k|X = n)P(X = n) = ∑_{n=k}^∞ C(n,k) p^k(1−p)^{n−k} λ^n e^−λ/n!
  = (p^k λ^k e^−λ/k!) ∑_{n=k}^∞ [(1−p)λ]^{n−k}/(n−k)! = (p^k λ^k e^−λ/k!) ∑_{m=0}^∞ [(1−p)λ]^m/m!
  = (p^k λ^k e^−λ/k!) e^{(1−p)λ} = (pλ)^k e^−pλ/k!.
Note that Y ∼ Poisson(pλ). We continue with
P(X = n|Y = k) = C(n,k) p^k(1−p)^{n−k} · λ^n e^−λ/n! / [(pλ)^k e^−pλ/k!] = [(1−p)λ]^{n−k} e^{−(1−p)λ}/(n−k)!.
32. First write
P(X > k | max(X,Y) > k) = P({X > k} ∩ {max(X,Y) > k})/P(max(X,Y) > k) = P(X > k)/P(max(X,Y) > k),
since {X > k} ⊂ {max(X,Y) > k}. We next compute
P(max(X,Y) > k) = 1 − P(max(X,Y) ≤ k) = 1 − P(X ≤ k)P(Y ≤ k).
If we put θk := P(X ≤ k) and use the fact that X and Y have the same pmf, then
P(X > k | max(X,Y) > k) = (1 − θk)/(1 − θk^2) = (1 − θk)/[(1 − θk)(1 + θk)] = 1/(1 + θk).
With n = 100 and p = .01, we compute
θ1 = P(X ≤ 1) = P(X = 0) + P(X = 1) = .99^100 + .99^99 = .366 + .370 = .736.
It follows that the desired probability is 1/(1 + θ1) = 1/1.736 = 0.576.
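The final numbers can be reproduced directly; a Python sketch (n = 100, p = .01 as above):

```python
# theta1 = P(X <= 1) for X ~ binomial(100, 0.01); answer = 1/(1 + theta1).
n, p = 100, 0.01
theta1 = (1 - p) ** n + n * p * (1 - p) ** (n - 1)
print(round(1 / (1 + theta1), 3))  # → 0.576
```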
33. (a) Observe that
P(XY = 4) = P(X = 1, Y = 4) + P(X = 2, Y = 2) + P(X = 4, Y = 1) = (1−p)(1−q)[pq^4 + p^2 q^2 + p^4 q].
(b) Write
pZ(j) = ∑_i pY(j−i)pX(i)
  = ∑_{i=0}^∞ pY(j−i)pX(i), since pX(i) = 0 for i < 0,
  = ∑_{i=0}^j pY(j−i)pX(i), since pY(k) = 0 for k < 0,
  = (1−p)(1−q) ∑_{i=0}^j p^i q^{j−i} = (1−p)(1−q)q^j ∑_{i=0}^j (p/q)^i.
Now, if p = q,
pZ(j) = (1−p)^2 p^j (j+1).
If p ≠ q,
pZ(j) = (1−p)(1−q)q^j [1 − (p/q)^{j+1}]/(1 − p/q) = (1−p)(1−q)(q^{j+1} − p^{j+1})/(q − p).
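The closed form for p ≠ q can be checked against the convolution sum; a Python sketch (p = 0.3, q = 0.5, j = 6 are arbitrary test values):

```python
# Convolve two geometric0 pmfs and compare with the closed form
# (1-p)(1-q)(q**(j+1) - p**(j+1))/(q - p).
p, q, j = 0.3, 0.5, 6  # illustrative values with p != q
conv = sum((1 - p) * p ** i * (1 - q) * q ** (j - i) for i in range(j + 1))
closed = (1 - p) * (1 - q) * (q ** (j + 1) - p ** (j + 1)) / (q - p)
print(abs(conv - closed) < 1e-12)  # → True
```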
34. For j = 0,1,2,3,
pZ(j) = ∑_{i=0}^j (1/16) = (j+1)/16.
For j = 4,5,6,
pZ(j) = ∑_{i=j−3}^3 (1/16) = (7−j)/16.
For other values of j, pZ(j) = 0.
35. We first write
P(Y = j|X = 1)/P(Y = j|X = 0) ≥ P(X = 0)/P(X = 1)
as
(λ1^j e^−λ1/j!)/(λ0^j e^−λ0/j!) ≥ (1−p)/p.
We can further simplify this to
(λ1/λ0)^j ≥ [(1−p)/p] e^{λ1−λ0}.
Taking logarithms and rearranging, we obtain
j ≥ [λ1 − λ0 + ln((1−p)/p)] / ln(λ1/λ0).
Observe that the right-hand side is just a number (threshold) that is computable from the problem data. If we observe Y = j, we compare j to the threshold. If j is greater than or equal to this number, we decide X = 1; otherwise, we decide X = 0.
36. We first write
P(Y = j|X = 1)/P(Y = j|X = 0) ≥ P(X = 0)/P(X = 1)
as
(1−q1)q1^j/[(1−q0)q0^j] ≥ (1−p)/p.
We can further simplify this to
(q1/q0)^j ≥ (1−p)(1−q0)/[p(1−q1)].
Taking logarithms and rearranging, we obtain
j ≤ [ln((1−p)(1−q0)/[p(1−q1)])] / ln(q1/q0),
since q1 < q0 implies ln(q1/q0) < 0.
37. Starting with P(X = xi|Y = y j) = h(xi), we have
P(X = xi,Y = y j) = P(X = xi|Y = y j)P(Y = y j) = h(xi)pY (y j).
If we can show that h(xi) = pX (xi), then it will follow that X and Y are independent.
Now observe that the sum over j of the left-hand side reduces to P(X = xi) = pX (xi).
The sum over j of the right-hand side reduces to h(xi). Hence, pX (xi) = h(xi) as
desired.
38. First write
pXY(1,j) = (1/3)3^j e^−3/j! and pXY(2,j) = (2/3)6^j e^−6/j!.
Notice that 3^j e^−3/j! is a Poisson(3) pmf and 6^j e^−6/j! is a Poisson(6) pmf. Hence, pX(1) = ∑_{j=0}^∞ pXY(1,j) = 1/3 and pX(2) = ∑_{j=0}^∞ pXY(2,j) = 2/3. It then follows that pY|X(j|1) is Poisson(3) and pY|X(j|2) is Poisson(6). With these observations, it is clear that
E[Y|X = 1] = 3 and E[Y|X = 2] = 6,
and
E[Y] = E[Y|X = 1](1/3) + E[Y|X = 2](2/3) = 3(1/3) + 6(2/3) = 1 + 4 = 5.
To obtain E[X|Y = j], we first compute
pY(j) = pXY(1,j) + pXY(2,j) = (1/3)3^j e^−3/j! + (2/3)6^j e^−6/j!
and
pX|Y(1|j) = [(1/3)3^j e^−3/j!]/[(1/3)3^j e^−3/j! + (2/3)6^j e^−6/j!] = 1/(1 + 2^{j+1} e^−3)
and
pX|Y(2|j) = [(2/3)6^j e^−6/j!]/[(1/3)3^j e^−3/j! + (2/3)6^j e^−6/j!] = 1/(1 + 2^{−(j+1)} e^3).
We now have
E[X|Y = j] = 1 · 1/(1 + 2^{j+1} e^−3) + 2 · 1/(1 + 2^{−(j+1)} e^3)
  = 1/(1 + 2^{j+1} e^−3) + 2 · 2^{j+1} e^−3/(1 + 2^{j+1} e^−3) = (1 + 2^{j+2} e^−3)/(1 + 2^{j+1} e^−3).
39. Since Y is conditionally Poisson(k) given X = k, E[Y|X = k] = k. Hence,
E[Y] = ∑_{k=1}^∞ E[Y|X = k]P(X = k) = ∑_{k=1}^∞ k P(X = k) = E[X] = 1/(1−p),
since X ∼ geometric1(p). Next,
E[XY] = ∑_{k=1}^∞ E[XY|X = k]P(X = k) = ∑_{k=1}^∞ E[kY|X = k]P(X = k), by substitution,
  = ∑_{k=1}^∞ k E[Y|X = k]P(X = k) = ∑_{k=1}^∞ k^2 P(X = k) = E[X^2]
  = var(X) + (E[X])^2 = p/(1−p)^2 + 1/(1−p)^2 = (1+p)/(1−p)^2.
Since E[Y^2|X = k] = k + k^2,
E[Y^2] = ∑_{k=1}^∞ E[Y^2|X = k]P(X = k) = ∑_{k=1}^∞ (k + k^2)P(X = k) = E[X] + E[X^2]
  = 1/(1−p) + (1+p)/(1−p)^2 = 2/(1−p)^2.
Finally, we can compute
var(Y) = E[Y^2] − (E[Y])^2 = 2/(1−p)^2 − 1/(1−p)^2 = 1/(1−p)^2.
40. From the solution of the example, it is immediate that E[Y|X = 1] = λ and E[Y|X = 0] = λ/2. Next,
E[Y] = E[Y|X = 0](1−p) + E[Y|X = 1]p = (1−p)λ/2 + pλ.
Similarly,
E[Y^2] = E[Y^2|X = 0](1−p) + E[Y^2|X = 1]p = (λ/2 + λ^2/4)(1−p) + (λ + λ^2)p.
To conclude, we have
var(Y) = E[Y^2] − (E[Y])^2 = (λ/2 + λ^2/4)(1−p) + (λ + λ^2)p − [(1−p)λ/2 + pλ]^2.
41. Write
E[(X+1)Y^2] = ∑_{i=0}^1 E[(X+1)Y^2|X = i]P(X = i) = ∑_{i=0}^1 E[(i+1)Y^2|X = i]P(X = i)
  = ∑_{i=0}^1 (i+1)E[Y^2|X = i]P(X = i).
Now, since given X = i, Y is conditionally Poisson(3(i+1)),
E[Y^2|X = i] = (λ + λ^2)|_{λ=3(i+1)} = 3(i+1) + 9(i+1)^2.
It now follows that
E[(X+1)Y^2] = ∑_{i=0}^1 (i+1)[3(i+1) + 9(i+1)^2]P(X = i) = ∑_{i=0}^1 (i+1)^2[3 + 9(i+1)]P(X = i)
  = 12(1/3) + 84(2/3) = 4 + 56 = 60.
42. Write
E[XY] = ∑_{n=0}^∞ E[XY|X = n]P(X = n) = ∑_{n=0}^∞ E[nY|X = n]P(X = n)
  = ∑_{n=0}^∞ n E[Y|X = n]P(X = n) = ∑_{n=0}^∞ n · [1/(n+1)] · λ^n e^−λ/n!
  = E[X/(X+1)] = E[(X+1−1)/(X+1)] = 1 − E[1/(X+1)].
By a problem in the previous chapter, this last expectation is equal to (1 − e^−λ)/λ. Hence,
E[XY] = 1 − (1 − e^−λ)/λ.
43. Write
E[XY] = ∑_{n=1}^∞ E[XY|X = n]P(X = n) = ∑_{n=1}^∞ E[nY|X = n]P(X = n)
  = ∑_{n=1}^∞ n E[Y|X = n]P(X = n) = ∑_{n=1}^∞ n · [n/(1−q)] P(X = n)
  = E[X^2]/(1−q) = [var(X) + (E[X])^2]/(1−q)
  = [p/(1−p)^2 + 1/(1−p)^2]/(1−q) = (1+p)/[(1−q)(1−p)^2].
44. Write
E[X^2] = ∑_{k=1}^∞ E[X^2|Y = k]P(Y = k) = ∑_{k=1}^∞ (k + k^2)P(Y = k) = E[Y + Y^2]
  = E[Y] + E[Y^2] = m + (r + m^2) = m + m^2 + r.
45. Using probability generating functions, we see that
GV(z) = E[z^{X+Y}] = E[z^X z^Y] = E[z^X]E[z^Y] = [(1−p) + pz]^n [(1−p) + pz]^m = [(1−p) + pz]^{n+m}.
Thus, V ∼ binomial(n+m, p). We next compute
P(V = 10|X = 4) = P(X + Y = 10|X = 4) = P(4 + Y = 10|X = 4) = P(Y = 6|X = 4) = P(Y = 6) = C(m,6) p^6(1−p)^{m−6}.
46. Write
GY(z) = E[z^Y] = ∑_{k=1}^∞ E[z^Y|X = k]P(X = k) = ∑_{k=1}^∞ e^{k(z−1)}P(X = k)
  = ∑_{k=1}^∞ (e^{z−1})^k P(X = k) = GX(e^{z−1}) = (1−p)e^{z−1}/(1 − p e^{z−1}).
CHAPTER 4
Problem Solutions
1. Let Vi denote the input voltage at the ith sampling time. The problem tells us that the Vi are independent and uniformly distributed on [0,7]. The alarm sounds if Vi > 5 for i = 1,2,3. The probability of this is
P(⋂_{i=1}^3 {Vi > 5}) = ∏_{i=1}^3 P(Vi > 5).
Now, P(Vi > 5) = ∫_5^7 (1/7) dt = 2/7. Hence, the desired probability is (2/7)^3 = 8/343 = 0.0233.
2. We must solve ∫_t^∞ f(x) dx = 1/2 for t. Now,
∫_t^∞ 2x^−3 dx = −x^−2 |_t^∞ = 1/t^2.
Solving 1/t^2 = 1/2, we find that t = √2.
3. To find c, we solve ∫_0^1 c x^{−1/2} dx = 1. The left-hand side of this equation is 2c x^{1/2}|_0^1 = 2c. Solving 2c = 1 yields c = 1/2. For the median, we must solve ∫_t^1 (1/2)x^{−1/2} dx = 1/2, or x^{1/2}|_t^1 = 1 − √t = 1/2. We find that t = 1/4.
4. (a) For t ≥ 0, P(X > t) = ∫_t^∞ λe^−λx dx = −e^−λx |_t^∞ = e^−λt.
(b) First, P(X > t+∆t|X > t) = P(X > t+∆t, X > t)/P(X > t). Next, observe that
{X > t+∆t} ∩ {X > t} = {X > t+∆t},
and so P(X > t+∆t|X > t) = P(X > t+∆t)/P(X > t) = e^{−λ(t+∆t)}/e^−λt = e^−λ∆t.
5. Let Xi denote the voltage output by regulator i. Then the Xi are i.i.d. exp(λ) random variables. Now put
Y := ∑_{i=1}^{10} I_(v,∞)(Xi)
so that Y counts the number of regulators that output more than v volts. We must compute P(Y = 3). Now, the I_(v,∞)(Xi) are i.i.d. Bernoulli(p) random variables, where
p = P(Xi > v) = ∫_v^∞ λe^−λx dx = −e^−λx |_v^∞ = e^−λv.
Next, we know from the previous chapter that a sum of n i.i.d. Bernoulli(p) random variables is binomial(n,p). Thus,
P(Y = 3) = C(10,3) p^3(1−p)^7 = C(10,3) e^−3λv(1 − e^−λv)^7 = 120 e^−3λv(1 − e^−λv)^7.
 6. First note that P(X_i > 2) = \int_2^\infty \lambda e^{-\lambda x}\,dx = -e^{-\lambda x}\big|_2^\infty = e^{-2\lambda}.
    (a) P\big(\min(X_1,\dots,X_n) > 2\big) = P\big(\bigcap_{i=1}^n \{X_i > 2\}\big) = \prod_{i=1}^n P(X_i > 2) = e^{-2n\lambda}.
    (b) Write
        \[
        P\big(\max(X_1,\dots,X_n) > 2\big) = 1 - P\big(\max(X_1,\dots,X_n) \le 2\big)
          = 1 - P\Big(\bigcap_{i=1}^{n} \{X_i \le 2\}\Big)
          = 1 - \prod_{i=1}^{n} P(X_i \le 2) = 1 - [1 - e^{-2\lambda}]^n.
        \]
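Both formulas are easy to verify numerically. A hypothetical sketch (n and \lambda below are illustrative values):

```python
import math
import random

# Illustrative parameter values.
n, lam = 4, 0.3
exact_min = math.exp(-2 * n * lam)             # P(min > 2) = e^{-2 n lam}
exact_max = 1 - (1 - math.exp(-2 * lam)) ** n  # P(max > 2) = 1 - [1 - e^{-2 lam}]^n

random.seed(2)
trials = 100_000
min_hits = max_hits = 0
for _ in range(trials):
    xs = [random.expovariate(lam) for _ in range(n)]
    min_hits += min(xs) > 2
    max_hits += max(xs) > 2
est_min = min_hits / trials
est_max = max_hits / trials
```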
 7. (a) P(Y \le 2) = \int_0^2 \mu e^{-\mu y}\,dy = 1 - e^{-2\mu}.
    (b) P(X \le 12, Y \le 12) = P(X \le 12)\,P(Y \le 12) = (1 - e^{-12\lambda})(1 - e^{-12\mu}).
    (c) Write
        \[
        P(\{X \le 12\} \cup \{Y \le 12\}) = 1 - P(X > 12, Y > 12)
          = 1 - P(X > 12)\,P(Y > 12)
          = 1 - e^{-12\lambda} e^{-12\mu} = 1 - e^{-12(\lambda+\mu)}.
        \]
 8. (a) Make the change of variable y = \lambda x^p, dy = \lambda p x^{p-1}\,dx, to get
        \[
        \int_0^\infty \lambda p x^{p-1} e^{-\lambda x^p}\,dx = \int_0^\infty e^{-y}\,dy = -e^{-y}\Big|_0^\infty = 1.
        \]
    (b) The same change of variable also yields
        \[
        P(X > t) = \int_t^\infty \lambda p x^{p-1} e^{-\lambda x^p}\,dx = \int_{\lambda t^p}^\infty e^{-y}\,dy = e^{-\lambda t^p}.
        \]
    (c) The probability that none of the X_i exceeds 3 is
        \[
        P\Big(\bigcap_{i=1}^{n} \{X_i \le 3\}\Big) = \prod_{i=1}^{n} P(X_i \le 3) = [1 - P(X_1 > 3)]^n = [1 - e^{-\lambda 3^p}]^n.
        \]
        The probability that at least one of them exceeds 3 is
        \[
        P\Big(\bigcup_{i=1}^{n} \{X_i > 3\}\Big) = 1 - P\Big(\bigcap_{i=1}^{n} \{X_i \le 3\}\Big) = 1 - [1 - e^{-\lambda 3^p}]^n.
        \]
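The Weibull tail formula of part (b) can be checked by inversion sampling. A hypothetical sketch; the parameter values are illustrative:

```python
import math
import random

# Illustrative Weibull parameters.
lam, p, t = 0.2, 1.5, 3.0
exact = math.exp(-lam * t**p)       # P(X > t) = e^{-lam t^p} from part (b)

# Sample X by inversion: if U ~ uniform(0,1], then X = (-ln U / lam)^{1/p}
# has density lam * p * x^{p-1} * e^{-lam x^p}.
random.seed(3)
trials = 100_000
hits = sum(
    (-math.log(1.0 - random.random()) / lam) ** (1 / p) > t
    for _ in range(trials)
)
estimate = hits / trials
```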
 9. (a) Since f'(x) = -x e^{-x^2/2}/\sqrt{2\pi}, f'(x) < 0 for x > 0 and f'(x) > 0 for x < 0.
    (b) Since f''(x) = (x^2 - 1)e^{-x^2/2}/\sqrt{2\pi}, we see that f''(x) > 0 for |x| > 1 and
        f''(x) < 0 for |x| < 1.
    (c) Rearrange e^{x^2/2} \ge x^2/2 to get e^{-x^2/2} \le 2/x^2 \to 0 as |x| \to \infty.
10. Following the hint, write f(x) = \varphi((x-m)/\sigma)/\sigma, where \varphi is the standard normal
    density. Observe that f'(x) = \varphi'((x-m)/\sigma)/\sigma^2 and f''(x) = \varphi''((x-m)/\sigma)/\sigma^3.
    (a) Since the argument of \varphi' is positive for x > m and negative for x < m, f(x) is
        decreasing for x > m and increasing for x < m. Hence, f has a global maximum
        at x = m.
    (b) Since the absolute value of the argument of \varphi'' is greater than one if and only if
        |x - m| > \sigma, f(x) is concave for |x - m| < \sigma and convex for |x - m| > \sigma.
11. Since \varphi is bounded, \lim_{\sigma\to\infty} \varphi((x-m)/\sigma)/\sigma = 0. Hence, \lim_{\sigma\to\infty} f(x) = 0. For
    x \ne m, we have
    \[
    f(x) = \frac{\exp\big[-\big(\frac{x-m}{\sigma}\big)^2\big/2\big]}{\sqrt{2\pi}\,\sigma}
         \le \frac{2}{\sqrt{2\pi}\,\sigma\big(\frac{x-m}{\sigma}\big)^2}
         = \frac{2\sigma}{\sqrt{2\pi}\,(x-m)^2} \to 0
    \]
    as \sigma \to 0. Otherwise, since f(m) = [\sqrt{2\pi}\,\sigma]^{-1}, \lim_{\sigma\to 0} f(m) = \infty.
12. (a) f(x) = \sum_n p_n f_n(x) is obviously nonnegative. Also,
        \[
        \int_{-\infty}^{\infty} f(x)\,dx = \int_{-\infty}^{\infty} \sum_n p_n f_n(x)\,dx
          = \sum_n p_n \int_{-\infty}^{\infty} f_n(x)\,dx = \sum_n p_n \cdot 1 = \sum_n p_n = 1.
        \]
    (b) [Sketch: plot over 0 \le x \le 3 with vertical-axis ticks at 1/4, 1/2, 3/4.]
    (c) [Sketch: plot over 0 \le x \le 3 with vertical-axis ticks at 1/4, 1/2, 3/4.]
13. Clearly, (g*h)(x) = \int_{-\infty}^{\infty} g(y)h(x-y)\,dy \ge 0 since g and h are nonnegative. Next,
    \[
    \int_{-\infty}^{\infty} (g*h)(x)\,dx
      = \int_{-\infty}^{\infty} \Big(\int_{-\infty}^{\infty} g(y)h(x-y)\,dy\Big)\,dx
      = \int_{-\infty}^{\infty} g(y) \Big(\int_{-\infty}^{\infty} h(x-y)\,dx\Big)\,dy
      = \int_{-\infty}^{\infty} g(y) \underbrace{\Big(\int_{-\infty}^{\infty} h(\theta)\,d\theta\Big)}_{=\,1}\,dy
      = \int_{-\infty}^{\infty} g(y)\,dy = 1.
    \]
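The argument above can be illustrated numerically: convolve two densities on a grid and check that the result still integrates to roughly one. The two exponential densities below are illustrative choices, not from the text:

```python
import math

dx = 0.01
n = 1500                                        # grid on [0, 15)
xs = [i * dx for i in range(n)]
g = [2.0 * math.exp(-2.0 * x) for x in xs]      # exp(2) density
h = [0.5 * math.exp(-0.5 * x) for x in xs]      # exp(1/2) density

# (g*h)(x_k) ~= sum_j g(x_j) h(x_k - x_j) dx   (Riemann sum)
conv = [sum(g[j] * h[k - j] for j in range(k + 1)) * dx for k in range(n)]
total = sum(conv) * dx                          # should be close to 1
```

The small discrepancy from 1 comes from the grid spacing and the truncation of the infinite range, not from the mathematics.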
14. (a) Let p > 1. On \Gamma(p) = \int_0^\infty x^{p-1} e^{-x}\,dx, use integration by parts with u = x^{p-1}
        and dv = e^{-x}\,dx. Then du = (p-1)x^{p-2}\,dx, v = -e^{-x}, and
        \[
        \Gamma(p) = \underbrace{-x^{p-1} e^{-x}\Big|_0^\infty}_{=\,0}
                    + (p-1)\int_0^\infty x^{(p-1)-1} e^{-x}\,dx = (p-1)\,\Gamma(p-1).
        \]
    (b) On \Gamma(1/2) = \int_0^\infty x^{-1/2} e^{-x}\,dx, make the change of variable x = y^2/2, or
        y = \sqrt{2x}. Then dx = y\,dy and x^{-1/2} = \sqrt{2}/y. Hence,
        \[
        \Gamma(1/2) = \int_0^\infty \frac{\sqrt{2}}{y}\,e^{-y^2/2}\,y\,dy
                    = \sqrt{2}\int_0^\infty e^{-y^2/2}\,dy
                    = \sqrt{2}\,\sqrt{2\pi}\int_0^\infty \frac{e^{-y^2/2}}{\sqrt{2\pi}}\,dy
                    = \sqrt{2}\,\sqrt{2\pi}\cdot\tfrac{1}{2} = \sqrt{\pi}.
        \]
    (c) By repeatedly using the recursion formula in part (a), we have
        \[
        \Gamma\Big(\frac{2n+1}{2}\Big)
          = \frac{2n-1}{2}\,\Gamma\Big(\frac{2n-1}{2}\Big)
          = \frac{2n-1}{2}\cdot\frac{2n-3}{2}\,\Gamma\Big(\frac{2n-3}{2}\Big)
          = \cdots
          = \frac{2n-1}{2}\cdot\frac{2n-3}{2}\cdots\frac{5}{2}\cdot\frac{3}{2}\cdot\frac{1}{2}\,\Gamma(1/2)
          = \frac{2n-1}{2}\cdot\frac{2n-3}{2}\cdots\frac{5}{2}\cdot\frac{3}{2}\cdot\frac{1}{2}\cdot\sqrt{\pi}
          = \frac{(2n-1)!!}{2^n}\,\sqrt{\pi}.
        \]
    (d) First note that g_p(y) = 0 for y \le 0, and similarly for g_q(y). Hence, in order to
        have g_p(y)g_q(x-y) > 0, we need y > 0 and x - y > 0, or equivalently, x > y > 0.
        Of course, if x \le 0 this does not happen. Thus, (g_p * g_q)(x) = 0 for x \le 0. For
        x > 0, we follow the hint and write
        \[
        (g_p * g_q)(x) = \int_{-\infty}^{\infty} g_p(y)g_q(x-y)\,dy
          = \int_0^x g_p(y)g_q(x-y)\,dy
          = \frac{1}{\Gamma(p)\Gamma(q)}\int_0^x y^{p-1}e^{-y}\cdot(x-y)^{q-1}e^{-(x-y)}\,dy
          = \frac{x^{q-1}e^{-x}}{\Gamma(p)\Gamma(q)}\int_0^x y^{p-1}(1-y/x)^{q-1}\,dy
          = \frac{x^{q}e^{-x}}{\Gamma(p)\Gamma(q)}\int_0^1 (x\theta)^{p-1}(1-\theta)^{q-1}\,d\theta,
            \quad\text{ch.\ of var.\ }\theta = y/x,
          = \frac{x^{p+q-1}e^{-x}}{\Gamma(p)\Gamma(q)}\int_0^1 \theta^{p-1}(1-\theta)^{q-1}\,d\theta. \quad (*)
        \]
        Now, the left-hand side is a convolution of densities, and is therefore a density
        by Problem 13. In particular, this means that the left-hand side integrates to
        one. On the right-hand side, note that \int_0^\infty x^{p+q-1}e^{-x}\,dx = \Gamma(p+q). Hence,
        integrating the above equation with respect to x from zero to infinity yields
        \[
        1 = \frac{\Gamma(p+q)}{\Gamma(p)\Gamma(q)}\int_0^1 \theta^{p-1}(1-\theta)^{q-1}\,d\theta.
        \]
        Solving for the above integral and substituting the result into (*), we find that
        (g_p * g_q)(x) = g_{p+q}(x).
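The identity (g_p * g_q)(x) = g_{p+q}(x) is easy to spot-check numerically. A hypothetical sketch; p = 1.5, q = 2.5, x = 3.0 are illustrative values:

```python
import math

# Rate-1 gamma density g_p(x) = x^{p-1} e^{-x} / Gamma(p) for x > 0.
def g(p, x):
    return x ** (p - 1) * math.exp(-x) / math.gamma(p) if x > 0 else 0.0

p, q, x = 1.5, 2.5, 3.0
dy = 0.0005
# Riemann sum for (g_p * g_q)(x) = integral_0^x g_p(y) g_q(x - y) dy.
conv = sum(g(p, i * dy) * g(q, x - i * dy)
           for i in range(1, int(x / dy))) * dy
target = g(p + q, x)                # should match g_{p+q}(x)
```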
15. (a) In \int_{-\infty}^{\infty} f_\lambda(x)\,dx = \int_{-\infty}^{\infty} \lambda f(\lambda x)\,dx, make the change of variable y = \lambda x,
        dy = \lambda\,dx, to get
        \[
        \int_{-\infty}^{\infty} f_\lambda(x)\,dx = \int_{-\infty}^{\infty} f(y)\,dy = 1.
        \]
    (b) Observe that
        \[
        g_{1,\lambda}(x) = \lambda\,\frac{(\lambda x)^0 e^{-\lambda x}}{0!} = \lambda e^{-\lambda x},
        \]
        which we recognize as the exp(\lambda) density.
    (c) The desired probability is
        \[
        P_m(t) := \int_t^\infty \lambda\,\frac{(\lambda x)^{m-1} e^{-\lambda x}}{(m-1)!}\,dx.
        \]
        Note that P_1(t) = \int_t^\infty \lambda e^{-\lambda x}\,dx = e^{-\lambda t}. For m > 1, apply integration by parts
        with u = (\lambda x)^{m-1}/(m-1)! and dv = \lambda e^{-\lambda x}\,dx. Then
        \[
        P_m(t) = \frac{(\lambda t)^{m-1} e^{-\lambda t}}{(m-1)!} + P_{m-1}(t).
        \]
        Applying this result recursively, we find that
        \[
        P_m(t) = \frac{(\lambda t)^{m-1} e^{-\lambda t}}{(m-1)!} + \frac{(\lambda t)^{m-2} e^{-\lambda t}}{(m-2)!} + \cdots + e^{-\lambda t}.
        \]
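The finite sum in part (c) can be checked against direct numerical integration of the Erlang tail. A hypothetical sketch; \lambda, m, and t below are illustrative values:

```python
import math

# Illustrative values.
lam, m, t = 2.0, 4, 1.5

# Closed form: P_m(t) = sum_{k=0}^{m-1} (lam t)^k e^{-lam t} / k!
closed = sum((lam * t) ** k * math.exp(-lam * t) / math.factorial(k)
             for k in range(m))

# Direct Riemann sum of the Erlang(m) density from t outward.
dx = 0.0005
numeric = sum(lam * (lam * (t + i * dx)) ** (m - 1)
              * math.exp(-lam * (t + i * dx)) / math.factorial(m - 1)
              for i in range(int(40 / dx))) * dx
```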
    (d) We have
        \[
        g_{\frac{2m+1}{2},\frac{1}{2}}(x) = \frac{1}{2}\cdot\frac{(\frac{1}{2}x)^{m-1/2} e^{-x/2}}{\Gamma((2m+1)/2)}
          = \frac{(1/2)^m (1/2)^{1/2} x^{m-1/2} e^{-x/2}}{\frac{(2m-1)!!}{2^m}\sqrt{\pi}}
          = \frac{x^{m-1/2} e^{-x/2}}{(2m-1)\cdots 5\cdot 3\cdot 1\cdot\sqrt{2\pi}}.
        \]
16. (a) We see that b_{1,1}(x) = 1 is the uniform(0,1) density, b_{2,2}(x) = 6x(1-x), and
        b_{1/2,1}(x) = 1/(2\sqrt{x}).
        [Plot of b_{1,1}(x), b_{2,2}(x), and b_{1/2,1}(x) for 0 \le x \le 1; vertical axis from 0 to 2.]
    (b) From Problem 14(d) and its hint, we have
        \[
        g_{p+q}(x) = (g_p * g_q)(x) = \frac{x^{p+q-1}e^{-x}}{\Gamma(p)\Gamma(q)}\int_0^1 \theta^{p-1}(1-\theta)^{q-1}\,d\theta.
        \]
        Integrating the left- and right-hand sides with respect to x from zero to infinity
        yields
        \[
        1 = \frac{\Gamma(p+q)}{\Gamma(p)\Gamma(q)}\int_0^1 \theta^{p-1}(1-\theta)^{q-1}\,d\theta,
        \]
        which says that the beta density integrates to one.
17. Starting with
    \[
    \Gamma(p)\Gamma(q) = \Gamma(p+q)\int_0^1 u^{p-1}(1-u)^{q-1}\,du,
    \]
    make the change of variable u = \sin^2\theta, du = 2\sin\theta\cos\theta\,d\theta. We obtain
    \[
    \Gamma(p)\Gamma(q) = \Gamma(p+q)\int_0^{\pi/2} (\sin^2\theta)^{p-1}(1-\sin^2\theta)^{q-1}\cdot 2\sin\theta\cos\theta\,d\theta
                       = 2\,\Gamma(p+q)\int_0^{\pi/2} (\sin\theta)^{2p-1}(\cos\theta)^{2q-1}\,d\theta.
    \]
    Setting p = q = 1/2 on both sides yields
    \[
    \Gamma(1/2)^2 = 2\int_0^{\pi/2} 1\,d\theta = \pi,
    \]
    and it follows that \Gamma(1/2) = \sqrt{\pi}.
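A quick numerical confirmation (a sketch, not part of the original solution), using the substitution x = u^2 in the defining integral, which gives \Gamma(1/2) = 2\int_0^\infty e^{-u^2}\,du:

```python
import math

# Midpoint-rule evaluation of 2 * integral_0^inf e^{-u^2} du over [0, 10]
# (the tail beyond u = 10 is negligible).
du = 1e-4
numeric = 2 * sum(math.exp(-((i + 0.5) * du) ** 2)
                  for i in range(100_000)) * du
```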
18. Starting with
    \[
    \Gamma(p)\Gamma(q) = \Gamma(p+q)\int_0^1 u^{p-1}(1-u)^{q-1}\,du,
    \]
    make the change of variable u = \sin^2\theta, du = 2\sin\theta\cos\theta\,d\theta. We obtain
    \[
    \frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)} = \int_0^1 u^{p-1}(1-u)^{q-1}\,du
      = \int_0^{\pi/2} (\sin^2\theta)^{p-1}(1-\sin^2\theta)^{q-1}\cdot 2\sin\theta\cos\theta\,d\theta
      = 2\int_0^{\pi/2} (\sin\theta)^{2p-1}(\cos\theta)^{2q-1}\,d\theta.
    \]
    Setting p = (n+1)/2 and q = 1/2 on both sides yields
    \[
    \frac{\Gamma\big(\frac{n+1}{2}\big)\sqrt{\pi}}{\Gamma\big(\frac{n+2}{2}\big)} = 2\int_0^{\pi/2} \sin^n\theta\,d\theta,
    \]
    and the desired result follows.
19. Starting with the integral definition of B(p,q), make the change of variable
    u = 1 - e^{-\theta}, which implies both du = e^{-\theta}\,d\theta and 1 - u = e^{-\theta}. Hence,
    \[
    B(p,q) = \int_0^1 u^{p-1}(1-u)^{q-1}\,du
           = \int_0^\infty (1 - e^{-\theta})^{p-1}(e^{-\theta})^{q-1} e^{-\theta}\,d\theta
           = \int_0^\infty (1 - e^{-\theta})^{p-1} e^{-q\theta}\,d\theta.
    \]
20. We first use the fact that the density is even and then make the change of variable
    e^\theta = 1 + x^2/\nu, which implies both e^\theta\,d\theta = (2x/\nu)\,dx and x = \sqrt{\nu(e^\theta - 1)}. Thus,
    \[
    \int_{-\infty}^{\infty} \Big(1 + \frac{x^2}{\nu}\Big)^{-(\nu+1)/2} dx
      = 2\int_0^\infty \Big(1 + \frac{x^2}{\nu}\Big)^{-(\nu+1)/2} dx
      = 2\int_0^\infty (e^\theta)^{-(\nu+1)/2}\cdot\frac{\nu}{2}\,e^\theta\,\frac{1}{\sqrt{\nu(e^\theta - 1)}}\,d\theta
      = \sqrt{\nu}\int_0^\infty (e^\theta)^{-\nu/2}(e^\theta)^{1/2}\big/\sqrt{e^\theta - 1}\,d\theta
      = \sqrt{\nu}\int_0^\infty (e^\theta)^{-\nu/2}(1 - e^{-\theta})^{-1/2}\,d\theta
      = \sqrt{\nu}\int_0^\infty (1 - e^{-\theta})^{1/2-1} e^{-\theta\nu/2}\,d\theta.
    \]
    By the preceding problem, this is equal to \sqrt{\nu}\,B(1/2, \nu/2), and we see that Student's t
    density integrates to one.
21. (a) Using Stirling's formula,
        \[
        \frac{\Gamma\big(\frac{1+\nu}{2}\big)}{\sqrt{\nu}\,\Gamma\big(\frac{\nu}{2}\big)}
          \approx \frac{\sqrt{2\pi}\,\big(\frac{1+\nu}{2}\big)^{(1+\nu)/2 - 1/2}\,e^{-(1+\nu)/2}}
                       {\sqrt{\nu}\,\sqrt{2\pi}\,\big(\frac{\nu}{2}\big)^{\nu/2 - 1/2}\,e^{-\nu/2}}
          = \frac{\big(\frac{1+\nu}{2}\big)^{\nu/2} e^{-1/2}}{\sqrt{\nu}\,\big(\frac{\nu}{2}\big)^{\nu/2 - 1/2}}
          = \Big(\frac{1+\nu}{\nu}\Big)^{\nu/2}\,\frac{(\nu/2)^{1/2}}{\sqrt{\nu}}\,e^{-1/2}
          = [(1 + 1/\nu)^\nu]^{1/2}\,\frac{1}{\sqrt{2}\,e^{1/2}}
          \to (e^1)^{1/2}\,\frac{1}{\sqrt{2}\,e^{1/2}} = \frac{1}{\sqrt{2}}.
        \]
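The limit in part (a) can be checked numerically. A sketch using log-gamma to avoid overflow for large \nu (the sample values of \nu are illustrative):

```python
import math

# ratio(v) = Gamma((1+v)/2) / (sqrt(v) * Gamma(v/2)); should -> 1/sqrt(2).
def ratio(v):
    return math.exp(math.lgamma((1 + v) / 2)
                    - math.lgamma(v / 2)) / math.sqrt(v)

r10, r10000 = ratio(10), ratio(10_000)
```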
    (b) First write
        \[
        \Big(1 + \frac{x^2}{\nu}\Big)^{(\nu+1)/2}
          = \Big[\Big(1 + \frac{x^2}{\nu}\Big)^{\nu}\Big]^{1/2}\Big(1 + \frac{x^2}{\nu}\Big)^{1/2}
          \to [e^{x^2}]^{1/2}\,1^{1/2} = e^{x^2/2}.
        \]
        It then follows that
        \[
        f_\nu(x) = \frac{\big(1 + \frac{x^2}{\nu}\big)^{-(\nu+1)/2}}{\sqrt{\nu}\,B(\frac{1}{2},\frac{\nu}{2})}
          = \frac{\Gamma\big(\frac{1+\nu}{2}\big)}{\sqrt{\nu}\,\Gamma\big(\frac{\nu}{2}\big)}
            \cdot\frac{1}{\sqrt{\pi}}\Big(1 + \frac{x^2}{\nu}\Big)^{-(\nu+1)/2}
          \to \frac{1}{\sqrt{2\pi}\,e^{x^2/2}} = \frac{e^{-x^2/2}}{\sqrt{2\pi}}.
        \]
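The convergence of Student's t density to the standard normal density can be observed directly. A hypothetical sketch using the closed form of f_\nu:

```python
import math

# f_v(x) = Gamma((v+1)/2) / (sqrt(v pi) Gamma(v/2)) * (1 + x^2/v)^{-(v+1)/2}
def t_density(v, x):
    c = math.exp(math.lgamma((v + 1) / 2) - math.lgamma(v / 2))
    return c / math.sqrt(v * math.pi) * (1 + x * x / v) ** (-(v + 1) / 2)

def normal_density(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Largest gap between the two densities on [-3, 3] for v = 1000.
gap = max(abs(t_density(1000, 0.1 * k) - normal_density(0.1 * k))
          for k in range(-30, 31))
```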
22. Making the change of variable t = 1/(1+z) as suggested in the hint, note that it is
    equivalent to 1 + z = t^{-1}, which implies dz = -t^{-2}\,dt. Thus,
    \[
    \int_0^\infty \frac{z^{p-1}}{(1+z)^{p+q}}\,dz
      = \int_0^1 \Big(\frac{1}{t} - 1\Big)^{p-1} t^{p+q}\,\frac{dt}{t^2}
      = \int_0^1 \Big(\frac{1-t}{t}\Big)^{p-1} t^{p+q-2}\,dt
      = \int_0^1 (1-t)^{p-1} t^{q-1}\,dt = B(q,p) = B(p,q).
    \]
    Hence, f_Z(z) integrates to one.
23. E[X] = \int_1^\infty x\cdot\frac{2}{x^3}\,dx = \int_1^\infty 2x^{-2}\,dx = \frac{-2}{x}\Big|_1^\infty = 2.
24. If the input-output relation has n levels, then the distance from -V_{max} to +V_{max}
    should be n\Delta; i.e., n\Delta = 2V_{max}, or \Delta = 2V_{max}/n. Next, we have from the example
    in the text that the performance is \Delta^2/12, and we need \Delta^2/12 < \varepsilon, or
    \[
    \frac{1}{12}\Big(\frac{2V_{max}}{n}\Big)^2 < \varepsilon.
    \]
    Solving this for n yields V_{max}/\sqrt{3\varepsilon} < n = 2^b. Taking natural logarithms, we have
    \[
    b > \ln\big(V_{max}/\sqrt{3\varepsilon}\big)\big/\ln 2.
    \]
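The bit computation above can be sketched in code. The values of V_{max} and \varepsilon below are illustrative, not from the problem statement:

```python
import math

# Smallest b with 2^b > Vmax / sqrt(3 eps), so that the quantizer noise
# power Delta^2 / 12 stays strictly below eps.
def bits_needed(Vmax, eps):
    target = Vmax / math.sqrt(3 * eps)
    b = math.ceil(math.log2(target))
    if 2 ** b <= target:        # guard the boundary case 2^b == target
        b += 1
    return b

b = bits_needed(1.0, 1e-6)
Delta = 2 * 1.0 / 2 ** b        # step size with n = 2^b levels
```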
25. We use the change of variable x = z - m as follows:
    \[
    E[Z] = \int_{-\infty}^{\infty} z f_Z(z)\,dz = \int_{-\infty}^{\infty} z f(z-m)\,dz = \int_{-\infty}^{\infty} (x+m) f(x)\,dx
         = \int_{-\infty}^{\infty} x f(x)\,dx + m\int_{-\infty}^{\infty} f(x)\,dx = E[X] + m = 0 + m = m.
    \]
26. E[X^2] = \int_1^\infty x^2\cdot\frac{2}{x^3}\,dx = \int_1^\infty \frac{2}{x}\,dx = 2\ln x\Big|_1^\infty = 2(\infty - 0) = \infty.
27. First note that since Student's t density is even, E[|X|^k] = \int_{-\infty}^{\infty} |x|^k f_\nu(x)\,dx is
    proportional to
    \[
    \int_0^\infty \frac{x^k}{(1 + x^2/\nu)^{(\nu+1)/2}}\,dx
      = \int_0^1 \frac{x^k}{(1 + x^2/\nu)^{(\nu+1)/2}}\,dx + \int_1^\infty \frac{x^k}{(1 + x^2/\nu)^{(\nu+1)/2}}\,dx.
    \]
    With regard to this last integral, observe that
    \[
    \int_1^\infty \frac{x^k}{(1 + x^2/\nu)^{(\nu+1)/2}}\,dx
      \le \int_1^\infty \frac{x^k}{(x^2/\nu)^{(\nu+1)/2}}\,dx
      = \nu^{(\nu+1)/2}\int_1^\infty \frac{dx}{x^{\nu+1-k}},
    \]
    which is finite if \nu + 1 - k > 1, or k < \nu. Next, instead of breaking the range of
    integration at one, we break it at the solution of x^2/\nu = 1, or x = \sqrt{\nu}. Then
    \[
    \int_{\sqrt{\nu}}^\infty \frac{x^k}{(1 + x^2/\nu)^{(\nu+1)/2}}\,dx
      \ge \int_{\sqrt{\nu}}^\infty \frac{x^k}{(x^2/\nu + x^2/\nu)^{(\nu+1)/2}}\,dx
      = \Big(\frac{\nu}{2}\Big)^{(\nu+1)/2}\int_{\sqrt{\nu}}^\infty \frac{dx}{x^{\nu+1-k}},
    \]
    which is infinite if \nu + 1 - k \le 1, or k \ge \nu.
28. Begin with E[Y^4] = E[(Z+n)^4] = E[Z^4 + 4Z^3 n + 6Z^2 n^2 + 4Z n^3 + n^4]. The
    moments of the standard normal were computed in an example in this chapter.
    Hence, E[Y^4] = 3 + 4\cdot 0\cdot n + 6\cdot 1\cdot n^2 + 4\cdot 0\cdot n^3 + n^4 = 3 + 6n^2 + n^4.
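A Monte Carlo spot-check of this moment formula (a sketch; n = 2 is an illustrative value):

```python
import random

# Check E[Y^4] = 3 + 6 n^2 + n^4 for Y = Z + n with Z standard normal.
n = 2.0
exact = 3 + 6 * n**2 + n**4     # = 43 for n = 2

random.seed(4)
trials = 200_000
estimate = sum((random.gauss(0.0, 1.0) + n) ** 4
               for _ in range(trials)) / trials
```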
29. E[X^n] = \int_0^\infty x^n\,\frac{x^{p-1}e^{-x}}{\Gamma(p)}\,dx = \frac{1}{\Gamma(p)}\int_0^\infty x^{(n+p)-1}e^{-x}\,dx = \frac{\Gamma(n+p)}{\Gamma(p)}.
30. (a) First write
        \[
        E[X] = \int_0^\infty x\cdot x e^{-x^2/2}\,dx = \frac{1}{2}\int_{-\infty}^{\infty} x^2 e^{-x^2/2}\,dx
             = \frac{\sqrt{2\pi}}{2}\int_{-\infty}^{\infty} x^2\,\frac{e^{-x^2/2}}{\sqrt{2\pi}}\,dx,
        \]
        where the last integral is the second moment of a standard normal density, which
        is one. Hence, E[X] = \sqrt{2\pi}/2 = \sqrt{\pi/2}.
    (b) For higher-order moments, first write
        E[X^n] = \int_0^\infty x^n\cdot x e^{-x^2/2}\,dx = \int_0^\infty x^{n+1} e^{-x^2/2}
