distribution of X given Y. Note that
\begin{align*}
f_{X|Y}(x|y) &= \frac{f_X(x)\, f_{Y|X}(y|x)}{\int_{-\infty}^{\infty} f_X(x)\, f_{Y|X}(y|x)\, dx} \\
&= \frac{\frac{b^a}{\Gamma(a)} x^{a-1} \exp(-bx) \cdot x\exp(-xy)}{\int_{-\infty}^{\infty} \frac{b^a}{\Gamma(a)} x^{a-1} \exp(-bx) \cdot x\exp(-xy)\, dx} \\
&= \frac{x^{a} \exp(-(b+y)x)}{\int_{-\infty}^{\infty} x^{(a+1)-1} \exp(-(b+y)x)\, dx} \\
&= \frac{x^{a} \exp(-(b+y)x)}{\frac{\Gamma(a+1)}{(b+y)^{a+1}} \int_{-\infty}^{\infty} \frac{(b+y)^{a+1}}{\Gamma(a+1)} x^{(a+1)-1} \exp(-(b+y)x)\, dx} \\
&= \frac{x^{a} \exp(-(b+y)x)}{\frac{\Gamma(a+1)}{(b+y)^{a+1}} \cdot 1} && \text{Definition 5.41} \\
&= \frac{(b+y)^{a+1}}{\Gamma(a+1)}\, x^{a} \exp(-(b+y)x)
\end{align*}
That is, X|Y = y ∼ Gamma(a + 1, b + y).
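This conclusion can be checked numerically. The sketch below (not part of the original notes) simulates X ∼ Gamma(a, b) and Y | X ∼ Exp(X), keeps the draws whose Y lands near y, and compares the conditional mean of X with the Gamma(a + 1, b + y) mean; the parameter values, the window eps and the sample size are illustrative assumptions.

```python
# Minimal Monte Carlo sanity check (not part of the original notes) of
# X | Y = y ~ Gamma(a + 1, b + y).  The values of a, b, y, the window `eps`
# and the sample size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
a, b, y = 3.0, 2.0, 1.5
n = 2_000_000

# X ~ Gamma(a, b) with rate b; given X = x, Y ~ Exponential with rate x.
x = rng.gamma(shape=a, scale=1.0 / b, size=n)
y_sim = rng.exponential(scale=1.0 / x)

# Keep the draws with Y close to y and compare the conditional mean of X
# with the mean (a + 1) / (b + y) of a Gamma(a + 1, b + y) distribution.
eps = 0.01
kept = x[np.abs(y_sim - y) < eps]
print("empirical E[X | Y ~= y]:", kept.mean())
print("Gamma(a+1, b+y) mean:   ", (a + 1) / (b + y))
```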
7.3.1 Independence
Independence describes a particular type of conditional density that is commonly used in Statistics. Informally, X1
and X2 are independent if, no matter what value is observed for X1, this observation brings no information about
X2 (and vice-versa). This is a generalization of the concept of independence between events (Definition 2.46).
Independence between random vectors is formally presented in Definition 7.28.
Definition 7.28. We say that X1, ..., Xd are conditionally independent given Y if, for every x1, ..., xd and y,
\[
f_{(X_1,\ldots,X_d)|Y}(x_1,\ldots,x_d|y) = \prod_{i=1}^{d} f_{X_i|Y}(x_i|y)
\]
In particular, we say that X1, ..., Xd are independent if Y is empty, that is, for every x1, ..., xd,
\[
f_{(X_1,\ldots,X_d)}(x_1,\ldots,x_d) = \prod_{i=1}^{d} f_{X_i}(x_i)
\]
Example 7.29. Consider that
\[
f_{X_1,X_2|\theta}(x_1,x_2|t) = t^{x_1+x_2}(1-t)^{2-x_1-x_2}\, I(x_1 \in \{0,1\})\, I(x_2 \in \{0,1\})
\]
That is, given θ, each Xi is a discrete random variable that assumes value 0 or 1. Note that
\begin{align*}
f_{X_1|\theta}(x_1|t) &= \int_{-\infty}^{\infty} f_{(X_1,X_2)|\theta}(x_1,x_2|t)\, dx_2 && \text{Theorem 7.25} \\
&= \sum_{x_2=0}^{1} t^{x_1+x_2}(1-t)^{2-x_1-x_2}\, I(x_1 \in \{0,1\}) \\
&= \left(t^{x_1}(1-t)^{2-x_1} + t^{x_1+1}(1-t)^{1-x_1}\right) I(x_1 \in \{0,1\}) \\
&= t^{x_1}(1-t)^{1-x_1}\left(t + (1-t)\right) I(x_1 \in \{0,1\}) \\
&= t^{x_1}(1-t)^{1-x_1}\, I(x_1 \in \{0,1\})
\end{align*}
Similarly,
\[
f_{X_2|\theta}(x_2|t) = t^{x_2}(1-t)^{1-x_2}\, I(x_2 \in \{0,1\})
\]
That is, X1|θ ∼ Bernoulli(θ) and X2|θ ∼ Bernoulli(θ). Finally, note that
\begin{align*}
f_{X_1,X_2|\theta}(x_1,x_2|t) &= t^{x_1+x_2}(1-t)^{2-x_1-x_2}\, I(x_1 \in \{0,1\})\, I(x_2 \in \{0,1\}) \\
&= t^{x_1}(1-t)^{1-x_1}\, I(x_1 \in \{0,1\}) \cdot t^{x_2}(1-t)^{1-x_2}\, I(x_2 \in \{0,1\}) \\
&= f_{X_1|\theta}(x_1|t)\, f_{X_2|\theta}(x_2|t)
\end{align*}
Conclude from Definition 7.28 that X1 is independent of X2 given θ.
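Since the joint pmf has only four support points, the factorization can also be verified by direct enumeration. The short sketch below does so for an arbitrary value of t; the choice t = 0.3 is an illustrative assumption, not part of the example.

```python
# Brute-force check of Example 7.29 (illustrative; the value t = 0.3 is an
# arbitrary choice, not part of the example).
t = 0.3

def joint(x1, x2):
    # f_{X1,X2|theta}(x1, x2 | t) = t^(x1 + x2) (1 - t)^(2 - x1 - x2) on {0,1}^2
    return t ** (x1 + x2) * (1 - t) ** (2 - x1 - x2)

def bern(x):
    # f_{Xi|theta}(x | t) = t^x (1 - t)^(1 - x), the Bernoulli(t) pmf
    return t ** x * (1 - t) ** (1 - x)

for x1 in (0, 1):
    for x2 in (0, 1):
        assert abs(joint(x1, x2) - bern(x1) * bern(x2)) < 1e-12
print("joint pmf factorizes as a product of two Bernoulli(t) pmfs")
```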
Lemma 7.30. The following statements are equivalent:
1. (X1, ..., Xd) are conditionally independent given Y.
2. There exist functions h1, ..., hd such that f_{(X_1,\ldots,X_d)|Y}(x_1,\ldots,x_d|y) = \prod_{j=1}^{d} h_j(x_j, y).
3. For every i, f_{X_i|X_{-i},Y}(x_i|x_{-i},y) = f_{X_i|Y}(x_i|y).
4. For every i, f_{X_i|X_1^{i-1},Y}(x_i|x_1^{i-1},y) = f_{X_i|Y}(x_i|y).
Proof. The proof strategy consists of showing that, for each i > 1, statement i follows from statement i − 1. Finally, we prove that statement 1 follows from statement 4. The symbols X and x refer to (X1, ..., Xd) and (x1, ..., xd).
• (1 ⟹ 2)
\begin{align*}
f_{X|Y}(x|y) &= \prod_{j=1}^{d} f_{X_j|Y}(x_j|y) \\
&= \prod_{j=1}^{d} h_j(x_j,y) && h_j(x_j,y) = f_{X_j|Y}(x_j|y)
\end{align*}
• (2 ⟹ 3) Note that,
\begin{align*}
f_{X_i|X_{-i},Y}(x_i|x_{-i},y) &= \frac{f_{(X_1,\ldots,X_d)|Y}(x_1,\ldots,x_d|y)}{f_{(X_1,\ldots,X_{i-1},X_{i+1},\ldots,X_d)|Y}(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_d|y)} && \text{Definition 7.20} \\
&= \frac{f_{(X_1,\ldots,X_d)|Y}(x_1,\ldots,x_d|y)}{\int_{\mathbb{R}} f_{(X_1,\ldots,X_d)|Y}(x_1,\ldots,x_d|y)\, dx_i} && \text{Theorem 7.25} \\
&= \frac{\prod_{j=1}^{d} h_j(x_j,y)}{\int_{\mathbb{R}} \prod_{j=1}^{d} h_j(x_j,y)\, dx_i} && \text{(2)} \\
&= \frac{\prod_{j=1}^{d} h_j(x_j,y)}{\prod_{j \neq i} h_j(x_j,y) \int_{\mathbb{R}} h_i(x_i,y)\, dx_i} \\
&= \frac{h_i(x_i,y)}{\int_{\mathbb{R}} h_i(x_i,y)\, dx_i} \\
&= \frac{\prod_{j \neq i} \int_{\mathbb{R}} h_j(x_j,y)\, dx_j}{\prod_{j \neq i} \int_{\mathbb{R}} h_j(x_j,y)\, dx_j} \cdot \frac{h_i(x_i,y)}{\int_{\mathbb{R}} h_i(x_i,y)\, dx_i} \\
&= \frac{\int_{\mathbb{R}^{d-1}} \prod_{j=1}^{d} h_j(x_j,y)\, dx_{-i}}{\int_{\mathbb{R}^{d}} \prod_{j=1}^{d} h_j(x_j,y)\, dx} \\
&= \frac{\int_{\mathbb{R}^{d-1}} f_{X|Y}(x|y)\, dx_{-i}}{\int_{\mathbb{R}^{d}} f_{X|Y}(x|y)\, dx} && \text{(2)} \\
&= f_{X_i|Y}(x_i|y) && \text{Theorem 7.25}
\end{align*}
• (3 ⟹ 4)
\begin{align*}
f_{X_i|X_1^{i-1},Y}(x_i|x_1^{i-1},y) &= \frac{f_{X_1^{i}|Y}(x_1^{i}|y)}{f_{X_1^{i-1}|Y}(x_1^{i-1}|y)} && \text{Definition 7.20} \\
&= \frac{\int_{\mathbb{R}^{d-i}} f_{X|Y}(x|y)\, dx_{i+1}^{d}}{f_{X_1^{i-1}|Y}(x_1^{i-1}|y)} && \text{Theorem 7.25} \\
&= \frac{\int_{\mathbb{R}^{d-i}} f_{X_{-i}|Y}(x_{-i}|y)\, f_{X_i|X_{-i},Y}(x_i|x_{-i},y)\, dx_{i+1}^{d}}{f_{X_1^{i-1}|Y}(x_1^{i-1}|y)} \\
&= \frac{f_{X_i|Y}(x_i|y) \int_{\mathbb{R}^{d-i}} f_{X_{-i}|Y}(x_{-i}|y)\, dx_{i+1}^{d}}{f_{X_1^{i-1}|Y}(x_1^{i-1}|y)} && \text{(3)} \\
&= \frac{f_{X_i|Y}(x_i|y)\, f_{X_1^{i-1}|Y}(x_1^{i-1}|y)}{f_{X_1^{i-1}|Y}(x_1^{i-1}|y)} && \text{Theorem 7.25} \\
&= f_{X_i|Y}(x_i|y)
\end{align*}
• (4 ⟹ 1)
\begin{align*}
f_{X|Y}(x|y) &= \prod_{i=1}^{d} f_{X_i|X_1^{i-1},Y}(x_i|x_1^{i-1},y) \\
&= \prod_{i=1}^{d} f_{X_i|Y}(x_i|y) && \text{(4)}
\end{align*}
Example 7.31. Recall Example 7.9:
\[
f_{(X,Y)}(x,y) = 24xy\, I(x > 0)\, I(y > 0)\, I(x + y < 1)
\]
Because of the indicator I(x + y < 1), f_{(X,Y)}(x,y) cannot be written as h_1(x)h_2(y); hence X and Y are not independent (Lemma 7.30.2). Also,
\begin{align*}
f_X(x) &= \int_{-\infty}^{\infty} f_{(X,Y)}(x,y)\, dy && \text{Lemma 7.11} \\
&= \int_{0}^{1-x} 24xy\, I(0 < x < 1)\, dy \\
&= 12x\, I(0 < x < 1)\, y^2 \Big|_{0}^{1-x} \\
&= 12x(1-x)^2\, I(0 < x < 1)
\end{align*}
Similarly, f_Y(y) = 12y(1-y)^2 I(0 < y < 1). Therefore, since f_{(X,Y)}(x,y) ≠ f_X(x) f_Y(y), it follows from Definition 7.28 that X and Y are not independent.
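A short numerical sketch of this example (not part of the original text): it recovers the marginal of X by integrating the joint density with scipy and shows that the joint differs from the product of the marginals; the evaluation point (0.2, 0.3) is an illustrative assumption.

```python
# Numerical check of Example 7.31 (illustrative; the evaluation point
# (0.2, 0.3) is an arbitrary choice).  Uses scipy.integrate.quad.
from scipy import integrate

def f_xy(x, y):
    # f_{(X,Y)}(x, y) = 24 x y on {x > 0, y > 0, x + y < 1}
    return 24.0 * x * y if (x > 0 and y > 0 and x + y < 1) else 0.0

def f_x(x):
    # marginal obtained in the example: 12 x (1 - x)^2 on (0, 1)
    return 12.0 * x * (1 - x) ** 2 if 0 < x < 1 else 0.0

def f_y(y):
    # by symmetry, the marginal of Y has the same form
    return f_x(y)

x0, y0 = 0.2, 0.3
marg, _ = integrate.quad(lambda y: f_xy(x0, y), 0, 1)
print("marginal by integration:", marg, "closed form:", f_x(x0))
print("joint:", f_xy(x0, y0), "product of marginals:", f_x(x0) * f_y(y0))
```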
7.3.2 Exercises
Exercise 7.32. The frequency of individuals, θ, who will vote for candidate D during the elections is such that θ ∼ Beta(a, b). In order to estimate the value of θ, a random sample of the population is asked whether they will vote for candidate D. Let X be the total number of people who said that they will vote for D. Assume that X|θ ∼ Binomial(n, θ). Find the distribution of θ|X.
Solution:
\begin{align*}
f_{\theta|X}(t|x) &= \frac{f_\theta(t)\, f_{X|\theta}(x|t)}{\int_{-\infty}^{\infty} f_\theta(t)\, f_{X|\theta}(x|t)\, dt} && \text{Theorem 7.26} \\
&= \frac{\beta^{-1}(a,b)\, t^{a-1}(1-t)^{b-1} \binom{n}{x} t^{x}(1-t)^{n-x}}{\int_{-\infty}^{\infty} \beta^{-1}(a,b)\, t^{a-1}(1-t)^{b-1} \binom{n}{x} t^{x}(1-t)^{n-x}\, dt} \\
&= \frac{t^{a+x-1}(1-t)^{b+n-x-1}}{\int_{-\infty}^{\infty} t^{a+x-1}(1-t)^{b+n-x-1}\, dt} \\
&= \frac{t^{a+x-1}(1-t)^{b+n-x-1}}{\beta(a+x, b+n-x) \int_{-\infty}^{\infty} \beta^{-1}(a+x, b+n-x)\, t^{a+x-1}(1-t)^{b+n-x-1}\, dt} \\
&= \frac{t^{a+x-1}(1-t)^{b+n-x-1}}{\beta(a+x, b+n-x) \cdot 1} && \text{Definition 5.47} \\
&= \beta^{-1}(a+x, b+n-x)\, t^{a+x-1}(1-t)^{b+n-x-1}
\end{align*}
Therefore, θ|X = x ∼ Beta(a + x, b + n − x).
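As a quick sanity check of this conjugacy result (not part of the original solution), the sketch below evaluates the unnormalized posterior on a grid and compares its mean with the Beta(a + x, b + n − x) mean; the values of a, b, n and x, and the grid, are illustrative assumptions.

```python
# Numerical check (illustrative assumptions: a, b, n, x and the grid) that the
# posterior matches Beta(a + x, b + n - x), by comparing posterior means.
import numpy as np
from scipy import stats

a, b, n, x = 2.0, 3.0, 10, 4
t = np.linspace(1e-6, 1 - 1e-6, 2001)

# unnormalized posterior: Beta(a, b) prior density times Binomial(n, t) likelihood
unnorm = stats.beta.pdf(t, a, b) * stats.binom.pmf(x, n, t)

post_mean = np.sum(t * unnorm) / np.sum(unnorm)
print("posterior mean on the grid:", post_mean)
print("Beta(a+x, b+n-x) mean:     ", (a + x) / (a + b + n))
```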
Exercise 7.33. Consider Example 7.10. Show that X and Y are not independent.
Solution: In order to show that X and Y are not independent, it is enough to exhibit a point (x, y) at which f_{X,Y}(x,y) ≠ f_X(x) f_Y(y). Observe that
\begin{align*}
f_{X,Y}(1,1) &= 0.1 \\
f_X(1) &= f_{X,Y}(1,0) + f_{X,Y}(1,1) = 0.3 + 0.1 = 0.4 \\
f_Y(1) &= f_{X,Y}(0,1) + f_{X,Y}(1,1) = 0.4 + 0.1 = 0.5
\end{align*}
Since 0.1 ≠ 0.4 · 0.5, conclude that X and Y are not independent.
Exercise 7.34. Consider Example 7.8. Show that X and Y are independent.
Solution: It is enough to show that, for every x, y ∈ \mathbb{R}, f_{X,Y}(x,y) = f_X(x) f_Y(y). Recall from Example 7.8 that f_X(x) = I_{(0,1)}(x). Similarly, f_Y(y) = I_{(0,1)}(y). Hence,
\[
f_{X,Y}(x,y) = I_{(0,1)^2}(x,y) = I_{(0,1)}(x)\, I_{(0,1)}(y) = f_X(x)\, f_Y(y)
\]
and X and Y are independent.
Exercise 7.35. In Example 7.29, consider that θ ∼ Uniform(0, 1). We proved that, given θ, X1 and X2 are
independent. Show that X1 and X2 are not independent (when θ is not given).
Exercise 7.36. Consider that, given θ, X1 and X2 are independent and f_{X_i|\theta}(x_i|\theta) = \theta \exp(-\theta x_i). Let θ ∼ Gamma(a, b).
(a) Find f_{(X_1,X_2)}(x_1, x_2).
(b) Show that X1 and X2 are not independent.
Exercise 7.37. Let X ∼ Exp(α) and Y ∼ Exp(β) be independent random variables. Find P(cX > dY), where c, d > 0.
Solution: Let A = {(x, y) ∈ \mathbb{R}^2 : cx > dy}.
\begin{align*}
P(cX > dY) &= \int_{A} f_{X,Y}(x,y)\, d(x,y) \\
&= \int_{A} f_X(x)\, f_Y(y)\, d(x,y) && \text{Definition 7.28} \\
&= \int_{0}^{\infty} \int_{0}^{cx/d} \alpha e^{-\alpha x}\, \beta e^{-\beta y}\, dy\, dx \\
&= \int_{0}^{\infty} -\alpha e^{-\alpha x} e^{-\beta y} \Big|_{0}^{cx/d}\, dx \\
&= \int_{0}^{\infty} \left(\alpha e^{-\alpha x} - \alpha e^{-\alpha x} e^{-\beta cx/d}\right) dx \\
&= -e^{-\alpha x}\Big|_{0}^{\infty} + \frac{\alpha}{\alpha + \frac{c\beta}{d}}\, e^{-\left(\alpha + \frac{c\beta}{d}\right)x}\Big|_{0}^{\infty} \\
&= 1 - \frac{\alpha}{\alpha + \frac{c\beta}{d}} \\
&= \frac{c\beta}{d\alpha + c\beta}
\end{align*}
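A minimal Monte Carlo sketch of this result, with illustrative values for α, β, c and d (none of which come from the exercise):

```python
# Monte Carlo check (illustrative parameter values) of
# P(cX > dY) = c*beta / (d*alpha + c*beta) for independent exponentials.
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, c, d = 2.0, 3.0, 1.5, 0.5
n = 1_000_000

x = rng.exponential(scale=1.0 / alpha, size=n)  # X ~ Exp(alpha), rate alpha
y = rng.exponential(scale=1.0 / beta, size=n)   # Y ~ Exp(beta), rate beta

print("empirical:", np.mean(c * x > d * y))
print("formula:  ", c * beta / (d * alpha + c * beta))
```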
Exercise 7.38. Let X ∼ Uniform(a, b) and Y ∼ Uniform(c, d) be independent random variables. Find P(X < Y ),
when a < c < b < d.
Solution: Let A = {(x, y) ∈ \mathbb{R}^2 : x < y}.
\begin{align*}
P(X < Y) &= \int_{A} f_{X,Y}(x,y)\, d(x,y) \\
&= \int_{A} f_X(x)\, f_Y(y)\, d(x,y) && \text{Definition 7.28} \\
&= \int_{c}^{d} \int_{a}^{\min(b,y)} \frac{1}{b-a}\, \frac{1}{d-c}\, dx\, dy \\
&= \int_{c}^{b} \int_{a}^{y} \frac{1}{(b-a)(d-c)}\, dx\, dy + \int_{b}^{d} \int_{a}^{b} \frac{1}{(b-a)(d-c)}\, dx\, dy \\
&= \int_{c}^{b} \frac{y-a}{(b-a)(d-c)}\, dy + \int_{b}^{d} \frac{1}{d-c}\, dy \\
&= \frac{(b^2 - c^2) - 2a(b-c)}{2(b-a)(d-c)} + \frac{d-b}{d-c}
\end{align*}
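A minimal Monte Carlo sketch of this formula, assuming illustrative interval endpoints that satisfy a < c < b < d:

```python
# Monte Carlo check of the closed form for P(X < Y); the endpoints are an
# illustrative choice satisfying a < c < b < d.
import numpy as np

rng = np.random.default_rng(2)
a, b, c, d = 0.0, 2.0, 1.0, 3.0
n = 1_000_000

x = rng.uniform(a, b, size=n)  # X ~ Uniform(a, b)
y = rng.uniform(c, d, size=n)  # Y ~ Uniform(c, d)

formula = ((b**2 - c**2) - 2 * a * (b - c)) / (2 * (b - a) * (d - c)) + (d - b) / (d - c)
print("empirical:", np.mean(x < y))
print("formula:  ", formula)
```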
Exercise 7.39. Let X,Y, Z ∼ U(0, 1) be independent random variables. Compute P(X ≥ Y Z).
Exercise 7.40. Prove that, if X and Y are independent, then f(X) is independent of g(Y) for any real-valued functions f and g.