
x        -2    -1    0     1     2     3
f_X(x)   0.1   0.2   0.3   0.1   0.25  0.05

Table 7: Probability mass function of X in Example 8.2.

x            0     1     4     9
f_{X^2}(x)   0.3   0.3   0.35  0.05

Table 8: Probability mass function of X^2 in Example 8.2.
8 Transformations
8.1 Univariate Transformations
Lemma 8.1. Let X be a discrete random variable and g be a function. If Y = g(X), then
\[
f_Y(y) = \sum_{\{x : g(x) = y\}} f_X(x)
\]
Proof.
\begin{align*}
f_Y(y) &= P(Y = y) && \text{Definition 7.6} \\
&= P(g(X) = y) && Y = g(X) \\
&= P(X \in \{x : g(x) = y\}) \\
&= \sum_{\{x : g(x) = y\}} P(X = x) && \text{Lemma 7.7} \\
&= \sum_{\{x : g(x) = y\}} f_X(x) && \text{Definition 7.6}
\end{align*}
Example 8.2. Let X be a discrete random variable with the pmf given by Table 7. Let g(x) = x^2. Using Lemma 8.1, f_{X^2}(0) = f_X(0) = 0.3, f_{X^2}(1) = f_X(-1) + f_X(1) = 0.3, f_{X^2}(4) = f_X(-2) + f_X(2) = 0.35 and f_{X^2}(9) = f_X(3) = 0.05. Hence, the pmf of X^2 is given by Table 8.
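The grouping in Lemma 8.1 is mechanical: partition the support of X by the value of g and sum the masses. A minimal Python sketch (the helper name `pmf_of_transform` is ours, not from the text) reproducing the values of Example 8.2:

```python
from collections import defaultdict

def pmf_of_transform(pmf, g):
    """Lemma 8.1: f_Y(y) = sum of f_X(x) over {x : g(x) = y}."""
    out = defaultdict(float)
    for x, p in pmf.items():
        out[g(x)] += p
    return dict(out)

# pmf of X from Table 7
f_X = {-2: 0.1, -1: 0.2, 0: 0.3, 1: 0.1, 2: 0.25, 3: 0.05}

# pmf of X^2; matches Table 8 up to floating-point rounding
f_X2 = pmf_of_transform(f_X, lambda x: x**2)
print(f_X2)
```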
Lemma 8.3. Let g be a strictly increasing and differentiable function and X be a continuous random variable. If Y = g(X), then
\[
f_Y(y) = f_X(g^{-1}(y)) \cdot \frac{\partial g^{-1}(y)}{\partial y}
\]
Proof.
\begin{align*}
F_Y(y) &= P(Y \leq y) \\
&= P(g(X) \leq y) \\
&= P(X \leq g^{-1}(y)) && g \text{ is strictly increasing}
\end{align*}
Hence, using Lemma 5.12,
\begin{align*}
f_Y(y) &= \frac{\partial F_Y(y)}{\partial y} \\
&= \frac{\partial P(X \leq g^{-1}(y))}{\partial y} \\
&= \frac{\partial P(X \leq g^{-1}(y))}{\partial g^{-1}(y)} \cdot \frac{\partial g^{-1}(y)}{\partial y} && \text{Chain rule} \\
&= f_X(g^{-1}(y)) \cdot \frac{\partial g^{-1}(y)}{\partial y}
\end{align*}
Example 8.4 (Gut (1995; p.19)). Let X ∼ Uniform(0, 1), g(x) = x^n and Y = g(X). Note that g is increasing and differentiable, and g^{-1}(y) = y^{1/n}. Therefore, it follows from Lemma 8.3 that
\begin{align*}
f_Y(y) &= f_X(g^{-1}(y)) \cdot \frac{\partial g^{-1}(y)}{\partial y} \\
&= I(0 < y^{1/n} < 1) \cdot \frac{\partial y^{1/n}}{\partial y} \\
&= \frac{1}{n y^{(n-1)/n}} I(0 < y < 1)
\end{align*}
Note that
\begin{align*}
E[Y] &= \int_0^1 \frac{y}{n y^{(n-1)/n}} \, dy = (n+1)^{-1} \\
E[Y^2] &= \int_0^1 \frac{y^2}{n y^{(n-1)/n}} \, dy = (2n+1)^{-1} \\
Var[Y] &= E[Y^2] - E[Y]^2 = \frac{n^2}{(2n+1)(n+1)^2} && \text{Lemma 7.67}
\end{align*}
That is, the higher the value of n, the more Y will be concentrated around 0.
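The moments in Example 8.4 can be sanity-checked by simulation: estimate E[Y] and Var[Y] for Y = U^n and compare with (n+1)^{-1} and n^2/((2n+1)(n+1)^2). A sketch (the function name is ours):

```python
import random

random.seed(0)

def moments_of_power(n, samples=200_000):
    """Monte Carlo estimates of E[Y] and Var[Y] for Y = U**n, U ~ Uniform(0,1)."""
    ys = [random.random() ** n for _ in range(samples)]
    mean = sum(ys) / samples
    var = sum((y - mean) ** 2 for y in ys) / samples
    return mean, var

n = 3
mean, var = moments_of_power(n)
print(mean, 1 / (n + 1))                       # estimate vs (n+1)^{-1}
print(var, n**2 / ((2*n + 1) * (n + 1)**2))    # estimate vs n^2/((2n+1)(n+1)^2)
```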
Example 8.5. Let g(x) = ax + b, a > 0, and Y = g(X). Note that g^{-1}(y) = \frac{y-b}{a}. Therefore,
\begin{align*}
f_Y(y) &= f_X(g^{-1}(y)) \cdot \frac{\partial g^{-1}(y)}{\partial y} \\
&= \frac{1}{a} f_X\left(\frac{y-b}{a}\right)
\end{align*}
Example 8.6. Let X ∼ Exp(λ), g(x) = x^2 and Y = g(X). Observe that X ≥ 0 and g : \mathbb{R}^+ \to \mathbb{R} is strictly increasing. Also observe that g^{-1}(y) = \sqrt{y}. Hence,
\begin{align*}
f_Y(y) &= f_X(\sqrt{y}) \cdot \frac{\partial \sqrt{y}}{\partial y} && \text{Lemma 8.3} \\
&= \frac{1}{\lambda} e^{-\sqrt{y}/\lambda} \cdot \frac{1}{2\sqrt{y}} && \text{Table 5} \\
&= \frac{1}{2\lambda\sqrt{y}} e^{-\sqrt{y}/\lambda}
\end{align*}
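A quick numerical check of the density derived in Example 8.6 (the value of λ is arbitrary; Exp(λ) here has density (1/λ)e^{-x/λ}, i.e. mean λ, matching the text):

```python
import math, random

random.seed(1)
lam = 2.0  # arbitrary choice; Exp(lam) with mean lam

def f_Y(y):
    """Density of Y = X^2 derived in Example 8.6."""
    return math.exp(-math.sqrt(y) / lam) / (2 * lam * math.sqrt(y))

# The density should integrate to (approximately) 1 over (0, 200)
dy = 0.001
total = sum(f_Y(dy * (i + 0.5)) * dy for i in range(200_000))

# Compare the implied cdf F_Y(4) = 1 - exp(-sqrt(4)/lam) with a simulation of X^2
samples = [random.expovariate(1 / lam) ** 2 for _ in range(100_000)]
p_emp = sum(y <= 4 for y in samples) / len(samples)
p_theory = 1 - math.exp(-2 / lam)
print(total, p_emp, p_theory)
```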
Example 8.7. Let Z ∼ N(0, 1) and g(z) = z^2. What is the distribution of g(Z)? Observe that g is not strictly increasing on \mathbb{R}. Hence, we cannot use Lemma 8.3. We can follow similar steps, though:
\begin{align*}
P(Z^2 \leq y) &= P(-\sqrt{y} \leq Z \leq \sqrt{y}) \\
&= 1 - P(Z \leq -\sqrt{y} \cup Z \geq \sqrt{y}) \\
&= 1 - (P(Z \leq -\sqrt{y}) + P(Z \geq \sqrt{y})) && \text{Lemma 2.3} \\
&= 1 - 2P(Z \leq -\sqrt{y}) && \text{symmetry of the standard normal around 0} \\
&= 1 - 2F_Z(-\sqrt{y})
\end{align*}
Hence,
\begin{align*}
f_{Z^2}(y) &= \frac{\partial P(Z^2 \leq y)}{\partial y} && \text{Definition 7.6} \\
&= \frac{\partial(1 - 2F_Z(-\sqrt{y}))}{\partial(-\sqrt{y})} \cdot \frac{\partial(-\sqrt{y})}{\partial y} && \text{Chain rule} \\
&= -2f_Z(-\sqrt{y}) \cdot \frac{-1}{2\sqrt{y}} \\
&= \frac{1}{\sqrt{2\pi}} \cdot \frac{1}{\sqrt{y}} e^{-(\sqrt{y})^2/2} && \text{Table 5} \\
&= \frac{1}{\Gamma(0.5) 2^{0.5}} y^{0.5-1} e^{-y/2} && \Gamma(0.5) = \sqrt{\pi}
\end{align*}
That is, Z^2 ∼ Gamma(0.5, 2). This distribution is also called a Chi-square distribution with one degree of freedom.
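The identification Z^2 ∼ Gamma(0.5, 2) can be checked against the first two moments of a chi-square with one degree of freedom, E[Z^2] = 1 and Var[Z^2] = 2 (shape × scale and shape × scale^2). A simulation sketch:

```python
import random

random.seed(2)

# Simulate Z^2 for Z ~ N(0, 1); its sample mean and variance should be
# close to 1 and 2, the moments of Gamma(0.5, 2), i.e. chi-square(1).
samples = [random.gauss(0, 1) ** 2 for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)
```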
Lemma 8.8. Let g(x) = ax + b, a ≠ 0, and Y = g(X). Then f_Y(y) = \frac{1}{|a|} f_X\left(\frac{y-b}{a}\right).
Proof. If a > 0, then the proof follows from Example 8.5. If a < 0, then
\begin{align*}
F_Y(y) &= P(Y \leq y) \\
&= P(aX + b \leq y) \\
&= P\left(X > \frac{y-b}{a}\right) \\
&= 1 - F_X\left(\frac{y-b}{a}\right)
\end{align*}
\begin{align*}
f_Y(y) &= \frac{\partial F_Y(y)}{\partial y} && \text{Definition 7.6} \\
&= \frac{\partial(1 - F_X(\frac{y-b}{a}))}{\partial y} \\
&= -\frac{1}{a} f_X\left(\frac{y-b}{a}\right) \\
&= \frac{1}{|a|} f_X\left(\frac{y-b}{a}\right)
\end{align*}
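Lemma 8.8 with a < 0 can be verified pointwise: for X ∼ N(0, 1), Y = aX + b should have the N(b, a^2) density. A small sketch (all names and the parameter values are ours):

```python
import math

def f_X(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def f_Y(y, a, b):
    """Density of Y = aX + b via Lemma 8.8: (1/|a|) f_X((y - b)/a)."""
    return f_X((y - b) / a) / abs(a)

# For a = -2, b = 1, Lemma 8.8 should reproduce the N(1, 4) density
a, b = -2.0, 1.0
for y in (-3.0, 0.0, 1.0, 2.5):
    direct = math.exp(-((y - b) ** 2) / (2 * a * a)) / (abs(a) * math.sqrt(2 * math.pi))
    print(y, f_Y(y, a, b), direct)
```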
8.1.1 Exercises
Exercise 8.9. Use Lemma 8.8 to prove the following results:
(a) If X ∼ Uniform(a, b), then cX + d ∼ Uniform(min(ca + d, cb + d), max(ca + d, cb + d)).
(b) If X ∼ Gamma(k, λ) and c > 0, then cX ∼ Gamma(k, cλ).
(c) If X ∼ Beta(α, β), then 1 - X ∼ Beta(β, α).
(d) If X ∼ N(μ, σ^2), then aX + b ∼ N(aμ + b, a^2σ^2).
Exercise 8.10. Let X be a continuous random variable with distribution function F, and let Y = F(X). Show that Y has a uniform distribution over (0, 1).
Exercise 8.11 (Fundamental theorem of simulation).
(a) Let F be a continuous cdf, U ∼ Uniform(0, 1) and X = F−1(U). Show that F is the cdf of X.
(b) Let X ∼ Exp(λ). Find F_X^{-1}. Write code that, for each λ, simulates from X.
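One possible sketch for Exercise 8.11(b): with the mean-λ parametrization of Exp(λ) used in this text, F_X(x) = 1 - e^{-x/λ}, so F_X^{-1}(u) = -λ log(1 - u). (Function names are ours.)

```python
import math, random

random.seed(3)

def exp_inverse_cdf(u, lam):
    """F_X^{-1}(u) = -lam * log(1 - u) for Exp(lam) with mean lam."""
    return -lam * math.log(1 - u)

def simulate_exp(lam, n):
    """Fundamental theorem of simulation: X = F^{-1}(U), U ~ Uniform(0, 1)."""
    return [exp_inverse_cdf(random.random(), lam) for _ in range(n)]

lam = 3.0
xs = simulate_exp(lam, 100_000)
print(sum(xs) / len(xs))  # sample mean; should be close to lam
```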
Exercise 8.12. Let X be a random variable such that f_X(x) = (π(1 + x^2))^{-1}. We say that X follows a Cauchy distribution. Show that X^{-1} also follows a Cauchy distribution. Note that g(x) = x^{-1} is not differentiable at 0 and also is not increasing.
8.2 Bivariate Transformations
8.2.1 Convolution of random variables
Lemma 8.13 (Convolution of random variables). Let X and Y be independent random variables and Z = X + Y. Then
\[
f_Z(z) = \int_{-\infty}^{\infty} f_X(x) f_Y(z - x) \, dx
\]
Proof.
\begin{align*}
F_Z(z) &= P(Z \leq z) && \text{Definition 7.1} \\
&= P(X + Y \leq z) \\
&= \int_{\{(x,y) : x + y \leq z\}} f_{X,Y}(x, y) \, d(x, y) && \text{Lemma 7.7} \\
&= \int_{-\infty}^{\infty} \int_{-\infty}^{z-x} f_{X,Y}(x, y) \, dy \, dx \\
&= \int_{-\infty}^{\infty} \int_{-\infty}^{z-x} f_X(x) f_Y(y) \, dy \, dx && \text{Definition 7.29}
\end{align*}
\begin{align*}
f_Z(z) &= \frac{dF_Z(z)}{dz} && \text{Lemma 5.12} \\
&= \frac{d}{dz} \int_{-\infty}^{\infty} \int_{-\infty}^{z-x} f_X(x) f_Y(y) \, dy \, dx \\
&= \int_{-\infty}^{\infty} \frac{d}{dz} \int_{-\infty}^{z-x} f_X(x) f_Y(y) \, dy \, dx \\
&= \int_{-\infty}^{\infty} f_X(x) f_Y(z - x) \, dx && \text{Fundamental Theorem of Calculus}
\end{align*}
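Lemma 8.13 can be checked numerically: for independent X, Y ∼ Uniform(0, 1), the convolution integral should give the triangular density f_Z(z) = z on [0, 1] and 2 - z on [1, 2]. A midpoint-rule sketch (all names ours):

```python
def f_U(x):
    """Uniform(0, 1) density."""
    return 1.0 if 0 <= x <= 1 else 0.0

def convolve(f_X, f_Y, z, lo=-5.0, hi=5.0, n=20_000):
    """Midpoint approximation of the convolution integral in Lemma 8.13."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        total += f_X(x) * f_Y(z - x)
    return total * dx

# Compare with the triangular density of the sum of two Uniform(0, 1) variables
for z in (0.5, 1.0, 1.5):
    print(z, convolve(f_U, f_U, z))
```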