Ozyegin Univ. EE503 Spring 2015 Due 5pm Apr. 15, 2015
Homework 6
Please scan and send your homework to my e-mail: ali.ercan@ozyegin.edu.tr
1. Radar signal detection. The received signal S for a radar channel is 0 if there is no target and a random variable X ∼ N(0, P) if there is a target. Both possibilities occur with equal probability. Thus
$$S = \begin{cases} 0 & \text{with probability } 1/2 \\ X \sim \mathcal{N}(0, P) & \text{with probability } 1/2. \end{cases}$$
The radar receiver observes Y = S + Z, where the noise Z ∼ N(0, N) is independent of S. Find the optimal decoder (in the minimum probability of error sense) for deciding whether S = 0 or S = X and its probability of error. Give your answer in terms of intervals of y and express the boundary points of the intervals in terms of P and N. Hint: Define Θ to be 0 if S = 0 and 1 if S = X and work with Θ.
Solution:
To cast this problem as a standard detection problem, we define a random variable Θ by
$$\Theta = \begin{cases} 0 & \text{if } S = 0 \\ 1 & \text{if } S = X. \end{cases}$$
Then p_Θ(0) = p_Θ(1) = 1/2. The optimal decoder D(·) for Θ uses the MAP rule: i.e., set D(y) = θ where θ maximizes the conditional pmf p_{Θ|Y}(θ|y). By Bayes rule,
$$p_{\Theta|Y}(\theta|y) = \frac{f_{Y|\Theta}(y|\theta)\,p_\Theta(\theta)}{f_Y(y)} \implies D(y) = \begin{cases} 1 & \text{if } \frac{f_{Y|\Theta}(y|1)}{f_{Y|\Theta}(y|0)} > 1 \\ 0 & \text{otherwise.} \end{cases}$$
The likelihood ratio can be written
$$\frac{f_{Y|\Theta}(y|1)}{f_{Y|\Theta}(y|0)} = \frac{\frac{1}{\sqrt{2\pi(P+N)}}\,e^{-\frac{y^2}{2(P+N)}}}{\frac{1}{\sqrt{2\pi N}}\,e^{-\frac{y^2}{2N}}} = \sqrt{\frac{N}{P+N}}\;e^{y^2\frac{P}{2(P+N)N}},$$
hence
$$\frac{f_{Y|\Theta}(y|1)}{f_{Y|\Theta}(y|0)} > 1 \iff \sqrt{\frac{N}{P+N}}\;e^{y^2\frac{P}{2(P+N)N}} > 1 \iff y^2 > \frac{(P+N)N}{P}\ln\left(\frac{P+N}{N}\right).$$
Thus the MAP decision rule becomes
$$D(y) = \begin{cases} 0 & |y| \le \sqrt{\frac{(P+N)N}{P}\ln\left(\frac{P+N}{N}\right)} \\ 1 & \text{otherwise.} \end{cases}$$
To find the error probability, define
$$\tau = \sqrt{\frac{(P+N)N}{P}\ln\left(\frac{P+N}{N}\right)}.$$
Then
$$\begin{aligned}
P_e &= P\{D(Y) \ne \Theta\} \\
&= P\{D(Y) = 1, \Theta = 0\} + P\{D(Y) = 0, \Theta = 1\} \\
&= P\{|Y| \ge \tau, \Theta = 0\} + P\{|Y| < \tau, \Theta = 1\} \\
&= p_\Theta(0)\left(\int_{-\infty}^{-\tau} f_{Y|\Theta}(y|0)\,dy + \int_{\tau}^{\infty} f_{Y|\Theta}(y|0)\,dy\right) + p_\Theta(1)\int_{-\tau}^{\tau} f_{Y|\Theta}(y|1)\,dy \\
&= \frac12\left(2\int_{\tau}^{\infty} f_{Y|\Theta}(y|0)\,dy + \int_{-\tau}^{\tau} f_{Y|\Theta}(y|1)\,dy\right) \\
&= \frac12\left(2Q\!\left(\frac{\tau}{\sqrt{N}}\right) + \left(1 - 2Q\!\left(\frac{\tau}{\sqrt{P+N}}\right)\right)\right) \\
&= Q\!\left(\sqrt{\frac{P+N}{P}\ln\left(\frac{P+N}{N}\right)}\right) + \frac12 - Q\!\left(\sqrt{\frac{N}{P}\ln\left(\frac{P+N}{N}\right)}\right).
\end{aligned}$$
In Figure 1 below, the pdfs of noise and signal + noise intersect at ±τ. The decision region for "no signal" is the interval [−τ, +τ]. The error probability is the average of the probability of the tails of the noise pdf and the probability of the central region of the signal + noise pdf. In the example shown in Figure 1, SNR = 4 and Pe = 0.1780.
[Figure 1: PDFs of noise (N = 1) and radar signal + noise (P + N = 4).]
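As an optional numerical sanity check (not part of the original solution), the sketch below evaluates the closed-form Pe above and compares it against a Monte Carlo simulation of the channel. The values of P and N are arbitrary example choices, and scipy's `norm.sf` serves as the Q-function:

```python
import numpy as np
from scipy.stats import norm  # norm.sf(x) is the Gaussian Q-function Q(x)

P, N = 4.0, 1.0  # arbitrary example values; any P, N > 0 work
tau = np.sqrt((P + N) * N / P * np.log((P + N) / N))

# Closed-form Pe from the derivation above
pe = norm.sf(tau / np.sqrt(N)) + 0.5 - norm.sf(tau / np.sqrt(P + N))

# Monte Carlo check: draw Theta, then Y, and apply the threshold rule
rng = np.random.default_rng(0)
n = 1_000_000
theta = rng.integers(0, 2, n)                     # equally likely 0 or 1
y = np.where(theta == 1,
             rng.normal(0.0, np.sqrt(P + N), n),  # signal present: Y ~ N(0, P+N)
             rng.normal(0.0, np.sqrt(N), n))      # no signal:     Y ~ N(0, N)
d = (np.abs(y) > tau).astype(int)                 # MAP decision rule
print(pe, np.mean(d != theta))                    # the two numbers should agree
```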
2. Photodetector. Photons arrive at a photo-detector according to a Poisson process. If a signal is sent (event H1), the photons arrive at a mean rate of λ1 photons per second; that is, if the number of photons collected at the photo-detector over t seconds is N(t), then N(t) ∼ Poisson(λ1t). If no signal is sent (event H0), then the number of photons collected due to ambient light is N(t) ∼ Poisson(λ0t). Obviously λ1 > λ0 > 0. The prior probabilities are P(H0) = P(H1) = 1/2, i.e., the events that a signal is sent or not are equally likely a priori.
a) If the photo-detector has counted k photons in 1 second, what is the posterior probability that a
signal was sent? (What is P{H1|N(1) = k}?)
b) If the photo-detector has counted k photons in 1 second, what is the posterior probability that a
signal was not sent?
c) Suppose at the end of 1 second, if more than n photons are collected (N(1) > n), the detector declares "signal is sent" (event D1). Otherwise (N(1) ≤ n) it declares "signal was not sent" (event D0). Define the probability of error as
$$P_e = P(H_0, D_1) + P(H_1, D_0),$$
i.e., either the signal was not sent but the detector declared it was, or vice versa. Find Pe (in terms of λ0, λ1, and n). You may leave the answer as a summation, i.e., you needn't find a form without a summation sign.
d) Find the MAP detector rule for this experiment. That is, if k photons are collected, find the MAP decision rule and n∗ such that
$$D = \begin{cases} D_1 & k > n^* \\ D_0 & k \le n^*. \end{cases}$$
Show that choosing n = n∗ in Part (c) minimizes Pe.
Solution:
a)
$$P\{H_1|N(1) = k\} = \frac{P\{N(1) = k|H_1\}\,P(H_1)}{P\{N(1) = k\}} = \frac{\frac12\frac{\lambda_1^k}{k!}e^{-\lambda_1}}{\frac12\frac{\lambda_0^k}{k!}e^{-\lambda_0} + \frac12\frac{\lambda_1^k}{k!}e^{-\lambda_1}} = \frac{\lambda_1^k e^{-\lambda_1}}{\lambda_0^k e^{-\lambda_0} + \lambda_1^k e^{-\lambda_1}}.$$
b) By symmetry,
$$P\{H_0|N(1) = k\} = \frac{\lambda_0^k e^{-\lambda_0}}{\lambda_0^k e^{-\lambda_0} + \lambda_1^k e^{-\lambda_1}}.$$
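Parts (a) and (b) are easy to check numerically. A minimal sketch with arbitrary example rates: the posterior computed via scipy's Poisson pmf should match the simplified expression, since the 1/2 priors and the k! factors cancel.

```python
import numpy as np
from scipy.stats import poisson

lam0, lam1, k = 3.0, 10.0, 6   # arbitrary example rates and photon count
# Posterior via Bayes rule with equal priors
bayes = poisson.pmf(k, lam1) / (poisson.pmf(k, lam0) + poisson.pmf(k, lam1))
# Simplified expression from part (a)
simplified = lam1**k * np.exp(-lam1) / (lam0**k * np.exp(-lam0) + lam1**k * np.exp(-lam1))
print(bayes, simplified)       # should agree up to rounding
```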
c)
$$\begin{aligned}
P_e &= P(H_0)\,P\{N(1) > n|H_0\} + P(H_1)\,P\{N(1) \le n|H_1\} \\
&= P(H_0)\left(1 - P\{N(1) \le n|H_0\}\right) + P(H_1)\,P\{N(1) \le n|H_1\} \\
&= \frac12 - \frac12\left(\sum_{k=0}^{n}\frac{\lambda_0^k}{k!}e^{-\lambda_0} - \sum_{k=0}^{n}\frac{\lambda_1^k}{k!}e^{-\lambda_1}\right) \\
&= \frac12 - \frac12\sum_{k=0}^{n}\frac{1}{k!}\left(\lambda_0^k e^{-\lambda_0} - \lambda_1^k e^{-\lambda_1}\right).
\end{aligned}$$
d) Suppose k photons are collected. The MAP detector chooses the event with the greater a posteriori probability:
$$D = \begin{cases} D_1 & P\{H_1|N(1) = k\} > P\{H_0|N(1) = k\} \\ D_0 & \text{o.w.} \end{cases}
= \begin{cases} D_1 & \lambda_1^k e^{-\lambda_1} > \lambda_0^k e^{-\lambda_0} \\ D_0 & \text{o.w.} \end{cases}
= \begin{cases} D_1 & k > \frac{\lambda_1 - \lambda_0}{\ln(\lambda_1) - \ln(\lambda_0)} = n^* \\ D_0 & \text{o.w.} \end{cases}$$
Observe that for k ≤ n∗, the terms of the summation in the Pe formula are positive, while for k > n∗ they are negative. Thus, by setting n = n∗, all the positive terms are summed and all the negative terms are left out. This maximizes the sum, which minimizes Pe.
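This claim can also be verified numerically. Below is a minimal sketch (with arbitrary example rates λ0 = 3, λ1 = 10) that sweeps the integer threshold n and confirms Pe is minimized at ⌊n∗⌋, using scipy's Poisson cdf for the two summations:

```python
import numpy as np
from scipy.stats import poisson

lam0, lam1 = 3.0, 10.0   # arbitrary example rates, lam1 > lam0 > 0
n_star = (lam1 - lam0) / (np.log(lam1) - np.log(lam0))

# Pe(n) = 1/2 - 1/2 * (F0(n) - F1(n)), with Fi the Poisson(lam_i) cdf
n = np.arange(0, 30)
pe = 0.5 - 0.5 * (poisson.cdf(n, lam0) - poisson.cdf(n, lam1))

print(n_star, n[np.argmin(pe)])   # the argmin should equal floor(n_star)
```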
3. Suppose
$$f_{X,Y}(x, y) = \begin{cases} x + y, & 0 \le x \le 1,\ 0 \le y \le 1 \\ 0, & \text{otherwise.} \end{cases}$$
a) Find the MMSE estimate of X given Y.
b) Find the MMSE for the MMSE estimate.
c) Find the MMSE linear estimate of X given Y.
d) Find the MMSE for the MMSE linear estimate.
e) Find the MAP estimate of X given Y. What is the mean square error of the MAP estimate (E((X̂ − X)²))?
f) Find the ML estimate and corresponding mean square error of X given Y.
Solution:
a) To find the MMSE estimate, we need f_{X|Y}(x|y). For that, we need f_Y(y):
$$f_Y(y) = \int_0^1 (x + y)\,dx = y + 1/2, \quad 0 \le y \le 1.$$
Thus
$$f_{X|Y}(x|y) = \begin{cases} \frac{x+y}{y+1/2}, & 0 \le x \le 1,\ 0 \le y \le 1 \\ 0, & \text{otherwise.} \end{cases}$$
Now we can find X̂ = E(X|Y):
$$E(X|Y = y) = \int_0^1 x\,\frac{x+y}{y+1/2}\,dx = \frac{1}{12y+6} + \frac12,$$
therefore
$$\hat X = E(X|Y) = \frac{1}{12Y+6} + \frac12.$$
b) The MMSE is E(Var(X|Y)).
$$E(X^2|Y = y) = \int_0^1 x^2\,\frac{x+y}{y+1/2}\,dx = \frac{1}{12y+6} + \frac13,$$
$$\mathrm{Var}(X|Y = y) = \frac{1}{12y+6} + \frac13 - \left(\frac{1}{12y+6} + \frac12\right)^2 = \frac{1}{12} - \frac{1}{36(2y+1)^2},$$
$$E(\mathrm{Var}(X|Y)) = \int_0^1 \left(\frac{1}{12} - \frac{1}{36(2y+1)^2}\right)(y + 1/2)\,dy = \frac{1}{12} - \frac{\ln(3)}{144} \approx 0.0757.$$
c)
$$E(Y) = \int_0^1 y(y + 1/2)\,dy = 7/12,$$
$$E(Y^2) = \int_0^1 y^2(y + 1/2)\,dy = 5/12,$$
$$\mathrm{Var}(Y) = E(Y^2) - (E(Y))^2 = 11/144.$$
By symmetry,
$$E(X) = 7/12, \quad \mathrm{Var}(X) = 11/144.$$
The covariance is
$$\mathrm{Cov}(X, Y) = E(XY) - E(X)\,E(Y) = \int_0^1\!\!\int_0^1 xy(x+y)\,dx\,dy - 49/144 = -1/144.$$
Then the linear estimate is
$$\hat X = \frac{-1/144}{11/144}(Y - 7/12) + 7/12 = -\frac{Y}{11} + \frac{7}{11}.$$
d) The MMSE of the linear estimate is
$$\mathrm{MMSE}_{LE} = \mathrm{Var}(X) - \frac{(\mathrm{Cov}(X, Y))^2}{\mathrm{Var}(Y)} = 11/144 - \frac{(1/144)^2}{11/144} = 5/66 \approx 0.0758.$$
Observe that MMSE ≈ 0.0757 < MMSE_LE ≈ 0.0758 < Var(X) ≈ 0.0764. However, the linear estimator comes very close to the non-linear MMSE estimator (on average). This is not always the case, as shown in class.
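As a hedged numerical check of parts (a)-(d), the sketch below rejection-samples from f(x, y) = x + y on the unit square (the density is bounded by 2 there) and compares the empirical MSEs of the two estimators with the values derived above:

```python
import numpy as np

rng = np.random.default_rng(0)
# Rejection-sample from f(x, y) = x + y on the unit square (density <= 2)
n = 2_000_000
x = rng.uniform(0, 1, n)
y = rng.uniform(0, 1, n)
keep = rng.uniform(0, 2, n) < x + y
x, y = x[keep], y[keep]

mmse_est = 1 / (12 * y + 6) + 0.5    # E(X | Y) from part (a)
lin_est = -y / 11 + 7 / 11           # linear estimate from part (c)
print(np.mean((x - mmse_est) ** 2))  # ~ 1/12 - ln(3)/144 ~ 0.0757
print(np.mean((x - lin_est) ** 2))   # ~ 5/66 ~ 0.0758
```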
e) It was found above that
$$f_{X|Y}(x|y) = \begin{cases} \frac{x+y}{y+1/2}, & 0 \le x \le 1,\ 0 \le y \le 1 \\ 0, & \text{otherwise} \end{cases}$$
is an increasing function of x for 0 ≤ x ≤ 1. Therefore the MAP estimate is
$$\hat X = \underset{x}{\arg\max}\ f_{X|Y}(x|y) = 1.$$
Notice that the MAP estimate is constant, no matter what Y is (in other words, Y is ignored).
$$\mathrm{MSE}_{MAP} = E\left((X - \hat X)^2\right) = E\left((X - 1)^2\right) = E(X^2) - 2E(X) + 1 = 5/12 - 14/12 + 1 = 0.25.$$
Notice that the MSE of the MAP estimator is even larger than Var(X)! You would do better (in the MSE sense, on average over many realizations of X) than the MAP estimator by just guessing X̂ = E(X). Clearly the MAP estimator is not the right choice for this estimation problem if your goal is to minimize the MSE. On the other hand, if you want to minimize the probability of error in the additive Gaussian noise channel example of the lecture notes, then the MAP estimator is optimal (as shown in the lecture).
f) By symmetry,
$$f_{Y|X}(y|x) = \begin{cases} \frac{x+y}{x+1/2}, & 0 \le x \le 1,\ 0 \le y \le 1 \\ 0, & \text{otherwise.} \end{cases}$$
Therefore, for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1,
$$\frac{\partial}{\partial x} f_{Y|X}(y|x) = \frac{2 - 4y}{(2x+1)^2}.$$
Notice that ∂/∂x f_{Y|X}(y|x) > 0 for 0 ≤ y < 0.5 (i.e., f_{Y|X}(y|x) is increasing in x and X̂ = 1 maximizes it) and ∂/∂x f_{Y|X}(y|x) < 0 for 0.5 < y ≤ 1 (i.e., f_{Y|X}(y|x) is decreasing in x and X̂ = 0 maximizes it). Therefore the ML estimate is
$$\hat X = \begin{cases} 1, & 0 \le y \le 0.5 \\ 0, & 0.5 < y \le 1. \end{cases}$$
Since for y = 0.5 the pdf is flat in x, any choice of X̂ is equally good; we arbitrarily chose X̂ = 1 for that case. The MSE of the ML estimate is
$$\begin{aligned}
\mathrm{MSE}_{ML} &= E_Y\left(E\left((X - \hat X)^2 | Y = y\right)\right) = E_Y\left(\int_0^1 (x - \hat X(y))^2 f_{X|Y}(x|y)\,dx\right) \\
&= \int_0^{0.5}\!\!\int_0^1 (x-1)^2 f_{X|Y}(x|y) f_Y(y)\,dx\,dy + \int_{0.5}^{1}\!\!\int_0^1 x^2 f_{X|Y}(x|y) f_Y(y)\,dx\,dy \\
&= \int_0^{0.5}\!\!\int_0^1 (x-1)^2 f_{X,Y}(x, y)\,dx\,dy + \int_{0.5}^{1}\!\!\int_0^1 x^2 f_{X,Y}(x, y)\,dx\,dy \\
&= \frac{1}{12} + \frac{1}{4} = \frac{1}{3}.
\end{aligned}$$
As you can see, the ML estimator is even worse, although it actually does not ignore Y .
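The same rejection-sampling sketch used after part (d) can check parts (e) and (f); again this is only an illustrative verification, not part of the original solution:

```python
import numpy as np

rng = np.random.default_rng(1)
# Rejection-sample from f(x, y) = x + y on the unit square, as in part (d)
n = 2_000_000
x = rng.uniform(0, 1, n)
y = rng.uniform(0, 1, n)
keep = rng.uniform(0, 2, n) < x + y
x, y = x[keep], y[keep]

map_est = np.ones_like(x)               # MAP estimate: always 1
ml_est = np.where(y <= 0.5, 1.0, 0.0)   # ML estimate from part (f)
print(np.mean((x - map_est) ** 2))      # ~ 1/4
print(np.mean((x - ml_est) ** 2))       # ~ 1/3
```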
4. Consider the additive noise channel shown in the figure below, where X and Z are zero mean and uncorrelated, and a and b are constants.
[Figure: block diagram in which X is scaled by a, the noise Z is added, and the sum is scaled by b, so that Y = b(aX + Z).]
Find the MMSE linear estimate of X given Y and its MSE in terms only of σX, σZ, a, and b.
Solution:
First we find the mean and variance of Y and its covariance with X. In the following we use the notation σ²_X = P and σ²_Z = N.
$$E(Y) = E(abX + bZ) = ab\,E(X) + b\,E(Z) = 0,$$
$$\begin{aligned}
\mathrm{Var}(Y) &= E(abX + bZ)^2 - (E(abX + bZ))^2 \\
&= E(a^2b^2X^2 + 2ab^2XZ + b^2Z^2) - E(Y)^2 \\
&= a^2b^2E(X^2) + 2ab^2E(X)E(Z) + b^2E(Z^2) - E(Y)^2 \\
&= a^2b^2P + 0 + b^2N - 0 = a^2b^2P + b^2N.
\end{aligned}$$
$$\begin{aligned}
\mathrm{Cov}(X, Y) &= E[(X - E(X))(Y - E(Y))] = E(XY) = E[X(abX + bZ)] \\
&= ab\,E(X^2) + b\,E(XZ) = abP + b\,E(X)E(Z) = abP.
\end{aligned}$$
The MMSE linear estimate of X given Y is given by
$$\hat X = \frac{\mathrm{Cov}(X, Y)}{\sigma_Y^2}(Y - E(Y)) + E(X) = \frac{abP}{a^2b^2P + b^2N}(Y - E(Y)) + E(X) = \frac{aP}{b(a^2P + N)}\,Y.$$
The MSE of the linear estimate is
$$\mathrm{MMSE} = \sigma_X^2 - \frac{\mathrm{Cov}^2(X, Y)}{\sigma_Y^2} = P - \frac{a^2b^2P^2}{a^2b^2P + b^2N} = \frac{a^2P^2 + PN - a^2P^2}{a^2P + N} = \frac{PN}{a^2P + N}.$$
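A minimal simulation sketch of this result, assuming Gaussian X and Z purely for convenience (the linear-MMSE formulas use only means and variances, so any zero-mean distributions would do); the constants are arbitrary example values:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, P, N = 2.0, 3.0, 1.5, 0.5     # arbitrary example constants
n = 1_000_000
x = rng.normal(0, np.sqrt(P), n)    # sigma_X^2 = P
z = rng.normal(0, np.sqrt(N), n)    # sigma_Z^2 = N
y = b * (a * x + z)

x_hat = a * P / (b * (a**2 * P + N)) * y   # linear MMSE estimate
print(np.mean((x - x_hat) ** 2))           # empirical MSE
print(P * N / (a**2 * P + N))              # derived MMSE, should agree
```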
5. Camera measurement. The measurement from a camera can be expressed as Y = AX + Z, where X is the object position with mean µ and variance σ²_X; A is the occlusion indicator, equal to 1 (the camera can see the object) with probability p and 0 (the camera cannot see the object) with probability (1 − p); and Z is the measurement error with mean 0 and variance σ²_Z. Assume that X, A, and Z are independent. Find the best linear MSE estimate of X given the camera measurement Y. Your answer should be in terms only of µ, σ²_X, σ²_Z, and p.
Solution:
The best linear MSE estimate is given by
$$\hat X = \frac{\mathrm{Cov}(X, Y)}{\mathrm{Var}(Y)}(Y - E(Y)) + E(X).$$
Now,
$$E(X) = \mu,$$
$$E(Y) = E(AX + Z) = E(A)E(X) + E(Z) = p\mu,$$
$$\begin{aligned}
\mathrm{Var}(Y) &= E[(AX + Z - p\mu)^2] = E[((AX - p\mu) + Z)^2] = E[(AX - p\mu)^2] + \sigma_Z^2 \\
&= E[(AX)^2] - p^2\mu^2 + \sigma_Z^2 = E(A^2)E(X^2) - p^2\mu^2 + \sigma_Z^2 \\
&= p(\sigma_X^2 + \mu^2) - p^2\mu^2 + \sigma_Z^2 = p\sigma_X^2 + p(1-p)\mu^2 + \sigma_Z^2.
\end{aligned}$$
$$\mathrm{Cov}(X, Y) = E[(X - \mu)(AX + Z - p\mu)] = E[(X - \mu)(AX - p\mu)] = E(A)\,E[(X - \mu)X] = p\sigma_X^2.$$
Substituting, we obtain
$$\hat X = \frac{p\sigma_X^2}{p\sigma_X^2 + p(1-p)\mu^2 + \sigma_Z^2}(Y - p\mu) + \mu.$$
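Again, a hedged Monte Carlo sketch (Gaussian X and Z are chosen only for convenience, and the parameter values are arbitrary examples) comparing the estimator's empirical MSE against the linear-MMSE value Var(X) − Cov²(X, Y)/Var(Y):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, var_x, var_z, p = 2.0, 1.0, 0.25, 0.7   # arbitrary example values
n = 1_000_000
x = rng.normal(mu, np.sqrt(var_x), n)
a = rng.random(n) < p                        # occlusion indicator A
z = rng.normal(0, np.sqrt(var_z), n)
y = a * x + z

var_y = p * var_x + p * (1 - p) * mu**2 + var_z
x_hat = p * var_x / var_y * (y - p * mu) + mu
print(np.mean((x - x_hat) ** 2))             # empirical MSE
print(var_x - (p * var_x) ** 2 / var_y)      # derived linear MMSE
```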
6. Consider the following joint distribution of X and Y:
$$f_{X,Y}(x, y) = \begin{cases} c, & 0 \le x \le 1,\ y \ge 0,\ y \le 1 - e^{-x} \\ 0, & \text{otherwise.} \end{cases}$$
a) Find c.
b) Find the MMSE estimate of X given Y = y.
c) Find the pdf of the estimate.
Solution:
a) The pdf should integrate to 1. Since the pdf is constant over the region where it is non-zero, c must be the inverse of the area of that region:
$$c\int_0^1 (1 - e^{-x})\,dx = 1 \implies c = e.$$
b) We know that the MMSE estimate is X̂ = E(X|Y). Let us first find f_{X|Y}(x|y):
$$f_Y(y) = \int_{-\ln(1-y)}^{1} e\,dx = e(1 + \ln(1 - y)), \quad 0 \le y \le 1 - 1/e,$$
$$f_{X|Y}(x|y) = \begin{cases} \frac{1}{1 + \ln(1-y)}, & 0 \le x \le 1,\ y \ge 0,\ y \le 1 - e^{-x} \\ 0, & \text{otherwise.} \end{cases}$$
Thus
$$E(X|Y = y) = \int_{-\ln(1-y)}^{1} \frac{x}{1 + \ln(1-y)}\,dx = \frac{1 - \ln^2(1-y)}{2(1 + \ln(1-y))} = \frac{1 - \ln(1-y)}{2},$$
so
$$\hat X = E(X|Y) = \frac{1 - \ln(1 - Y)}{2}.$$
c) First notice that X̂ is a monotonically increasing function of Y. Thus we can use the change-of-variables formula from the lecture notes, with a single term in the summation: X̂ = g(Y), g(y) = (1 − ln(1−y))/2, |g′(y)| = 1/(2 − 2y), and y is the solution of x = g(y), namely y = 1 − e^{1−2x}. Then
$$f_{\hat X}(x) = \left.\frac{f_Y(y)}{|g'(y)|}\right|_{y = 1 - e^{1-2x}} = \begin{cases} 4e(1-x)e^{1-2x}, & 0.5 \le x \le 1 \\ 0, & \text{otherwise.} \end{cases}$$
Note that the values X̂ can take range from 0.5 (when Y = 0) to 1 (when Y = 1 − 1/e), so the pdf is zero outside this range. You have to state this for full credit.
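As a final sketch to double-check parts (b) and (c), one can rejection-sample uniformly from the support region and compare a histogram of X̂ against the derived density; the bin grid is an arbitrary choice, and with coarse bins the histogram only roughly matches the density at the bin centers:

```python
import numpy as np

rng = np.random.default_rng(0)
# Rejection-sample uniformly from {0 <= x <= 1, 0 <= y <= 1 - e^{-x}}
n = 2_000_000
x = rng.uniform(0, 1, n)
y = rng.uniform(0, 1, n)
y = y[y <= 1 - np.exp(-x)]

x_hat = (1 - np.log(1 - y)) / 2   # MMSE estimate from part (b)
hist, edges = np.histogram(x_hat, bins=np.linspace(0.5, 1, 6), density=True)
centers = (edges[:-1] + edges[1:]) / 2
print(hist)                                                 # empirical density
print(4 * np.e * (1 - centers) * np.exp(1 - 2 * centers))   # derived pdf
```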