Walpole 8th Edition Solutions Manual
$E(A) = E(\bar{Y} - B\bar{x}) = E(\bar{Y}) - \bar{x}E(B) = \alpha + \beta\bar{x} - \beta\bar{x} = \alpha$,
\[
\sigma_A^2 = \sigma_{\bar{Y}-B\bar{x}}^2 = \sigma_{\bar{Y}}^2 + \bar{x}^2\sigma_B^2 - 2\bar{x}\sigma_{\bar{Y}B} = \frac{\sigma^2}{n} + \frac{\bar{x}^2\sigma^2}{\sum_{i=1}^{n}(x_i-\bar{x})^2},
\]
since $\sigma_{\bar{Y}B} = 0$. Hence
\[
\sigma_A^2 = \frac{\sum_{i=1}^{n} x_i^2}{n\sum_{i=1}^{n}(x_i-\bar{x})^2}\,\sigma^2.
\]
11.16 We have the following:
\[
\operatorname{Cov}(\bar{Y}, B) = E\left\{\left(\frac{1}{n}\sum_{i=1}^{n} Y_i - \frac{1}{n}\sum_{i=1}^{n}\mu_{Y_i}\right)\left[\frac{\sum_{i=1}^{n}(x_i-\bar{x})Y_i}{\sum_{i=1}^{n}(x_i-\bar{x})^2} - \frac{\sum_{i=1}^{n}(x_i-\bar{x})\mu_{Y_i}}{\sum_{i=1}^{n}(x_i-\bar{x})^2}\right]\right\}
\]
\[
= \frac{\sum_{i=1}^{n}(x_i-\bar{x})E(Y_i-\mu_{Y_i})^2 + \sum_{i\neq j}(x_i-\bar{x})E(Y_i-\mu_{Y_i})(Y_j-\mu_{Y_j})}{n\sum_{i=1}^{n}(x_i-\bar{x})^2}
= \frac{\sum_{i=1}^{n}(x_i-\bar{x})\sigma_{Y_i}^2 + \sum_{i\neq j}(x_i-\bar{x})\operatorname{Cov}(Y_i,Y_j)}{n\sum_{i=1}^{n}(x_i-\bar{x})^2}.
\]
Now $\sigma_{Y_i}^2 = \sigma^2$ for all $i$, and $\operatorname{Cov}(Y_i, Y_j) = 0$ for $i \neq j$. Therefore,
\[
\operatorname{Cov}(\bar{Y}, B) = \frac{\sigma^2\sum_{i=1}^{n}(x_i-\bar{x})}{n\sum_{i=1}^{n}(x_i-\bar{x})^2} = 0.
\]
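The final equality holds because the deviations of any sample from its own mean sum to zero; a quick numerical illustration (the data here are arbitrary, not from the text):

```python
# The numerator of Cov(Ybar, B) is proportional to sum(x_i - xbar),
# which is identically zero for any sample.
x = [2.3, 4.1, 5.0, 7.7, 9.9]            # arbitrary illustrative values
xbar = sum(x) / len(x)
dev_sum = sum(xi - xbar for xi in x)
print(abs(dev_sum) < 1e-12)               # True
```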
156 Chapter 11 Simple Linear Regression and Correlation
11.17 $S_{xx} = 26{,}591.63 - 778.7^2/25 = 2336.6824$, $S_{yy} = 172{,}891.46 - 2050^2/25 = 4791.46$,
$S_{xy} = 65{,}164.04 - (778.7)(2050)/25 = 1310.64$, and $b = 0.5609$.
(a) $s^2 = \frac{4791.46 - (0.5609)(1310.64)}{23} = 176.362$.
(b) The hypotheses are
$H_0\colon \beta = 0$,
$H_1\colon \beta \neq 0$.
$\alpha = 0.05$.
Critical region: $t < -2.069$ or $t > 2.069$.
Computation: $t = \frac{0.5609}{\sqrt{176.362/2336.6824}} = 2.04$.
Decision: Do not reject $H_0$.
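The arithmetic in 11.17 can be reproduced directly from the summary sums; a minimal Python sketch (nothing here beyond the sums quoted above):

```python
import math

# Summary sums for Exercise 11.17 (n = 25)
n = 25
Sxx = 26591.63 - 778.7**2 / n        # 2336.6824
Syy = 172891.46 - 2050**2 / n        # 4791.46
Sxy = 65164.04 - 778.7 * 2050 / n    # 1310.64
b = Sxy / Sxx                         # slope, about 0.5609

s2 = (Syy - b * Sxy) / (n - 2)        # error mean square, about 176.36
t = b / math.sqrt(s2 / Sxx)           # t statistic for H0: beta = 0
print(round(t, 2))                    # 2.04, inside (-2.069, 2.069)
```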
11.18 $S_{xx} = 57{,}557 - 707^2/9 = 2018.2222$, $S_{yy} = 51{,}980 - 658^2/9 = 3872.8889$,
$S_{xy} = 53{,}258 - (707)(658)/9 = 1568.4444$, $a = 12.0623$ and $b = 0.7771$.
(a) $s^2 = \frac{3872.8889 - (0.7771)(1568.4444)}{7} = 379.150$.
(b) Since $s = 19.472$ and $t_{0.025} = 2.365$ for 7 degrees of freedom, a 95% confidence interval for $\alpha$ is
\[
12.0623 \pm (2.365)\sqrt{\frac{(379.150)(57{,}557)}{(9)(2018.2222)}} = 12.0623 \pm 81.975,
\]
which implies $-69.91 < \alpha < 94.04$.
(c) $0.7771 \pm (2.365)\sqrt{379.150/2018.2222}$ implies $-0.25 < \beta < 1.80$.
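The two intervals in 11.18 are instances of $a \pm t_{0.025}\,s\sqrt{\sum x_i^2/(nS_{xx})}$ and $b \pm t_{0.025}\,s/\sqrt{S_{xx}}$; sketched below at full precision, so the margins agree with the rounded text values only to a couple of decimals:

```python
import math

# Summary sums for Exercise 11.18 (n = 9)
n, sum_x2 = 9, 57557
Sxx = 57557 - 707**2 / n
Syy = 51980 - 658**2 / n
Sxy = 53258 - 707 * 658 / n
b = Sxy / Sxx
a = 658 / n - b * 707 / n                 # intercept, about 12.06
s = math.sqrt((Syy - b * Sxy) / (n - 2))
t = 2.365                                  # t_{0.025}, 7 degrees of freedom

margin_a = t * s * math.sqrt(sum_x2 / (n * Sxx))   # about 81.97
margin_b = t * s / math.sqrt(Sxx)                  # about 1.03
print(a - margin_a, a + margin_a)                  # roughly -69.9 to 94.0
```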
11.19 $S_{xx} = 25.85 - 16.5^2/11 = 1.1$, $S_{yy} = 923.58 - 100.4^2/11 = 7.2018$,
$S_{xy} = 152.59 - (16.5)(100.4)/11 = 1.99$, $a = 6.4136$ and $b = 1.8091$.
(a) $s^2 = \frac{7.2018 - (1.8091)(1.99)}{9} = 0.40$.
(b) Since $s = 0.632$ and $t_{0.025} = 2.262$ for 9 degrees of freedom, a 95% confidence interval for $\alpha$ is
\[
6.4136 \pm (2.262)(0.632)\sqrt{\frac{25.85}{(11)(1.1)}} = 6.4136 \pm 2.0895,
\]
which implies $4.324 < \alpha < 8.503$.
(c) $1.8091 \pm (2.262)(0.632)/\sqrt{1.1}$ implies $0.446 < \beta < 3.172$.
11.20 $S_{xx} = 8134.26 - 311.6^2/12 = 43.0467$, $S_{yy} = 7407.80 - 297.2^2/12 = 47.1467$,
$S_{xy} = 7687.76 - (311.6)(297.2)/12 = -29.5333$, $a = 42.5818$ and $b = -0.6861$.
(a) $s^2 = \frac{47.1467 - (-0.6861)(-29.5333)}{10} = 2.688$.
(b) Since $s = 1.640$ and $t_{0.005} = 3.169$ for 10 degrees of freedom, a 99% confidence interval for $\alpha$ is
\[
42.5818 \pm (3.169)(1.640)\sqrt{\frac{8134.26}{(12)(43.0467)}} = 42.5818 \pm 20.6236,
\]
which implies $21.958 < \alpha < 63.205$.
(c) $-0.6861 \pm (3.169)(1.640)/\sqrt{43.0467}$ implies $-1.478 < \beta < 0.106$.
11.21 $S_{xx} = 37{,}125 - 675^2/18 = 11{,}812.5$, $S_{yy} = 17{,}142 - 488^2/18 = 3911.7778$,
$S_{xy} = 25{,}005 - (675)(488)/18 = 6705$, $a = 5.8254$ and $b = 0.5676$.
(a) $s^2 = \frac{3911.7778 - (0.5676)(6705)}{16} = 6.626$.
(b) Since $s = 2.574$ and $t_{0.005} = 2.921$ for 16 degrees of freedom, a 99% confidence interval for $\alpha$ is
\[
5.8254 \pm (2.921)(2.574)\sqrt{\frac{37{,}125}{(18)(11{,}812.5)}} = 5.8254 \pm 3.1417,
\]
which implies $2.684 < \alpha < 8.967$.
(c) $0.5676 \pm (2.921)(2.574)/\sqrt{11{,}812.5}$ implies $0.498 < \beta < 0.637$.
11.22 The hypotheses are
$H_0\colon \alpha = 10$,
$H_1\colon \alpha > 10$.
$\alpha = 0.05$.
Critical region: $t > 1.734$.
Computations: $S_{xx} = 67{,}100 - 1110^2/20 = 5495$, $S_{yy} = 74{,}725 - 1173^2/20 = 5928.55$,
$S_{xy} = 67{,}690 - (1110)(1173)/20 = 2588.5$, $s^2 = \frac{5928.55 - (0.4711)(2588.5)}{18} = 261.617$, and then $s = 16.175$. Now
\[
t = \frac{32.51 - 10}{16.175\sqrt{67{,}100/((20)(5495))}} = 1.78.
\]
Decision: Reject $H_0$ and claim $\alpha > 10$.
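The statistic in 11.22 is the intercept test $t = (a - \alpha_0)/\bigl(s\sqrt{\sum x_i^2/(nS_{xx})}\bigr)$; a minimal check with the values quoted above:

```python
import math

# Values from Exercise 11.22 (n = 20)
a, alpha0 = 32.51, 10.0
s, n = 16.175, 20
sum_x2, Sxx = 67100, 5495.0

se_a = s * math.sqrt(sum_x2 / (n * Sxx))   # standard error of the intercept
t = (a - alpha0) / se_a
print(round(t, 2))                          # 1.78, beyond the critical 1.734
```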
11.23 The hypotheses are
$H_0\colon \beta = 6$,
$H_1\colon \beta < 6$.
$\alpha = 0.025$.
Critical region: $t < -2.228$.
Computations: $S_{xx} = 15{,}650 - 410^2/12 = 1641.667$, $S_{yy} = 2{,}512{,}925 - 5445^2/12 = 42{,}256.25$,
$S_{xy} = 191{,}325 - (410)(5445)/12 = 5287.5$, $s^2 = \frac{42{,}256.25 - (3.221)(5287.5)}{10} = 2522.521$, and then $s = 50.225$. Now
\[
t = \frac{3.221 - 6}{50.225/\sqrt{1641.667}} = -2.24.
\]
Decision: Reject $H_0$ and claim $\beta < 6$.
11.24 Using the value $s = 19.472$ from Exercise 11.18(a) and the fact that $\hat{y}_0 = 74.230$ when $x_0 = 80$, with $\bar{x} = 78.556$, we have
\[
74.230 \pm (2.365)(19.472)\sqrt{\frac{1}{9} + \frac{1.444^2}{2018.2222}} = 74.230 \pm 15.4216.
\]
Simplifying, we get $58.808 < \mu_{Y|80} < 89.652$.
11.25 Using the value $s = 1.640$ from Exercise 11.20(a) and the fact that $\hat{y}_0 = 25.7724$ when $x_0 = 24.5$, with $\bar{x} = 25.9667$, we have
(a) $25.7724 \pm (2.228)(1.640)\sqrt{\frac{1}{12} + \frac{(-1.4667)^2}{43.0467}} = 25.7724 \pm 1.3341$, which implies $24.438 < \mu_{Y|24.5} < 27.106$.
(b) $25.7724 \pm (2.228)(1.640)\sqrt{1 + \frac{1}{12} + \frac{(-1.4667)^2}{43.0467}} = 25.7724 \pm 3.8898$, which implies $21.883 < y_0 < 29.662$.
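The only difference between (a) and (b) in 11.25 is the extra "1 +" under the square root, which is what widens the prediction interval for a single new observation relative to the confidence interval for the mean response; numerically:

```python
import math

# Values from Exercise 11.25 (fit from Exercise 11.20, n = 12)
s, t = 1.640, 2.228                 # t_{0.025} with 10 degrees of freedom
n, Sxx = 12, 43.0467
dev2 = (-1.4667) ** 2               # (x0 - xbar)^2 at x0 = 24.5

m_mean = t * s * math.sqrt(1 / n + dev2 / Sxx)       # about 1.334
m_pred = t * s * math.sqrt(1 + 1 / n + dev2 / Sxx)   # about 3.890
print(m_pred > m_mean)                                # True
```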
11.26 The 95% confidence bands are obtained by plotting the limits
\[
(6.4136 + 1.8091x) \pm (2.262)(0.632)\sqrt{\frac{1}{11} + \frac{(x-1.5)^2}{1.1}}.
\]
[Figure: fitted regression line with 95% confidence bands, Converted Sugar versus Temperature, for $x$ from 1.0 to 2.0.]
11.27 Using the value $s = 0.632$ from Exercise 11.19(a) and the fact that $\hat{y}_0 = 9.308$ when $x_0 = 1.6$, with $\bar{x} = 1.5$, we have
\[
9.308 \pm (2.262)(0.632)\sqrt{1 + \frac{1}{11} + \frac{0.1^2}{1.1}} = 9.308 \pm 1.4994,
\]
which implies $7.809 < y_0 < 10.807$.
11.28 Using the value $s = 2.574$ from Exercise 11.21(a) and the fact that $\hat{y}_0 = 34.205$ when $x_0 = 50$, with $\bar{x} = 37.5$, we have
(a) $34.205 \pm (2.921)(2.574)\sqrt{\frac{1}{18} + \frac{12.5^2}{11{,}812.5}} = 34.205 \pm 1.9719$, which implies $32.23 < \mu_{Y|50} < 36.18$.
(b) $34.205 \pm (2.921)(2.574)\sqrt{1 + \frac{1}{18} + \frac{12.5^2}{11{,}812.5}} = 34.205 \pm 7.7729$, which implies $26.43 < y_0 < 41.98$.
11.29 (a) 17.1812.
(b) The goal of 30 mpg is unlikely based on the confidence interval for mean mpg,
(27.95, 29.60).
(c) Based on the prediction interval, the Lexus ES300 should exceed 18 mpg.
11.30 It is easy to see that
\[
\sum_{i=1}^{n}(y_i - \hat{y}_i) = \sum_{i=1}^{n}(y_i - a - bx_i) = \sum_{i=1}^{n}\left[y_i - (\bar{y} - b\bar{x}) - bx_i\right] = \sum_{i=1}^{n}(y_i - \bar{y}) - b\sum_{i=1}^{n}(x_i - \bar{x}) = 0,
\]
since $a = \bar{y} - b\bar{x}$.
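The identity is easy to confirm numerically: fit a least-squares line to any small data set and the residuals sum to (machine) zero. A minimal sketch with made-up data:

```python
# Least-squares fit; residuals sum to zero because a = ybar - b*xbar.
x = [1.0, 2.0, 3.0, 4.0, 5.0]       # illustrative data, not from the text
y = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

Sxx = sum((xi - xbar) ** 2 for xi in x)
Sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
b = Sxy / Sxx
a = ybar - b * xbar

residual_sum = sum(yi - (a + b * xi) for xi, yi in zip(x, y))
print(abs(residual_sum) < 1e-10)     # True
```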
11.31 When there are only two data points with $x_1 \neq x_2$, by Exercise 11.30 we know that $(y_1 - \hat{y}_1) + (y_2 - \hat{y}_2) = 0$. On the other hand, by the method of least squares on page 395, we also know that $x_1(y_1 - \hat{y}_1) + x_2(y_2 - \hat{y}_2) = 0$. Together these equations yield $(x_2 - x_1)(y_2 - \hat{y}_2) = 0$ and hence $y_2 - \hat{y}_2 = 0$; therefore $y_1 - \hat{y}_1 = 0$ as well. So,
\[
SSE = (y_1 - \hat{y}_1)^2 + (y_2 - \hat{y}_2)^2 = 0.
\]
Since $R^2 = 1 - \frac{SSE}{SST}$, we have $R^2 = 1$.
11.32 (a) Suppose that the fitted model is $\hat{y} = bx$. Then
\[
SSE = \sum_{i=1}^{n}(y_i - \hat{y}_i)^2 = \sum_{i=1}^{n}(y_i - bx_i)^2.
\]
Taking the derivative of the above with respect to $b$ and setting it to zero, we have $-2\sum_{i=1}^{n} x_i(y_i - bx_i) = 0$, which implies
\[
b = \frac{\sum_{i=1}^{n} x_i y_i}{\sum_{i=1}^{n} x_i^2}.
\]
(b) \[
\sigma_B^2 = \frac{\operatorname{Var}\left(\sum_{i=1}^{n} x_i Y_i\right)}{\left(\sum_{i=1}^{n} x_i^2\right)^2} = \frac{\sum_{i=1}^{n} x_i^2\sigma_{Y_i}^2}{\left(\sum_{i=1}^{n} x_i^2\right)^2} = \frac{\sigma^2}{\sum_{i=1}^{n} x_i^2},
\]
since the $Y_i$'s are independent.
(c) \[
E(B) = \frac{E\left(\sum_{i=1}^{n} x_i Y_i\right)}{\sum_{i=1}^{n} x_i^2} = \frac{\sum_{i=1}^{n} x_i(\beta x_i)}{\sum_{i=1}^{n} x_i^2} = \beta.
\]
11.33 (a) The scatter plot of the data is shown next.
[Figure: scatter plot of $y$ versus $x$ with the fitted line $\hat{y} = 3.416x$.]
(b) $\sum_{i=1}^{n} x_i^2 = 1629$ and $\sum_{i=1}^{n} x_i y_i = 5564$. Hence $b = \frac{5564}{1629} = 3.4156$. So, $\hat{y} = 3.4156x$.
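Part (b) is just the through-the-origin slope formula from Exercise 11.32(a) applied to the two sums; a one-line check:

```python
# Through-the-origin slope: b = sum(x_i * y_i) / sum(x_i^2)
sum_x2, sum_xy = 1629, 5564     # sums quoted in the solution
b = sum_xy / sum_x2
print(round(b, 4))               # 3.4156
```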
(c) See (a).
(d) Since there is only one regression coefficient, $\beta$, to be estimated, the degrees of freedom in estimating $\sigma^2$ are $n - 1$. So,
\[
\hat{\sigma}^2 = s^2 = \frac{SSE}{n-1} = \frac{\sum_{i=1}^{n}(y_i - bx_i)^2}{n-1}.
\]
(e) \[
\operatorname{Var}(\hat{y}_i) = \operatorname{Var}(Bx_i) = x_i^2\operatorname{Var}(B) = \frac{x_i^2\sigma^2}{\sum_{i=1}^{n} x_i^2}.
\]
(f) The plot is shown next.
[Figure: scatter plot of $y$ versus $x$.]
11.34 Using part (e) of Exercise 11.33, we can see that the variance of a prediction y0 at x0