# Solucionario Walpole 8 ED
t < −2.069 or t > 2.069.
Computations: t = 0.392√23/√(1 − 0.392²) = 2.04.
Decision: Fail to reject H0 at level 0.05. However, the P-value = 0.0530, which is marginal.
11.54 (a) From the data of Exercise 11.9 we find Sxx = 244.26 − 45²/9 = 19.26, Syy = 133,786 − 1094²/9 = 804.2222, and Sxy = 5348.2 − (45)(1094)/9 = −121.8. So,
r = −121.8/√((19.26)(804.2222)) = −0.979.
(b) The hypotheses are
H0: ρ = −0.5,
H1: ρ < −0.5.
α = 0.025.
Critical region: z < −1.96.
Computations: z = (√6/2) ln[(0.021)(1.5)/((1.979)(0.5))] = −4.22.
Decision: Reject H0; ρ < −0.5.
Solutions for Exercises in Chapter 11 167
(c) (−0.979)²(100%) = 95.8%.
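The arithmetic of 11.54 can be double-checked directly; this minimal Python sketch reuses the sums computed above (nothing is assumed beyond n = 9 from Exercise 11.9):

```python
import math

# Sums of squares from Exercise 11.9, as computed above
Sxx, Syy, Sxy = 19.26, 804.2222, -121.8
n = 9

# Sample correlation coefficient
r = Sxy / math.sqrt(Sxx * Syy)
print(round(r, 3))          # -0.979

# Fisher z test of H0: rho = -0.5 against H1: rho < -0.5
rho0 = -0.5
r3 = round(r, 3)            # the manual carries r to three decimals
z = (math.sqrt(n - 3) / 2) * math.log(
    ((1 + r3) * (1 - rho0)) / ((1 - r3) * (1 + rho0))
)
print(round(z, 2))          # -4.22
```

Since z = −4.22 falls well below the critical value −1.96, the rejection of H0 follows.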
11.55 Using the value s = 16.175 from Exercise 11.6 and the fact that ŷ0 = 48.994 when x0 = 35, and x̄ = 55.5, we have
(a) 48.994 ± (2.101)(16.175)√(1/20 + (−20.5)²/5495), which gives 36.908 < μY|35 < 61.080.
(b) 48.994 ± (2.101)(16.175)√(1 + 1/20 + (−20.5)²/5495), which gives 12.925 < y0 < 85.063.
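Both intervals can be checked numerically; a short sketch using only the quantities quoted in the solution (s, the t critical value 2.101 with 18 df, n = 20, Sxx = 5495, and x0 − x̄ = −20.5):

```python
import math

# Quantities from Exercises 11.6 and 11.55
s, t025 = 16.175, 2.101      # s and t_{0.025} with 18 degrees of freedom
n, Sxx = 20, 5495
y0_hat, dev = 48.994, -20.5  # fitted value at x0 = 35, and x0 - xbar

# (a) Confidence interval on the mean response at x0 = 35
half_ci = t025 * s * math.sqrt(1/n + dev**2 / Sxx)
print(round(y0_hat - half_ci, 3), round(y0_hat + half_ci, 3))   # 36.908 61.08

# (b) Prediction interval on a single future observation at x0 = 35
half_pi = t025 * s * math.sqrt(1 + 1/n + dev**2 / Sxx)
print(round(y0_hat - half_pi, 3), round(y0_hat + half_pi, 3))   # 12.925 85.063
```

The prediction interval is wider than the confidence interval only through the extra 1 under the square root, which accounts for the variance of a single new observation.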
11.56 The fitted model can be derived as ŷ = 3667.3968 − 47.3289x.
The hypotheses are
H0: β = 0,
H1: β ≠ 0.
t = −0.30 with P-value = 0.77. Hence H0 cannot be rejected.
11.57 (a) Sxx = 729.18 − 118.6²/20 = 25.882, Sxy = 1714.62 − (118.6)(281.1)/20 = 47.697, so b = Sxy/Sxx = 1.8429, and a = ȳ − bx̄ = 3.1266. Hence ŷ = 3.1266 + 1.8429x.
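As a numerical check, the slope and intercept can be recomputed from the summary totals that appear in the expressions above (n = 20, Σx = 118.6, Σy = 281.1, Σx² = 729.18, Σxy = 1714.62):

```python
# Summary statistics used in Exercise 11.57(a)
n = 20
sum_x, sum_y = 118.6, 281.1
sum_x2, sum_xy = 729.18, 1714.62

Sxx = sum_x2 - sum_x**2 / n
Sxy = sum_xy - sum_x * sum_y / n
b = Sxy / Sxx                        # least squares slope
a = sum_y / n - b * sum_x / n        # intercept a = ybar - b * xbar
print(round(Sxx, 3), round(Sxy, 3))  # 25.882 47.697
print(round(b, 4), round(a, 4))      # 1.8429 3.1268
```

Note that full precision gives a = 3.1268; the value 3.1266 in the solution comes from carrying the rounded slope 1.8429 into a = ȳ − bx̄.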
(b) The hypotheses are
H0: the regression is linear in x,
H1: the regression is not linear in x.
α = 0.05.
Critical region: f > 3.07 with 8 and 10 degrees of freedom.
Computations: SST = 138.3695, SSR = 87.9008, SSE = 50.4687, SSE(pure) = 16.375, and Lack-of-fit SS = 34.0937.
| Source of Variation | Sum of Squares | Degrees of Freedom | Mean Square | Computed f |
|---|---|---|---|---|
| Regression | 87.9008 | 1 | 87.9008 | |
| Error | 50.4687 | 18 | 2.8038 | |
| (Lack of fit) | 34.0937 | 8 | 4.2617 | 2.60 |
| (Pure error) | 16.375 | 10 | 1.6375 | |
| Total | 138.3695 | 19 | | |
The P-value = 0.0791. The linear model is adequate at the 0.05 level.
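The lack-of-fit f above follows directly from splitting the error sum of squares; a quick check using only the sums of squares and degrees of freedom quoted in the solution:

```python
# Error decomposition from Exercise 11.57(b)
sse, sse_pure = 50.4687, 16.375
df_error, df_pure = 18, 10

ss_lof = sse - sse_pure            # lack-of-fit sum of squares
df_lof = df_error - df_pure        # 8 degrees of freedom
f = (ss_lof / df_lof) / (sse_pure / df_pure)
print(round(ss_lof, 4), round(f, 2))   # 34.0937 2.6
```

Since f = 2.60 is below the critical value 3.07, the lack of fit is not significant, consistent with the conclusion above.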
11.58 Using the value s = 50.225 and the fact that ŷ0 = $488.644 when x0 = $45, and x̄ = $34.167, we have
(a) 488.644 ± (1.812)(50.225)√(1/12 + 10.833²/1641.667), which implies 452.835 < μY|45 < 524.453.
168 Chapter 11 Simple Linear Regression and Correlation
(b) 488.644 ± (1.812)(50.225)√(1 + 1/12 + 10.833²/1641.667), which implies 390.845 < y0 < 586.443.
11.59 (a) ŷ = 7.3598 + 135.4034x.
(b) SS(Pure Error) = 52,941.06; fLOF = 0.46 with P-value = 0.64. The lack-of-fit test is insignificant.
(c) No.
11.60 (a) Sxx = 672.9167, Syy = 728.25, Sxy = 603.75, and r = 603.75/√((672.9167)(728.25)) = 0.862, which means that (0.862)²(100%) = 74.3% of the total variation of the values of Y in our sample is accounted for by a linear relationship with the values of X.
(b) To estimate and test hypotheses on ρ, X and Y are assumed to be random variables from a bivariate normal distribution.
(c) The hypotheses are
H0: ρ = 0.5,
H1: ρ > 0.5.
α = 0.01.
Critical region: z > 2.33.
Computations: z = (√9/2) ln[(1.862)(0.5)/((0.138)(1.5))] = 2.26.
Decision: Fail to reject H0 at level 0.01, since z = 2.26 < 2.33.
11.61 s² = Σ(yi − ŷi)²/(n − 2), where the sum runs over i = 1, …, n. Using the centered model yi = α + β(xi − x̄) + ǫi and the fitted values ŷi = ȳ + b(xi − x̄),

(n − 2)E(S²) = E Σ [α + β(xi − x̄) + ǫi − (ȳ + b(xi − x̄))]²
= Σ E[(α − ȳ)² + (β − b)²(xi − x̄)² + ǫi² − 2b(xi − x̄)ǫi − 2ȳǫi],
(the other cross-product terms have expectation 0)
= n(σ²/n) + σ²Sxx/Sxx + nσ² − 2σ²Sxx/Sxx − 2n(σ²/n) = (n − 2)σ²,

since E(ȳ − α)² = Var(ȳ) = σ²/n and E(b − β)² = σ²/Sxx.
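The conclusion that S² is unbiased for σ² can be spot-checked by Monte Carlo simulation. A sketch with hypothetical true values α = 2, β = 0.5, σ = 1 and a fixed design of n = 10 points (all of these values are assumptions for illustration, not from the text):

```python
import random

random.seed(1)
alpha, beta, sigma = 2.0, 0.5, 1.0    # hypothetical true parameters
x = [float(i) for i in range(10)]     # fixed design points, n = 10
n = len(x)
xbar = sum(x) / n
Sxx = sum((xi - xbar) ** 2 for xi in x)

s2_values = []
for _ in range(4000):
    # Generate a sample from the true line plus normal errors
    y = [alpha + beta * xi + random.gauss(0, sigma) for xi in x]
    ybar = sum(y) / n
    b = sum((xi - xbar) * yi for xi, yi in zip(x, y)) / Sxx
    a = ybar - b * xbar
    sse = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
    s2_values.append(sse / (n - 2))

# The average of s^2 over many samples should be close to sigma^2 = 1
print(abs(sum(s2_values) / len(s2_values) - sigma**2) < 0.05)   # True
```

Dividing SSE by n instead of n − 2 would make the same check fail, which is exactly what the derivation above explains.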
11.62 (a) The confidence interval is an interval on the mean sale price for a given buyer's bid. The prediction interval is an interval on a future observed sale price for a given buyer's bid.
(b) The standard errors of the prediction of sale price depend on the value of the buyer's bid.
(c) Observations 4, 9, 10, and 17 have the lowest standard errors of prediction. These observations have buyer's bids very close to the mean.
11.63 (a) The residual plot shows a pattern rather than random scatter, and the R² is only 0.82.
(b) The log model has an R² of 0.84. There is still a pattern in the residuals.
(c) The model using gallons per 100 miles has the best R², 0.85, and its residuals appear more random. This model is the best of the three attempted, though perhaps a better model could still be found.
11.64 (a) The plot of the data with the fitted least squares line is given here.
[Scatter plot: Yield against Temperature, with the fitted least squares line.]
(b) Yes.
(c) ŷ = 61.5133 + 0.1139x.
| Source of Variation | Sum of Squares | Degrees of Freedom | Mean Square | Computed f |
|---|---|---|---|---|
| Regression | 486.21 | 1 | 486.21 | |
| Error | 24.80 | 10 | 2.48 | |
| (Lack of fit) | 3.61 | 2 | 1.81 | 0.68 |
| (Pure error) | 21.19 | 8 | 2.65 | |
| Total | 511.01 | 11 | | |
The P-value = 0.533.
(d) The results in (c) show that the linear model is adequate.
11.65 (a) ŷ = 90.8904 − 0.0513x.
(b) The t-value in testing H0: β = 0 is −6.533, which results in a P-value < 0.0001. Hence, the time it takes to run two miles has a significant influence on maximum oxygen uptake.
(c) The residual graph shows that there may be some systematic behavior in the residuals; hence the residuals are not completely random.
[Residual plot: residuals plotted against Time.]
11.66 Let Y∗i = Yi − α, for i = 1, 2, …, n. The model Yi = α + βxi + ǫi is equivalent to Y∗i = βxi + ǫi. This is a "regression through the origin" model that is studied in Exercise 11.32.
(a) Using the result from Exercise 11.32(a), we have
b = Σ xi(yi − α) / Σ xi² = (Σ xiyi − nx̄α) / Σ xi²,
with all sums running over i = 1, …, n.
(b) Also from Exercise 11.32(b) we have σB² = σ²/Σ xi².
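The two forms of b in (a) are algebraically identical, since Σ xi(yi − α) = Σ xiyi − αΣ xi = Σ xiyi − nx̄α. This can be checked on a small made-up data set (the numbers below are purely illustrative):

```python
# Hypothetical data and a known intercept alpha
x = [1.0, 2.0, 3.0, 4.0]
y = [3.1, 4.9, 7.2, 8.8]
alpha = 1.0
n = len(x)
xbar = sum(x) / n

sum_x2 = sum(xi**2 for xi in x)
# Form 1: regression of y - alpha through the origin
b1 = sum(xi * (yi - alpha) for xi, yi in zip(x, y)) / sum_x2
# Form 2: the equivalent expression in the solution
b2 = (sum(xi * yi for xi, yi in zip(x, y)) - n * xbar * alpha) / sum_x2
print(abs(b1 - b2) < 1e-12)   # True: both forms agree
```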
11.67 SSE = Σ(yi − βxi)², with the sum over i = 1, …, n. Taking the derivative with respect to β and setting it to 0, we get Σ xi(yi − bxi) = 0, or Σ xi(yi − ŷi) = 0. This is the only normal equation the least squares method yields here. Hence, in general, Σ(yi − ŷi) = 0 does not hold for a regression model with zero intercept.
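The point is easy to see numerically: for a no-intercept fit the weighted residual sum Σ xiei is forced to zero by the single normal equation, but the plain residual sum generally is not. A small made-up example (the data are purely illustrative):

```python
# Hypothetical data for a zero-intercept fit y = b*x
x = [1.0, 2.0, 3.0]
y = [2.0, 3.0, 10.0]

# Slope from the single normal equation: b = sum(x*y) / sum(x^2)
b = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi**2 for xi in x)
resid = [yi - b * xi for xi, yi in zip(x, y)]

# The normal equation forces sum(x_i * e_i) = 0 ...
print(abs(sum(xi * ei for xi, ei in zip(x, resid))) < 1e-9)   # True
# ... but the plain residual sum need not vanish
print(round(sum(resid), 4))   # -1.2857
```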
11.68 No solution is provided.
## Chapter 12 Multiple Linear Regression and Certain Nonlinear Regression Models
12.1 (a) ŷ = 27.5467 + 0.9217x1 + 0.2842x2.
(b) When x1 = 60 and x2 = 4, the predicted value of the chemistry grade is ŷ = 27.5467 + (0.9217)(60) + (0.2842)(4) = 84.
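The prediction in (b) is just an evaluation of the fitted plane; a small Python helper makes the check explicit (the function name `predict` is ours, purely illustrative):

```python
# Coefficients of the fitted model in 12.1(a)
b0, b1, b2 = 27.5467, 0.9217, 0.2842

def predict(x1, x2):
    """Predicted chemistry grade from the fitted plane."""
    return b0 + b1 * x1 + b2 * x2

print(round(predict(60, 4)))   # 84
```

The same pattern applies to the predictions in 12.4, 12.9, and 12.10 below, with more regressors.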
12.2 ŷ = −3.3727 + 0.0036x1 + 0.9476x2.
12.3 ŷ = 0.7800 + 2.7122x1 + 2.0497x2.
12.4 (a) ŷ = −22.99316 + 1.39567x1 + 0.21761x2.
(b) ŷ = −22.99316 + (1.39567)(35) + (0.21761)(250) = 80.25874.
12.5 (a) ŷ = 56.46333 + 0.15253x − 0.00008x².
(b) ŷ = 56.46333 + (0.15253)(225) − (0.00008)(225)² = 86.73333%.
12.6 (a) d̂ = 13.35875 − 0.33944v + 0.01183v².
(b) d̂ = 13.35875 − (0.33944)(70) + (0.01183)(70)² = 47.54206.
12.7 ŷ = 141.61178 − 0.28193x + 0.00031x².
12.8 (a) ŷ = 19.03333 + 1.0086x − 0.02038x².
(b) SSE = 24.47619 with 12 degrees of freedom and SS(pure error) = 24.36667 with 10 degrees of freedom. So, SSLOF = 24.47619 − 24.36667 = 0.10952 with 2 degrees of freedom. Hence f = (0.10952/2)/(24.36667/10) = 0.02 with a P-value of 0.9778. Therefore, there is no lack of fit and the quadratic model fits the data well.
12.9 (a) ŷ = −102.71324 + 0.60537x1 + 8.92364x2 + 1.43746x3 + 0.01361x4.
(b) ŷ = −102.71324 + (0.60537)(75) + (8.92364)(24) + (1.43746)(90) + (0.01361)(98) = 287.56183.
12.10 (a) ŷ = 1.07143 + 4.60317x − 1.84524x² + 0.19444x³.
(b) ŷ = 1.07143 + (4.60317)(2) − (1.84524)(2)² + (0.19444)(2)³ = 4.45238.
12.11 ŷ = 3.3205 + 0.42105x1 − 0.29578x2 + 0.01638x3 + 0.12465x4.
12.12 ŷ = 1,962.94816 − 15.85168x1 + 0.05593x2 + 1.58962x3 − 4.21867x4 − 394.31412x5.
12.13 ŷ = −6.51221 + 1.99941x1 − 3.67510x2 + 2.52449x3 + 5.15808x4 + 14.40116x5.