
# Solucionario Walpole 8 ED

(a) For the forward selection, variable x1 is entered first, and no other variables are entered at the 0.05 level. Hence the final model is ŷ = −6.33592 + 0.33738x1.

(b) For the backward elimination, variable x3 is eliminated first, then variable x4, and then variable x2, all at the 0.05 level of significance. Hence only x1 remains in the model, and the final model is the same as in (a).
Solutions for Exercises in Chapter 12 177
(c) For the stepwise regression, after x1 is entered, no other variables are entered. Hence the final model is still the same as in (a) and (b).
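The forward-selection step used in 12.48 can be sketched in a few lines: repeatedly add the candidate regressor whose coefficient has the smallest t-test P-value, stopping when nothing is significant at the chosen level. This is a minimal illustrative sketch, not the textbook's software; the function name and data layout are assumptions.

```python
import numpy as np
from scipy import stats

def forward_select(X, y, alpha=0.05):
    """Greedy forward selection: at each step, add the candidate regressor
    whose coefficient has the smallest t-test P-value; stop when no
    remaining candidate is significant at level alpha."""
    n, k = X.shape
    selected = []
    while True:
        best = None
        for j in range(k):
            if j in selected:
                continue
            # design matrix: intercept + already-selected columns + candidate j
            A = np.column_stack([np.ones(n)] + [X[:, c] for c in selected + [j]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            df = n - A.shape[1]
            s2 = resid @ resid / df
            se = np.sqrt(s2 * np.linalg.inv(A.T @ A)[-1, -1])
            p_val = 2 * stats.t.sf(abs(beta[-1] / se), df)
            if best is None or p_val < best[1]:
                best = (j, p_val)
        if best is None or best[1] > alpha:
            return selected
        selected.append(best[0])
```

Backward elimination is the mirror image: start from the full model and repeatedly drop the least significant term.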
12.49 Using computer output, with α = 0.05, x4 was removed first, and then x1. Neither x2 nor x3 was removed, and the final model is ŷ = 2.18332 + 0.95758x2 + 3.32533x3.
12.50 (a) ŷ = −29.59244 + 0.27872x1 + 0.06967x2 + 1.24195x3 − 0.39554x4 + 0.22365x5.

(b) The variables x3 and x5 were entered consecutively, and the final model is ŷ = −56.94371 + 1.63447x3 + 0.24859x5.
(c) A summary table is displayed next.

| Model | s² | PRESS | R² | Σ\|δi\| |
|-------|-----|-------|-----|---------|
| x2x5 | 176.735 | 2949.13 | 0.7816 | 151.681 |
| x1x5 | 174.398 | 3022.18 | 0.7845 | 166.223 |
| x1x3x5 | 174.600 | 3207.34 | 0.8058 | 174.819 |
| x3x5 | 194.369 | 3563.40 | 0.7598 | 189.881 |
| x3x4x5 | 192.006 | 3637.70 | 0.7865 | 190.564 |
| x2x4x5 | 196.211 | 3694.97 | 0.7818 | 170.847 |
| x2x3x5 | 186.096 | 3702.90 | 0.7931 | 184.285 |
| x3x4 | 249.165 | 3803.00 | 0.6921 | 192.172 |
| x1x2x5 | 184.446 | 3956.41 | 0.7949 | 189.107 |
| x5 | 269.355 | 3998.77 | 0.6339 | 189.373 |
| x3 | 257.352 | 4086.47 | 0.6502 | 199.520 |
| x2x3x4x5 | 197.782 | 4131.88 | 0.8045 | 192.000 |
| x1 | 274.853 | 4558.34 | 0.6264 | 202.533 |
| x2x3 | 264.670 | 4721.55 | 0.6730 | 210.853 |
| x1x3 | 226.777 | 4736.02 | 0.7198 | 219.630 |
| x1x3x4x5 | 188.333 | 4873.16 | 0.8138 | 207.542 |
| x2 | 328.434 | 4998.07 | 0.5536 | 217.814 |
| x4x5 | 289.633 | 5136.91 | 0.6421 | 209.232 |
| x1x2x3x5 | 195.344 | 5394.56 | 0.8069 | 216.934 |
| x2x3x4 | 269.800 | 5563.87 | 0.7000 | 234.565 |
| x1x2 | 297.294 | 5784.20 | 0.6327 | 231.374 |
| x1x4x5 | 192.822 | 5824.58 | 0.7856 | 216.583 |
| x1x3x4 | 240.828 | 6564.79 | 0.7322 | 248.123 |
| x2x4 | 352.781 | 6902.14 | 0.5641 | 248.621 |
| x1x2x4x5 | 207.477 | 7675.70 | 0.7949 | 249.604 |
| x1x2x3x4x5 | 214.602 | 7691.30 | 0.8144 | 257.732 |
| x1x4 | 287.794 | 7714.86 | 0.6444 | 249.221 |
| x1x2x3 | 249.038 | 7752.69 | 0.7231 | 264.324 |
| x4 | 613.411 | 8445.98 | 0.1663 | 259.968 |
| x1x2x3x4 | 266.542 | 10089.94 | 0.7365 | 297.640 |
| x1x2x4 | 317.783 | 10591.58 | 0.6466 | 294.044 |
178 Chapter 12 Multiple Linear Regression and Certain Nonlinear Regression Models
(d) It appears that the model with x2 = LLS and x5 = Power is the best in terms of PRESS, s², and Σ|δi|.
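The PRESS values tabulated above are sums of squared deleted (leave-one-out) residuals. They do not require n refits: with hat matrix H, the deleted residual is eᵢ/(1 − hᵢᵢ). A minimal numpy sketch under that identity (function name and data layout are illustrative):

```python
import numpy as np

def press_statistic(X, y):
    """PRESS = sum of squared deleted residuals, computed from a single
    fit via the identity e_(i) = e_i / (1 - h_ii)."""
    A = np.column_stack([np.ones(len(y)), X])   # design matrix with intercept
    H = A @ np.linalg.inv(A.T @ A) @ A.T        # hat matrix
    e = y - H @ y                               # ordinary residuals
    return float(np.sum((e / (1.0 - np.diag(H))) ** 2))
```

Because the identity is exact, this agrees with the brute-force leave-one-out computation to machine precision.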
12.51 (a) ŷ = −587.21085 + 428.43313x.

(b) ŷ = 1180.00032 − 192.69121x + 35.20945x².
(c) The summaries of the two models are given as:

| Model | s² | R² | PRESS |
|-------|-----|-----|-------|
| µY = β0 + β1x | 1,105,054 | 0.8378 | 18,811,057.08 |
| µY = β0 + β1x + β11x² | 430,712 | 0.9421 | 8,706,973.57 |

It appears that the model with a quadratic term is preferable.
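The quadratic fit in (b) is still ordinary least squares, just on the design columns 1, x, x². A numpy sketch that produces the coefficients, s², and R² entries for either model (names illustrative):

```python
import numpy as np

def fit_poly(x, y, degree):
    """Fit y on 1, x, ..., x^degree by least squares; return the
    coefficient vector, the residual mean square s^2, and R^2."""
    A = np.vander(x, degree + 1, increasing=True)   # columns 1, x, x^2, ...
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    sse = float(np.sum((y - A @ beta) ** 2))
    sst = float(np.sum((y - y.mean()) ** 2))
    return beta, sse / (len(y) - degree - 1), 1.0 - sse / sst
```

Comparing `fit_poly(x, y, 1)` against `fit_poly(x, y, 2)` reproduces the kind of s²/R² comparison in the table above.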
12.52 The parameter estimate for β4 is 0.22365 with a standard error of 0.13052. Hence t = 1.71 with P-value = 0.6117. Fail to reject H0.
12.53 σ̂²b1 = 20,588.038, σ̂²b11 = 62.650, and σ̂b1b11 = −1,103.423.
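The variances and covariance quoted in 12.53 are entries of the estimated variance-covariance matrix of the coefficients, s²(X′X)⁻¹. A numpy sketch of that computation (the function name is an assumption; A is the full design matrix including the intercept column):

```python
import numpy as np

def coef_covariance(A, y):
    """Estimated variance-covariance matrix of the least-squares
    coefficients: s^2 (A'A)^{-1}, with s^2 = SSE / (n - p)."""
    n, p = A.shape
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    s2 = float(resid @ resid) / (n - p)
    return s2 * np.linalg.inv(A.T @ A)
```

The diagonal entries are the σ̂²bj values; the off-diagonal entries are the covariances such as σ̂b1b11.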
12.54 (a) The following is a summary of the models.

| Model | s² | R² | PRESS | Cp |
|-------|-----|-----|-------|-----|
| x2x3 | 8094.15 | 0.51235 | 282194.34 | 2.0337 |
| x2 | 8240.05 | 0.48702 | 282275.98 | 1.5422 |
| x1x2 | 8392.51 | 0.49438 | 289650.65 | 3.1039 |
| x1x2x3 | 8363.55 | 0.51292 | 294620.94 | 4.0000 |
| x3 | 8584.27 | 0.46559 | 297242.74 | 2.8181 |
| x1 | 8727.47 | 0.45667 | 304663.57 | 3.3489 |
| x1x3 | 8632.45 | 0.47992 | 306820.37 | 3.9645 |
(b) The model with ln(x2) appears to have the smallest Cp together with a small PRESS. Also, the model with ln(x2) and ln(x3) has the smallest PRESS. Both models appear to be better than the full model.
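The Cp column above follows Mallows' criterion, Cp = SSE_p/s²_full + 2p − n, where p counts the submodel's terms including the intercept; note that the full model always scores Cp = p exactly (here 4.0000 for x1x2x3). A numpy sketch (the function name and column-index interface are assumptions):

```python
import numpy as np

def mallows_cp(X_full, y, cols):
    """Mallows' Cp for the submodel using columns `cols` of X_full:
    Cp = SSE_p / s2_full + 2p - n, with s2_full the MSE of the full
    model and p the submodel's parameter count (intercept included)."""
    n = len(y)
    def sse(M):
        A = np.column_stack([np.ones(n), M])
        b, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ b
        return float(r @ r), A.shape[1]
    sse_full, p_full = sse(X_full)
    s2_full = sse_full / (n - p_full)
    sse_p, p = sse(X_full[:, cols])
    return sse_p / s2_full + 2 * p - n
```

Submodels with Cp near (or below) p and far below the full model's value are the ones flagged as good in these solutions.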
12.55 (a) There are many models here, so the model summary is not displayed. By the MSE criterion, the best model contains variables x1 and x3, with s² = 313.491. If the PRESS criterion is used, the best model contains only the constant term, with s² = 317.51. When the Cp method is used, the best model is also the one with the constant term alone.

(b) The normal probability plot for the model using the intercept only is shown next. The residuals do not appear to be normal.
[Normal Q-Q plot: Theoretical Quantiles vs. Sample Quantiles]
12.56 (a) V̂olt = −1.64129 + 0.000556 Speed − 67.39589 Extension.

(b) The P-values for the t-tests of the coefficients are all < 0.0001.

(c) R² = 0.9607, and the model appears to have a good fit. The residual plot and a normal probability plot are given here.
[Residual plot: ŷ vs. Residual]

[Normal Q-Q plot: Theoretical Quantiles vs. Sample Quantiles]
12.57 (a) ŷ = 3.13682 + 0.64443x1 − 0.01042x2 + 0.50465x3 − 0.11967x4 − 2.46177x5 + 1.50441x6.

(b) The final model using stepwise regression is ŷ = 4.65631 + 0.51133x3 − 0.12418x4.

(c) Using the Cp criterion (the smaller, the better), the best model is still the model stated in (b), with s² = 0.73173 and R² = 0.64758. Using the s² criterion, the model with x1, x3, and x4 has the smallest value, 0.72507, with R² = 0.67262. These two models are quite competitive. However, the model with two variables has one fewer variable and thus may be more appealing.

(d) Using the model in part (b), the Studentized residual plot is displayed next. Note that observations 2 and 14 are beyond the 2-standard-deviation lines; both of those observations may need to be checked.
[Studentized residual plot: ŷ vs. Studentized Residual, observations labeled 1 to 19]
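The values plotted above are internally studentized residuals, rᵢ = eᵢ/(s√(1 − hᵢᵢ)); observations with |rᵢ| > 2 are the ones flagged for a second look. A numpy sketch (the function name is an assumption; A is the design matrix including the intercept column):

```python
import numpy as np

def studentized_residuals(A, y):
    """Internally studentized residuals r_i = e_i / (s * sqrt(1 - h_ii));
    values beyond about +/-2 flag observations worth checking."""
    n, p = A.shape
    H = A @ np.linalg.inv(A.T @ A) @ A.T   # hat matrix; diag gives leverages
    e = y - H @ y
    s = np.sqrt(float(e @ e) / (n - p))
    return e / (s * np.sqrt(1.0 - np.diag(H)))
```

Dividing by √(1 − hᵢᵢ) puts every residual on a common scale, so high-leverage points are not unfairly shielded by their small raw residuals.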
12.58 The partial F-test shows a value of 0.82, with 2 and 12 degrees of freedom. Consequently, the P-value = 0.4622, which indicates that variables x1 and x6 can be excluded from the model.
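The partial F statistic in 12.58 compares the full model's SSE against the reduced model obtained by dropping the terms under test: F = ((SSE_red − SSE_full)/r)/MSE_full, with r and n − p_full degrees of freedom (here r = 2 dropped terms). A numpy sketch (names are assumptions):

```python
import numpy as np

def partial_f(A_full, A_reduced, y):
    """Partial F-test for the terms in A_full absent from A_reduced:
    F = ((SSE_red - SSE_full)/r) / MSE_full, with degrees of freedom
    r and n - p_full."""
    def sse(A):
        b, *_ = np.linalg.lstsq(A, y, rcond=None)
        e = y - A @ b
        return float(e @ e)
    n = len(y)
    sse_f, sse_r = sse(A_full), sse(A_reduced)
    r = A_full.shape[1] - A_reduced.shape[1]
    df2 = n - A_full.shape[1]
    return (sse_r - sse_f) / r / (sse_f / df2), r, df2
```

A small F (as in 12.58) means the dropped terms barely reduce the error sum of squares, so they can be excluded.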
12.59 (a) ŷ = 125.86555 + 7.75864x1 + 0.09430x2 − 0.00919x1x2.

(b) The following is a summary of the models.

| Model | s² | R² | PRESS | Cp |
|-------|-----|-----|-------|-----|
| x2 | 680.00 | 0.80726 | 7624.66 | 2.8460 |
| x1 | 967.91 | 0.72565 | 12310.33 | 4.8978 |
| x1x2 | 650.14 | 0.86179 | 12696.66 | 3.4749 |
| x1x2x3 | 561.28 | 0.92045 | 15556.11 | 4.0000 |

It appears that the model with x2 alone is the best.
12.60 (a) The fitted model is ŷ = 85.75037 − 15.93334x1 + 2.42280x2 + 1.82754x3 + 3.07379x4.

(b) A summary of the models is given next.
| Model | s² | PRESS | R² | Cp |
|-------|-----|-------|-----|-----|
| x1x2x4 | 9148.76 | 447,884.34 | 0.9603 | 3.308 |
| x4 | 19170.97 | 453,304.54 | 0.8890 | 8.831 |
| x3x4 | 21745.08 | 474,992.22 | 0.8899 | 10.719 |
| x1x2x3x4 | 10341.20 | 482,210.53 | 0.9626 | 5.000 |
| x1x4 | 10578.94 | 488,928.91 | 0.9464 | 3.161 |
| x2x4 | 21630.42 | 512,749.78 | 0.8905 | 10.642 |
| x2x3x4 | 25160.18 | 532,065.42 | 0.8908 | 12.598 |
| x1x3x4 | 12341.87 | 614,553.42 | 0.9464 | 5.161 |
| x2 | 160756.81 | 1,658,507.38 | 0.0695 | 118.362 |
| x3 | 171264.68 | 1,888,447.43 | 0.0087 | 126.491 |
| x2x3 | 183701.86 | 1,896,221.30 | 0.0696 | 120.349 |
| x1 | 95574.16 | 2,213,985.42 | 0.4468 | 67.937 |
| x1x3 | 107287.63 | 2,261,725.49 | 0.4566 | 68.623 |
| x1x2 | 109137.20 | 2,456,103.03 | 0.4473 | 69.875 |
| x1x2x3 | 125126.59 | 2,744,659.14 | 0.4568 | 70.599 |
When using PRESS as well as the s² criterion, the model with x1, x2, and x4 appears to be the best, while when using the Cp criterion, the model with x1 and x4 is the best. For the model with x1, x2, and x4, the P-value for testing β2 = 0 is 0.1980, which implies that perhaps x2 can be excluded from the model.

(c) The model selected in part (b) has a smaller Cp as well as a competitive PRESS in comparison to the full model.
12.61 Since H = X(X′X)⁻¹X′ and Σᵢ hii = tr(H), we have

Σᵢ hii = tr(X(X′X)⁻¹X′) = tr(X′X(X′X)⁻¹) = tr(Ip) = p,

where
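The identity in 12.61 is easy to check numerically: for any full-rank design matrix, the leverages hii sum to the number of parameters. A sketch with an arbitrary 30×5 design (the data are illustrative):

```python
import numpy as np

# Numerical check of tr(H) = p: here p = 5 (intercept plus four regressors),
# so the diagonal of the hat matrix must sum to 5.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(30), rng.normal(size=(30, 4))])
H = X @ np.linalg.inv(X.T @ X) @ X.T
print(round(float(np.trace(H)), 6))   # prints 5.0
```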