Econ 210 - Midterm Winter 2004 Answer Key

Question 2

a) Since $f_{X,Y}(x,y)$ is nonnegative, it suffices to check the condition
$$\int_1^{+\infty}\!\!\int_0^2 f_{X,Y}(x,y)\,dx\,dy = 1.$$
Note that
$$\int_0^2 f_{X,Y}(x,y)\,dx = \int_0^2 \tfrac{1}{2}\delta x y^{-\delta-1}\,dx = \tfrac{1}{2}\delta y^{-\delta-1}\int_0^2 x\,dx = \tfrac{1}{2}x^2\Big|_0^2 \cdot \tfrac{1}{2}\delta y^{-\delta-1} = \delta y^{-\delta-1}$$
and, hence,
$$\int_1^{+\infty}\!\!\int_0^2 f_{X,Y}(x,y)\,dx\,dy = \int_1^{+\infty}\delta y^{-\delta-1}\,dy = -y^{-\delta}\Big|_1^{+\infty} = 1.$$

b) To compute the marginal density of $Y$, we integrate $f_{X,Y}(x,y)$ with respect to $x$ over the domain of $X$, that is,
$$f_Y(y) = \int_0^2 f_{X,Y}(x,y)\,dx = \delta y^{-\delta-1}.$$

c) The variance of $Y$ is given by
$$V(Y) = E\left(Y^2\right) - \left[E(Y)\right]^2.$$
Let us calculate the first moment of $Y$:
$$E(Y) = \int_1^{+\infty} y f_Y(y)\,dy = \int_1^{+\infty} y\,\delta y^{-\delta-1}\,dy = \int_1^{+\infty}\delta y^{-\delta}\,dy = -\frac{\delta}{\delta-1}\,y^{-\delta+1}\Big|_1^{+\infty} = 0 + \frac{\delta}{\delta-1} = \frac{\delta}{\delta-1}.$$
Turning now to the second moment, we have:
$$E\left(Y^2\right) = \int_1^{+\infty} y^2 f_Y(y)\,dy = \int_1^{+\infty} y^2\,\delta y^{-\delta-1}\,dy = \int_1^{+\infty}\delta y^{-\delta+1}\,dy = -\frac{\delta}{\delta-2}\,y^{-\delta+2}\Big|_1^{+\infty} = +\infty,$$
since for $\delta < 2$ we have $y^{-\delta+2}\to+\infty$ as $y\to+\infty$ and $-\delta/(\delta-2) > 0$. The second moment is infinite for $\delta < 2$! Thus, the variance does not exist in this case.

d) Let us recall the formula for the conditional expectation:
$$E(X|Y) = \int_0^2 x f_{X|Y}(x|y)\,dx.$$
The first step is to compute $f_{X|Y}(x|y)$, which is given by
$$f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \frac{\tfrac{1}{2}\delta x y^{-\delta-1}}{\delta y^{-\delta-1}} = \frac{x}{2}.$$
Then it is straightforward to show that
$$E(X|Y) = \int_0^2 \frac{x^2}{2}\,dx = \frac{x^3}{3\cdot 2}\Big|_0^2 = \frac{4}{3}.$$

e) The likelihood function of the sample is the joint density of the $N$ data points, taken as a function of $\delta$, that is,
$$L(y_1,\dots,y_N) = f(y_1,\dots,y_N) = \prod_{i=1}^N f(y_i) = \prod_{i=1}^N \delta y_i^{-\delta-1},$$
where in the second equality we made use of the independence assumption to write the joint density as the product of the marginal densities.
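The calculations in (a)-(d) can be sanity-checked numerically. The sketch below is illustrative only: it fixes $\delta = 3$ (an assumption; the exam leaves $\delta$ general) and uses inverse-CDF sampling from the marginal $f_Y(y) = \delta y^{-\delta-1}$, a Pareto density, and from the conditional density $f_{X|Y}(x|y) = x/2$.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 3.0      # illustrative value (an assumption; the exam leaves delta general)
N = 1_000_000

# (b) The marginal of Y is f_Y(y) = delta * y**(-delta-1) on [1, inf),
# a Pareto density with CDF F_Y(y) = 1 - y**(-delta).
# Inverse-CDF sampling: Y = (1 - U)**(-1/delta) with U ~ Uniform(0, 1).
y = (1.0 - rng.uniform(size=N)) ** (-1.0 / delta)
assert abs(y.mean() - delta / (delta - 1.0)) < 0.01    # (c) E(Y) = delta/(delta-1)
assert abs((y > 2.0).mean() - 2.0 ** (-delta)) < 0.01  # tail matches the CDF

# (d) X | Y has density x/2 on [0, 2], free of y, so
# F_{X|Y}(x) = x**2/4 and inverse-CDF sampling gives X = 2*sqrt(U).
x = 2.0 * np.sqrt(rng.uniform(size=N))
assert abs(x.mean() - 4.0 / 3.0) < 0.01                # E(X|Y) = 4/3
```

With a million draws the simulated means agree with $\delta/(\delta-1) = 3/2$ and $4/3$ to well within the stated tolerances.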
Finally, taking logs on both sides of the expression above, we can express the log-likelihood function as:
$$\ln L(y_1,\dots,y_N) = \ln\prod_{i=1}^N \delta y_i^{-\delta-1} = \sum_{i=1}^N \ln\left(\delta y_i^{-\delta-1}\right) = N\ln\delta - (\delta+1)\sum_{i=1}^N \ln y_i.$$
To obtain the maximum likelihood estimator $\hat\delta_{MLE}$ of $\delta$, we differentiate the log-likelihood function with respect to $\delta$ and equate the result to 0; that is, $\hat\delta_{MLE}$ solves
$$\frac{\partial\ln L}{\partial\delta} = 0 \Leftrightarrow \frac{N}{\delta} - \sum_{i=1}^N \ln y_i = 0 \Rightarrow \hat\delta_{MLE} = \frac{N}{\sum_{i=1}^N \ln y_i}.$$

f) The maximum likelihood estimator $\hat\delta_{MLE}$ is distributed asymptotically as
$$\sqrt{N}\left(\hat\delta_{MLE} - \delta\right) \to N\!\left(0,\; -E\!\left(\frac{\partial^2\ln f}{\partial\delta^2}\right)^{-1}\right).$$
Differentiating $\partial\ln L/\partial\delta$ again with respect to $\delta$, we get
$$\frac{\partial^2\ln L}{\partial\delta^2} = -\frac{N}{\delta^2} \Rightarrow -E\!\left(\frac{\partial^2\ln L}{\partial\delta^2}\right) = \frac{N}{\delta^2}.$$
Since $E\left(\partial^2\ln L/\partial\delta^2\right) = N\,E\left(\partial^2\ln f/\partial\delta^2\right)$, it follows that
$$-E\!\left(\frac{\partial^2\ln f}{\partial\delta^2}\right) = \frac{1}{\delta^2}$$
and, hence, that
$$\sqrt{N}\left(\hat\delta_{MLE} - \delta\right) \to N\!\left(0, \delta^2\right).$$

Question 3

Suppose that the relation between $y_i$ and $x_i$ is of the form:
$$y_i^{\text{Non-standard}} = \tilde\alpha + \tilde\beta x_i + \gamma^2 x_i^2 + \theta_i \qquad \text{(non-standard model)}$$
We suppose in all questions that the $x_i$ are non-stochastic. We also assume that $\theta_i \sim N(0,\sigma^2)$, so that in particular $E[\theta_i] = 0$. However, an econometrician runs the regression of $y_i$ on $x_i$ with an intercept. He thinks that the true model is the standard linear model; that is, he thinks that $y_i = y_i^{\text{Standard}} = \alpha + \beta x_i + \varepsilon_i$, with $E[\varepsilon_i] = 0$. He computes the OLS estimators $\hat\alpha$, $\hat\beta$.

a) Give the formulas for $\hat\alpha$, $\hat\beta$, the OLS estimators of $\alpha$ and $\beta$ in the standard linear model.
$$\hat\beta = \frac{\sum (y_i - \bar y)(x_i - \bar x)}{\sum (x_i - \bar x)^2} \qquad \text{and} \qquad \hat\alpha = \bar y - \hat\beta\bar x$$
If the standard linear model is the true one, are the OLS estimators $\hat\alpha$, $\hat\beta$ unbiased estimators of $\alpha$ and $\beta$ respectively? No proof is needed; just answer yes or no and state what it means for $\hat\alpha$, $\hat\beta$.
Yes, they are unbiased: $E[\hat\beta] = \beta$ and $E[\hat\alpha] = \alpha$.

b) What is the estimated (forecasted) value $\hat y_i^h$ this econometrician would give, using his OLS estimators, if given only $x_i$ for an individual $i$? Just use $\hat\alpha$ and $\hat\beta$; you don't need to plug in the formulas you found in a).
$$\hat y_i^h = \hat\alpha + \hat\beta x_i$$

c) If the econometrician's model (the standard linear model) were true ($y_i = y_i^{\text{Standard}}$), what would be the error $y_i - \hat y_i^h$ in terms of $x_i, \alpha, \beta, \varepsilon_i, \hat\alpha$ and $\hat\beta$? What would be its expectation?
$$y_i^{\text{Standard}} - \hat y_i^h = \alpha + \beta x_i + \varepsilon_i - \hat\alpha - \hat\beta x_i$$
$$E[y_i^{\text{Standard}} - \hat y_i^h] = \alpha + \beta x_i + 0 - E[\hat\alpha] - E[\hat\beta]x_i = 0$$

d) Find the true systematic error $y_i - \hat y_i^h$, which is the difference between the true model of $y_i$ (using the non-standard model $y_i = y_i^{\text{Non-standard}}$) and what the econometrician forecast given $x_i$ (your result in question b), $\hat y_i^h$), in terms of $x_i, \tilde\alpha, \tilde\beta, \gamma, \theta_i, \hat\alpha$ and $\hat\beta$.
$$y_i^{\text{Non-standard}} - \hat y_i^h = \tilde\alpha + \tilde\beta x_i + \gamma^2 x_i^2 + \theta_i - \hat\alpha - \hat\beta x_i$$

e) What is the expectation of this error in terms of $x_i, \tilde\alpha, \tilde\beta, \gamma, E[\hat\alpha]$ and $E[\hat\beta]$? (You don't need to compute $E[\hat\alpha]$ and $E[\hat\beta]$.)
$$E[y_i^{\text{Non-standard}} - \hat y_i^h] = \tilde\alpha + \tilde\beta x_i + \gamma^2 x_i^2 + 0 - E[\hat\alpha] - E[\hat\beta]x_i$$

f) Suppose $\tilde\alpha = E[\hat\alpha]$, $\tilde\beta = E[\hat\beta]$, $0 < \gamma$ and $0 < x_i$. What is the sign of $E[y_i - \hat y_i^h]$? Compare your result to c).
$$E[y_i^{\text{Non-standard}} - \hat y_i^h] = \tilde\alpha + \tilde\beta x_i + \gamma^2 x_i^2 + 0 - E[\hat\alpha] - E[\hat\beta]x_i = \gamma^2 x_i^2 > E[y_i^{\text{Standard}} - \hat y_i^h] = 0$$

Question 4

a) The OLS estimator of $\beta_1$ solves
$$\min_{\beta_1}\sum_{i=1}^n (Y_i - \beta_1)^2.$$
Differentiating with respect to $\beta_1$ and equating the result to zero, we get:
$$-2\sum_{i=1}^n (Y_i - \beta_1) = 0 \Rightarrow \hat\beta_1^{ols} = \frac{\sum_{i=1}^n Y_i}{n} = \bar Y.$$

b) There was a typo in this question. You were supposed to show that
$$V\!\left(\hat\beta_1^{ols}\right) = \frac{\sigma^2}{n^2}\sum_{i=1}^n x_i^2.$$
When I graded this question, I took the mistake into account. The expectation of $\hat\beta_1^{ols}$ is equal to
$$E\!\left(\hat\beta_1^{ols}\right) = E\!\left(\frac{\sum_{i=1}^n Y_i}{n}\right) = \frac{\sum_{i=1}^n E(Y_i)}{n} = \frac{\sum_{i=1}^n E(\beta_1 + u_i)}{n} = \beta_1 + \frac{\sum_{i=1}^n E(u_i)}{n} = \beta_1,$$
where in the third equality we used the definition of $Y_i$ and in the last equality we used the assumption that $E(u_i) = 0$. Thus, the variance of $\hat\beta_1^{ols}$ is given by
$$V\!\left(\hat\beta_1^{ols}\right) = E\!\left(\hat\beta_1^{ols} - E\!\left(\hat\beta_1^{ols}\right)\right)^2 = E\left(\bar Y - \beta_1\right)^2 = E\left(\beta_1 + \bar u - \beta_1\right)^2 = E\!\left(\frac{\sum_{i=1}^n u_i}{n}\right)^2 = \frac{\sum_{i=1}^n E\left(u_i^2\right)}{n^2} = \frac{\sigma^2}{n^2}\sum_{i=1}^n x_i^2,$$
where in the third equality we used the definition of $\bar Y$, in the fifth the fact that $E(u_i u_j) = 0$ for $i \neq j$, and in the last the assumption that $E\left(u_i^2\right) = \sigma^2 x_i^2$.
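The unbiasedness and variance results in (a)-(b) can be checked by simulation. The sketch below uses illustrative values for $\beta_1$, $\sigma$, and the fixed regressors $x_i$ (all assumptions, not given in the exam) and verifies that the Monte Carlo mean and variance of $\hat\beta_1^{ols} = \bar Y$ match $\beta_1$ and $\frac{\sigma^2}{n^2}\sum x_i^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
beta1, sigma = 2.0, 1.5           # illustrative parameter values (assumptions)
n, reps = 20, 200_000
x = np.linspace(0.5, 3.0, n)      # fixed (non-stochastic) regressors, an assumption

# Y_i = beta_1 + u_i with heteroscedastic errors: V(u_i) = sigma^2 * x_i^2.
u = rng.normal(0.0, sigma * x, size=(reps, n))
Y = beta1 + u

beta_hat = Y.mean(axis=1)         # the OLS estimator from (a): the sample mean

assert abs(beta_hat.mean() - beta1) < 0.01               # E(beta_hat) = beta_1
theory_var = sigma**2 / n**2 * np.sum(x**2)
assert abs(beta_hat.var() / theory_var - 1.0) < 0.02     # V = sigma^2/n^2 * sum(x_i^2)
```

Across 200,000 replications the simulated variance agrees with the formula to within 2 percent, consistent with the derivation above.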
c) From the definition, we have:
$$E(u_i^*) = E\!\left(\frac{u_i}{x_i}\right) = \frac{1}{x_i}E(u_i) = \frac{1}{x_i}\cdot 0 = 0.$$
Using the fact that $E(u_i^*) = 0$, it follows that
$$V(u_i^*) = E\left(u_i^*\right)^2 = E\!\left(\frac{u_i}{x_i}\right)^2 = \frac{1}{x_i^2}E\left(u_i^2\right) = \frac{1}{x_i^2}\sigma^2 x_i^2 = \sigma^2.$$

d) The OLS estimator $\tilde\beta_1^{ols}$ solves
$$\min_{\beta_1}\sum_{i=1}^n (Y_i^* - \beta_1 x_i^*)^2.$$
Differentiating with respect to $\beta_1$ and equating the result to zero, we get:
$$-2\sum_{i=1}^n (Y_i^* - \beta_1 x_i^*)\,x_i^* = 0 \Rightarrow \tilde\beta_1^{ols} = \frac{\sum_{i=1}^n x_i^* Y_i^*}{\sum_{i=1}^n x_i^{*2}}.$$

e) Again we compute first the expectation of $\tilde\beta_1^{ols}$:
$$E\!\left(\tilde\beta_1^{ols}\right) = E\!\left(\frac{\sum_{i=1}^n x_i^* Y_i^*}{\sum_{i=1}^n x_i^{*2}}\right) = \frac{\sum_{i=1}^n x_i^* E(Y_i^*)}{\sum_{i=1}^n x_i^{*2}} = \frac{\sum_{i=1}^n x_i^* E(\beta_1 x_i^* + u_i^*)}{\sum_{i=1}^n x_i^{*2}} = \beta_1 + \frac{\sum_{i=1}^n x_i^* E(u_i^*)}{\sum_{i=1}^n x_i^{*2}} = \beta_1.$$
Then we can compute the variance using the facts that $\tilde\beta_1^{ols} = \beta_1 + \frac{\sum_{i=1}^n x_i^* u_i^*}{\sum_{i=1}^n x_i^{*2}}$, which follows from (d) and the definition of $Y_i^*$, and $E\left(\tilde\beta_1^{ols}\right) = \beta_1$, from
$$V\!\left(\tilde\beta_1^{ols}\right) = E\!\left(\tilde\beta_1^{ols} - E\!\left(\tilde\beta_1^{ols}\right)\right)^2 = E\!\left(\beta_1 + \frac{\sum_{i=1}^n x_i^* u_i^*}{\sum_{i=1}^n x_i^{*2}} - \beta_1\right)^2 = E\!\left(\frac{\sum_{i=1}^n x_i^* u_i^*}{\sum_{i=1}^n x_i^{*2}}\right)^2 = \frac{1}{\left(\sum_{i=1}^n x_i^{*2}\right)^2}\sum_{i=1}^n x_i^{*2}\,E\left(u_i^*\right)^2 = \frac{\sigma^2}{\sum_{i=1}^n x_i^{*2}} = \frac{\sigma^2}{\sum_{i=1}^n \frac{1}{x_i^2}},$$
where in the fourth equality we used the fact that $E\left(u_i^* u_j^*\right) = 0$ for $i \neq j$.

f) Using the hint, it follows that
$$\left(\sum_{i=1}^n \frac{1}{x_i^2}\right)\!\left(\sum_{i=1}^n x_i^2\right) > n^2 \;\Rightarrow\; \frac{\sum_{i=1}^n x_i^2}{n^2} > \frac{1}{\sum_{i=1}^n \frac{1}{x_i^2}} \;\Rightarrow\; \frac{\sigma^2}{n^2}\sum_{i=1}^n x_i^2 > \frac{\sigma^2}{\sum_{i=1}^n \frac{1}{x_i^2}}.$$

g) Since both estimators are unbiased and $V\left(\tilde\beta_1^{ols}\right) < V\left(\hat\beta_1^{ols}\right)$, we prefer the OLS estimator for the transformed model in question c. Note that in the transformed model, the errors have mean zero and are homoscedastic. Hence, from the Gauss-Markov theorem, we know that OLS in the transformed model is the best linear unbiased estimator.
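The comparison in (d)-(g) can be sketched numerically as well. With the same illustrative parameters as before (all assumptions), the simulation below computes both estimators on each replication and confirms that the transformed-model estimator is unbiased, attains the variance $\sigma^2/\sum(1/x_i^2)$ from (e), and beats the plain OLS estimator, as (f)-(g) predict.

```python
import numpy as np

rng = np.random.default_rng(2)
beta1, sigma = 2.0, 1.5           # illustrative parameter values (assumptions)
n, reps = 20, 200_000
x = np.linspace(0.5, 3.0, n)      # fixed (non-stochastic) regressors, an assumption

# Y_i = beta_1 + u_i with V(u_i) = sigma^2 * x_i^2 (heteroscedastic).
u = rng.normal(0.0, sigma * x, size=(reps, n))
Y = beta1 + u

# Estimator from (a): OLS on the original model (the sample mean).
beta_hat = Y.mean(axis=1)

# Estimator from (d): OLS after dividing through by x_i
# (Y* = Y/x, x* = 1/x), which makes the errors homoscedastic.
xs, Ys = 1.0 / x, Y / x
beta_tilde = (Ys * xs).sum(axis=1) / np.sum(xs**2)

assert abs(beta_tilde.mean() - beta1) < 0.01             # unbiased, as in (e)
v_tilde = sigma**2 / np.sum(1.0 / x**2)                  # V from (e)
assert abs(beta_tilde.var() / v_tilde - 1.0) < 0.02
assert beta_tilde.var() < beta_hat.var()                 # the inequality in (f)-(g)
```

This is exactly the Gauss-Markov logic of (g): reweighting by $1/x_i$ restores homoscedasticity, so OLS on the transformed model is BLUE and its simulated variance is strictly smaller.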