CHAPTER 5 Cost Models

Conditions

1. θ̂ is consistent (see previous section).
2. θ₀ lies in the interior of the parameter set Θ.
3. g(Y, θ) is continuously differentiable in some neighborhood N of θ₀ with probability one.
4. E[‖g(Yₜ, θ)‖²] < ∞.
5. E[sup_{θ∈N} ‖∇_θ g(Yₜ, θ)‖] < ∞.
6. The matrix G′WG is nonsingular.

Efficiency

So far we have said nothing about the choice of the matrix W, except that it must be positive semidefinite. In fact, any such matrix will produce a consistent and asymptotically normal GMM estimator; the only difference will be in the asymptotic variance of that estimator. It can be shown that taking

    W ∝ Ω⁻¹, where Ω = E[g(Yₜ, θ₀) g(Yₜ, θ₀)′],    (5.108)

will result in the most efficient estimator in the class of all asymptotically normal estimators. Efficiency in this case means that such an estimator will have the smallest possible asymptotic variance.

The properties are:

• Since F(a) = Pr(X ≤ a), convergence in distribution means that the probability of Xₙ being in a given range is approximately equal to the probability that the value of X is in that range, provided n is sufficiently large.
• In general, convergence in distribution does not imply that the sequence of the corresponding probability density functions will also converge. As an example, one may consider random variables with densities fₙ(x) = (1 − cos(2πnx)) 1_{x∈(0,1)}. These random variables converge in distribution to a uniform U(0, 1), whereas their densities do not converge at all (Romano and Siegel, 1985).
• The portmanteau lemma provides several equivalent definitions of convergence in distribution. Although these definitions are less intuitive, they are used to prove a number of statistical theorems. The lemma states that {Xₙ} converges in distribution to X if and only if any of the following statements are true:
  - E f(Xₙ) → E f(X) for all bounded, continuous functions f;
  - E f(Xₙ) → E f(X) for all bounded, Lipschitz functions f;
  - limsup E f(Xₙ) ≤ E f(X) for every upper semicontinuous function f bounded from above;
  - liminf E f(Xₙ) ≥ E f(X) for every lower semicontinuous function f bounded from below;
  - limsup Pr(Xₙ ∈ C) ≤ Pr(X ∈ C) for all closed sets C;
  - liminf Pr(Xₙ ∈ U) ≥ Pr(X ∈ U) for all open sets U;
  - lim Pr(Xₙ ∈ A) = Pr(X ∈ A) for all continuity sets A of the random variable X.
• The continuous mapping theorem states that for a continuous function g(·), if the sequence {Xₙ} converges in distribution to X, then {g(Xₙ)} converges in distribution to g(X).
• Lévy's continuity theorem states that the sequence {Xₙ} converges in distribution to X if and only if the sequence of the corresponding characteristic functions {φₙ} converges pointwise to the characteristic function φ of X, and φ(t) is continuous at t = 0.
• Convergence in distribution is metrizable by the Lévy–Prokhorov metric.
• A natural link to convergence in distribution is Skorokhod's representation theorem.
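The efficient-weighting result for GMM can be sketched numerically as the usual two-step procedure: minimize the GMM objective once with an identity weight matrix to obtain a consistent first-step estimate, then re-minimize with W set to the inverse of the sample moment covariance evaluated at that estimate. The sketch below is not from the chapter; the exponential moment conditions E[y] = θ and E[y²] = 2θ², and the pure-Python golden-section minimizer, are illustrative assumptions chosen so that θ is overidentified and the choice of W genuinely matters.

```python
import math
import random

random.seed(0)
theta_true = 2.0
# Exp(theta) sample: random.expovariate takes the rate 1/theta (mean = theta)
y = [random.expovariate(1.0 / theta_true) for _ in range(5000)]
T = len(y)
mean1 = sum(y) / T
mean2 = sum(yt * yt for yt in y) / T

# Two moment conditions for Exp(theta): E[y] = theta and E[y^2] = 2*theta^2
def g(yt, theta):
    return (yt - theta, yt * yt - 2.0 * theta * theta)

def objective(theta, W):
    # Q(theta) = gbar' W gbar, gbar the sample mean of the moment conditions
    a = mean1 - theta
    b = mean2 - 2.0 * theta * theta
    (w00, w01), (_, w11) = W  # W symmetric
    return a * a * w00 + 2.0 * a * b * w01 + b * b * w11

def argmin(W, lo=0.1, hi=10.0, iters=100):
    # golden-section search; Q is unimodal here, both moments cross zero near theta_true
    r = (math.sqrt(5.0) - 1.0) / 2.0
    for _ in range(iters):
        m1, m2 = hi - r * (hi - lo), lo + r * (hi - lo)
        if objective(m1, W) < objective(m2, W):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

# Step 1: any positive definite W, e.g. the identity, gives a consistent estimate
theta1 = argmin(((1.0, 0.0), (0.0, 1.0)))

# Step 2: W = Omega^{-1} with Omega = (1/T) sum_t g(y_t, theta1) g(y_t, theta1)'
o00 = o01 = o11 = 0.0
for yt in y:
    g0, g1 = g(yt, theta1)
    o00 += g0 * g0 / T
    o01 += g0 * g1 / T
    o11 += g1 * g1 / T
det = o00 * o11 - o01 * o01
W_eff = ((o11 / det, -o01 / det), (-o01 / det, o00 / det))
theta2 = argmin(W_eff)
print(theta1, theta2)  # both close to theta_true = 2.0
```

Both steps are consistent; the point of the second step is only that its asymptotic variance is the smallest attainable in this class.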
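The Romano and Siegel counterexample with densities fₙ(x) = 1 − cos(2πnx) can be checked numerically: the CDFs Fₙ(x) = x − sin(2πnx)/(2πn) approach the U(0, 1) CDF uniformly, while the densities stay a full unit away from the uniform density for every n. A minimal sketch (the grid resolution is an arbitrary choice):

```python
import math

def f_n(x, n):
    # density f_n(x) = 1 - cos(2*pi*n*x) on (0, 1)
    return 1.0 - math.cos(2.0 * math.pi * n * x)

def F_n(x, n):
    # its CDF: the integral from 0 to x, i.e. x - sin(2*pi*n*x) / (2*pi*n)
    return x - math.sin(2.0 * math.pi * n * x) / (2.0 * math.pi * n)

grid = [i / 1000.0 for i in range(1, 1000)]
for n in (1, 10, 100):
    cdf_gap = max(abs(F_n(x, n) - x) for x in grid)    # distance to the U(0,1) CDF
    pdf_gap = max(abs(f_n(x, n) - 1.0) for x in grid)  # distance to the U(0,1) density
    print(n, cdf_gap, pdf_gap)
# cdf_gap shrinks like 1 / (2*pi*n); pdf_gap stays at about 1 for every n
```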
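Lévy's continuity theorem is also the standard route to the central limit theorem. For iid U(−1, 1) variables (variance 1/3), the standardized sum Sₙ = (X₁ + … + Xₙ)/√(n/3) has characteristic function (sin(s)/s)ⁿ with s = t/√(n/3), and this converges pointwise to exp(−t²/2), the characteristic function of N(0, 1). A small check, assuming nothing beyond the closed-form characteristic function sin(t)/t of the uniform distribution:

```python
import math

def phi_uniform(t):
    # characteristic function of U(-1, 1): sin(t) / t, with phi(0) = 1
    return math.sin(t) / t if t != 0.0 else 1.0

def phi_standardized_sum(t, n):
    # S_n = (X_1 + ... + X_n) / sqrt(n / 3), scaled to unit variance since
    # Var[U(-1, 1)] = 1/3; independence turns the sum into a product of phis
    s = t / math.sqrt(n / 3.0)
    return phi_uniform(s) ** n

t = 1.0
target = math.exp(-t * t / 2.0)  # characteristic function of N(0, 1) at t
for n in (1, 10, 100, 1000):
    print(n, phi_standardized_sum(t, n), target)
```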