
BEST QUADRATIC UNBIASED ESTIMATION

interaction and nested models the MIVQUE estimators and the analysis of variance estimators are the same. Using equation (10) in Chapter 9 for the one-way classification model, we have that the vector of observations y has a multivariate normal distribution with mean $\mathbf{1}_N\mu$ and variance–covariance matrix

$$V = \sum_{i=1}^{a}{}^{+}\left(\sigma_e^2 I_{n_i} + \sigma_\alpha^2 J_{n_i}\right).$$

After simplifying the expressions (157)–(159), we write

$$k_i = \frac{n_i}{\sigma_e^2 + n_i\sigma_\alpha^2}, \qquad k = \frac{1}{\sum_{i=1}^{a} k_i}, \qquad \bar{y}_{i.} = \frac{\sum_{j=1}^{n_i} y_{ij}}{n_i}. \tag{161}$$

Then, the s's and the u's of (158) and (159) may be written

$$s_{11} = \sum_{i=1}^{a} k_i^2 - 2k\sum_{i=1}^{a} k_i^3 + k^2\left(\sum_{i=1}^{a} k_i^2\right)^2, \tag{162}$$

$$s_{12} = \sum_{i=1}^{a} \frac{k_i^2}{n_i} - 2k\sum_{i=1}^{a} \frac{k_i^3}{n_i} + k^2\sum_{i=1}^{a} k_i^2 \sum_{i=1}^{a} \frac{k_i^2}{n_i}, \tag{163}$$

$$s_{22} = \frac{N-a}{\sigma_e^4} + \sum_{i=1}^{a} \frac{k_i^2}{n_i^2} - 2k\sum_{i=1}^{a} \frac{k_i^3}{n_i^2} + k^2\left(\sum_{i=1}^{a} \frac{k_i^2}{n_i}\right)^2, \tag{164}$$

$$u_1 = \sum_{i=1}^{a} k_i^2\left[\bar{y}_{i.} - k\sum_{i=1}^{a} k_i\bar{y}_{i.}\right]^2, \tag{165}$$

and

$$u_2 = \frac{1}{\sigma_e^4}\,\mathrm{SSE} + \sum_{i=1}^{a} \frac{k_i^2}{n_i}\left[\bar{y}_{i.} - k\sum_{i=1}^{a} k_i\bar{y}_{i.}\right]^2. \tag{166}$$

From (160), under normality, the MIVQUE's of $\sigma_e^2$ and $\sigma_\alpha^2$ are then

$$\hat{\sigma}_e^2 = \frac{1}{|S|}\left[-s_{12}u_1 + s_{11}u_2\right] \tag{167}$$

and

$$\hat{\sigma}_\alpha^2 = \frac{1}{|S|}\left[s_{22}u_1 - s_{12}u_2\right], \tag{168}$$

where $|S| = s_{11}s_{22} - s_{12}^2$.
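Equations (161)–(168) can be transcribed almost directly into code. The sketch below is illustrative only (the function name and data layout are ours, not the book's); since the $k_i$ depend on the unknown variance components, the function takes working values for them as inputs:

```python
import numpy as np

def mivque_one_way(groups, s2e0, s2a0):
    """MIVQUE of (sigma_e^2, sigma_alpha^2) in the one-way random model,
    per equations (161)-(168), with the unknown components replaced by
    the working values s2e0 and s2a0."""
    n = np.array([len(g) for g in groups], dtype=float)
    ybar = np.array([np.mean(g) for g in groups])
    N, a = n.sum(), len(groups)

    k_i = n / (s2e0 + n * s2a0)                       # (161)
    k = 1.0 / k_i.sum()

    s11 = (k_i**2).sum() - 2*k*(k_i**3).sum() + k**2 * (k_i**2).sum()**2              # (162)
    s12 = (k_i**2/n).sum() - 2*k*(k_i**3/n).sum() + k**2*(k_i**2).sum()*(k_i**2/n).sum()  # (163)
    s22 = ((N - a) / s2e0**2 + (k_i**2/n**2).sum()
           - 2*k*(k_i**3/n**2).sum() + k**2 * (k_i**2/n).sum()**2)                    # (164)

    dev = ybar - k * (k_i * ybar).sum()               # bracketed deviations in (165)-(166)
    u1 = (k_i**2 * dev**2).sum()                      # (165)
    sse = sum(((np.asarray(g) - m)**2).sum() for g, m in zip(groups, ybar))
    u2 = sse / s2e0**2 + (k_i**2 / n * dev**2).sum()  # (166)

    detS = s11*s22 - s12**2
    return (-s12*u1 + s11*u2) / detS, (s22*u1 - s12*u2) / detS   # (167), (168)
```

For balanced data the result reduces to the analysis of variance estimators whatever working values are supplied (this is Exercise 12 at the end of the chapter).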

METHODS OF ESTIMATING VARIANCE COMPONENTS FROM UNBALANCED DATA

The variances and covariances of these MIVQUE's are

$$v(\hat{\sigma}_e^2) = \frac{2s_{11}}{|S|}, \tag{169}$$

$$v(\hat{\sigma}_\alpha^2) = \frac{2s_{22}}{|S|}, \tag{170}$$

and

$$\mathrm{cov}(\hat{\sigma}_e^2, \hat{\sigma}_\alpha^2) = \frac{-2s_{12}}{|S|}. \tag{171}$$

Unfortunately, the MIVQUE's are functions of the unknown variance components. Therefore, we must replace the unknown values of $\sigma_e^2$ and $\sigma_\alpha^2$ by some numbers $\sigma_{e0}^2$ and $\sigma_{\alpha 0}^2$ that are a priori estimates of them. Swallow and Searle (1978) give comparisons of the variances of the MIVQUE of $\sigma_\alpha^2$ for different a priori estimates and observe that in every case, the MIVQUE has a smaller variance than the analysis of variance method estimators. In a practical problem, one way to choose a priori estimates is to consider the results of a previous run of the experiment or process, if available, and use the analysis of variance estimates obtained from the past data.

Example 8 Numerical Comparison of Variances of Analysis of Variance Estimators and MIVQUE Estimators The data for this example are taken from Swallow and Searle (1978). Five groups of several consecutive bottles each were snatched from a moving production line that was filling the bottles with vegetable oil. The oil in each bottle was weighed. The data appear in the table below. A multiple (24-head) machine was being used in the filling. Different (unidentified) heads are represented in the five groups of bottles sampled. Thus, in part, variability among groups reflects variability among heads.

Net Weights (oz.) of Vegetable Oil Fills

  1      2      3      4      5
 15.70  15.69  15.75  15.68  15.65
 15.68  15.71  15.82  15.66  15.60
 15.64  15.75  15.59  15.60  15.71
 15.84

We have the analysis of variance table.

The SAS System
The GLM Procedure
Dependent Variable: weight

Source            DF   Sum of Squares   Mean Square   F Value   Pr > F
Model              4   0.05530708       0.01382677    6.46      0.0063
Error             11   0.02353667       0.00213970
Corrected Total   15   0.07884375

We also have the expected mean square of the model sum of squares.

The SAS System
The GLM Procedure

Source   Type III Expected Mean Square
group    Var(Error) + 3.0938 Var(group)

We have that

$$\hat{\sigma}_e^2 = 0.0021397, \qquad 3.0938\,\hat{\sigma}_\alpha^2 + 0.0021397 = 0.01382677, \qquad \hat{\sigma}_\alpha^2 = 0.00378921$$

are the analysis of variance estimates of the variance components. Suppose, from other data of the same type, we have analysis of variance estimates $\hat{\sigma}_\alpha^2 = 0.0028$ and $\hat{\sigma}_e^2 = 0.0025$. Using these estimates, Swallow and Searle find that the MIVQUE's are $\hat{\sigma}_\alpha^2 = 0.0021$ and $\hat{\sigma}_e^2 = 0.0032$. The MIVQUE's have approximate estimated variances and covariance

$$v(\hat{\sigma}_e^2) = 0.0000113, \qquad v(\hat{\sigma}_\alpha^2) = 0.0000684, \qquad \mathrm{cov}(\hat{\sigma}_e^2, \hat{\sigma}_\alpha^2) = -0.0000039.$$

The variances and covariance of the analysis of variance estimators are

$$v(\hat{\sigma}_e^2) = 0.0000080, \qquad v(\hat{\sigma}_\alpha^2) = 0.00001084, \qquad \mathrm{cov}(\hat{\sigma}_e^2, \hat{\sigma}_\alpha^2) = -0.0000026.$$

Swallow (1981) compares the variances of MIVQUE's with those of the analysis of variance estimators. He notes from numerical comparisons that when $\sigma_\alpha^2/\sigma_e^2 > 1$ and unless $\sigma_{\alpha 0}^2/\sigma_{e0}^2 \le \sigma_\alpha^2/\sigma_e^2$, where $\sigma_{\alpha 0}^2$ and $\sigma_{e0}^2$ are prior values of $\sigma_\alpha^2$ and $\sigma_e^2$:

(i) the MIVQUE's have variances near their lower bounds;
(ii) the MIVQUE of $\sigma_\alpha^2$ is more efficient than the analysis of variance estimator.

He also observes that:

(i) when $\sigma_\alpha^2/\sigma_e^2 < 1$, the MIVQUE's are more dependent on accurate specification of the ratio of the variance components $\sigma_{\alpha 0}^2/\sigma_{e0}^2$;
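The arithmetic behind the analysis of variance estimates can be replayed directly from the tabled mean squares; a minimal sketch, with the numbers copied from the SAS output above, which closely reproduces the printed estimates:

```python
# Analysis of variance estimates recovered from the GLM table:
# sigma_e^2 is the error mean square, and sigma_alpha^2 solves
# E(MS_group) = Var(Error) + 3.0938 Var(group).
mse = 0.00213970   # Error mean square
msg = 0.01382677   # Model (group) mean square
coef = 3.0938      # coefficient of Var(group) in the expected mean square

s2e_hat = mse
s2a_hat = (msg - mse) / coef
print(s2e_hat, round(s2a_hat, 7))
```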

(ii) the MIVQUE and the analysis of variance estimator of $\sigma_e^2$ have nearly equal variances unless $\sigma_{\alpha 0}^2/\sigma_{e0}^2 \le \sigma_\alpha^2/\sigma_e^2$, in which case the analysis of variance estimator has smaller variance.

In a Monte Carlo study, Swallow and Monahan (1984) compare the biases and mean square errors of the analysis of variance estimators, the MIVQUE, the restricted maximum likelihood estimators, and the maximum likelihood estimators of variance components in the one-way classification model. Their results indicate that:

(i) the analysis of variance estimators perform well when $\sigma_\alpha^2/\sigma_e^2 > 1$;
(ii) when $\sigma_\alpha^2/\sigma_e^2 < 0.5$, the maximum likelihood estimators are excellent;
(iii) the MIVQUE with the analysis of variance estimators as a priori values is adequate;
(iv) the MIVQUE with a priori values $\sigma_{\alpha 0}^2 = 0$ and $\sigma_{e0}^2 = 1$ performs poorly when $\sigma_\alpha^2/\sigma_e^2 > 1$.

In our discussion of the MIVQUE, we have focused on the one-way classification model. P. S. R. S. Rao and Heckler (1997) compare the variances and biases of the analysis of variance, restricted maximum likelihood, and MIVQUE estimators for a two-factor random-effects model with one factor nested.

11. SHRINKAGE ESTIMATION OF REGRESSION PARAMETERS AND VARIANCE COMPONENTS

We shall consider shrinkage estimators of regression parameters and of variance components. First, we explain what shrinkage estimators are and how they can be more efficient than maximum likelihood estimators. We then discuss the celebrated James–Stein estimator in the linear models context. Finally, we give examples of some improved estimators of the variance and of variance components.

a.
Shrinkage Estimators

Suppose we take one of the standard estimators of a vector of regression parameters or of the variance–covariance matrix and multiply it by a constant between zero and one, or by a matrix M for which I – M is positive definite. The estimator we obtain in this way is called a shrinkage estimator. Shrinkage estimators, although biased, usually have a smaller variance than the estimators they shrink. In addition, in comparison with the standard estimators that are multiplied by the shrinkage factor, they typically have a smaller mean square error over a range of the parameter values. One example of such a shrinkage estimator is the ridge regression estimator of Hoerl and Kennard (1970) that was mentioned in Section 3 of Chapter 3. Notice that, for k > 0, we can write the

ridge estimator as

$$\hat{\beta}(r) = (X'X + kI)^{-1}X'y = (X'X + kI)^{-1}(X'X)(X'X)^{-1}X'y = (X'X + kI)^{-1}(X'X)b = \left(I + k(X'X)^{-1}\right)^{-1}b, \tag{172}$$

where b is the least-square estimator. This is the product of the least-square estimator and a matrix whose difference from the identity matrix is positive definite.

b. The James–Stein Estimator

Consider the linear Bayes estimator (see Section 3 of Chapter 3) derived from a prior distribution where $\theta = 0$ and $F = \frac{\sigma^2}{k}(X'X)^{-1}$. The resulting estimator, as the reader may show in Exercise 15, is

$$\hat{\beta}(c) = \frac{1}{1+k}\,b = \left(1 - \frac{k}{1+k}\right)b. \tag{173}$$

The shrinkage estimator in (173) is the contraction estimator of Mayer and Willke (1973). Dempster (1973) and Zellner (1986) suggested the prior distribution that results in the linear Bayes estimator of (173). Assuming a normal population, suppose we substitute the unbiased estimator $(s-2)\hat{\sigma}^2/b'X'Xb$ for the fraction $k/(1+k)$. Then, in place of (173), we have

$$\hat{\beta}(JS) = \left(1 - \frac{(s-2)\hat{\sigma}^2}{b'X'Xb}\right)b, \tag{174}$$

the celebrated James–Stein estimator in the context of a linear model. Gruber (1998) studies the James–Stein estimator for different linear model setups.

Stein (1956) and later James and Stein (1961) showed that the usual maximum likelihood estimator of the mean of a multivariate normal distribution is inadmissible. An estimator is inadmissible if we can find a competing estimator whose mean square error is strictly less than its own for at least one point of the parameter space and less than or equal to it over the entire parameter space. An admissible estimator is one for which no such competitor exists. We can show that the mean square error of (174) is less than that of the least-square estimator (see, for example, Gruber (1998)). The technique used to obtain (174) from (173), namely, replacing a function of the prior parameters by an estimator based on the data, is known as empirical Bayes (see, for example, Efron and Morris (1973)).

c.
Stein's Estimator of the Variance

Stein (1964) showed that the minimum mean square error estimator of the variance of a normal population is inadmissible. The inadmissible estimator is

$$\hat{\sigma}^2 = \frac{1}{n+1}\sum_{i=1}^{n}(x_i - \bar{x})^2. \tag{175}$$
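The minimum mean square error property behind (175) is easy to check by simulation: among estimators of the form $c\sum_{i=1}^{n}(x_i - \bar{x})^2$, the divisor $n + 1$ gives the smallest mean square error under normality. A quick Monte Carlo sketch, with $\sigma^2 = 1$ and an arbitrary sample size and seed:

```python
import numpy as np

# Compare the MSE of sum((x_i - xbar)^2) / d for divisors d = n-1, n, n+1.
# For sigma^2 = 1, the theoretical MSE is (2(n-1) + (n-1-d)^2) / d^2,
# which is smallest at d = n + 1.
rng = np.random.default_rng(1)
n, reps = 10, 200_000
x = rng.normal(size=(reps, n))                                # sigma^2 = 1
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)   # sums of squared deviations
mse = {d: np.mean((ss / d - 1.0) ** 2) for d in (n - 1, n, n + 1)}
assert mse[n + 1] < mse[n] < mse[n - 1]                       # dividing by n+1 wins
```

The usual unbiased divisor $n - 1$ has the largest mean square error of the three.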

Stein shows that the estimator (175) is inadmissible by establishing that

$$\hat{\sigma}_1^2 = \min\left\{\frac{1}{n+1}\sum_{i=1}^{n}(x_i - \bar{x})^2,\;\; \frac{1}{n+2}\sum_{i=1}^{n}(x_i - \mu_0)^2\right\}, \tag{176}$$

for any fixed number $\mu_0$, has a mean square error strictly smaller than that of (175) for at least one point of the parameter space and less than or equal to that of (175) over the entire parameter space. Both the estimators in (174) and (176) are themselves inadmissible. We can construct an estimator with smaller mean square error than (174) by truncating the shrinkage factor to be zero for values where it would be negative. Brewster and Zidek (1974) and Brown (1968) also produce estimators with smaller mean square error than (175).

d. A Shrinkage Estimator of Variance Components

We shall present an estimator of the variance components for a mixed linear model that has a smaller mean square error than that of the analysis of variance estimator. The estimator is due to Kubokawa (1995), and we shall follow that paper in our presentation. We consider a general linear model

$$y = X\beta + Za + e, \tag{177}$$

where y is an n-vector of observations, X is an $n \times p_1$ known matrix with $\mathrm{rank}(X) = r$, and $\beta$ is a $p_1$-vector of parameters. In addition, Z is a given $n \times p_2$ matrix, and a and e are independent random $p_2$- and n-vectors, respectively, with $a \sim N_{p_2}(0, \sigma_A^2 I)$ and $e \sim N_n(0, \sigma_e^2 I_n)$. The random one-way analysis of variance model is a special case.

Consider an $(n-r) \times n$ matrix $P_1$ and an $r \times n$ matrix $P_2$ such that $P_1X = 0$, $P_1P_2' = 0$, $P_1P_1' = I_{n-r}$, and $P_2P_2' = I_r$. Such matrices exist by the singular value decomposition. Let $x_1 = P_1y$ and $x_2 = P_2y$. It follows that

$$x_1 \sim N_{n-r}\left(0,\; \sigma_A^2 P_1ZZ'P_1' + \sigma_e^2 I_{n-r}\right)$$

and

$$x_2 \sim N_r\left(P_2X\beta,\; \sigma_A^2 P_2ZZ'P_2' + \sigma_e^2 I_r\right).$$

Consider the spectral decompositions

$$P_1ZZ'P_1' = \sum_{i=1}^{l}\lambda_i E_{1i} \quad\text{and}\quad P_2ZZ'P_2' = \sum_{j=1}^{k-1}\xi_j E_{2j},$$

where $\mathrm{rank}(E_{1i}) = m_i$, $\sum_{i=1}^{l} m_i = \mathrm{rank}(P_1ZZ'P_1')$, $\mathrm{rank}(E_{2j}) = n_j$, and $\sum_{j=1}^{k-1} n_j = \mathrm{rank}(P_2ZZ'P_2')$. Assume that $\lambda_i > 0$ and $\xi_j > 0$ satisfy

$$0 < \lambda_1 < \cdots < \lambda_l \quad\text{and}\quad \xi_1 > \cdots > \xi_{k-1} > 0. \tag{178}$$

Assume that

$$m = n - r - \sum_{i=1}^{l} m_i > 0. \tag{179}$$

Let $E_{1,l+1} = I_{n-r} - \sum_{i=1}^{l} E_{1i}$ and $E_{2,k} = I_r - \sum_{j=1}^{k-1} E_{2j}$. We see that $\mathrm{rank}(E_{1,l+1}) = m > 0$ and $\mathrm{rank}(E_{2,k}) = n_k \ge 0$. Thus, we obtain the quadratic statistics

$$S = x_1'E_{1,l+1}x_1 \sim \sigma_e^2\chi_m^2, \qquad S_i = x_1'E_{1i}x_1 \sim (\sigma_e^2 + \lambda_i\sigma_A^2)\,\chi_{m_i}^2, \quad i = 1, \ldots, l, \tag{180a}$$

and

$$T_j = x_2'E_{2j}x_2 \sim (\sigma_e^2 + \xi_j\sigma_A^2)\,\chi_{n_j}^2\!\left(\frac{\beta'X'P_2'E_{2j}P_2X\beta}{\sigma_e^2 + \xi_j\sigma_A^2}\right), \quad j = 1, \ldots, k, \tag{180b}$$

the chi-square distributions in (180b) being noncentral. The analysis of variance estimator that one derives by Henderson's method 3 is

$$\hat{\sigma}_A^2 = \frac{\sum_{i=1}^{l} m_i}{\sum_{i=1}^{l}\lambda_i m_i}\left(\frac{\sum_{i=1}^{l} S_i}{\sum_{i=1}^{l} m_i} - \frac{S}{m}\right). \tag{181}$$

Kubokawa (1995) shows that an estimator with a smaller mean square error than that of (181) takes the form

$$\hat{\sigma}_{A0}^2 = \frac{\sum_{i=1}^{l} m_i}{\sum_{i=1}^{l}\lambda_i m_i}\left(\frac{\sum_{i=1}^{l} S_i}{\sum_{i=1}^{l} m_i + 2} - \frac{S}{m}\right) \tag{182}$$

and that an estimator with smaller mean square error than (182) takes the form

$$\hat{\sigma}_{A1}^2 = \max\left\{\hat{\sigma}_{A0}^2,\;\; \frac{2\sum_{i=1}^{l} m_i}{m\sum_{i=1}^{l}\lambda_i m_i\left(\sum_{i=1}^{l} m_i + 2\right)}\,S\right\}. \tag{183}$$

More discussion of shrinkage estimators for variance components is available in Kubokawa (1999), Cui et al. (2005), An (2007), and the references therein.

12. EXERCISES

1 In Example 8, use the analysis of variance estimates of $\sigma_\alpha^2$ and $\sigma_e^2$ as prior estimates to find the MIVQUE's. How do the variances of these estimates compare with those of the analysis of variance estimates?

2 In the r-way classification random model, having all possible interactions, show that var(t) has $2^{r-1}(2^r + 1)$ different elements.

3 For balanced data, show that:
(a) In equation (32), $\theta_1 = bn\left(\sum_{i=1}^{a}\alpha_i^2 - a\bar{\alpha}_.^2\right)$.
(b) For random $\alpha$, that is, $\alpha_i \sim N(0, \sigma_\alpha^2)$, $E(\theta_1) = (a-1)bn\sigma_\alpha^2$.

4 Establish the following for result (33):
(a) For random $\alpha$'s,
$$E(\theta_2) = \left(\sum_{j=1}^{b}\frac{\sum_{i=1}^{a} n_{ij}^2}{n_{.j}} - \frac{\sum_{i=1}^{a} n_{i.}^2}{N}\right)\sigma_\alpha^2.$$
(b) For balanced data, $\theta_2 = 0$.

5 Consider the sample variance
$$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2,$$
where $x_i \sim N(\mu, \sigma^2)$. Assume that the $x_i$'s are independent.

(a) After making the transformation $y_i = x_i - \mu$, show that
$$s^2 = \frac{1}{n-1}\,y'\left(I_n - \frac{1}{n}J_n\right)y.$$
(b) Using Corollaries 4.1 and 4.3 of Chapter 2, show that
$$E(s^2) = \sigma^2 \quad\text{and}\quad \mathrm{var}(s^2) = \frac{2\sigma^4}{n-1}.$$

6 Show that $\hat{v} = (I + A)^{-1}A\hat{\alpha}$ is an unbiased estimator of v (see equations (56)–(58)).

7 Show that the estimators $\hat{\sigma}_e^2$ and $\hat{\sigma}_\alpha^2$ given by (87)–(89) are the analysis of variance estimators for balanced data.

8 Find the variance of $\hat{\sigma}_\alpha^2$ that can be derived from (88) and (89).

9 Show that for balanced data, the estimator in (90) simplifies to the analysis of variance estimator.

10 (a) Check the term below (91) for finding the variance of $\hat{\sigma}_e^2$ given in (90).
(b) Find the variance of $\hat{\sigma}_e^2$ given in (90).
(c) What does this variance simplify to in the balanced case?

11 Consider the one-way random model
$$y_{ij} = \mu + \alpha_i + e_{ij}, \quad i = 1, \ldots, a, \quad j = 1, \ldots, n_i, \qquad \alpha_i \sim N(0, \sigma_\alpha^2), \quad e_{ij} \sim N(0, \sigma_e^2).$$
Define
$$T_0 = \sum_{i=1}^{a}\sum_{j=1}^{n_i} y_{ij}^2 \quad\text{and}\quad T_\mu = \frac{y_{..}^2}{N}.$$
Show from first principles that
(a) $E(T_0) = N(\mu^2 + \sigma_\alpha^2 + \sigma_e^2)$;
(b) $E(T_\mu) = N\mu^2 + \dfrac{\sigma_\alpha^2}{N}\sum_{i=1}^{a} n_i^2 + \sigma_e^2$.

12 Show that for a balanced one-way ANOVA model, the MIVQUE's of the variance components are those obtained by the ANOVA method.

13 In fitting $y = \mu\mathbf{1} + X_f b_f + X_1b_1 + X_2b_2 + e$:
(a) Show that $R(b_1|b_f)$ equals $R(b_1)_z$ when fitting $z = Wy = WX_1b_1 + We$, where $W = I - X_f(X_f'X_f)^{-1}X_f'$.
(b) Show that the reduction in the sum of squares due to fitting $z = WX_1b_1 + We$ is $R(b_1)_z$.

14 (a) Show that the generalized inverse in (133) is indeed a generalized inverse by direct computation.
(b) Use the result of (a) to establish (129).
(c) Derive the last equation at the end of Section 9.

15 (a) Show how to derive (173) as a linear Bayes estimator from the given prior assumptions.
(b) Find a range of values of the $\beta$ parameters for which (173) has a smaller mean square error than the least-square estimator.
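The moment claims in Exercise 5(b) can be previewed numerically; a minimal Monte Carlo sketch, where $\mu = 0$, $\sigma^2 = 1$, $n = 8$, and the seed are all arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 8, 400_000
x = rng.normal(loc=0.0, scale=1.0, size=(reps, n))   # mu = 0, sigma^2 = 1
s2 = x.var(axis=1, ddof=1)                           # sample variances s^2

# Exercise 5(b): E(s^2) = sigma^2 and var(s^2) = 2 sigma^4 / (n - 1)
print(round(s2.mean(), 3))   # should be near 1
print(round(s2.var(), 3))    # should be near 2/7
```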

REFERENCES Adke, S. R. (1986). One-way ANOVA for dependent observations. Commun. Stat. Theory Methods, 15, 1515–1528. Ahrens, H. (1965). Standardfehler geschatzter varianzkomponenten eines unbalanzie versuch- planes in r-stufiger hierarchischer klassifikation. Monatsb. Deut. Akad. Wiss. Berlin, 7, 89–94. Al Sarraj, R., and D. von Rosen. (2008). Improving Henderson’s method 3 approach when estimating variance components in a two-way mixed model. In: Statistical Inference Econo- metric Analysis and Matrix Algebra, B. Schipp and W. Kra¨er (Eds.), pp. 125–142. Physica- Verlag, Heidelberg. An, L. (2007). Shrinkage estimation of variance components with applications to microarray data. Ph.D. Thesis, University of Windsor. Anderson, R. D. (1978). Studies on the estimation of variance components. Ph.D. Thesis, Cornell University, Ithaca, NY. Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis, 3rd edition. John Wiley & Sons, New York. Anderson, R. L., and T. A. Bancroft. (1952). Statistical Theory in Research. McGraw-Hill, New York. Anderson, R. L., and P. P. Crump. (1967). Comparison of designs and estimation procedures for estimating parameters in a two stage nested process. Technometrics, 9, 499–516. Andrews, D. F., and A. M. Herzberg. (1985). Data: A Collection of Problems from Many Fields for the Student and Research Worker. Springer-Verlag, New York. Anscombe, F. J., and J. W. Tukey. (1963). The examination and analysis of residuals. Techno- metrics, 5, 141–160. Linear Models, Second Edition. Shayle R. Searle and Marvin H. J. Gruber. © 2017 John Wiley & Sons, Inc. Published 2017 by John Wiley & Sons, Inc. 633

634 REFERENCES Banerjee, K. S. (1964). A note on idempotent matrices. Ann. Math. Stat., 35, 880–882. Bartlett, M. S. (1937). Some examples of statistical methods in research in agriculture and applied biology. J. R. Stat. Soc. Suppl., 4, 137–183. Bartlett, M. S., and D. G. Kendall. (1946). The statistical analysis of variance heterogeneity and the logarithmic transformation. J. R. Stat. Soc. B, 8, 128–138. Ben-Israel, A., and T. N. E. Greville. (2003). Generalized Inverses Theory and Applications, 2nd edition. Springer-Verlag, New York. Bennett, C. A., and N. L. Franklin. (1954). Statistical Analysis in Chemistry and the Chemical Industry. John Wiley & Sons, New York. Blischke, W. R. (1966). Variances of estimates of variance components in a three-way classi- fication. Biometrics, 22, 553–565. Blischke, W. R. (1968). Variances of moment estimators of variance components in the unbal- anced r-way classification. Biometrics, 24, 527–540. Boullion, T. L., and P. L. Odell (Eds.). (1968). Theory and Application of Generalized Inverse Matrices. Mathematics Series No. 4, Texas Technical College, Lubbock, TX. Brewster, J. F., and J. V. Zidek. (1974). Improving on equivariant estimators. Ann. Stat., 2(1), 21–38. Broemeling, L. D. (1969). Confidence regions for variance ratios of random models. J. Am. Stat. Assoc., 64, 660–664. Brorsson, B., J. Ifver, and H. Rydgren. (1988). Injuries from single vehicle crashes and snow depth. Accid. Anal. Prev., 19, 367–377. Brown, K. G. (1977). On estimation of a diagonal covariance matrix by MINQUE. Commun. Stat. A, 6(5), 471–484. Brown, K. H. (1968). Social class and family investment. Ph.D. Thesis, Cornell University, Ithaca, NY. Brown, L. (1968). Inadmissibility of the usual estimators of the scale parameter in problems with unknown location and scale parameters. Ann. Math. Stat., 39, 29–48. Brown, M. B., and A. B. Forsyth. (1974a). Robust tests for the equality of variances. J. Am. Stat. Assoc., 69, 364–367. Brown, M. B., and A. B. 
Forsyth. (1974b). The small sample behavior of some statistics which test the equality of several means. Technometrics, 16(1), 129–132. Bulmer, M. G. (1957). Approximate confidence intervals for components of variance. Biometrika, 44, 159–167. Bulmer, M. G. (1980). The Mathematical Theory of Quantitative Genetics. Oxford University Press, Oxford. Bush, N., and R. L. Anderson. (1963). A comparison of three different procedures for estimating variance components. Technometrics, 5, 421–440. Cameron, E., and L. Pauling. (1976). Supplemental ascorbate in the supportive treatment of cancer: prolongation of survival times in human terminal cancer. Proc. Natl. Acad. Sci. U. S. A., 73, 3685–3689. Casella, G., and R. L Berger. (2002). Statistical Inference, 2nd edition. Duxbury, Pacific Grove, CA. Chaubey, Y. B. (1984). On the comparison of some non-negative estimators of variance components for two models. Commun. Stat. B Simul. Comp., 13, 619–633.

REFERENCES 635 Chen, C. J., K. Hiramatsu, T. Ooue, K. Takagi, and T. Yamamoto. (1991). Measurement of noise-evoked blood pressure by means of averaging method: relation between blood pressure rise and SPL. J. Sound Vib., 151, 383–394. Chipman, J. S. (1964). On least squares with insufficient observations. J. Am. Stat. Assoc., 69, 1078–1111. Cochran, W. G. (1934). The distribution of quadratic forms in a normal system with applications to the analysis of variance. Math. Proc. Camb. Philos. Soc., 30, 178–191. Cohen, A. (1991). Dummy variables in stepwise regression. Am. Stat., 45, 226–228. Conover, W. J. (1998). Practical Non-parametric Statistics, 3rd edition. John Wiley & Sons, New York. Conover, W. J., and R. L. Iman. (1981). Rank transformations as a bridge between parametric and nonparametric statistics. Am. Stat., 55, 124–129. Corbeil, R. R., and S. R. Searle. (1976). A comparison of variance component estimators. Biometrics, 32, 779–791. Cornfield, J., and J. W. Tukey. (1956). Average values of mean squares as factorials. Ann. Math. Stat., 27, 907–949. Courant, R. (1988). Differential and Integral Calculus, vol. 2. John Wiley & Sons, New York. Crump, S. L. (1947). The estimation of components of variance in multiple classifications. Ph.D. Thesis, Iowa State College Library, Ames, IA. Cui, X., J. T. G Hwang, J. Qui, N. Blades, and G. A. Churchill. (2005). Improved statistical tests for gene expression by shrinking variance components. Biostatistics, 6(1), 59–75. Cunningham, E. P. (1969). A note on the estimation of variance components by the method of fitting constants. Biometrika, 56, 683–684. Cunningham, E. P., and C. R. Henderson. (1968). An iterative procedure for estimating fixed effects and variance components in mixed model situations. Biometrics, 24, 13–25. Corri- genda: Biometrics, 25, 777–778 (1969). Daniel, W. W. (1990). Applied Nonparametric Statistics. PWS-KENT Publishing Company, Boston, MA. Dempster, A. P. (1973). 
Alternatives to least squares in multiple regression. In: Multivariate Statistical Inference, R. P. Kabe and D. G. Gupta (Eds.). North Holland, Amsterdam. Draper, N. R. (2002). Applied regression analysis bibliography update 2000–2001. Commun. Stat. Theory Methods, 31, 2051–2075. Draper, N. R., and H. Smith. (1998). Applied Regression Analysis, 3rd edition. John Wiley & Sons, New York. Dudewicz, E. J., and T. A. Bishop. (1981). Analysis of variance with unequal variances. J. Qual. Technol., 13(2), 111–114. Eccleston, J., and K. Russell. (1975). Connectedness and orthogonality in multi-factor designs. Biometrika, 62(2), 341–345. Efron, B., and C. Morris. (1973). Stein’s estimation rule and its competitors. J. Am. Stat. Assoc., 65, 117–130. Eisenhart, C. (1947). The assumptions underlying the analysis of variance. Biometrics, 20, 681–698. Elston, R. C., and N. Bush. (1964). The hypotheses that can be tested when there are interactions in an analysis of variance model. Biometrics, 20, 681–698.

636 REFERENCES Evans, D. A. (1969). Personal communication with S. R. Searle. Federer, W. T. (1955). Experimental Design. Macmillan, New York. Gaylor, D. W., and T. D. Hartwell. (1969). Expected mean squares for nested classifications. Biometrics, 25, 427–430. Gaylor, D. W., and F. N. Hopper. (1969). Estimating the degrees of freedom for linear combi- nations of mean squares by Satterthwaite’s formula. Technometrics, 11, 691–706. Gaylor, D. W., H. L., Lucas, and R. L. Anderson. (1970). Calculation of expected mean squares by the abbreviated Doolittle and square root methods. Biometrics, 26, 641–656. Giesbrecht, F. G. (1983). An efficient procedure for computing MINQUE of variance compo- nents and generalized least square estimators of fixed effects. Comm. Stat. A, 12, 2169– 2177. Godolphin, J. D. (2013). On the connectivity problem for m-way designs. J. Stat. Theory Pract., 7(4), 732–744. Goldman, A. J., and M. Zelen. (1964). Weak generalized inverses and minimum variance unbiased estimators. J. Res. Natl. Bur. Stand., 68B, 151–172. Golub, G. H., and C. F. Van Loan. (1996). Matrix Computations, 3rd edition. Johns Hopkins University Press, Baltimore, MD. Goslee, D. G., and H. L. Lucas. (1965). Analysis of variance of disproportionate data when interaction is present. Biometrics, 21, 115–133. Graybill, F. A. (1954). On quadratic estimates of variance components. Ann. Math. Stat., 25, 367–372. Graybill, F. A. (1961). An Introduction to Linear Statistical Models, vol. 1. McGraw-Hill, New York. Graybill, F. A. (1976). Theory and Application of the Linear Model. Wadsworth, Boston, MA. Graybill, F. A., and R. A. Hultquist. (1961). Theorems concerning Eisenhart’s Model II. Ann. Math. Stat., 32, 261–269. Graybill, F. A., C. D. Meyer, and R. J. Painter. (1966). Note on the computation of the generalized inverse of a matrix. SIAM Rev., 8, 522–524. Graybill, F. A., and G. Marsaglia. (1957). Idempotent matrices and quadratic forms in the general linear hypothesis. Ann. Math. 
Stat., 28, 678–686. Graybill, F. A., and A. W. Wortham. (1956). A note on uniformly best unbiased estimators for variance components. J. Am. Stat. Assoc., 51, 266–268. Greville, T. N. E. (1966). Note on the generalized inverse of a matrix product. SIAM Rev., 8, 518–521. Erratum: SIAM Rev., 9, 249 (1967). Grubbs, F. E. (1948). On estimating precision of measuring instruments and product variability. J. Am. Stat. Assoc., 43, 243–264. Gruber, M. H. J. (1998). Improving Efficiency by Shrinkage: The James-Stein and Ridge Regression Estimators. Marcel Dekker, New York. Gruber, M. H. J. (2010). Regression Estimators: A Comparative Study, 2nd edition. Johns Hopkins University Press, Baltimore, MD. Gruber, M. H. J. (2014). Matrix Algebra for Linear Models. John Wiley & Sons, Hoboken, NJ. Hartley, H. O. (1967). Expectation, variances and covariances of ANOVA mean squares by ‘synthesis’. Biometrics, 23, 105–114. Corrigenda: p. 853. Hartley, H. O., and J. N. K. Rao. (1967). Maximum likelihood estimation for the mixed analysis of variance model. Biometrika, 54, 93–108.

REFERENCES 637 Hartley, H. O., and S. R. Searle. (1969). On interaction variance components in mixed models. Biometrics, 25, 573–576. Harville, D. A. (1967). Estimability of variance components for the 2-way classification with interaction. Ann. Math. Stat., 38, 1508–1519. Harville, D. A. (1969a). Quadratic unbiased estimation of variance components for the one-way classification. Biometrika, 56, 313–326. Harville, D. A. (1969b). Variance component estimation for the unbalanced one-way ran- dom classification—a critique. Aerospace Research Laboratories, ARL-69-0180, Wright Patterson Air Force Base, Ohio. Harville, D. A. (1969c). Variances of the variance-component estimators for the unbalanced 2-way cross-classification with application to balanced incomplete block designs. Ann. Math. Stat., 40, 408–416. Harville, D. A. (1977). Maximum-likelihood approaches to variance component estimation and to related problems. J. Am. Stat. Assoc., 72, 320–340. Harville, D. A. (2008). Matrix Algebra from a Statistician’s Perspective. Springer, New York. Henderson, C. R. (1953). Estimation of variance and covariance components. Biometrics, 9, 226–252. Henderson, C. R. (1959, 1969). Design and analysis of animal husbandry experiments. In: Techniques and Procedures in Animal Science Research (1st edition in 1959; 2nd edition in 1969), American Society of Animal Science. Henderson, C. R. (1963). Selection index and expected genetic advance. In: Statistical Genetics in Plant Breeding. National Academy of Sciences, National Research Council Publication No. 982. Henderson, C. R. (1968). Personal communication with S. R. Searle. Henderson, C. R. (1975). Best linear unbiased estimation and prediction under a selection model. Biometrics, 31, 423–448. Henderson, C. R., O. Kempthorne, S. R. Searle, and C. N. Von Krosigk. (1959). Estima- tion of environmental and genetic trends from records subject to culling. Biometrics, 15, 192–218. Herbach, L. H. (1959). 
Properties of type II analysis of variance tests. Ann. Math. Stat., 30, 939–959. Hill, B. M. (1965). Inference about variance components in the one-way model. J. Am. Stat. Assoc., 60, 806–825. Hill, B. M. (1967). Correlated errors in the random model. J. Am. Stat. Assoc., 62, 1387–1400. Hirotsu, C. (1966). Estimating variance components in a two-way layout with unequal numbers of observations. Rept. Stat. Appl. Res., Union of Japanese Scientists and Engineers, 13(2), 29–34. Hochberg, Y., and A. C. Tamhane. (1987). Multiple Comparison Procedures. John Wiley & Sons. Hocking, R. R., and F. M. Speed. (1975). A full rank analysis of some linear model problems. J. Am. Stat. Assoc., 70, 706–712. Hoerl, A. E., and R. W. Kennard. (1970). Ridge regression: biased estimation for non- orthogonal problems. Technometrics, 12, 55–67. Hogg, R. V., J. W. Mc Kean, and A. T Craig. (2014). Introduction to Mathematical Statistics. Pearson Prentice Hall, Upper Saddle River, NJ.

638 REFERENCES Houf, R. E., and D. B. Berman. (1988). Statistical analysis of power module thermal test equipment. IEEE Trans. Compon. Hybrids Manuf. Technol., 11(4), 516–520. Hsu, J. C. (1996). Multiple Comparisons: Theory and Methods. Chapman & Hall/CRC, Boca Raton, FL. James, W., and C. Stein. (1961). Estimation with quadratic loss. In: Proceedings of the Fourth Berkeley Symposium on Mathematics Statistics and Probability, pp. 361–379. University of California Press, Berkeley, CA. John, P. W. M. (1964). Pseudo-inverses in analysis of variance. Ann. Math. Stat., 35, 895–896. Kaplan, J. S. (1983). A method for calculating MIVQUE estimators of variance components. J. Am. Stat. Assoc., 78, 476–477. Karathanasis, A. D., and J. R. V. Pils. (2005). Solid-phase chemical fractionation of selected trace metals in some northern Kentucky soils. Soil Sediment Contam. Int. J., 14(4), 293–308. Kelly, R. J., and T. Mathew. (1994). Improved nonnegative estimation of variance components in some mixed models with unbalanced data. Technometrics, 36(2), 171–181. Kempthorne, O. (1952). The Design and Analysis of Experiments. John Wiley & Sons, New York. Kempthorne, O. (1968). Discussion of Searle [1968]. Biometrics, 24, 782–784. Kirk, R. E. (1968). Experimental Design: Procedures for the Behavioral Sciences. Brooks/Cole, Belmont, CA. Kleffee, J., and B. Seifert. (1984). Matrix free computation of C. R. Rao’s MINQUE for unbalanced nested classification models. Comp. Stat. Data Anal., 2(3), 692–698. Koch, G. G. (1967a). A general approach to the estimation of variance components. Techno- metrics, 9, 93–118. Koch, G. G. (1967b). A procedure to estimate the population mean in random-effects models. Technometrics, 9, 577–586. Koch, G. G. (1968). Some further remarks on “A general approach to the estimation of variance components.” Technometrics, 10, 551–558. Kubokawa, T. (1995). Estimation of variance components in mixed linear models. J. Multivar. Anal., 53, 210–236. Kubokawa, T. (1999). 
Shrinkage and modification techniques in estimation of variance and related problems: a review. Commun. Stat. Theory Methods, 28(3&4), 613–650. Kussmaul, K., and R. L. Anderson. (1967). Estimation of variance components in two stage nested designs with complete samples. Technometrics, 9, 373–390. LaMotte, L. R., and R. R. Hocking. (1970). Computational efficiency in the selection of regression variables. Technometrics, 12, 83–94. Lancaster, H. O. (1954). Traces and cumulants of quadratic forms in normal variables. J. R. Stat. Soc. B, 16, 247–254. Lancaster, H. O. (1965). The Helmert matrices. Am. Math. Mon., 72, 4–12. Lee, S., and C. H. Ahn. (2003). Modified ANOVA for unequal variances. Comm. Stat. Simul. Comput., 32(4), 987–1004. Lehmann, E. L., and G. Casella. (1998). Theory of Point Estimation. Springer, New York. Lehmann, E. L., and J. P. Romano. (2005). Testing Statistical Hypotheses, 3rd edition. Springer, New York. Leone, F. C., L. S. Nelson., N. L. Johnson, and S. Eisenstat. (1968). Sampling distributions of variance components. II. Empirical studies of unbalanced nested designs. Technometrics, 10, 719–738.

Levene, H. (1960). Robust tests for equality of variances. In: Contributions to Probability and Statistics, I. Olkin (Ed.), pp. 278–292. Stanford University Press, Palo Alto, CA.
Li, C. C. (1964). Introduction to Experimental Statistics. McGraw-Hill, New York.
Liu, L. M., and J. Senturia. (1977). Computation of MINQUE variance component estimates. J. Am. Stat. Assoc., 72, 867–868.
Low, L. Y. (1964). Sampling variances of estimates of components of variance from a non-orthogonal two-way classification. Biometrika, 51, 491–494.
Loynes, R. M. (1966). On idempotent matrices. Ann. Math. Stat., 37, 295–296.
Lum, M. D. (1954). Rules for determining error terms in hierarchal and partially hierarchal models. Wright Air Development Center, Dayton, OH.
Mahamunulu, D. M. (1963). Sampling variances of the estimates of variance components in the unbalanced 3-way nested classification. Ann. Math. Stat., 34, 521–527.
Mannheim, B., and A. Cohen. (1978). Multivariate analysis of factors affecting work role centrality of occupational categories. Hum. Relat., 31, 525–553.
Mayer, L. W., and T. A. Willke. (1973). On biased estimation in linear models. Technometrics, 15, 497–508.
Mc Elroy, F. W. (1967). A necessary and sufficient condition that ordinary least squares estimators be best linear unbiased. J. Am. Stat. Assoc., 62, 1302–1304.
Miller, R. G. (1981). Simultaneous Statistical Inference, 2nd edition. Springer-Verlag, Berlin and Heidelberg.
Miller, I., and M. Miller. (2012). John Freund's Mathematical Statistics with Applications, 6th edition. Pearson Prentice Hall, Upper Saddle River, NJ.
Millman, J., and G. V. Glass. (1967). Rules of thumb for writing the ANOVA table. J. Educ. Meas., 4, 41–51.
Mitra, S. K. (1971). Another look at Rao's MINQUE for variance components. Bull. Int. Stat. Inst., 44, 279–283.
Montgomery, D. C. (2005). Design and Analysis of Experiments, 6th edition. John Wiley & Sons, Hoboken, NJ.
Montgomery, D. C., and G. C. Runger. (2007).
Applied Statistics and Probability for Engineers, 4th edition. John Wiley & Sons, Hoboken, NJ.
Mood, A. M. (1950). Introduction to the Theory of Statistics. McGraw-Hill, New York.
Mood, A. M., and F. A. Graybill. (1963). Introduction to the Theory of Statistics, 2nd edition. McGraw-Hill, New York.
Mood, A. M., F. A. Graybill, and D. C. Boes. (1974). Introduction to the Theory of Statistics, 3rd edition. McGraw-Hill, New York.
Moore, E. H. (1920). On the reciprocal of the general algebraic matrix. Bull. Am. Math. Soc., 26, 394–395.
Navidi, W. (2011). Statistics for Engineers and Scientists, 3rd edition. McGraw-Hill, New York.
Nelder, J. A. (1954). The interpretation of negative components of variance. Biometrika, 41, 544–548.
Park, D. R., and K. R. Shah. (1995). On connectedness of row column designs. Commun. Stat. Theory Methods, 24(1), 87–96.
Patterson, H. D., and R. Thompson. (1971). Recovery of inter-block information when block sizes are unequal. Biometrika, 58, 545–554.

Penrose, R. (1955). A generalized inverse for matrices. Proc. Camb. Philos. Soc., 51, 406–413.
Plackett, R. L. (1960). Principles of Regression Analysis. Oxford University Press.
Pourahmadi, M. (2013). High-Dimensional Covariance Estimation. John Wiley & Sons, Hoboken, NJ.
Raghavarao, D., and W. T. Federer. (1975). On connectedness in two-way elimination of heterogeneity designs. Ann. Stat., 3(3), 730–735.
Rao, C. R. (1962). A note on a generalized inverse of a matrix with application to problems in mathematical statistics. J. R. Stat. Soc. B, 24, 152–158.
Rao, C. R. (1970). Estimation of heteroscedastic variances in linear models. J. Am. Stat. Assoc., 65, 161–172.
Rao, C. R. (1971a). Estimation of variance and covariance components—MINQUE theory. J. Multivar. Anal., 1, 257–275.
Rao, C. R. (1971b). Minimum variance quadratic unbiased estimation of variance components. J. Multivar. Anal., 1, 445–456.
Rao, C. R. (1972). Estimation of variance and covariance components in linear models. J. Am. Stat. Assoc., 67, 112–115.
Rao, C. R. (1973). Linear Statistical Inference and Its Applications. John Wiley & Sons, London.
Rao, C. R. (1984). Optimization of functions of matrices with application to statistical problems. In: W. G. Cochran's Impact on Statistics, P. S. R. S. Rao and J. Sedransk (Eds.), pp. 191–202. John Wiley & Sons, New York.
Rao, C. R., and S. K. Mitra. (1971). Generalized Inverse of Matrices and Its Applications. John Wiley & Sons, New York.
Rao, C. R., and H. Toutenburg. (1999). Linear Models: Least Squares and Alternatives, 2nd edition. Springer, New York.
Rao, J. N. K. (1968). On expectations, variances and covariances of ANOVA mean squares by ‘synthesis’. Biometrics, 24, 963–978.
Rao, P. S. R. S. (1997). Variance Component Estimation. Chapman & Hall, UK.
Rao, P. S. R. S., and Y. P. Chaubey. (1978). Three modifications of the principle of the MINQUE. Commun. Stat., A7, 767–778.
Rao, P. S. R. S., and C. Heckler. (1997).
Estimators for the three-fold nested random effects model. J. Stat. Plan. Inference, 64(2), 341–352.
Rao, P. S. R. S., J. Kaplan, and W. G. Cochran. (1981). Estimators for the one-way random effects model with unequal error variances. J. Am. Stat. Assoc., 76, 89–97.
Rayner, A. A., and R. M. Pringle. (1967). A note on generalized inverses in the linear hypothesis not of full rank. Ann. Math. Stat., 38, 271–273.
Rice, W. R., and S. Gaines. (1989). One-way analysis of variance with unequal variances. Proc. Natl. Acad. Sci. U. S. A., 86(21), 8183–8184.
Robinson, J. (1965). The distribution of a general quadratic form in normal random variables. Aust. J. Stat., 7, 110–114.
Robinson, G. K. (1991). That BLUP is a good thing: estimation of random effects. Stat. Sci., 6(1), 15–51.
Robson, D. S. (1957). Personal communication with S. R. Searle.

Rohde, C. A. (1964). Contributions to the theory, computations and applications of generalized inverses. Institute of Statistics, Mimeo Series No. 392, University of North Carolina at Raleigh.
Rohde, C. A. (1965). Generalized inverses of partitioned matrices. J. Soc. Ind. Appl. Math., 13, 1033–1035.
Rohde, C. A. (1966). Some results on generalized inverses. SIAM Rev., 8, 201–205.
Rohde, C. A., and G. M. Tallis. (1969). Exact first- and second-order moments of estimates of components of covariance. Biometrika, 56, 517–526.
Russell, T. S., and R. A. Bradley. (1958). One-way variances in a two-way classification. Biometrika, 45, 111–129.
Sahai, H. (1974). On negative estimates of variance components under finite population models. S. Afr. Stat. J., 8, 157–168.
Satterthwaite, F. E. (1946). An approximate distribution of estimates of variance components. Biometrics Bull., 2, 110–114.
Schauder, M., and F. Schmid. (1986). How robust is one way ANOVA with respect to within group correlation? COMPSTAT: Proceedings in Computational Statistics, 7th symposium held at Rome.
Scheffe, H. (1959). The Analysis of Variance. John Wiley & Sons, New York.
Schultz, E. F. Jr. (1955). Rules of thumb for determining expectations of mean squares in analysis of variance. Biometrics, 11, 123–135.
Searle, S. R. (1956). Matrix methods in variance and covariance components analysis. Ann. Math. Stat., 27, 737–748.
Searle, S. R. (1958). Sampling estimates of components of variance. Ann. Math. Stat., 29, 167–178.
Searle, S. R. (1961). Estimating the variability of butterfat production. J. Agric. Sci., 57, 289–294.
Searle, S. R. (1966). Matrix Algebra for the Biological Sciences. John Wiley & Sons, New York.
Searle, S. R. (1968). Another look at Henderson's methods of estimating variance components. Biometrics, 24, 749–788.
Searle, S. R. (1969). Variance component estimation: proof of equivalence of alternative method 4 to Henderson's method 3. Paper No.
BU-260-M in the Mimeo Series, Biometrics Unit, Cornell University, Ithaca, NY.
Searle, S. R. (1970). Large sample variances of maximum likelihood estimators of variance components. Biometrics, 26, 505–524.
Searle, S. R. (1971a). Linear Models, 1st edition. John Wiley & Sons, New York.
Searle, S. R. (1971b). Topics in variance component estimation. Biometrics, 27, 1–76.
Searle, S. R. (1995). An overview of variance component estimation. Metrika, 42, 215–230.
Searle, S. R. (2006). Matrix Algebra Useful for Statistics. John Wiley & Sons, New York.
Searle, S. R., and R. F. Fawcett. (1970). Expected mean squares in variance components models having finite populations. Biometrics, 26, 243–254.
Searle, S. R., and W. H. Hausman. (1970). Matrix Algebra for Business and Economics. John Wiley & Sons, New York.

Searle, S. R., and C. R. Henderson. (1961). Computing procedures for estimating components of variance in the two-way classification mixed model. Biometrics, 17, 607–616. Corrigenda: Biometrics, 23, 852 (1967).
Searle, S. R., and J. G. Udell. (1970). The use of regression on dummy variables in management research. Manag. Sci., 16, B397–B409.
Searle, S. R., G. Casella, and C. E. Mc Culloch. (1992). Variance Components. John Wiley & Sons, New York.
Seelye, C. J. (1958). Conditions for a positive definite quadratic form established by induction. Am. Math. Mon., 65, 355–356.
Shah, K. R., and Y. Dodge. (1977). On the connectedness of designs. Sankhya B, 39(3), 284–287.
Shah, K. R., and C. G. Khatri. (1973). Connectedness in row column designs. Commun. Stat., 2(6), 571–573.
Smith, H., Jr. (1988). Stepwise regression. In: Encyclopedia of Statistical Sciences (9 vols. plus Supplement), vol. 8, D. L. Banks, C. B. Read, and S. Kotz (Eds.), pp. 764–768. John Wiley & Sons, Hoboken, NJ.
Snedecor, G. W., and W. G. Cochran. (1967). Statistical Methods, 6th edition. Iowa State University Press, Ames, IA.
Speed, F. M. (1969). A new approach to the analysis of linear models. Ph.D. Thesis, Texas A&M University, College Station, TX.
Steel, R. G. D., and J. H. Torrie. (1960). Principles and Procedures of Statistics. McGraw-Hill, New York.
Stein, C. (1956). Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. In: Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, pp. 197–206. University of California Press, Berkeley, CA.
Stein, C. (1964). Inadmissibility of the usual estimator for the variance of a normal distribution with unknown mean. Ann. Inst. Stat. Math., 16, 155–160.
Stewart, F. (1963). Introduction to Linear Algebra. Van Nostrand, New York.
Swallow, W. H. (1981). Variances of locally minimum variance quadratic unbiased estimators (MIVQUE) of variance components. Technometrics, 23(3), 271–283.
Swallow, W.
H., and J. F. Monahan. (1984). Monte Carlo comparison of ANOVA, MINQUE, REML, and ML estimators of variance components. Technometrics, 26(1), 47–57.
Swallow, W. H., and S. R. Searle. (1978). Minimum variance quadratic unbiased estimation of variance components. Technometrics, 20(3), 265–272.
Tang, P. C. (1938). The power function of the analysis of variance tests with tables and illustrations of their use. Stat. Res. Mem., 2, 126–149.
Theil, H., and A. S. Goldberger. (1961). On pure and mixed statistical estimation in economics. Int. Econ. Rev., 2, 65–78.
Thompson, R. (1969). Iterative estimation of variance components for non-orthogonal data. Biometrics, 25, 767–773.
Thompson, W. A., Jr. (1961). Negative estimates of variance components: an introduction. Bull. Int. Stat. Inst., 34, 1–4.
Thompson, W. A., Jr. (1962). The problem of negative estimates of variance components. Ann. Math. Stat., 33, 273–289.
Thompson, W. A., Jr. (1963). Precision in simultaneous measurement procedures. J. Am. Stat. Assoc., 58, 474–479.

Thompson, W. A., Jr., and J. R. Moore. (1963). Non-negative estimates of variance components. Technometrics, 5, 441–450.
Tiao, G. C., and G. E. P. Box. (1967). Bayesian analysis of a three-component hierarchal design model. Biometrika, 54, 109–125.
Tiao, G. C., and W. Y. Tan. (1965). Bayesian analysis of random-effects model in the analysis of variance. I. Posterior distribution of variance components. Biometrika, 52, 37–53.
Tiao, G. C., and W. Y. Tan. (1966). Bayesian analysis of random-effects model in the analysis of variance. II. Effect of autocorrelated errors. Biometrika, 53, 477–495.
Townsend, E. C. (1968). Unbiased estimators of variance components in simple unbalanced designs. Ph.D. Thesis, Cornell University, Ithaca, NY.
Townsend, E. C. (1969). Lecture notes, Statistics 363, University of West Virginia, Morgantown, WV.
Townsend, E. C., and S. R. Searle. (1971). Best quadratic unbiased estimation of variance components from unbalanced data in the 1-way classification. Biometrics, 27, 643–657.
Tukey, J. W. (1949). One degree of freedom for non-additivity. Biometrics, 5(3), 232–242.
Urquhart, N. S. (1968). Computation of generalized inverses which satisfy special conditions. SIAM Rev., 10, 216–218.
Urquhart, N. S. (1969). The nature of the lack of uniqueness of generalized inverse matrices. SIAM Rev., 11, 268–271.
Urquhart, N. S., D. L. Weeks, and C. R. Henderson. (1970). Estimation associated with linear models: a revisitation. Paper BU-195, Biometrics Unit, Cornell University, Ithaca, NY.
Urquhart, N. S., D. L. Weeks, and C. R. Henderson. (1973). Estimation associated with linear models: a revisitation. Commun. Stat., 1, 303–330.
Wald, A. (1943). Tests of statistical hypotheses concerning several parameters when the number of observations is large. Trans. Am. Math. Soc., 54, 426–482.
Wang, Y. Y. (1967). A comparison of several variance component estimators. Biometrika, 54, 301–305.
Weeks, D. L., and D. R. Williams. (1964).
A note on the determination of connectedness in an N-way cross classification. Technometrics, 6, 319–324. Errata: Technometrics, 7, 281 (1965).
Welch, B. L. (1951). On the comparison of several mean values: an alternative approach. Biometrika, 38, 330–336.
Welch, B. L. (1956). On linear combinations of several variances. J. Am. Stat. Assoc., 51, 132–148.
Wilk, M. B., and O. Kempthorne. (1955). Fixed, mixed and random models. J. Am. Stat. Assoc., 50, 1144–1167.
Wilk, M. B., and O. Kempthorne. (1956). Some aspects of the analysis of factorial experiments in a completely randomized design. Ann. Math. Stat., 27, 950–985.
Williams, E. J. (1959). Regression Analysis. John Wiley & Sons, New York.
Wilson, R. (1993). Review of development and application of CRSTER and MPTER models. Atmos. Environ., 27, 41–57.
Winer, B. J., D. R. Brown, and K. M. Michels. (1991). Statistical Principles in Experimental Design, 3rd edition. McGraw-Hill, New York.
Woodbury, M. A. (1950). Inverting modified matrices. Statistical Research Group, Memo. Rep. No. 42, Princeton University, Princeton, NJ.

Wright, R. L., and S. R. Wilson. (1979). On the analysis of soil variability with an example from Spain. Geoderma, 22, 297–313.
Yates, F. (1933). The analysis of replicated experiments when the field results are incomplete. Empire J. Exp. Agric., 1, 129–142.
Yates, F. (1934). The analysis of multiple classifications with unequal numbers in the different classes. J. Am. Stat. Assoc., 29, 51–66.
Young, C. W., J. E. Legates, and B. R. Farthing. (1965). Pre- and post-natal influences on growth, prolificacy and maternal performance in mice. Genetics, 52, 553–561.
Zelen, M. (1968). Discussion of Searle [1968]. Biometrics, 24, 779–780.
Zellner, A. (1986). On assessing prior distributions and Bayesian regression analysis with g-prior distributions. In: Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti, P. Goel and A. Zellner (Eds.). North Holland, Amsterdam.
Zyskind, G., and F. B. Martin. (1969). A general Gauss-Markov theorem for linear models with arbitrary non-negative covariance structure. SIAM J. Appl. Math., 17, 1190–1202.

AUTHOR INDEX Adke, R. D., 331 Brorsson, B., 536 Ahn, C. H., 326 Brown, D. R., 472 Ahrens, H., 582 Brown, K. G., 440, 550 Al Sarraj, R., 5 Brown, K. H., 440 An, L., 630 Brown, L., 628 Anderson, R. D., 544 Brown, M. B., 326 Anderson, R. L., 523, 544, 604, 632 Bulmer, M., 537 Anderson, T. W., 86 Bush, N., 431, 604 Anscombe, F. J., 167 Cameron, E., 490 Bancroft, T. A., 523, 544 Casella, G., 528, 530, 542–544, 554, 603 Banerjee, K. S., 83 Chaubey, Y. P., 530, 553 Bartlett, M. S., 326–327, 476 Chen, C. J., 139 Ben-Israel, A., 26, 47 Chipman, J. S., 37 Bennett, C. A., 500 Cochran, W. G., 82, 86, 90, 523, 551 Berger, R. L., 544 Cohen, A., 442 Berman, D. B., 518 Conover, W. J., 323, 325 Bishop, T. A., 557 Corbeil, R. R., 545 Blischke, W. R., 579–580 Cornfield, J., 525 Boes, C., 63 Courant, R., 61 Boullion, T. L., 26 Craig, A. T., 58, 62, 71, 316 Box, G. E. P., 530 Crump, P. P., 604 Bradley, R. A., 544 Crump, S. L., 583, 585 Brewster, J. F., 628 Cui, X., 630 Broemeling, L. D., 536 Cunningham, E. P., 615–617 Linear Models, Second Edition. Shayle R. Searle and Marvin H. J. Gruber. © 2017 John Wiley & Sons, Inc. Published 2017 by John Wiley & Sons, Inc. 645

Daniel, W. W., 324–325 Hill, B. M., 530 Dempster, A. P., 627 Hochberg, Y., 321 Dodge, Y., 427 Hocking, R. R., 428, 442, 525 Draper, N. R., 140, 167, 442 Hoerl, A. E., 114, 160, 162, 626 Dudewicz, E. J., 557 Hogg, R. V., 58, 62, 71, 316 Hopper, F. N., 535 Eccleston, J., 427 Houf, R. E., 518 Efron, B., 627 Hsu, J. C., 321 Eisenhart, C., 494–495 Hultquist, R. A., 528 Eisenstat, S., 538 Elston, R. C., 431 Iman, R. L., 323 Evans, D. A., 88 Ilver, J., 442 Fawcett, R. F., 500, 529 James, W., 477, 626, 627 Federer, W. T., 206, 348, 427, 446, 459, John, P. W. M., 35, 37 471, 476 Kaplan, J. S., 551 Forsythe, A. B., 326–327, 329 Karathanasis, A. D., 345 Franklin, N. L., 500 Kelly, R. J., 598 Kempthorne, O., 37, 165, 374, 525, Gaines, S., 326 Gaylor, D. W., 500, 535, 592 603 Giesbrecht, F. G., 551 Kendall, D. G., 326 Glass, G. V., 507, 510 Kennard, R. W., 114, 160, 162, 626 Godolphin, J. D., 427, 443 Khatri, G. C., 427 Goldberger, A. S., 159, 260 Kirk, R. E., 523 Goldman, A. J., 30 Kleffe, J., 551 Golub, G. H., 434 Koch, G. G., 599–601, 621 Goslee, D. G., 481 Kubokawa, T., 628–630 Graybill, F. A., 30, 37, 54, 63, 83, 165, 340, Kussmaul, K., 604 459, 492, 523, 528, 536, 537 La Motte, L. R., 442 Greville, T. N. E., 26, 44, 47 Lancaster, H. O., 52, 77 Grubbs, F. E., 498 Lee, S., 326 Gruber, M. H. J., 8, 14, 21, 28, 31, 42, 58, Lehmann, E. L., 163, 166, 603 Leone, F. C., 538 70, 74, 93, 102, 106, 111, 114, 116, 162, Levene, H., 326–327 374, 415, 417, 428, 434, 477, 498, 505, Liu, L. M., 551 532, 560, 584, 610, 627 Low, L. Y., 596, 598 Loynes, R. M., 83–85 Hartley, H. O., 523, 576–577, 585–586, Lucas, H. L., 481 592, 612–613 Lum, M. D., 507 Hartwell, T. D., 500 Harville, D. A., 47, 528, 530, 545, 596, 598, Mahamunulu, D. M., 582 Mannheim, B., 442 603 Martin, F. B., 279 Heckler, C., 626 Mathew, T., 598 Henderson, C. R., 429, 507, 510, 512, 523, Mayer, L. W., 627 525, 567, 569, 588–590, 597–598, 605–606, 610, 615–617, 629 Herbach, L. H., 530, 543, 545

Mc Culloch, C. E., 528, 530, 542–544, 554 Mc Elroy, F. W., 112 Mc Kean, J. W., 58, 62, 71 Michels, K. M., 472 Sahai, H., 500 Miller, I., 58, 62, 69, 80 Satterthwaite, F. E., 534–535 Miller, M., 58, 62, 69, 80 Schauder, M., 330 Miller, R. G., 132, 321 Scheffe, H., 37, 54, 316–317, 321–322, 325, Millman, J., 507, 510 Mitra, S. K., 31, 550 330, 415, 523, 525, 537 Monahan, J. F., 626 Schmid, F., 330 Montgomery, D. C., 139, 348, 417, 430, Schultz, E. F., Jr., 507 Searle, S. R., 9, 18, 28, 42, 68, 86, 102, 165, 446, 459, 489, 558 Mood, A. M., 63, 523 178, 206, 293, 440, 500, 528–530, 532, Moore, E. H., 26 542–545, 554, 579, 581, 583, 585, Moore, J. R., 529 588–590, 597, 603, 605, 613–614, 620, Morris, C., 627 622, 624–625 Seelye, C. J., 55 Nelder, J. A., 529 Seifert, B., 551 Senturia, J., 551 Odell, P. L., 16 Shah, K. R., 427 Smith, H., 140, 167, 442 Park, D. R., 427 Smith, H., Jr., 442 Patterson, H. D., 544–545 Pauling, L., 490 Snedecor, G. W., 523 Penrose, R., 26 Speed, F. M., 428, 445 Pils, J. R. V., 345 Steel, R. G. D., 136, 446, 471, 523 Plackett, R. L., 37 Stein, C., 627–628 Pourahmadi, M., 603 Stewart, F., 14 Pringle, R. M., 37 Swallow, W. H., 620, 622, 624–626 Raghavarao, D., 427 Tallis, G. M., 597 Rao, C. R., 18, 23, 28, 31, 54, 72–73, 112, Tamhane, A. C., 321 Tan, W. Y., 530 114, 160, 194, 226, 280, 446, 477, 491, Tang, P. C., 165 Theil, H., 159, 260 530, 548, 550, 553, 622 Thompson, R., 618–620 Rao, J. N. K., 523, 577, 612 Thompson, W. A., Jr., 498, 529–530, Rao, P. S. R. S., 530, 545, 550–553, 626 Rayner, A. A., 37 544–545 Rice, W. R., 326 Tiao, G. C., 530 Robinson, G. K., 610 Torrie, J. H., 136, 446, 471, 523 Robinson, J., 533 Toutenburg, H., 477 Robson, D. S., 596 Townsend, E. C., 488, 528, 620, 622 Rohde, C. A., 30, 44, 597 Tukey, J. W., 167, 194, 525 Romano, J. P., 163, 166 Udell, J. G., 178 Runger, G. C., 139, 417 Urquhart, N. S., 30, 37, 81, 428–429, 445 Russell, K., 427 Van Loan, C.
F., 434 Rydgren, H., 442 Von Rosen, D., 598 Wald, A., 613 Wang, Y. Y., 533

Wortham, A. W., 528 Wright, R. L., 489 Weeks, D. L., 426, 442–443 Welch, B. L., 326, 328, 536 Yates, F., 477, 479, 484 Wilk, M. B., 525 Young, C. W., 494 Williams, D. R., 426, 442, 537 Williams, E. J., 131 Zelen, M., 30, 597 Willke, T. A., 627 Zellner, A., 627 Wilson, R., 345 Zidek, J. V., 628 Wilson, S. R., 489 Zyskind, G., 279 Winer, B. J., 472 Woodbury, M. A., 114, 434

SUBJECT INDEX 1-way classification, see one-way with interaction, 380–422 2-way classification, see two-way without interaction, 347–380 3-way classification, see three-way unbalanced data, 178–180 analysis of variance method, 501–506, 520, absorbing equations, 352–354 adjusting for effects, 362–363 526–530, 563, 567–583, Chapters admissible estimator, 627 9–10 Aitken's integral, 64–65 balanced data, Chapter 9 all cells filled, data with, 414–415 analysis of covariance, 5, 437, 445–474 balanced data, 178–179, 250, 252, 287, 303, 349, 363, 382 intra-class correlation model, 464–470 one-way classification, 454–464 analysis of variance, rules for estimating two-way classification, 470–474 variance components, 507–512. analysis of means, 479–487, 598–599, 605 See also two-way classification; unweighted data analysis, 479–483 variance components weighted squares of means, 484–487 analysis of variance, 4, 49, 82, 86, 92, 145, normal equations for, 340–341 nested model, 339 178, 265, 267, 314, 316, 429 one-way classification, 312 one-way classification, 291–313 two-way classification, 374–380, regression, 121–122, 129–130, 282–289 robustness to assumptions, 321–331 415–420, 512–526 table, 50, 130, 132, 151–153, 220–222, Bayes estimator, 114, 558, 610, 627, 240–241, 291, 315, 335 632 two-way classification best linear unbiased estimation (b.l.u.e.), nested model, 331–339 110–112, 115, 171–172, 345 confidence intervals for, 227–228

simultaneous, 325–326. See also Scheffe; Bonferroni best linear unbiased estimation (b.l.u.e.) (Continued) variance components, 536–538, 558 connected data, 426 constrained models, 265, 270, 310–312, connectedness, 354, 422–427, 442–443 373–374 constraints, 255–264, 339, 489, 549, covariance models, 444, 447, 620 551–552, 562, 615 of estimable function, 225–237, quadratic, 160–162 stochastic, 158–160 299–304, 336, 368–369, “usual constraints”, 255, 264–276, 311, 399–402, 415, 427–429, 444 non-full rank model, 252 374–375 nested model, 336–337 contrasts, 284, 291, 304, 308–309, 344–345 one-way classification, 299–304 two-way classification, 368–369, independent and orthogonal, 248–255, 374–375, 399–402, 415. See also 308–310 Gauss–Markov theorem correlation, 60, 112, 161, 167, 530, 592 best quadratic unbiased estimator coefficient, 330 (b.q.u.e.), 620–626. See also intra-class, 429 minimum invariant norm matrix, 60, 92 quadratic unbiased estimator multiple, 122–126, 178 (MINQUE); minimum invariant serial, 331 variance quadratic unbiased covariance, 530–531, 571, 573, 579, 585, estimator (MIVQUE) 598 biased estimator, 115, 556, 588, 605 analysis of, 5, 437, 445–474, 477, 488, bilinear form, 52, 87–88 490 Bonferroni, 316, 318–320, 343–345, 432, 490 chi-square distribution, 82, 84, 324 between estimators, 110, 227, 300, central, 70–71 582–583, 592, 597, 605, 614, 624–625 classification, 177 non-central, 72–73, 79–80, 219 bilinear forms, 87–88 crossed, 194–198, 507, Chapter 7 for normal random variables, 66 nested, 194–198, 397, 507, 545, 604 and independence, 69 classification models, see one-way; matrix, 109, 115, 169, 211, 217, 252, two-way; three-way Cochran's theorem, 86, 90 286, 501–502, 540, 547, 566, 579, coefficient of determination, 169–170, 588, 608, 613, 623, 626 model, 450–454, 458, 476, 488–489, 215–217, 289, 297, 313 conditional distribution, 59, 67–68 492 conditional inverse, 8 residuals, 167 quadratic forms, 74–75, 77, 89 confidence
interval, 3, 101, 126, 133–139, between variables, 59–60, 97, 112, 171, 186, 495, 521, 567, 571, 573, 580 186–188, 206, 287, 330, 343 variance components, 581, 584 Bonferroni, 317–321, 343–344, 432, covariate, 5, 446–447, 454–456, 460, 462, 490 464, 470, 472, 476 estimable function, 227–228, 299–302, fitting, 449–451, 458, 461, 466, 468, 471 importance of, 472–474, 488, 490, 304, 345 492 multiple comparisons, 316–321 crossed classification, 194–198, 509 regression, 169–170, 290 two-way, 474, 546–547, 563, Chapter 7 Scheffe, 317–321, 343–345, 432

data estimators all cells filled, 414–420, 474–487 Bayes, 114, 558, 610, 627, 632 balanced, 178–179, 250, 252, 287, 303, best linear unbiased (b.l.u.e.), see best 349, 363, 382 linear unbiased estimation analysis of variance, rules for (b.l.u.e.) estimating variance components, biased, 115, 556, 588, 605 507–512. See also two-way generalized least square, 109 classification; variance James–Stein, 477, 626, 627 components linear Bayes, 114 normal equations for, 340–341 maximum likelihood, 110, 112, 530, nested model, 339 542–545 one-way classification, 312 minimum invariant norm unbiased two-way classification, 374–380, (MINQUE), 545–550 415–420, 512–526 minimum invariant variance unbiased big, 16, 427, 437–445 (MIVQUE), 550–553, 622–626 multicollinear, 5, 114, 156, 160 mixed, 158–159, 260–263 survey-type, 437–445 ordinary least squares, 109 unbalanced, 89, 178–180, 248–255, 363, ridge, 5, 114, 116, 160–162, 626–627 366, 374, 376, 382, 422–427, variance components, Chapters 9–11. 474–487, Chapter 10 See also variance components estimation design matrix, 207 estimating and examining residuals, design models, 336, 340 determination, coefficient of, 169, 215–217, 166–168 289, 315 factor, 180–181 diagonal matrix, 9, 14, 56, 58, 162, 286, 295, 354, 385, 403, 445, 465, 550, 579, 621 disconnected data, 422–427 distributions, Chapter 2 chi-square, 70–71, 73, 80, 82, 84, 219, 324 gamma, 69, 565–566 normal, 3, 64–69 of quadratic forms, 530–532 of variance component estimators, 531 dummy variable, 3–5, 180–184, 445 effect, 4, 148 fixed effect model, 494 mixed, 497–499 random effect model, 494–496 equal numbers data, 178–179 errors, type I and type II, 164–165 estimable function, 200–201, 223–
474–487, Chapter 10 See also variance components estimation design matrix, 207 design models, 336, 340 estimating and examining residuals, determination, coefficient of, 169, 215–217, 166–168 289, 315 factor, 180–181 diagonal matrix, 9, 14, 56, 58, 162, 286, F-distribution, 70–71, 444, 485 295, 354, 385, 403, 445, 465, 550, central, 73, 131, 533–553 579, 621 non-central, 73, 129, 219, 221 disconnected data, 422–427 finite population, 500, 529 distributions, Chapter 2 fitting constants, 179 chi-square, 70–71, 73, 80, 82, 84, 219, in variance component estimation, 324 gamma, 69, 565–566 590–598, 602, 605, 509, 615, 619 normal, 3, 64–69 fixed effect, 494 of quadratic forms, 530–532 full model, 148–149, 150–151, 240, 459, of variance component estimators, 531 590–591, 593 dummy variable, 3–5, 180–184, 445 full rank, 2, 42, 95, 102, 120 effect, 4, 148 Gauss-Markov Theorem, 110–112, 225–227 fixed effect model, 494 g-inverse, 8 mixed, 497–499 generalized inverse, Chapter 1 random effect model, 494–496 least-square, 31 equal numbers data, 178–179 Moore–Penrose, 279 errors, type I and type II, 164–165 normalized, 30, 31 estimable function, 200–201, 223– of partitioned matrix, 43–44 of product matrix, 44 236 pseudo inverse, 8, 30 restrictions involving, 257–259 reflexive, 31 estimated expected value, 117, 119, 212, weak, 30 generalized or weighted least squares, 109 215, 448

Helmert matrix, 51, 70, 79, 521 null, 38, 128, 129 Henderson's methods, 567–587, 589–599, orthogonal, 49, 51–52, 58, 244, 621 Chapter 10 permutation, 14, 38, 267 hypotheses, see tests of hypotheses positive definite, 53–58, 160, 277 positive semi-definite, 55, 57 identity matrix, 267, 530, 548, 566, 627 symmetric, 32–37, 65, 74, 106, 208, incidence matrix, 184, 207, 614 ... maximum likelihood, 110, 112 622 independent and orthogonal contrasts, 248–255, 308–310 variance components, 530, 542–545 interaction, 190–194, 197–198, 380–420 mean square error (MSE), 116, 161–162, intra-class regression, 464–470 556, 598, 626–630 Jacobian, 61, 66 minimum invariant norm quadratic unbiased James–Stein estimator, 477, 626–627 estimation (MINQUE), 545–550 Kruskal–Wallis test, 324–325 lack of fit, 139–141 minimum invariant variance quadratic large-scale data, 437–445, Chapter 8 least squares, generalized, 109 unbiased estimation (MIVQUE), 550–553, 622–626 generalized inverse, 31 ordinary, 109 minimum variance estimation, 110–114, 225–227 level, 180 variance component estimators, likelihood ratio test, 163–164 527–528, 620–626 linear Bayes estimator, 114 mixed estimators, 158–159, 260–263, linear dependence, 21 497–499 linear equations, 17–26 mixed model, 4, 497–499, 511–513, 564, combination of solutions, 22 602 invariance property of solutions, 23 adjusting for the bias, 588–590 linearly independent solutions, 22 analysis of means, 598–599 solutions and generalized inverses, 23 expected mean squares, 500, 521–527, linear independence (LIN), 21 534, 573–574, 602 linear transformations, 60–61 expected value of quadratic form, Loewner ordering, 57, 116 565–566 fitting constants method, 590–598 main effect, 188–190 maximum likelihood, 544–547, 605–614 main-effects-only models, 440 one random factor, 614–620 matrix two-way classification, 521–526 model design, 207–209, 234, 477 classification models, see one-, two-, diagonal, 9, 14, 56, 58, 162,
286, 295, three-way; survey data covariance, 445–474 Eisenhart I, 494 354, 385, 403, 445, 465, 550, 579, Eisenhart II, 495 621 fixed effects, 4, 494–503, 511, 515–516, generalized inverse, 8, 30–31, 43–44, 542, 564–565 Chapter 1 full, 148–151, 240, 459, 590–591, 593 Helmert, 8, 30–31, 43–44 full rank, 2, 42, 95, 102, 120 identity, 267, 530, 548, 566, 627 intra-class regression, 464–470 incidence, 184, 207, 614 main-effects-only, 440–442 Jacobian, 61, 66 mixed effects, see mixed models μij, 443–445

non-full rank, 7, 214, 240–241, 264, design models, 183, 198–201, 292, 276–277, Chapter 5 294–296, 312, 331–334, 339, 349–356, 383–338 random effects, 493–496, 517–521, 566, 626 many solutions (invariants), 210–213, 216, 223, 225, 227 finite population, 529 reduced, 148–158, 240–241, 248, 308, models with constraints, 255–256, 259–260, 264–267, 271, 274, 316, 386, 410, 459 291, 311 reduced restricted, 258 regression, 97, 112, 123, 138, 156, 159, μij models, 428 non-full rank model, 205–210, 288 216, 225, 317, 442, 450, 464, 467, normalized generalized inverse, 30–31 Chapter 3 notation, diagonal matrix, 9 restricted, 255–264 b, 106 interactions, 193 unrestricted, 256, 310–312, 373 Kronecker (direct) product, 560 within-class regression, 456, 458, 461, null matrix, 9 466–468 R( ), 313 moment generating functions, 62, reduction in sum of squares, 313–316 80–81 regression and effects, 184–185 moments, 59–60, 62–63, 75, 77, 481, 586, variance covariance matrix, 60 592 X, 109 Moore–Penrose inverse, 26–31, 32–42, 211, null matrix, 9 230, 279 μij models, 443–445 one-way classification, 184–186, 291–312 multiple comparisons, 132, 316–321 covariance model, 454–470 Bonferroni, 316, 318–320, 343–345, fixed effects model, 291–312, Chapter 6 432, 490 maximum likelihood estimators, Scheffe, 317–321, 343–345, 432 542–544, 611–613 multivariate distribution, 58–59 random effects model, 526–542, 581–585, normal, 64–69 599–602, 620–626 nearly identical, 442 ordinary least squares, 109 negative estimates of variance components, orthogonal and independent contrasts, 528–530, 538–539, 604 248–255, 308–310 nested classification, 195–197, 331–339, orthogonal matrix, 49, 51–52, 58, 244, 621 397, 507, 545, 604 non-full rank model, 5, 7, 214, 240–241, p-values, 156, 325, 331, 366, 373, 520 264, 276–277, Chapter 5 Penrose inverse, 26–30, 33–34, 42, 211 non-singular covariance matrix, permutation matrix, 14, 38, 267 positive definiteness, 53–58, 160, 277 277 power of
test, 165–166 277 prediction interval, 136 non-testable hypotheses, 241–243 predicted y-values, 116–118 normal distribution, 3, 64–69 pseudo inverse, 8, 30 pure error, 139–141 multivariate, 64–69 singular, 86 quadratic forms, 49–58, Chapter 2 normal equations, 4, 98–99, 102, 120, 125, covariance of two forms, 89 culmulants, 75–77 143, 154, 156, 159, 217, 228, 272, distribution of, 78–80 277–279, 395 balanced data, 340–341, 374, 415 connectedness, 423, 426 covariance, 446, 448, 475, 597, 605–608, 617

quadratic forms (Continued)
  expected values, 75, 563–567
  independence of, 80–86
  mean, 75, 87, 563–567
  non-central chi-square, 78–80
  non-negative definite, 83–85
  positive-definite, 30–34, 74, 85
  positive semi-definite, 53, 74, 85
  variance of, 77
R statistical software (computer outputs)
  covariance, 462, 468
  nested ANOVA, 343–344
  one-way ANOVA, 298
  regression, 105–106, 125, 136, 138–139
  two-way ANOVA, 368, 378, 380, 390–391, 393, 418–419
random effect, 4, 325, 493–496, 517–521, 566, 626
reduced model, 148–158, 240–241, 248, 308, 316, 386, 410, 459
  restricted, 255–264
reductions in sums of squares, 313–316
regression, 97, 112, 123, 138, 156, 159, 216, 225, 317, 442, 450, 464, 467, Chapter 3
  dummy variables, 3, 5, 175, 178–184, 442, 445, 448
  intra-class, 464–470
  ridge, 5, 114, 116, 160–162, 626–627
  within class, 456, 458, 461, 466–468
residuals, examining, 167
  plots, 167–168
restricted model, 255–264
restrictions on the model
  estimable functions, 200–201, 205, 223–236, 241, 243, 249, 251–252, 257–259, 261–264
  non-estimable functions, 241, 259–260
  "usual constraints", 264–276
ridge estimator, 5, 114, 116, 160–162, 626–627
SAS statistical software (computer outputs)
  covariance, 462–464, 468–469, 473–474, 506, 519–520
  nested ANOVA, 341–342
  one-way ANOVA, 298–299, 317–318
  random or mixed model, 525–526, 625
  regression, 140–141, 156–158, 281–283
  two-way ANOVA, 366–367, 379–380, 391–394, 419–420
shrinkage estimator, 477, 603, 626–630
solutions to normal equations, 200, 210, 212
stepwise fitting, 442
survey data, 175, 439, Chapter 8
"synthesis", 576–577, 585–586, 592, 599
t-distribution, 70–71
  non-central, 73
testability, 236, 243–245, 277
testable hypothesis, 236–240, 262, 271, 310, 337, 370
tests of hypotheses, 315–316, 331, 370–373, 402–413, 451–453
  general linear hypothesis, 141–143, 145–148, 236–241
  likelihood ratio test, 163–164
  non-full rank model, 221–223, 261–262
  non-testable hypotheses, 241–243
  power of, 165–166
  p-values, 156, 325, 331, 366, 373, 520
  regression, 131–133
  testable hypotheses, 236–240, 262, 271, 310, 337, 370
  type I and type II errors, 163–165
  variance components, 4, Chapters 9–11
Texas Instrument calculators, 131, 137–138, 297
three-way crossed classification, 188, 440, 441, 596
  variance components estimation, formulae, Chapter 11 (see www.wiley.com/go/Searle/LinearModels2E)
three-way nested classification, variance components, Chapter 11 (see www.wiley.com/go/Searle/LinearModels2E)
tolerance interval, 136
two-way crossed classification, balanced data
  fixed effects model, 374–379, 415–420, 515–518
  mixed model, 521–526
  random model, 518–521

two-way crossed classification, unbalanced data
  fixed effects model, Chapter 7
    covariance, 471–474
    with interaction, 380–415
    without interaction, 347–374
  mixed model, 497–498
    with interaction, formulae, Chapter 11 (see www.wiley.com/go/Searle/LinearModels2E)
    without interaction, formulae, Chapter 11 (see www.wiley.com/go/Searle/LinearModels2E)
  random effects model, with interaction
    analysis of variance method, 567–588
    fitting constants method, 590–598
    variance components estimation, formulae, Chapter 11 (see www.wiley.com/go/Searle/LinearModels2E)
  random effects model, without interaction, formulae, Chapter 11 (see www.wiley.com/go/Searle/LinearModels2E)
two-way nested classification
  fixed effects model, Chapter 6
  variance component estimation, formulae, Chapter 11 (see www.wiley.com/go/Searle/LinearModels2E)
type I and type II errors, 163–165
unbalanced data, definition, 178
unequal-numbers data, 179
unrestricted model, 256–263, 310–312
unweighted means analysis, 479–484
"usual constraints", 255, 264–276
  and restrictions, 270–276
variance components
  confidence intervals, 536–538, 558
  estimation
    adjusting for bias in mixed models, 588–590
    analysis of means method, 479–487, 598–599
    analysis of variance method, 479–487, 598–599, Chapter 10
    balanced data, Chapter 9
    best quadratic unbiased estimator (b.q.u.e.), 620–626. See also minimum invariant norm quadratic unbiased estimation (MINQUE); minimum invariant variance quadratic unbiased estimator (MIVQUE)
    fitting constants method, 590–598, Chapter 10
      formulae, Chapter 11 (see www.wiley.com/go/Searle/LinearModels2E)
    Henderson's methods, 567–587, 589–599, Chapter 10
      I (analysis of variance method), 598–599
      II (adjusting for bias), 588–590
      III (fitting constants), 590–598
    infinitely many quadratics, 602–605
    maximum likelihood, 542–545
    minimum invariant norm quadratic unbiased estimator (MINQUE), 545–550
    minimum invariant variance quadratic unbiased estimator (MIVQUE), 550–553, 622–626
    shrinkage, 603, 626–630
    symmetric sums method, 599–602
    "synthesis" method, 576–577
    too many equations, 595–596
    unbalanced data, methods, Chapter 10. See also Henderson's methods, Chapter 10; minimum invariant norm quadratic unbiased estimation (MINQUE); minimum invariant variance quadratic unbiased estimator (MIVQUE); symmetric sums method
  negative estimates, 528–530
    probability of, 538–539

variance components (Continued)
  sampling variances of estimators, balanced data, 539–542
  tests of hypotheses, Chapters 9–11
  unbalanced data, methods, Chapter 10
  formulae, Chapter 11 (see www.wiley.com/go/Searle/LinearModels2E)
variance components, estimation, results, Chapters 10–11 (see www.wiley.com/go/Searle/LinearModels2E)
variance–covariance matrix, 60
weighted least squares, 109
weighted squares of means, 484–487
within-class regression, 456, 458, 461, 466–468

WILEY END USER LICENSE AGREEMENT Go to www.wiley.com/go/eula to access Wiley’s ebook EULA.

