EFFECTS OF EXOGENOUS CONSTRUCTS ON INDICATORS OF THE ENDOGENOUS CONSTRUCTS. The y indicators of η₁ are indirectly affected by ξ₁, and the indirect effect is given by the product of γ₁₁ times the respective loading. For example, the indirect effect of ξ₁ on y₁ is given by γ₁₁λ₁₁ʸ and is equal to 0.900 (i.e., .90 × 1.00) [1a, 1d], which is also equal to the total effect of ξ₁ on y₁ [6k]. The indicators of η₂ are indirectly affected by ξ₁ through η₁ and η₂, and also through η₂ alone. For instance, the total effect of ξ₁ on y₄ is equal to [1a, 1c, 1d, 6k]

    .90 × .40 × 1.0 + .225 × 1.0 = .585.

The first term in this expression gives the indirect effect of ξ₁ on y₄ through η₁ and η₂, and the second term gives the indirect effect of ξ₁ on y₄ through η₂.

Completely Standardized Solution

In the completely standardized solution the estimates are standardized with respect to the variances of the constructs and also with respect to the variances of the indicators [7]. In reporting the results, most researchers provide a summary that includes the fit statistics and the results of the measurement and structural models. Table 14.7 presents an example of such a report.

14.4 AN ILLUSTRATIVE EXAMPLE

In this section we present an application of structural equation modeling with unobservable constructs by discussing its application to coupon usage behavior. Shimp and Kavas (1984) postulated a model to study this facet of consumer behavior. The model is presented in Figure 14.4, and a brief discussion follows (for the time being, ignore the dotted paths). The model suggests that actual coupon usage behavior (B) is affected by behavioral intentions (BI), which in turn are affected by attitude toward the act (AACT) and subjective norm (SN). AACT is the outcome of cognitive structures (AACTCOG) and SN is the outcome of normative structures (SNCOG).
The cognitive and normative structures are measured by single items. Data collected from a two-state consumer panel resulted in a total sample size of 533 respondents. Exhibit 14.3 gives partial LISREL output.

14.4.1 Assessing the Overall Model Fit

The χ² statistic is significant; however, as discussed in Chapter 6, one typically resorts to other fit indices such as the GFI, AGFI, NCP, MDN, RNI, and TLI for assessing model fit [1]. Table 14.8 gives the values of the fit indices. Since the fit indices are less than the recommended cutoff value of 0.90, the researchers concluded that the model could be improved based on theory and the modification indices given in the output.

Model Respecification

The modification indices can be used to respecify the hypothesized model. As noted in Chapter 6, the modification index of a fixed parameter gives the approximate decrease in the χ² if the fixed parameter is freed (i.e., if it is estimated). Examination of the
Figure 14.4 Coupon usage model. Source: Shimp, T. A. and A. Kavas (1984), "The Theory of Reasoned Action Applied to Coupon Usage," Journal of Consumer Research, 11 (December), p. 797.
Exhibit 14.3 LISREL output for coupon usage model

CHI-SQUARE WITH 115 DEGREES OF FREEDOM = 775.43 (P = .000)
GOODNESS OF FIT INDEX = 0.874
ADJUSTED GOODNESS OF FIT INDEX = 0.832
ROOT MEAN SQUARE RESIDUAL = 0.183

MODIFICATION INDICES FOR BETA

           AACT        SN        BI         B
AACT      0.000   210.298   176.803    25.541
SN      165.862     0.000    64.462    19.978
BI        0.000     0.000     0.000     0.517
B         0.096     0.477     0.000     0.000

MODIFICATION INDICES FOR GAMMA

        AACTCOG     SNCOG
AACT      0.000    53.053
SN        9.059     0.000
BI        2.742    11.192
B         0.026     2.210

NO NON-ZERO MODIFICATION INDICES FOR PHI
NO NON-ZERO MODIFICATION INDICES FOR PSI
NO NON-ZERO MODIFICATION INDICES FOR THETA EPS

MODIFICATION INDICES FOR THETA DELTA

        X1        X2
    53.051     9.059

MAXIMUM MODIFICATION INDEX IS 210.30 FOR ELEMENT (1, 2) OF BETA

modification indices suggests that inclusion of the crossover paths between AACT and SN (i.e., β₁₂ and β₂₁) and a crossover path between BI and AACT (i.e., β₁₃) would improve model fit [2]. Obviously, these paths must have theoretical support. Shimp and Kavas (1984) provide a theoretical reasoning for the inclusion of β₁₂ and β₂₁. The dotted paths in Figure 14.4 represent these crossover paths. It is important that all model extensions be well grounded in theory. The analysis was rerun by freeing (i.e., estimating) the two parameters. We do not provide the LISREL output, as all the relevant information can be summarized in a table. Table 14.9 gives the overall goodness-of-fit measures, the measurement model results, and the structural model results for the respecified model. The fit indices suggest a good model fit, implying that the data fit the hypothesized model.

14.4.2 Assessing the Measurement Model

All the factor loadings are quite high and statistically significant. The reliabilities of the constructs and their indicators are also acceptable. The construct reliabilities were computed using Eq. 6.17.
Note that reliabilities of single-item constructs are one, as they are assumed to be measured without error. Overall, the measurement model appears to be acceptable.
CHAPTER 14 COVARIANCE STRUCTURE MODELS

Table 14.8 Goodness-of-Fit Indices for the Coupon Usage Model

Statistic       Value       Statistic       Value
Chi-square      775.430     df              115
GFI             0.874       RGFI            .896
AGFI            0.832       RAGFI           .860
NCP             1.239       MDN             0.538
RNI             0.892       TLI             0.872
RMR             0.183

Table 14.9 Summary of the Results for the Respecified Coupon Usage Model

Overall Model Fit

Statistic       Value       Statistic       Value
Chi-square      446.280     df              113
GFI             0.914       RGFI            .936
AGFI            0.884       RAGFI           .913
NCP             0.625       MDN             0.732
RNI             0.945       TLI             0.934
RMR             0.044

Structural Model Results

Parameters                      Standardized Estimate
Exogenous paths
  γ₁₁                           0.025
  γ₂₁                           0.387ᵃ
Endogenous paths
  β₃₁                           0.343ᵃ
  β₃₂                           0.387ᵃ
  β₄₃                           0.696ᵃ
  β₂₁                           0.262ᵃ
  β₁₂                           0.683ᵃ
Coefficient of determination
  All structural equations      0.482
  η₁                            0.631
  η₂                            0.501
  η₃                            0.480
  η₄                            0.484

(continued)

14.4.3 Assessing the Structural Model

All the hypothesized paths are statistically significant, supporting the hypotheses related to the structural equations. The variances accounted for by the structural equations range from a low of 48% to a high of 63.1%, and the overall variance accounted for by the system of structural equations is 48.2%. These results suggest that all the relationships are quite strong.
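Two of the noncentrality-based indices in the tables above can be recomputed from the χ² value, the degrees of freedom, and the sample size (n = 533). The sketch below assumes the common definitions NCP = (χ² - df)/n and MDN = exp(-NCP/2); with these definitions the computed values agree with the tabled ones. The function names are ours.

```python
import math

def scaled_ncp(chisq, df, n):
    """Rescaled noncentrality parameter estimate: (chi-square - df) / n."""
    return (chisq - df) / n

def mdn(chisq, df, n):
    """McDonald's noncentrality-based fit index: exp(-NCP / 2)."""
    return math.exp(-scaled_ncp(chisq, df, n) / 2)

n = 533  # coupon-usage sample size

# Original model (Table 14.8)
print(round(scaled_ncp(775.430, 115, n), 3), round(mdn(775.430, 115, n), 3))  # 1.239 0.538

# Respecified model (Table 14.9)
print(round(scaled_ncp(446.280, 113, n), 3), round(mdn(446.280, 113, n), 3))  # 0.625 0.732
```

Note how respecification moves MDN from 0.538 toward 1.0; an exact fit (χ² = df) would give NCP = 0 and MDN = 1.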
Table 14.9 (continued)

Measurement Model Results

Constructs and Indicators    Completely Standardized Loadings    Reliabilities
η₁ (AACT)                                                        0.941
  y₁                         0.868ᵃ                              0.754
  y₂                         0.796ᵃ                              0.634
  y₃                         0.875ᵃ                              0.766
  y₄                         0.893ᵃ                              0.797
  y₅                         0.929ᵃ                              0.863
η₂ (SN)                                                          0.873
  y₆                         0.821ᵃ                              0.673
  y₇                         0.744ᵃ                              0.554
  y₈                         0.769ᵃ                              0.591
η₃ (BI)                                                          0.840
  y₉                         0.784ᵃ                              0.615
  y₁₀                        0.851ᵃ                              0.725
  y₁₁                        0.837ᵃ                              0.701
η₄ (B)                                                           0.760
  y₁₂                        0.758ᵃ                              0.574
  y₁₃                        0.793ᵃ                              0.629
  y₁₄                        0.773ᵃ                              0.597
  y₁₅                        0.689ᵃ                              0.475
ξ₁ (AACTCOG)                 1.000                               1.000
ξ₂ (SNCOG)                   1.000                               1.000
φ₂₁                          0.085ᵃ

ᵃSignificant at p < .01.

An obvious question is whether the improvement in the fit of the respecified model is statistically significant. That is, are the additional parameters statistically significant? The statistical significance of the improvement in fit, and therefore of the estimates of the additional parameters, can be assessed by the χ² difference test for nested models. Two models are said to be nested if one of the models can be obtained by placing restrictions on the parameters of the other model. In the present case the original model is nested within the respecified model. That is, the original model can be obtained from the respecified model by constraining the parameters β₂₁ and β₁₂ to be equal to zero. The χ² difference test is described below.

The difference in the χ² values and the degrees of freedom for the two models are equal to 329.150 (775.430 - 446.280) and 2 (115 - 113), respectively. For nested models, the difference in the chi-squares is distributed as a χ² statistic with degrees of freedom equal to the difference in the degrees of freedom of the two models. If the difference value is statistically significant, then the respecified model with the additional paths is assumed to have a statistically significant improvement over the original model. Since the χ² of 329.150 with 2 df is statistically significant, the respecified model has a statistically significant improvement over the previous model.
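The χ² difference test just described can be sketched in a few lines of Python, using the values from Tables 14.8 and 14.9:

```python
# Chi-square difference test for nested models (a sketch of the test
# described above; model values are from Tables 14.8 and 14.9).
from scipy.stats import chi2

chisq_original, df_original = 775.430, 115        # original model
chisq_respecified, df_respecified = 446.280, 113  # respecified model

diff = chisq_original - chisq_respecified   # 329.150
df_diff = df_original - df_respecified      # 2

# p-value of the difference statistic under the chi-square distribution
p_value = chi2.sf(diff, df_diff)
print(f"chi-square difference = {diff:.3f} on {df_diff} df, p = {p_value:.3g}")
```

The p-value is essentially zero, so the two freed crossover parameters produce a statistically significant improvement in fit.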
Furthermore, each of the crossover paths included in the respecified model is statistically significant, suggesting that there are significant crossover effects between AACT and SN.
14.5 SUMMARY

This chapter discussed structural models. Structural or path models depict the relationships among a number of constructs. If the constructs in the structural model are measured without measurement error, then the parameters can be estimated using standard statistical packages (e.g., SAS). This chapter, however, discussed the use of LISREL, a well-known computer program for estimating the parameters of the model. In the case when model constructs are measured with error, the resulting model is a combination of a factor model and the structural model and is typically referred to as the structural model with unobservable variables. The factor model depicts the relationship between the unobservable constructs and their measures (i.e., indicators), and the structural model presents the relationships between the unobservable constructs. Once again, the use of LISREL for estimating the parameters of the structural model with unobservable constructs is illustrated.

QUESTIONS

14.1 What are the assumptions that must be made (about observed variables, error terms, causal relationships, etc.) for an effective use of structural models?

14.2 Figure Q14.1 shows a structural model. Substitute standard notation for the letters A through D and a through w. Use the standard notation to represent the model in equation form. Classify the parameters as belonging to either the measurement part or the structural part of the model.

Figure Q14.1

14.3 File INTPERF.DAT gives the covariances between 12 indicators used to measure various aspects of intelligence and class performance of 347 high school juniors. Indicators 1 to 4 are scores from tests designed to test the quantitative ability of the students. Indicators 5 to 8 are scores from tests designed to test the verbal ability of students. Indicators 9 to 12 are scores on four variations of a general intelligence test.
It is believed that the performances of students on the general intelligence tests are a function of the students' quantitative and verbal abilities. The structural model shown in Figure Q14.2 was proposed to test the above theory and determine the relative strength of the effects of quantitative and verbal abilities on
Figure Q14.2 Note that X₁ to X₈ correspond to indicators 1 to 8 and Y₁ to Y₄ correspond to indicators 9 to 12.

general intelligence levels. Estimate the parameters of the model shown in the figure and interpret the results. How can you modify the model to improve the overall fit? Provide support for any modifications suggested by you.

14.4 In a study conducted to examine consumer ethnocentric tendencies (CET), 667 subjects were asked to indicate their attitudes toward importing products. The ethnocentric tendencies of these consumers were measured using seven indicators. File CETNEW.DAT gives the covariances among the indicators (X₁ to X₇ measure ethnocentric tendencies and the Y variables are attitudinal measures). It is proposed that CET affects the attitudes toward importing products. Draw a structural model that represents the relationship between CET and consumer attitudes toward importing foreign products. Use the covariance data to estimate the parameters of the model. Interpret the results.

14.5 Compare the results obtained in Question 14.4 with those obtained using canonical correlation analysis in Question 13.7. What are the conceptual differences between canonical correlation analysis and structural equation modeling?

14.6 File PERFSAT.DAT gives the Pearson correlations for eight observed variables. The data came from a study on performance and satisfaction. Bagozzi (1980) formulated a structural equation model to study the relationship between performance and satisfaction in an industrial sales force. His model was designed to answer such questions as: "Is there a link between performance and job satisfaction? Does performance influence satisfaction, or does satisfaction influence performance?" Figure Q14.3 presents the path diagram for the causal model finally adopted by Bagozzi.
The latent constructs shown in the figure are as follows:

ξ₁ = achievement motivation
ξ₂ = task-specific self-esteem
ξ₃ = verbal intelligence
η₁ = performance
η₂ = job satisfaction.
Figure Q14.3

According to the model, ξ₁ is measured by two indicators (X₁ and X₂), ξ₂ is measured by two indicators (X₃ and X₄), ξ₃ is measured by a single indicator (X₅), η₁ is measured by a single indicator (Y₁), and η₂ is measured by two indicators (Y₂ and Y₃). Estimate the model parameters and interpret the results.

14.7 In a study designed to determine the predictors of drinking and driving behavior among 18- to 24-year-old males, the model shown in Figure Q14.4 was proposed. The constructs shown in the figure are as follows:

ξ₁ = attitude toward drinking and driving
ξ₂ = social norms pertaining to drinking and driving
ξ₃ = perceived control over drinking and driving
η₁ = intentions to drink and drive
η₂ = drinking and driving behavior.

Attitude is measured by five indicators (X₁-X₅), social norms are measured by three indicators (X₆-X₈), perceived control is measured by four indicators (X₉-X₁₂), intentions are measured by two indicators (Y₁-Y₂), and behavior is measured using two indicators (Y₃-Y₄). File DRINKD.DAT presents the covariance matrix between the indicators (sample size = 356). Use the covariance data to estimate the parameters of the structural model. Comment on the model fit. What modifications can you make to the model to improve the model fit? Interpret the results.
Figure Q14.4

14.8 XYZ National Bank conducted a survey of 423 customers to determine how satisfied they are with their credit cards. The bank believes that overall satisfaction with credit cards is a function of customer satisfaction with the following four processes: application, billing, customer service, and late payment handling. The bank also believes that overall satisfaction in turn determines whether the customer intends to continue using the credit card (intent to continue) and whether the customer will recommend the card to a friend (recommendation). Draw a structural model representing the relationships between the constructs, as proposed by the bank.

In its survey, the bank used four indicators to measure satisfaction with application (X₁-X₄), three indicators to measure satisfaction with billing (X₅-X₇), four indicators to measure satisfaction with customer service (X₈-X₁₁), two indicators to measure satisfaction with late payment handling (X₁₂-X₁₃), two indicators to measure overall satisfaction (Y₁-Y₂), two indicators to measure recommendation (Y₃-Y₄), and two indicators to measure intent to continue (Y₅-Y₆). File BANKSAT.DAT gives the covariance matrix between the indicators. Use the covariance data to estimate the parameters of the structural model and interpret the results.
14.9 Assume that the study described in Question 14.4 was replicated in Korea and in the United States using a sample size of 300. File CET.DAT gives the covariance matrices for the two samples. Do a group analysis to compare the structural models of the two samples. What conclusions can you draw from your analysis?

Appendix

In this appendix we discuss the procedures for computing the implied covariance matrix and the model effects (e.g., direct, indirect, and total effects).

A14.1 IMPLIED COVARIANCE MATRIX

In this section the computational procedures for obtaining the implied covariance matrix from the parameters of a given model are discussed. We first discuss models with observable constructs and then discuss models with unobservable constructs.

A14.1.1 Models with Observable Constructs

Consider the model given in Figure 14.3, which is represented by the following structural equations (also see Eqs. 14.1 and 14.2), and assume that the error terms are uncorrelated with the latent constructs:¹

    η₁ = γ₁₁ξ₁ + ζ₁                                  (A14.1)
    η₂ = γ₂₁ξ₁ + β₂₁η₁ + ζ₂.                         (A14.2)

The variance of η₁ is given by

    V(η₁) = E(η₁²)                                   (A14.3)
          = E[(γ₁₁ξ₁ + ζ₁)²]
          = E[γ₁₁²ξ₁² + ζ₁² + 2γ₁₁ξ₁ζ₁]
          = γ₁₁²E(ξ₁²) + E(ζ₁²) + 2γ₁₁E(ξ₁ζ₁)
          = γ₁₁²φ₁₁ + ψ₁₁ + 0
          = γ₁₁²φ₁₁ + ψ₁₁.

The covariance between ξ₁ and η₁ can be obtained by taking the expected value of Eq. A14.1 after multiplying it by ξ₁. That is,

    Cov(ξ₁, η₁) = E[γ₁₁ξ₁² + ζ₁ξ₁]                    (A14.4)
                = γ₁₁E(ξ₁²) + E(ζ₁ξ₁)
                = γ₁₁φ₁₁ + 0
                = γ₁₁φ₁₁.

¹To be consistent with most textbooks, we use Greek letters to represent constructs and Roman letters to represent measures or indicators of the constructs. Therefore, η₁, η₂, and ξ₁, respectively, represent y₁, y₂, and x₁.
The variance of η₂ is given by

    V(η₂) = E(η₂²)
          = E[(γ₂₁ξ₁ + β₂₁η₁ + ζ₂)²]
          = E[γ₂₁²ξ₁² + β₂₁²η₁² + ζ₂² + 2γ₂₁β₂₁ξ₁η₁ + 2γ₂₁ξ₁ζ₂ + 2β₂₁η₁ζ₂]
          = γ₂₁²E(ξ₁²) + β₂₁²E(η₁²) + E(ζ₂²) + 2γ₂₁β₂₁E(ξ₁η₁) + 2γ₂₁E(ξ₁ζ₂) + 2β₂₁E(η₁ζ₂)
          = γ₂₁²φ₁₁ + β₂₁²V(η₁) + ψ₂₂ + 2γ₂₁β₂₁Cov(ξ₁, η₁) + 0 + 0
          = γ₂₁²φ₁₁ + β₂₁²(γ₁₁²φ₁₁ + ψ₁₁) + 2γ₂₁β₂₁γ₁₁φ₁₁ + ψ₂₂.          (A14.5)

The covariance between η₁ and η₂ can be obtained by multiplying Eq. A14.2 by η₁ and taking its expected value:

    Cov(η₁, η₂) = E[γ₂₁ξ₁η₁ + β₂₁η₁² + ζ₂η₁]          (A14.6)
                = γ₂₁E(ξ₁η₁) + β₂₁E(η₁²) + E(ζ₂η₁)
                = γ₂₁γ₁₁φ₁₁ + β₂₁V(η₁) + 0
                = γ₂₁γ₁₁φ₁₁ + β₂₁(γ₁₁²φ₁₁ + ψ₁₁).

The covariance between ξ₁ and η₂ is obtained by taking the expected value of Eq. A14.2 after multiplying it by ξ₁. That is,

    Cov(ξ₁, η₂) = E(γ₂₁ξ₁² + β₂₁ξ₁η₁ + ξ₁ζ₂)          (A14.7)
                = γ₂₁E(ξ₁²) + β₂₁E(ξ₁η₁) + E(ξ₁ζ₂)
                = γ₂₁φ₁₁ + β₂₁γ₁₁φ₁₁.

Equations A14.3 to A14.7 give the necessary elements of the covariance matrix implied by the model parameters, and are the same as given by Eq. 14.5, except that y₁, y₂, and x₁, respectively, represent η₁, η₂, and ξ₁.

Implied Covariance Matrix Using Matrix Algebra

Equations A14.1 and A14.2 can be represented in matrix form as

    [η₁]   [γ₁₁]       [0    0] [η₁]   [ζ₁]
    [η₂] = [γ₂₁] ξ₁ +  [β₂₁  0] [η₂] + [ζ₂]            (A14.8)

or

    η = Γξ + Bη + ζ                                   (A14.9)
    η - Bη = Γξ + ζ
    (I - B)η = Γξ + ζ
    η = (I - B)⁻¹Γξ + (I - B)⁻¹ζ.

The covariance matrix, Σηη, between the endogenous constructs (i.e., η₁ and η₂) is given by

    Σηη = E(ηη')                                      (A14.10)
        = E{[(I - B)⁻¹Γξ + (I - B)⁻¹ζ][(I - B)⁻¹Γξ + (I - B)⁻¹ζ]'}
        = (I - B)⁻¹ΓE(ξξ')Γ'(I - B)⁻¹' + (I - B)⁻¹E(ζζ')(I - B)⁻¹'
        = (I - B)⁻¹ΓΦΓ'(I - B)⁻¹' + (I - B)⁻¹Ψ(I - B)⁻¹'
        = (I - B)⁻¹[ΓΦΓ' + Ψ](I - B)⁻¹'.

The covariance matrix between the exogenous constructs, Σξξ, is given by

    Σξξ = E(ξξ')                                      (A14.11)
        = Φ.
The covariance matrix, Σηξ, between the endogenous and exogenous constructs is given by

    Σηξ = E(ηξ')                                      (A14.12)
        = E[(I - B)⁻¹Γξξ' + (I - B)⁻¹ζξ']
        = (I - B)⁻¹ΓE(ξξ') + (I - B)⁻¹E(ζξ')
        = (I - B)⁻¹ΓΦ + 0
        = (I - B)⁻¹ΓΦ.

The covariance matrix, Σ, of the model is equal to

    Σ = [Σηη  Σηξ]                                    (A14.13)
        [Σξη  Σξξ]

or

    Σ = [(I - B)⁻¹[ΓΦΓ' + Ψ](I - B)⁻¹'    (I - B)⁻¹ΓΦ]    (A14.14)
        [ΦΓ'(I - B)⁻¹'                    Φ          ]

The preceding equations can be used to obtain the covariance matrix of any structural model with observable constructs. The following section presents an example.

An Illustrative Example

Figure A14.1 gives the structural model represented by Eqs. A14.1 and A14.2. The figure also gives a hypothetical set of parameter values. The values of these parameters were used to generate the hypothetical covariance matrix given in Table 14.2. The parameter matrices are

    Γ = [1.60]      B = [0     0]      Φ = (4)      Ψ = [5.76  0     ]
        [0.40]          [0.40  0]                       [0     10.752]

The PROC IML procedure in SAS can be used for computing the covariance matrix. Exhibit A14.1 gives the resulting output. Note that the covariance matrix is the same as that given in Table 14.2.

Figure A14.1 Structural model with observable constructs: γ₂₁ = 0.40, β₂₁ = 0.40, V(ζ₁) = 5.76, V(ζ₂) = 10.752.

A14.1.2 Models with Unobservable Constructs

The model given in Figure A14.2, which is the same as that given by Figure 14.3, can be represented by the following equations:

    η = Γξ + Bη + ζ                                   (A14.15)
    x = Λxξ + δ
    y = Λyη + ε.

The first equation gives the structural model and the last two equations give the measurement part of the model. Since the constructs in the model are unobservable and are measured by their respective indicators, the implied covariance matrix contains covariances among the indicators of the constructs. The covariance matrix, Σxx, among the indicators of the exogenous constructs
Exhibit A14.1 Covariance matrix for structural model with observable constructs

SAS    14:43 TUESDAY, DECEMBER 22, 1992

CYY        COL1       COL2
ROW1    16.0000     8.9600
ROW2     8.9600    16.0000

CYX        COL1
ROW1     6.4000
ROW2     4.1600

CXX        COL1
ROW1     4.0000

COV        COL1       COL2       COL3
ROW1    16.0000     8.9600     6.4000
ROW2     8.9600    16.0000     4.1600
ROW3     6.4000     4.1600     4.0000

Note: CYY: Covariance matrix among the endogenous constructs. CXX: Covariance matrix among the exogenous constructs. CYX: Covariance matrix between the exogenous and the endogenous constructs.

Figure A14.2 Structural model with unobservable constructs: V(δ₁) = V(δ₂) = V(δ₃) = 1.440; V(ε₁) = V(ε₂) = V(ε₃) = V(ε₄) = V(ε₅) = V(ε₆) = 0.760.
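The same computation can be sketched in Python with numpy instead of PROC IML. The parameter values below are those of Figure A14.1, and the matrix algebra follows Eq. A14.14:

```python
# A numpy sketch of the implied covariance computation of Eq. A14.14,
# using the hypothetical parameter values from Figure A14.1.
import numpy as np

B = np.array([[0.0, 0.0],
              [0.4, 0.0]])          # beta: eta1 -> eta2
Gamma = np.array([[1.6],
                  [0.4]])           # gamma: xi1 -> eta1, eta2
Phi = np.array([[4.0]])             # variance of xi1
Psi = np.diag([5.76, 10.752])       # variances of zeta1, zeta2

inv = np.linalg.inv(np.eye(2) - B)

Syy = inv @ (Gamma @ Phi @ Gamma.T + Psi) @ inv.T  # cov among endogenous (Eq. A14.10)
Syx = inv @ Gamma @ Phi                            # cov of endogenous with exogenous (Eq. A14.12)
Sxx = Phi                                          # cov among exogenous (Eq. A14.11)

Sigma = np.block([[Syy, Syx], [Syx.T, Sxx]])
print(Sigma)   # matches the COV matrix in Exhibit A14.1
```

Running this reproduces the 3 × 3 covariance matrix of Exhibit A14.1 exactly.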
is equal to

    Σxx = Cov(xx) = E(xx')                            (A14.16)
        = E[(Λxξ + δ)(Λxξ + δ)']
        = ΛxE(ξξ')Λx' + ΛxE(ξδ') + E(δξ')Λx' + E(δδ')
        = ΛxΦΛx' + 0 + 0 + Θδ
        = ΛxΦΛx' + Θδ.

The covariance matrix, Σyy, among the indicators of the endogenous constructs is given by

    Σyy = Cov(yy) = E(yy')
        = E[(Λyη + ε)(Λyη + ε)']
        = ΛyE(ηη')Λy' + ΛyE(ηε') + E(εη')Λy' + E(εε')
        = ΛyΣηηΛy' + 0 + 0 + Θε.

Substituting Eq. A14.10 in this equation, we get

    Σyy = Λy[(I - B)⁻¹(ΓΦΓ' + Ψ)(I - B)⁻¹']Λy' + Θε.  (A14.17)

And the covariance matrix, Σxy, among the indicators of exogenous and endogenous constructs is given by

    Σxy = Cov(xy) = E(xy')
        = E[(Λxξ + δ)(Λyη + ε)']
        = ΛxE(ξη')Λy' + ΛxE(ξε') + E(δη')Λy' + E(δε')
        = ΛxΣξηΛy' + 0 + 0 + 0.                       (A14.18)

Substituting Eq. A14.12 in this equation, we get

    Σxy = ΛxΦΓ'(I - B)⁻¹'Λy'.                         (A14.19)

Therefore, the covariance matrix, Σ, for the model with unobservable constructs is equal to

    Σ = [Σyy  Σyx]                                    (A14.20)
        [Σxy  Σxx]

or

    Σ = [Λy[(I - B)⁻¹(ΓΦΓ' + Ψ)(I - B)⁻¹']Λy' + Θε    Λy(I - B)⁻¹ΓΦΛx']    (A14.21)
        [ΛxΦΓ'(I - B)⁻¹'Λy'                            ΛxΦΛx' + Θδ     ]

The preceding equations can be used to obtain the covariance matrix for any model given the parameter values. Following is an illustrative example.

An Illustrative Example

Consider the model given in Figure A14.2 along with a hypothetical set of parameter values. The parameter matrices for the model in Figure A14.2 that were used to generate the hypothetical covariance matrix given in Table 14.6 are

    Λx' = (1  1  1)          Λy' = [1  1  1  0  0  0]
                                   [0  0  0  1  1  1]

    Θδ = diag(1.440, 1.440, 1.440)
    Θε = diag(0.760, 0.760, 0.760, 0.760, 0.760, 0.760)

    Φ = (2.560)      Ψ = [1.166  0    ]      Γ = [0.90 ]      B = [0     0]
                         [0      2.177]          [0.225]          [0.40  0]
Exhibit A14.2 Covariance matrix for structural model with unobservable constructs

SAS    14:40 TUESDAY, DECEMBER 22, 1992

CYY      COL1    COL2    COL3    COL4    COL5    COL6
ROW1   3.9996  3.2396  3.2396  1.8142  1.8142  1.8142
ROW2   3.2396  3.9996  3.2396  1.8142  1.8142  1.8142
ROW3   3.2396  3.2396  3.9996  1.8142  1.8142  1.8142
ROW4   1.8142  1.8142  1.8142  3.9997  3.2397  3.2397
ROW5   1.8142  1.8142  1.8142  3.2397  3.9997  3.2397
ROW6   1.8142  1.8142  1.8142  3.2397  3.2397  3.9997

CXY      COL1    COL2    COL3    COL4    COL5    COL6
ROW1   2.3040  2.3040  2.3040  1.4976  1.4976  1.4976
ROW2   2.3040  2.3040  2.3040  1.4976  1.4976  1.4976
ROW3   2.3040  2.3040  2.3040  1.4976  1.4976  1.4976

CXX      COL1    COL2    COL3
ROW1   4.0000  2.5600  2.5600
ROW2   2.5600  4.0000  2.5600
ROW3   2.5600  2.5600  4.0000

COV      COL1    COL2    COL3    COL4    COL5    COL6    COL7    COL8    COL9
ROW1   3.9996  3.2396  3.2396  1.8142  1.8142  1.8142  2.3040  2.3040  2.3040
ROW2   3.2396  3.9996  3.2396  1.8142  1.8142  1.8142  2.3040  2.3040  2.3040
ROW3   3.2396  3.2396  3.9996  1.8142  1.8142  1.8142  2.3040  2.3040  2.3040
ROW4   1.8142  1.8142  1.8142  3.9997  3.2397  3.2397  1.4976  1.4976  1.4976
ROW5   1.8142  1.8142  1.8142  3.2397  3.9997  3.2397  1.4976  1.4976  1.4976
ROW6   1.8142  1.8142  1.8142  3.2397  3.2397  3.9997  1.4976  1.4976  1.4976
ROW7   2.3040  2.3040  2.3040  1.4976  1.4976  1.4976  4.0000  2.5600  2.5600
ROW8   2.3040  2.3040  2.3040  1.4976  1.4976  1.4976  2.5600  4.0000  2.5600
ROW9   2.3040  2.3040  2.3040  1.4976  1.4976  1.4976  2.5600  2.5600  4.0000

Note: CYY: Covariance matrix among the indicators of the endogenous constructs. CXY: Covariance matrix between the indicators of the exogenous and the endogenous constructs. CXX: Covariance matrix among the indicators of the exogenous constructs. COV: Covariance matrix among the indicators.

Exhibit A14.2 gives the PROC IML output. The covariance matrix given in the table is, within rounding errors, the same as that given in Table 14.6.
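As with the observable-construct case, the computation of Eq. A14.21 can be sketched with numpy, using the hypothetical parameter values listed above for Figure A14.2:

```python
# A numpy sketch of Eq. A14.21 for the model with unobservable constructs,
# using the hypothetical parameter values given for Figure A14.2.
import numpy as np

Lx = np.ones((3, 1))                     # lambda-x: three indicators of xi1
Ly = np.array([[1, 0], [1, 0], [1, 0],
               [0, 1], [0, 1], [0, 1]], dtype=float)  # lambda-y
Theta_d = np.diag([1.44] * 3)            # theta-delta
Theta_e = np.diag([0.76] * 6)            # theta-epsilon
B = np.array([[0.0, 0.0], [0.4, 0.0]])
Gamma = np.array([[0.90], [0.225]])
Phi = np.array([[2.56]])
Psi = np.diag([1.166, 2.177])

inv = np.linalg.inv(np.eye(2) - B)
Seta = inv @ (Gamma @ Phi @ Gamma.T + Psi) @ inv.T   # Eq. A14.10
Syy = Ly @ Seta @ Ly.T + Theta_e                     # Eq. A14.17
Sxy = Lx @ Phi @ Gamma.T @ inv.T @ Ly.T              # Eq. A14.19
Sxx = Lx @ Phi @ Lx.T + Theta_d                      # Eq. A14.16

Sigma = np.block([[Syy, Sxy.T], [Sxy, Sxx]])
print(np.round(Sigma, 4))   # matches the COV matrix of Exhibit A14.2 within rounding
```

The 9 × 9 matrix printed here agrees, within rounding, with the COV matrix of Exhibit A14.2.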
A14.2 MODEL EFFECTS

In many instances the researcher is interested in determining a number of effects in the structural model, which can be classified as: (1) effects among the endogenous constructs (i.e., how one endogenous construct affects other endogenous construct(s)); (2) effects of the exogenous
constructs on the endogenous constructs (i.e., how the exogenous constructs affect the various endogenous constructs); and (3) effects of the constructs on the indicators. Each of these effects is discussed in the following sections using simple models. However, the formulae given are general and can be used to obtain the various effects for any structural model.

A14.2.1 Effects among the Endogenous Constructs

Consider the structural model given in Figure A14.3. The figure, depicting only the relationships among the constructs, can be represented by the following equations:

    η₁ = γ₁₁ξ₁ + ζ₁                                   (A14.22)
    η₂ = γ₂₂ξ₂ + β₂₁η₁ + ζ₂                           (A14.23)
    η₃ = β₃₁η₁ + β₃₂η₂ + ζ₃.                          (A14.24)

Concentrating on the paths among the endogenous constructs, it can be seen from Figure A14.3 that some of the endogenous constructs are directly and/or indirectly affected by other endogenous constructs. For example, η₂ is directly affected by η₁, and it is not indirectly affected by any other endogenous construct. On the other hand, η₃ is directly affected by η₁ and η₂, and it is also indirectly affected by η₁ via η₂. The total effect of a given construct is the sum of all its direct and indirect effects. The following section discusses the direct and indirect effects in greater detail and illustrates the computational procedures.

Direct Effects

As discussed above, direct effects result when one construct directly affects another construct. The direct effects can be obtained directly from the structural equations. In Eq. A14.23 the direct effect of η₁ on η₂ is given by the respective structural coefficient, β₂₁. Similarly, in Eq. A14.24 the direct effects of η₁ and η₂ on η₃ are, respectively, given by β₃₁ and β₃₂. It is obvious that the direct effects among the endogenous constructs are given by the B matrix.
For the model given in Figure A14.3, the following B matrix gives the effect of the column construct on the row construct:

    B = [0    0    0]
        [β₂₁  0    0]                                 (A14.25)
        [β₃₁  β₃₂  0]

Indirect Effects

The indirect effect of one construct on another construct must pass through one or more other construct(s). For example, in Figure A14.3 the indirect effect of η₁ on η₃ is via η₂. That is, η₁ indirectly affects η₃ through η₂; similarly, in Figure A14.4 the indirect effect of η₁ on η₄ is through η₂ and η₃. The order of an indirect effect is denoted by the length of the effect, and the length

Figure A14.3 Structural model.
Figure A14.4 Indirect effects of length three.

Figure A14.5 Multiple indirect effects.

of an effect is defined as the number of links or paths between the two constructs. For instance, in Figure A14.4 the indirect effect of η₁ on η₃ is of length two, as there are two links between η₁ and η₃: one link between η₁ and η₂ and the other between η₂ and η₃. Also, the indirect link between η₁ and η₄ is of length three, as there are three links between η₁ and η₄. It is also possible that a given construct may have multiple indirect effects. As an example, in Figure A14.5 η₁ indirectly affects η₆ through η₃ and η₅, as well as through η₄. The total indirect effect of a construct, therefore, is equal to the sum of all its indirect effects.

Indirect effects are equal to the product of the structural coefficients of the links between the effects. As an example, the indirect effects of η₁ on η₆ in Figure A14.5 are given by β₃₁β₅₃β₆₅ and β₄₁β₆₄. The total indirect effect of η₁ on η₆ is therefore equal to β₃₁β₅₃β₆₅ + β₄₁β₆₄.

In general, it has been shown that indirect effects of length or order k are given by Bᵏ, where B is the matrix of beta coefficients. That is, the indirect effects of length two for the model given in Figure A14.3 are given by

    B² = [0       0    0]
         [0       0    0]
         [β₃₂β₂₁  0    0]

And the indirect effects of length three are given by B³ = 0. That is, as is obvious from Figure A14.3, there are no indirect effects of length three. The total indirect effects are given by B² + B³ + ⋯, which has been shown to be equal to

    (I - B)⁻¹ - I - B.                                (A14.26)

Total Effects

The total effect is the sum of direct and indirect effects. For example, the total effect of η₁ on η₂ is the sum of the direct effect and the indirect effects of η₁ on η₂. From Eqs. A14.25 and A14.26, the total effects are

    (I - B)⁻¹ - I - B + B
or

    (I - B)⁻¹ - I.                                    (A14.27)

A14.2.2 Effects of Exogenous Constructs on Endogenous Constructs

Direct Effects

From Figure A14.3 it can be seen that the exogenous construct ξ₁ directly affects η₁, and the exogenous construct ξ₂ directly affects η₂. The direct effects of ξ₁ and ξ₂, respectively, are γ₁₁ and γ₂₂. The direct effects of exogenous constructs on the endogenous constructs are given by the Γ matrix, and are equal to

    Γ = [γ₁₁  0  ]
        [0    γ₂₂]                                    (A14.28)
        [0    0  ]

Indirect Effects

The following indirect effects can be identified in Figure A14.3:

1. ξ₁ indirectly affects η₂ through η₁, and the indirect effect is given by γ₁₁β₂₁.
2. ξ₁ indirectly affects η₃ through η₁, and through η₁ and η₂. These effects are, respectively, given by γ₁₁β₃₁ and γ₁₁β₂₁β₃₂.
3. The indirect effect of ξ₂ on η₃ is through η₂ and is given by γ₂₂β₃₂.

In general, it can be shown that the indirect effects of the exogenous constructs on the endogenous constructs are given by

    [(I - B)⁻¹ - I]Γ,                                 (A14.29)

and, therefore, the total effects are given by (I - B)⁻¹Γ - Γ + Γ or

    (I - B)⁻¹Γ.                                       (A14.30)

A14.2.3 Effects of the Constructs on Their Indicators

Consider the model given in Figure A14.2. The effects of the constructs on the indicators can be classified as: (1) the direct effect of each construct on its respective indicators; (2) the indirect effect of exogenous constructs on indicators of endogenous constructs; and (3) the indirect effect of an endogenous construct on the indicators of other endogenous constructs. These effects are discussed below.

Direct Effects

The direct effect of an exogenous construct on its indicators is given by the respective λ coefficient. For example, the effect of ξ₁ on x₁ is given by λ₁₁ˣ, and the direct effect of η₁ on y₁ is given by λ₁₁ʸ. Therefore, the direct effects of exogenous and endogenous constructs on their indicators are, respectively, given by the Λx and Λy matrices.
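The effect formulas of Eqs. A14.25 to A14.30 can be sketched in a few lines of numpy for the model of Figure A14.3. The numeric coefficient values below are hypothetical, chosen only to illustrate the computation:

```python
# A sketch of the direct, indirect, and total effects of Eqs. A14.25-A14.30
# for the model of Figure A14.3. The numeric coefficients are hypothetical.
import numpy as np

b21, b31, b32 = 0.5, 0.3, 0.6   # hypothetical beta coefficients
g11, g22 = 0.8, 0.7             # hypothetical gamma coefficients

B = np.array([[0.0, 0.0, 0.0],
              [b21, 0.0, 0.0],
              [b31, b32, 0.0]])
Gamma = np.array([[g11, 0.0],
                  [0.0, g22],
                  [0.0, 0.0]])

I = np.eye(3)
inv = np.linalg.inv(I - B)

# Effects among the endogenous constructs
total_endo = inv - I              # Eq. A14.27
indirect_endo = total_endo - B    # Eq. A14.26: (I - B)^-1 - I - B

# Effects of the exogenous constructs on the endogenous constructs
total_exo = inv @ Gamma           # Eq. A14.30
indirect_exo = (inv - I) @ Gamma  # Eq. A14.29

# The indirect effect of eta1 on eta3 via eta2 is beta32 * beta21:
print(round(indirect_endo[2, 0], 6))  # 0.3
# The indirect effect of xi1 on eta2 through eta1 is gamma11 * beta21:
print(round(indirect_exo[1, 0], 6))   # 0.4
```

Because B here is strictly lower triangular (the model is recursive), B³ = 0 and the matrix inverse simply sums the finitely many path products.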
Indirect Effects

The indicators of the exogenous constructs are not indirectly affected; only the indicators of endogenous constructs are. In Figure A14.2, y₁ is indirectly affected by ξ₁ via η₁, and the effect is equal to γ₁₁λ₁₁ʸ. Similarly, y₄ is indirectly affected by ξ₁ via η₁ and η₂
A14.2 MODEL EFFECTS 453 and the effect is given by )'11 fh lA.!2. The indicator Y4 is al~ indirectly affected by TIl through m. and this effect is given by /321A.~2. The total indirect effect of Y4 is equal to 'Yll/htA.:1 + ~tA.~2. The total effect of ]4 is equal to the sum of all indirect and direct effects and is equal to A~2 + 'YlllhlA.~2 + f321~1. In general, th,e indirect effects on the indicators of endogenous constructs are given by A,[(I - B)-I - I], and the to,tal effects are given by Ay[(1 - B)-l - I] + Ay or (A14.31)
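The matrix expressions above are easy to check numerically. The sketch below (an illustration, not part of the original appendix; the coefficient values are assumed for the purpose of the example) builds B and Γ for the three-construct model of Figure A14.3 and recovers the path-by-path indirect effects listed in the text.

```python
import numpy as np

# Hypothetical coefficients for the model of Figure A14.3 (values assumed).
g11, g22 = 0.90, 0.50              # gamma_11, gamma_22
b21, b31, b32 = 0.40, 0.30, 0.20   # beta_21, beta_31, beta_32

B = np.array([[0.0, 0.0, 0.0],
              [b21, 0.0, 0.0],
              [b31, b32, 0.0]])
Gamma = np.array([[g11, 0.0],
                  [0.0, g22],
                  [0.0, 0.0]])

I = np.eye(3)
total = np.linalg.inv(I - B) @ Gamma   # (I - B)^-1 Gamma, eq. (A14.30)
indirect = total - Gamma               # [(I - B)^-1 - I] Gamma, eq. (A14.29)

# Path-by-path check against the indirect effects enumerated in the text:
assert np.isclose(indirect[1, 0], g11 * b21)                    # xi1 -> eta2 via eta1
assert np.isclose(indirect[2, 0], g11 * b31 + g11 * b21 * b32)  # xi1 -> eta3, both routes
assert np.isclose(indirect[2, 1], g22 * b32)                    # xi2 -> eta3 via eta2
```

Because B is lower triangular (the model is recursive), (I − B)⁻¹ exists and the series of indirect paths is finite.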
Statistical Tables
STATISTICAL TABLES 457

Table T.1 Standard Normal Probabilities

Example: Pr(0 ≤ z ≤ 1.96) = 0.4750; Pr(z ≥ 1.96) = 0.5 − 0.4750 = 0.025

z     .00   .01   .02   .03   .04   .05   .06   .07   .08   .09
0.0 .0000 .0040 .0080 .0120 .0160 .0199 .0239 .0279 .0319 .0359
0.1 .0398 .0438 .0478 .0517 .0557 .0596 .0636 .0675 .0714 .0753
0.2 .0793 .0832 .0871 .0910 .0948 .0987 .1026 .1064 .1103 .1141
0.3 .1179 .1217 .1255 .1293 .1331 .1368 .1406 .1443 .1480 .1517
0.4 .1554 .1591 .1628 .1664 .1700 .1736 .1772 .1808 .1844 .1879
0.5 .1915 .1950 .1985 .2019 .2054 .2088 .2123 .2157 .2190 .2224
0.6 .2257 .2291 .2324 .2357 .2389 .2422 .2454 .2486 .2517 .2549
0.7 .2580 .2611 .2642 .2673 .2704 .2734 .2764 .2794 .2823 .2852
0.8 .2881 .2910 .2939 .2967 .2995 .3023 .3051 .3078 .3106 .3133
0.9 .3159 .3186 .3212 .3238 .3264 .3289 .3315 .3340 .3365 .3389
1.0 .3413 .3438 .3461 .3485 .3508 .3531 .3554 .3577 .3599 .3621
1.1 .3643 .3665 .3686 .3708 .3729 .3749 .3770 .3790 .3810 .3830
1.2 .3849 .3869 .3888 .3907 .3925 .3944 .3962 .3980 .3997 .4015
1.3 .4032 .4049 .4066 .4082 .4099 .4115 .4131 .4147 .4162 .4177
1.4 .4192 .4207 .4222 .4236 .4251 .4265 .4279 .4292 .4306 .4319
1.5 .4332 .4345 .4357 .4370 .4382 .4394 .4406 .4418 .4429 .4441
1.6 .4452 .4463 .4474 .4484 .4495 .4505 .4515 .4525 .4535 .4545
1.7 .4554 .4564 .4573 .4582 .4591 .4599 .4608 .4616 .4625 .4633
1.8 .4641 .4649 .4656 .4664 .4671 .4678 .4686 .4693 .4699 .4706
1.9 .4713 .4719 .4726 .4732 .4738 .4744 .4750 .4756 .4761 .4767
2.0 .4772 .4778 .4783 .4788 .4793 .4798 .4803 .4808 .4812 .4817
2.1 .4821 .4826 .4830 .4834 .4838 .4842 .4846 .4850 .4854 .4857
2.2 .4861 .4864 .4868 .4871 .4875 .4878 .4881 .4884 .4887 .4890
2.3 .4893 .4896 .4898 .4901 .4904 .4906 .4909 .4911 .4913 .4916
2.4 .4918 .4920 .4922 .4925 .4927 .4929 .4931 .4932 .4934 .4936
2.5 .4938 .4940 .4941 .4943 .4945 .4946 .4948 .4949 .4951 .4952
2.6 .4953 .4955 .4956 .4957 .4959 .4960 .4961 .4962 .4963 .4964
2.7 .4965 .4966 .4967 .4968 .4969 .4970 .4971 .4972 .4973 .4974
2.8 .4974 .4975 .4976 .4977 .4977 .4978 .4979 .4979 .4980 .4981
2.9 .4981 .4982 .4982 .4983 .4984 .4984 .4985 .4985 .4986 .4986
3.0 .4987 .4987 .4987 .4988 .4988 .4989 .4989 .4989 .4990 .4990
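These areas need not be read from the table; any statistics library reproduces them. A short sketch (using scipy.stats as one possible tool) recomputes the example above the table:

```python
from scipy.stats import norm

# Pr(0 <= z <= 1.96): subtract 0.5 because the table gives areas from the mean.
p = norm.cdf(1.96) - 0.5      # ~ 0.4750, the tabled entry for z = 1.96

# Upper-tail area, as in the second line of the example.
upper = 1.0 - norm.cdf(1.96)  # ~ 0.025
```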
Table T.2 Student's t-Distribution Critical Points

Example: for df = 20, Pr(t > 2.086) = 0.025; Pr(t > 1.725) = 0.05; Pr(|t| > 1.725) = 0.10

df    0.25  0.10  0.05  0.025  0.01  0.005  0.001   (one tail)
      0.50  0.20  0.10  0.05   0.02  0.010  0.002   (two tails)
1    1.000 3.078 6.314 12.706 31.821 63.657 318.31
2    0.816 1.886 2.920  4.303  6.965  9.925  22.327
3    0.765 1.638 2.353  3.182  4.541  5.841  10.214
4    0.741 1.533 2.132  2.776  3.747  4.604   7.173
5    0.727 1.476 2.015  2.571  3.365  4.032   5.893
6    0.718 1.440 1.943  2.447  3.143  3.707   5.208
7    0.711 1.415 1.895  2.365  2.998  3.499   4.785
8    0.706 1.397 1.860  2.306  2.896  3.355   4.501
9    0.703 1.383 1.833  2.262  2.821  3.250   4.297
10   0.700 1.372 1.812  2.228  2.764  3.169   4.144
11   0.697 1.363 1.796  2.201  2.718  3.106   4.025
12   0.695 1.356 1.782  2.179  2.681  3.055   3.930
13   0.694 1.350 1.771  2.160  2.650  3.012   3.852
14   0.692 1.345 1.761  2.145  2.624  2.977   3.787
15   0.691 1.341 1.753  2.131  2.602  2.947   3.733
16   0.690 1.337 1.746  2.120  2.583  2.921   3.686
17   0.689 1.333 1.740  2.110  2.567  2.898   3.646
18   0.688 1.330 1.734  2.101  2.552  2.878   3.610
19   0.688 1.328 1.729  2.093  2.539  2.861   3.579
20   0.687 1.325 1.725  2.086  2.528  2.845   3.552
21   0.686 1.323 1.721  2.080  2.518  2.831   3.527
22   0.686 1.321 1.717  2.074  2.508  2.819   3.505
23   0.685 1.319 1.714  2.069  2.500  2.807   3.485
24   0.685 1.318 1.711  2.064  2.492  2.797   3.467
25   0.684 1.316 1.708  2.060  2.485  2.787   3.450
26   0.684 1.315 1.706  2.056  2.479  2.779   3.435
27   0.684 1.314 1.703  2.052  2.473  2.771   3.421
28   0.683 1.313 1.701  2.048  2.467  2.763   3.408
29   0.683 1.311 1.699  2.045  2.462  2.756   3.396
30   0.683 1.310 1.697  2.042  2.457  2.750   3.385
40   0.681 1.303 1.684  2.021  2.423  2.704   3.307
60   0.679 1.296 1.671  2.000  2.390  2.660   3.232
120  0.677 1.289 1.658  1.980  2.358  2.617   3.160
∞    0.674 1.282 1.645  1.960  2.326  2.576   3.090

Note: The smaller probability shown at the head of each column is the area in one tail; the larger probability is the area in both tails.
Source: From E. S. Pearson and H. O. Hartley, eds., Biometrika Tables for Statisticians, vol. 1, 3d ed., table 12, Cambridge University Press, New York, 1966. Reproduced by permission of the editors and trustees of Biometrika.
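The critical points above are quantiles of the t distribution and can be recomputed directly. A sketch (assuming scipy.stats is available) reproduces the df = 20 example:

```python
from scipy.stats import t

# Upper 2.5% one-tail critical point for df = 20, as in the table's example.
crit_025 = t.ppf(1 - 0.025, 20)   # ~ 2.086

# The 10% two-tailed point equals the 5% one-tailed point.
crit_05 = t.ppf(1 - 0.05, 20)     # ~ 1.725
```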
Table T.3 χ² Critical Points

Example: for df = 20, Pr(χ² > 23.8277) = 0.25; Pr(χ² > 31.4104) = 0.05; Pr(χ² > 37.5662) = 0.01

df    0.250   0.100   0.050   0.025   0.010   0.005   0.001
1    1.32330 2.70554 3.84146 5.02389 6.63490 7.87944 10.828
2    2.77259 4.60517 5.99146 7.37776 9.21034 10.5966 13.816
3    4.10834 6.25139 7.81473 9.34840 11.3449 12.8382 16.266
4    5.38527 7.77944 9.48773 11.1433 13.2767 14.8603 18.467
5    6.62568 9.23636 11.0705 12.8325 15.0863 16.7496 20.515
6    7.84080 10.6446 12.5916 14.4494 16.8119 18.5476 22.458
7    9.03715 12.0170 14.0671 16.0128 18.4753 20.2777 24.322
8    10.2189 13.3616 15.5073 17.5345 20.0902 21.9550 26.125
9    11.3888 14.6837 16.9190 19.0228 21.6660 23.5894 27.877
10   12.5489 15.9872 18.3070 20.4832 23.2093 25.1882 29.588
11   13.7007 17.2750 19.6751 21.9200 24.7250 26.7568 31.264
12   14.8454 18.5493 21.0261 23.3367 26.2170 28.2995 32.909
13   15.9839 19.8119 22.3620 24.7356 27.6882 29.8195 34.528
14   17.1169 21.0641 23.6848 26.1189 29.1412 31.3194 36.123
15   18.2451 22.3071 24.9958 27.4884 30.5779 32.8013 37.697
16   19.3689 23.5418 26.2962 28.8454 31.9999 34.2672 39.252
17   20.4887 24.7690 27.5871 30.1910 33.4087 35.7185 40.790
18   21.6049 25.9894 28.8693 31.5264 34.8053 37.1565 42.312
19   22.7178 27.2036 30.1435 32.8523 36.1909 38.5823 43.820
20   23.8277 28.4120 31.4104 34.1696 37.5662 39.9968 45.315
21   24.9348 29.6151 32.6706 35.4789 38.9322 41.4011 46.797
22   26.0393 30.8133 33.9244 36.7807 40.2894 42.7957 48.268
23   27.1413 32.0069 35.1725 38.0756 41.6384 44.1813 49.728
24   28.2412 33.1962 36.4150 39.3641 42.9798 45.5585 51.179
25   29.3389 34.3816 37.6525 40.6465 44.3141 46.9279 52.618
26   30.4346 35.5632 38.8851 41.9232 45.6417 48.2899 54.052
27   31.5284 36.7412 40.1133 43.1945 46.9629 49.6449 55.476
28   32.6205 37.9159 41.3371 44.4608 48.2782 50.9934 56.892
29   33.7109 39.0875 42.5570 45.7223 49.5879 52.3356 58.301
30   34.7997 40.2560 43.7730 46.9792 50.8922 53.6720 59.703
40   45.6160 51.8051 55.7585 59.3417 63.6907 66.7660 73.402
50   56.3336 63.1671 67.5048 71.4202 76.1539 79.4900 86.661
60   66.9815 74.3970 79.0819 83.2977 88.3794 91.9517 99.607
70   77.5767 85.5270 90.5312 95.0232 100.425 104.215 112.317
80   88.1303 96.5782 101.879 106.629 112.329 116.321 124.839
90   98.6499 107.565 113.145 118.136 124.116 128.299 137.208
100  109.141 118.498 124.342 129.561 135.807 140.169 149.449
zα   +0.6745 +1.2816 +1.6449 +1.9600 +2.3263 +2.5758 +3.0902

For df greater than 100, the expression √(2χ²) − √(2k − 1) = z follows the standardized normal distribution, where k represents the degrees of freedom.
Source: From E. S. Pearson and H. O. Hartley, eds., Biometrika Tables for Statisticians, vol. 1, 3d ed., table 8, Cambridge University Press, New York, 1966. Reproduced by permission of the editors and trustees of Biometrika.

459
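A sketch (assuming scipy.stats) reproduces the df = 20 example and illustrates the footnote's large-df normal approximation:

```python
import math
from scipy.stats import chi2

# Critical points for df = 20, as in the table's example.
c05 = chi2.ppf(1 - 0.05, 20)   # ~ 31.4104
c01 = chi2.ppf(1 - 0.01, 20)   # ~ 37.5662

# The footnote's approximation: sqrt(2*chi2) - sqrt(2k - 1) is roughly
# standard normal for large df; at k = 100 and the 5% point it should
# land near z = 1.6449.
k = 100
z_approx = math.sqrt(2 * chi2.ppf(0.95, k)) - math.sqrt(2 * k - 1)
```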
Table T.4 F -Distribution Example Pr (F > 1.59) = 0.25 Pr (F > 2.42) == 0.10 ford! NJ = 10 Pr (F > 3.14) = 0.05 andN2 = 9 Pr (F > 5.26) = 0.01 3.14 5.26 F dffor Denom- df for Numerator Nl iutor N2 Pr 1 2 3 4 5 6 7 8 9 10 11 12 .25 5.83 7.50 8.20 8.58 8.82 8.98 9.10 9.19 9.26 9.32 9.36 9.41 1 .10 39.9 49.5 53.6 55.8 57.2 58.2 58.9 59.4 59.9 60.2 60.5 60.7 .05 161 200 216 225 230 234 237 239 241 242 243 244 .25 2.57 3.00 3.15 3.23 3.28 3.31 3.34 3.35 3.37 3.38 3.39 3.39 2 .10 8.53 9.00 9.16 9.24 9.29 9.33 9.35 9.37 9.38 9.39 9.40 9.41 .05 18.5 19.0 19.2 19.2 19.3 19.3 19.4 19.4 19.4 19.4 19.4 19.4 .01 98.5 99.0 99.2 99.2 99.3 99.3 99.4 99.4 99.4 99.4 99.4 99.4 .25 2.02 2.28 2.36 2.39 2.41 2.42 2.43 2.44 2.44 2.44 2.45 2.45 .3 .10 5.54 5.46 5.39 5.34 5.31 5.28 5.27 5.25 5.24 5.23 5.22 5.22 .05 10.1 9.55 9.28 9.12 9.01 8.94 8.89 8.85 8.81 8.79 8.76 8.74 .01 34.1 30.8 29.5 28.7 28.2 27.9 27.7 27.5 27.3 27.2 27.1 27.1 .25 1.81 2.00 2.05 2.06 2.07 2.08 2.08 2.08 2.08 2.08 2.08 2.08 4 .10 4.54 4.32 4.19 4.11 4.05 4.01 3.98 3.95 3.94 3.92 3.91 3.90 .05 7.71 6.94 6.59 6.39 6.26 6.16 6.09 6.04 6.00 5.96 5.94 5.91 .01 21.2 18.0 16.7 16.0 15.5 15.2 15.0 14.8 14.7 14.5 14.4 14.4 .25 1.69 1.85 1.88 1.89 1.89 1.89 1.89 1.89 1.89 1.89 1.89 1.89 5 .10 4.06 3.78 3.62 3.52 3.45 3.40 3.37 3.34 3.32 3.30 3.28 3.27 .05 6.61 5.79 5.41 5.19 5.05 4.95 4.88 4.82 4.77 4.74 4.71 4.68 .01 16.3 13.3 12.1 11.4 11.0 10.7 10.5 10.3 10.2 10.1 9.96 9.89 25 1.62 1.76 1.78 1.79 1.79 1.78 1.78 1.78 1.71 1.77 1.77 1.77 6 .10 3.78 3.46 3.29 3.18 3.11 3.05 3.01 2.98 2.96 2.94 2.92 2.90 .05 5.99 5.14 4.76 4.53 4.39 4.28 4.21 4.15 4.10 4.06 4.03 4.00 .01 13.7 lD.9 9.78 9.15 8.75 8.47 8.26 8.10 7.98 7.87 7.79 7.72 .25 1.57 1.70 1.72 1.72 1.71 1.71 1.70 1.70 1.69 1.69 1.69 1.68 7 .10 3.59 3.26 3.07 2.96 2.88 2.83 2.78 2.75 2.72 2.70 2.68 2.67 .05 5.59 4.74 4.35 4.12 3.97 3.87 3.79 3.73 3.68 3.64 3.60 3.57 .01 12.2 9.55 8.45 7.85 7.46 7.19 6.99 6.84 6.72 6.62 6.54 6.47 .25 1.54 1.66 1.67 1.66 1.66 1.65 1.64 
1.64 1.63 1.63 1.63 1.62 8 .10 3.46 3.11 2.92 2.81 2.73 2.67 2.62 2.59 2.56 2.54 2.52 2.50 .05 5.32 4.46 4.07 3.84 3.69 3.58 3.50 3.44 3.39 3.35 3.31 3.28 .01 11.3 8.65 7.59 7.01 6.63 6.37 6.18 6.03 5.91 5.81 5.73 5.67 .25 1.51 1.62 1.63 1.63 1.62 1.61 1.60 1.60 1.59 1.59 1.58 1.58 9 .lD 3.36 3.01 2.81 2.69 2.61 2.55 2.51 2.47 2.44 2.42 2.40 2.38 .05 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18 3.14 3.10 3.07 .01 10.6 8.02 6.99 6.42 6.06 5.80 5.61 5.47 5.35 5.26 5.18 5.11 Sourr:e: E. S. Pearson and H. O. Hartley. cds. Biometrika Tables for Sratisticians, vol. 1. 3d ed., table 18. p. 558. Cambridge University Press. New Yort, 1966. Reproduced by permission of the edilors and trustees of Biometrika. 460
Table T.4 (Continued)

df for denominator N2, Pr; df for numerator N1 = 15 20 24 30 40 50 60 100 120 200 500 ∞

1  .25  9.49 9.58 9.63 9.67 9.71 9.74 9.76 9.78 9.80 9.82 9.84 9.85
   .10  61.2 61.7 62.0 62.3 62.5 62.7 62.8 63.0 63.1 63.2 63.3 63.3
   .05  246  248  249  250  251  252  252  253  253  254  254  254
2  .25  3.41 3.43 3.43 3.44 3.45 3.45 3.46 3.47 3.47 3.48 3.48 3.48
   .10  9.42 9.44 9.45 9.46 9.47 9.47 9.47 9.48 9.48 9.49 9.49 9.49
   .05  19.4 19.4 19.5 19.5 19.5 19.5 19.5 19.5 19.5 19.5 19.5 19.5
   .01  99.4 99.4 99.5 99.5 99.5 99.5 99.5 99.5 99.5 99.5 99.5 99.5
3  .25  2.46 2.46 2.46 2.47 2.47 2.47 2.47 2.47 2.47 2.47 2.47 2.47
   .10  5.20 5.18 5.18 5.17 5.16 5.15 5.15 5.14 5.14 5.14 5.14 5.13
   .05  8.70 8.66 8.64 8.62 8.59 8.58 8.57 8.55 8.55 8.54 8.53 8.53
   .01  26.9 26.7 26.6 26.5 26.4 26.4 26.3 26.2 26.2 26.2 26.1 26.1
4  .25  2.08 2.08 2.08 2.08 2.08 2.08 2.08 2.08 2.08 2.08 2.08 2.08
   .10  3.87 3.84 3.83 3.82 3.80 3.80 3.79 3.78 3.78 3.77 3.76 3.76
   .05  5.86 5.80 5.77 5.75 5.72 5.70 5.69 5.66 5.66 5.65 5.64 5.63
   .01  14.2 14.0 13.9 13.8 13.7 13.7 13.7 13.6 13.6 13.5 13.5 13.5
5  .25  1.89 1.88 1.88 1.88 1.88 1.88 1.87 1.87 1.87 1.87 1.87 1.87
   .10  3.24 3.21 3.19 3.17 3.16 3.15 3.14 3.13 3.12 3.12 3.11 3.10
   .05  4.62 4.56 4.53 4.50 4.46 4.44 4.43 4.41 4.40 4.39 4.37 4.36
   .01  9.72 9.55 9.47 9.38 9.29 9.24 9.20 9.13 9.11 9.08 9.04 9.02
6  .25  1.76 1.76 1.75 1.75 1.75 1.75 1.74 1.74 1.74 1.74 1.74 1.74
   .10  2.87 2.84 2.82 2.80 2.78 2.77 2.76 2.75 2.74 2.73 2.73 2.72
   .05  3.94 3.87 3.84 3.81 3.77 3.75 3.74 3.71 3.70 3.69 3.68 3.67
   .01  7.56 7.40 7.31 7.23 7.14 7.09 7.06 6.99 6.97 6.93 6.90 6.88
7  .25  1.68 1.67 1.67 1.66 1.66 1.66 1.65 1.65 1.65 1.65 1.65 1.65
   .10  2.63 2.59 2.58 2.56 2.54 2.52 2.51 2.50 2.49 2.48 2.48 2.47
   .05  3.51 3.44 3.41 3.38 3.34 3.32 3.30 3.27 3.27 3.25 3.24 3.23
   .01  6.31 6.16 6.07 5.99 5.91 5.86 5.82 5.75 5.74 5.70 5.67 5.65
8  .25  1.62 1.61 1.60 1.60 1.59 1.59 1.59 1.58 1.58 1.58 1.58 1.58
   .10  2.46 2.42 2.40 2.38 2.36 2.35 2.34 2.32 2.32 2.31 2.30 2.29
   .05  3.22 3.15 3.12 3.08 3.04 3.02 3.01 2.97 2.97 2.95 2.94 2.93
   .01  5.52 5.36 5.28 5.20 5.12 5.07 5.03 4.96 4.95 4.91 4.88 4.86
9  .25  1.57 1.56 1.56 1.55 1.55 1.54 1.54 1.53 1.53 1.53 1.53 1.53
   .10  2.34 2.30 2.28 2.25 2.23 2.22 2.21 2.19 2.18 2.17 2.17 2.16
   .05  3.01 2.94 2.90 2.86 2.83 2.80 2.79 2.76 2.75 2.73 2.72 2.71
   .01  4.96 4.81 4.73 4.65 4.57 4.52 4.48 4.42 4.40 4.36 4.33 4.31

(continued)
Table T.4 (Continued)

df for denominator N2, Pr; df for numerator N1 = 1 2 3 4 5 6 7 8 9 10 11 12

10 .25  1.49 1.60 1.60 1.59 1.59 1.58 1.57 1.56 1.56 1.55 1.55 1.54
   .10  3.29 2.92 2.73 2.61 2.52 2.46 2.41 2.38 2.35 2.32 2.30 2.28
   .05  4.96 4.10 3.71 3.48 3.33 3.22 3.14 3.07 3.02 2.98 2.94 2.91
   .01  10.0 7.56 6.55 5.99 5.64 5.39 5.20 5.06 4.94 4.85 4.77 4.71
11 .25  1.47 1.58 1.58 1.57 1.56 1.55 1.54 1.53 1.53 1.52 1.52 1.51
   .10  3.23 2.86 2.66 2.54 2.45 2.39 2.34 2.30 2.27 2.25 2.23 2.21
   .05  4.84 3.98 3.59 3.36 3.20 3.09 3.01 2.95 2.90 2.85 2.82 2.79
   .01  9.65 7.21 6.22 5.67 5.32 5.07 4.89 4.74 4.63 4.54 4.46 4.40
12 .25  1.46 1.56 1.56 1.55 1.54 1.53 1.52 1.51 1.51 1.50 1.50 1.49
   .10  3.18 2.81 2.61 2.48 2.39 2.33 2.28 2.24 2.21 2.19 2.17 2.15
   .05  4.75 3.89 3.49 3.26 3.11 3.00 2.91 2.85 2.80 2.75 2.72 2.69
   .01  9.33 6.93 5.95 5.41 5.06 4.82 4.64 4.50 4.39 4.30 4.22 4.16
13 .25  1.45 1.55 1.55 1.53 1.52 1.51 1.50 1.49 1.49 1.48 1.47 1.47
   .10  3.14 2.76 2.56 2.43 2.35 2.28 2.23 2.20 2.16 2.14 2.12 2.10
   .05  4.67 3.81 3.41 3.18 3.03 2.92 2.83 2.77 2.71 2.67 2.63 2.60
   .01  9.07 6.70 5.74 5.21 4.86 4.62 4.44 4.30 4.19 4.10 4.02 3.96
14 .25  1.44 1.53 1.53 1.52 1.51 1.50 1.49 1.48 1.47 1.46 1.46 1.45
   .10  3.10 2.73 2.52 2.39 2.31 2.24 2.19 2.15 2.12 2.10 2.08 2.05
   .05  4.60 3.74 3.34 3.11 2.96 2.85 2.76 2.70 2.65 2.60 2.57 2.53
   .01  8.86 6.51 5.56 5.04 4.69 4.46 4.28 4.14 4.03 3.94 3.86 3.80
15 .25  1.43 1.52 1.52 1.51 1.49 1.48 1.47 1.46 1.46 1.45 1.44 1.44
   .10  3.07 2.70 2.49 2.36 2.27 2.21 2.16 2.12 2.09 2.06 2.04 2.02
   .05  4.54 3.68 3.29 3.06 2.90 2.79 2.71 2.64 2.59 2.54 2.51 2.48
   .01  8.68 6.36 5.42 4.89 4.56 4.32 4.14 4.00 3.89 3.80 3.73 3.67
16 .25  1.42 1.51 1.51 1.50 1.48 1.47 1.46 1.45 1.44 1.44 1.44 1.43
   .10  3.05 2.67 2.46 2.33 2.24 2.18 2.13 2.09 2.06 2.03 2.01 1.99
   .05  4.49 3.63 3.24 3.01 2.85 2.74 2.66 2.59 2.54 2.49 2.46 2.42
   .01  8.53 6.23 5.29 4.77 4.44 4.20 4.03 3.89 3.78 3.69 3.62 3.55
17 .25  1.42 1.51 1.50 1.49 1.47 1.46 1.45 1.44 1.43 1.43 1.42 1.41
   .10  3.03 2.64 2.44 2.31 2.22 2.15 2.10 2.06 2.03 2.00 1.98 1.96
   .05  4.45 3.59 3.20 2.96 2.81 2.70 2.61 2.55 2.49 2.45 2.41 2.38
   .01  8.40 6.11 5.18 4.67 4.34 4.10 3.93 3.79 3.68 3.59 3.52 3.46
18 .25  1.41 1.50 1.49 1.48 1.46 1.45 1.44 1.43 1.42 1.42 1.41 1.40
   .10  3.01 2.62 2.42 2.29 2.20 2.13 2.08 2.04 2.00 1.98 1.96 1.93
   .05  4.41 3.55 3.16 2.93 2.77 2.66 2.58 2.51 2.46 2.41 2.37 2.34
   .01  8.29 6.01 5.09 4.58 4.25 4.01 3.84 3.71 3.60 3.51 3.43 3.37
19 .25  1.41 1.49 1.49 1.47 1.46 1.44 1.43 1.42 1.41 1.41 1.40 1.40
   .10  2.99 2.61 2.40 2.27 2.18 2.11 2.06 2.02 1.98 1.96 1.94 1.91
   .05  4.38 3.52 3.13 2.90 2.74 2.63 2.54 2.48 2.42 2.38 2.34 2.31
   .01  8.18 5.93 5.01 4.50 4.17 3.94 3.77 3.63 3.52 3.43 3.36 3.30
20 .25  1.40 1.49 1.48 1.46 1.45 1.44 1.43 1.42 1.41 1.40 1.39 1.39
   .10  2.97 2.59 2.38 2.25 2.16 2.09 2.04 2.00 1.96 1.94 1.92 1.89
   .05  4.35 3.49 3.10 2.87 2.71 2.60 2.51 2.45 2.39 2.35 2.31 2.28
   .01  8.10 5.85 4.94 4.43 4.10 3.87 3.70 3.56 3.46 3.37 3.29 3.23
Table T.4 (Continued)

df for denominator N2, Pr; df for numerator N1 = 15 20 24 30 40 50 60 100 120 200 500 ∞

10 .25  1.53 1.52 1.52 1.51 1.51 1.50 1.50 1.49 1.49 1.49 1.48 1.48
   .10  2.24 2.20 2.18 2.16 2.13 2.12 2.11 2.09 2.08 2.07 2.06 2.06
   .05  2.85 2.77 2.74 2.70 2.66 2.64 2.62 2.59 2.58 2.56 2.55 2.54
   .01  4.56 4.41 4.33 4.25 4.17 4.12 4.08 4.01 4.00 3.96 3.93 3.91
11 .25  1.50 1.49 1.49 1.48 1.47 1.47 1.47 1.46 1.46 1.46 1.45 1.45
   .10  2.17 2.12 2.10 2.08 2.05 2.04 2.03 2.00 2.00 1.99 1.98 1.97
   .05  2.72 2.65 2.61 2.57 2.53 2.51 2.49 2.46 2.45 2.43 2.42 2.40
   .01  4.25 4.10 4.02 3.94 3.86 3.81 3.78 3.71 3.69 3.66 3.62 3.60
12 .25  1.48 1.47 1.46 1.45 1.45 1.44 1.44 1.43 1.43 1.43 1.42 1.42
   .10  2.10 2.06 2.04 2.01 1.99 1.97 1.96 1.94 1.93 1.92 1.91 1.90
   .05  2.62 2.54 2.51 2.47 2.43 2.40 2.38 2.35 2.34 2.32 2.31 2.30
   .01  4.01 3.86 3.78 3.70 3.62 3.57 3.54 3.47 3.45 3.41 3.38 3.36
13 .25  1.46 1.45 1.44 1.43 1.42 1.42 1.42 1.41 1.41 1.40 1.40 1.40
   .10  2.05 2.01 1.98 1.96 1.93 1.92 1.90 1.88 1.88 1.86 1.85 1.85
   .05  2.53 2.46 2.42 2.38 2.34 2.31 2.30 2.26 2.25 2.23 2.22 2.21
   .01  3.82 3.66 3.59 3.51 3.43 3.38 3.34 3.27 3.25 3.22 3.19 3.17
14 .25  1.44 1.43 1.42 1.41 1.41 1.40 1.40 1.39 1.39 1.39 1.38 1.38
   .10  2.01 1.96 1.94 1.91 1.89 1.87 1.86 1.83 1.83 1.82 1.80 1.80
   .05  2.46 2.39 2.35 2.31 2.27 2.24 2.22 2.19 2.18 2.16 2.14 2.13
   .01  3.66 3.51 3.43 3.35 3.27 3.22 3.18 3.11 3.09 3.06 3.03 3.00
15 .25  1.43 1.41 1.41 1.40 1.39 1.39 1.38 1.38 1.37 1.37 1.36 1.36
   .10  1.97 1.92 1.90 1.87 1.85 1.83 1.82 1.79 1.79 1.77 1.76 1.76
   .05  2.40 2.33 2.29 2.25 2.20 2.18 2.16 2.12 2.11 2.10 2.08 2.07
   .01  3.52 3.37 3.29 3.21 3.13 3.08 3.05 2.98 2.96 2.92 2.89 2.87
16 .25  1.41 1.40 1.39 1.38 1.37 1.37 1.36 1.36 1.35 1.35 1.34 1.34
   .10  1.94 1.89 1.87 1.84 1.81 1.79 1.78 1.76 1.75 1.74 1.73 1.72
   .05  2.35 2.28 2.24 2.19 2.15 2.12 2.11 2.07 2.06 2.04 2.02 2.01
   .01  3.41 3.26 3.18 3.10 3.02 2.97 2.93 2.86 2.84 2.81 2.78 2.75
17 .25  1.40 1.39 1.38 1.37 1.36 1.35 1.35 1.34 1.34 1.34 1.33 1.33
   .10  1.91 1.86 1.84 1.81 1.78 1.76 1.75 1.73 1.72 1.71 1.69 1.69
   .05  2.31 2.23 2.19 2.15 2.10 2.08 2.06 2.02 2.01 1.99 1.97 1.96
   .01  3.31 3.16 3.08 3.00 2.92 2.87 2.83 2.76 2.75 2.71 2.68 2.65
18 .25  1.39 1.38 1.37 1.36 1.35 1.34 1.34 1.33 1.33 1.32 1.32 1.32
   .10  1.89 1.84 1.81 1.78 1.75 1.74 1.72 1.70 1.69 1.68 1.67 1.66
   .05  2.27 2.19 2.15 2.11 2.06 2.04 2.02 1.98 1.97 1.95 1.93 1.92
   .01  3.23 3.08 3.00 2.92 2.84 2.78 2.75 2.68 2.66 2.62 2.59 2.57
19 .25  1.38 1.37 1.36 1.35 1.34 1.33 1.33 1.32 1.32 1.31 1.31 1.30
   .10  1.86 1.81 1.79 1.76 1.73 1.71 1.70 1.67 1.67 1.65 1.64 1.63
   .05  2.23 2.16 2.11 2.07 2.03 2.00 1.98 1.94 1.93 1.91 1.89 1.88
   .01  3.15 3.00 2.92 2.84 2.76 2.71 2.67 2.60 2.58 2.55 2.51 2.49
20 .25  1.37 1.36 1.35 1.34 1.33 1.33 1.32 1.31 1.31 1.30 1.30 1.29
   .10  1.84 1.79 1.77 1.74 1.71 1.69 1.68 1.65 1.64 1.63 1.62 1.61
   .05  2.20 2.12 2.08 2.04 1.99 1.97 1.95 1.91 1.90 1.88 1.86 1.84
   .01  3.09 2.94 2.86 2.78 2.69 2.64 2.61 2.54 2.52 2.48 2.44 2.42

(continued)
Table T.4 (Continued)

df for denominator N2, Pr; df for numerator N1 = 1 2 3 4 5 6 7 8 9 10 11 12

22  .25  1.40 1.48 1.47 1.45 1.44 1.42 1.41 1.40 1.39 1.39 1.38 1.37
    .10  2.95 2.56 2.35 2.22 2.13 2.06 2.01 1.97 1.93 1.90 1.88 1.86
    .05  4.30 3.44 3.05 2.82 2.66 2.55 2.46 2.40 2.34 2.30 2.26 2.23
    .01  7.95 5.72 4.82 4.31 3.99 3.76 3.59 3.45 3.35 3.26 3.18 3.12
24  .25  1.39 1.47 1.46 1.44 1.43 1.41 1.40 1.39 1.38 1.38 1.37 1.36
    .10  2.93 2.54 2.33 2.19 2.10 2.04 1.98 1.94 1.91 1.88 1.85 1.83
    .05  4.26 3.40 3.01 2.78 2.62 2.51 2.42 2.36 2.30 2.25 2.21 2.18
    .01  7.82 5.61 4.72 4.22 3.90 3.67 3.50 3.36 3.26 3.17 3.09 3.03
26  .25  1.38 1.46 1.45 1.44 1.42 1.41 1.39 1.38 1.37 1.37 1.36 1.35
    .10  2.91 2.52 2.31 2.17 2.08 2.01 1.96 1.92 1.88 1.86 1.84 1.81
    .05  4.23 3.37 2.98 2.74 2.59 2.47 2.39 2.32 2.27 2.22 2.18 2.15
    .01  7.72 5.53 4.64 4.14 3.82 3.59 3.42 3.29 3.18 3.09 3.02 2.96
28  .25  1.38 1.46 1.45 1.43 1.41 1.40 1.39 1.38 1.37 1.36 1.35 1.34
    .10  2.89 2.50 2.29 2.16 2.06 2.00 1.94 1.90 1.87 1.84 1.81 1.79
    .05  4.20 3.34 2.95 2.71 2.56 2.45 2.36 2.29 2.24 2.19 2.15 2.12
    .01  7.64 5.45 4.57 4.07 3.75 3.53 3.36 3.23 3.12 3.03 2.96 2.90
30  .25  1.38 1.45 1.44 1.42 1.41 1.39 1.38 1.37 1.36 1.35 1.35 1.34
    .10  2.88 2.49 2.28 2.14 2.05 1.98 1.93 1.88 1.85 1.82 1.79 1.77
    .05  4.17 3.32 2.92 2.69 2.53 2.42 2.33 2.27 2.21 2.16 2.13 2.09
    .01  7.56 5.39 4.51 4.02 3.70 3.47 3.30 3.17 3.07 2.98 2.91 2.84
40  .25  1.36 1.44 1.42 1.40 1.39 1.37 1.36 1.35 1.34 1.33 1.32 1.31
    .10  2.84 2.44 2.23 2.09 2.00 1.93 1.87 1.83 1.79 1.76 1.73 1.71
    .05  4.08 3.23 2.84 2.61 2.45 2.34 2.25 2.18 2.12 2.08 2.04 2.00
    .01  7.31 5.18 4.31 3.83 3.51 3.29 3.12 2.99 2.89 2.80 2.73 2.66
60  .25  1.35 1.42 1.41 1.38 1.37 1.35 1.33 1.32 1.31 1.30 1.29 1.29
    .10  2.79 2.39 2.18 2.04 1.95 1.87 1.82 1.77 1.74 1.71 1.68 1.66
    .05  4.00 3.15 2.76 2.53 2.37 2.25 2.17 2.10 2.04 1.99 1.95 1.92
    .01  7.08 4.98 4.13 3.65 3.34 3.12 2.95 2.82 2.72 2.63 2.56 2.50
120 .25  1.34 1.40 1.39 1.37 1.35 1.33 1.31 1.30 1.29 1.28 1.27 1.26
    .10  2.75 2.35 2.13 1.99 1.90 1.82 1.77 1.72 1.68 1.65 1.62 1.60
    .05  3.92 3.07 2.68 2.45 2.29 2.17 2.09 2.02 1.96 1.91 1.87 1.83
    .01  6.85 4.79 3.95 3.48 3.17 2.96 2.79 2.66 2.56 2.47 2.40 2.34
200 .25  1.33 1.39 1.38 1.36 1.34 1.32 1.31 1.29 1.28 1.27 1.26 1.25
    .10  2.73 2.33 2.11 1.97 1.88 1.80 1.75 1.70 1.66 1.63 1.60 1.57
    .05  3.89 3.04 2.65 2.42 2.26 2.14 2.06 1.98 1.93 1.88 1.84 1.80
    .01  6.76 4.71 3.88 3.41 3.11 2.89 2.73 2.60 2.50 2.41 2.34 2.27
∞   .25  1.32 1.39 1.37 1.35 1.33 1.31 1.29 1.28 1.27 1.25 1.24 1.24
    .10  2.71 2.30 2.08 1.94 1.85 1.77 1.72 1.67 1.63 1.60 1.57 1.55
    .05  3.84 3.00 2.60 2.37 2.21 2.10 2.01 1.94 1.88 1.83 1.79 1.75
    .01  6.63 4.61 3.78 3.32 3.02 2.80 2.64 2.51 2.41 2.32 2.25 2.18
Table T.4 (Continued)

df for denominator N2, Pr; df for numerator N1 = 15 20 24 30 40 50 60 100 120 200 500 ∞

22  .25  1.36 1.34 1.33 1.32 1.31 1.31 1.30 1.30 1.30 1.29 1.29 1.28
    .10  1.81 1.76 1.73 1.70 1.67 1.65 1.64 1.61 1.60 1.59 1.58 1.57
    .05  2.15 2.07 2.03 1.98 1.94 1.91 1.89 1.85 1.84 1.82 1.80 1.78
    .01  2.98 2.83 2.75 2.67 2.58 2.53 2.50 2.42 2.40 2.36 2.33 2.31
24  .25  1.35 1.33 1.32 1.31 1.30 1.29 1.29 1.28 1.28 1.27 1.27 1.26
    .10  1.78 1.73 1.70 1.67 1.64 1.62 1.61 1.58 1.57 1.56 1.54 1.53
    .05  2.11 2.03 1.98 1.94 1.89 1.86 1.84 1.80 1.79 1.77 1.75 1.73
    .01  2.89 2.74 2.66 2.58 2.49 2.44 2.40 2.33 2.31 2.27 2.24 2.21
26  .25  1.34 1.32 1.31 1.30 1.29 1.28 1.28 1.26 1.26 1.26 1.25 1.25
    .10  1.76 1.71 1.68 1.65 1.61 1.59 1.58 1.55 1.54 1.53 1.51 1.50
    .05  2.07 1.99 1.95 1.90 1.85 1.82 1.80 1.76 1.75 1.73 1.71 1.69
    .01  2.81 2.66 2.58 2.50 2.42 2.36 2.33 2.25 2.23 2.19 2.16 2.13
28  .25  1.33 1.31 1.30 1.29 1.28 1.27 1.27 1.26 1.25 1.25 1.24 1.24
    .10  1.74 1.69 1.66 1.63 1.59 1.57 1.56 1.53 1.52 1.50 1.49 1.48
    .05  2.04 1.96 1.91 1.87 1.82 1.79 1.77 1.73 1.71 1.69 1.67 1.65
    .01  2.75 2.60 2.52 2.44 2.35 2.30 2.26 2.19 2.17 2.13 2.09 2.06
30  .25  1.32 1.30 1.29 1.28 1.27 1.26 1.26 1.25 1.24 1.24 1.23 1.23
    .10  1.72 1.67 1.64 1.61 1.57 1.55 1.54 1.51 1.50 1.48 1.47 1.46
    .05  2.01 1.93 1.89 1.84 1.79 1.76 1.74 1.70 1.68 1.66 1.64 1.62
    .01  2.70 2.55 2.47 2.39 2.30 2.25 2.21 2.13 2.11 2.07 2.03 2.01
40  .25  1.30 1.28 1.26 1.25 1.24 1.23 1.22 1.21 1.21 1.20 1.19 1.19
    .10  1.66 1.61 1.57 1.54 1.51 1.48 1.47 1.43 1.42 1.41 1.39 1.38
    .05  1.92 1.84 1.79 1.74 1.69 1.66 1.64 1.59 1.58 1.55 1.53 1.51
    .01  2.52 2.37 2.29 2.20 2.11 2.06 2.02 1.94 1.92 1.87 1.83 1.80
60  .25  1.27 1.25 1.24 1.22 1.21 1.20 1.19 1.17 1.17 1.16 1.15 1.15
    .10  1.60 1.54 1.51 1.48 1.44 1.41 1.40 1.36 1.35 1.33 1.31 1.29
    .05  1.84 1.75 1.70 1.65 1.59 1.56 1.53 1.48 1.47 1.44 1.41 1.39
    .01  2.35 2.20 2.12 2.03 1.94 1.88 1.84 1.75 1.73 1.68 1.63 1.60
120 .25  1.24 1.22 1.21 1.19 1.18 1.17 1.16 1.14 1.13 1.12 1.11 1.10
    .10  1.55 1.48 1.45 1.41 1.37 1.34 1.32 1.27 1.26 1.24 1.21 1.19
    .05  1.75 1.66 1.61 1.55 1.50 1.46 1.43 1.37 1.35 1.32 1.28 1.25
    .01  2.19 2.03 1.95 1.86 1.76 1.70 1.66 1.56 1.53 1.48 1.42 1.38
200 .25  1.23 1.21 1.20 1.18 1.16 1.14 1.12 1.11 1.10 1.09 1.08 1.06
    .10  1.52 1.46 1.42 1.38 1.34 1.31 1.28 1.24 1.22 1.20 1.17 1.14
    .05  1.72 1.62 1.57 1.52 1.46 1.41 1.39 1.32 1.29 1.26 1.22 1.19
    .01  2.13 1.97 1.89 1.79 1.69 1.63 1.58 1.48 1.44 1.39 1.33 1.28
∞   .25  1.22 1.19 1.18 1.16 1.14 1.13 1.12 1.09 1.08 1.07 1.04 1.00
    .10  1.49 1.42 1.38 1.34 1.30 1.26 1.24 1.18 1.17 1.13 1.08 1.00
    .05  1.67 1.57 1.52 1.46 1.39 1.35 1.32 1.24 1.22 1.17 1.11 1.00
    .01  2.04 1.88 1.79 1.70 1.59 1.52 1.47 1.36 1.32 1.25 1.15 1.00
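The F critical points can likewise be recomputed; a sketch (assuming scipy.stats) reproduces the N1 = 10, N2 = 9 example at the head of Table T.4:

```python
from scipy.stats import f

# Critical points for numerator df N1 = 10 and denominator df N2 = 9.
f05 = f.ppf(1 - 0.05, 10, 9)   # ~ 3.14
f01 = f.ppf(1 - 0.01, 10, 9)   # ~ 5.26
```

Note that `f.ppf` takes the numerator degrees of freedom first, matching the table's column/row convention.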
Table T.5 Percent Points of the Normal Probability Plot Correlation Coefficient

Level
n    .000 .005 .01  .025 .05  .10  .25  .50  .75  .90  .95  .975 .99  .995
3    .866 .867 .869 .872 .879 .891 .924 .966 .991 .999 1.000 1.000 1.000 1.000
4    .784 .813 .822 .845 .868 .894 .931 .958 .979 .992 .996 .998 .999 1.000
5    .726 .803 .822 .855 .879 .902 .935 .960 .977 .988 .992 .995 .997 .998
6    .683 .818 .835 .868 .890 .911 .940 .962 .977 .986 .990 .993 .996 .997
7    .648 .828 .847 .876 .899 .916 .944 .965 .978 .986 .990 .992 .995 .996
8    .619 .841 .859 .886 .905 .924 .948 .967 .979 .986 .990 .992 .995 .996
9    .595 .851 .868 .893 .912 .929 .951 .968 .980 .987 .990 .992 .994 .995
10   .574 .860 .876 .900 .917 .934 .954 .970 .981 .987 .990 .992 .994 .995
11   .556 .868 .883 .906 .922 .938 .957 .972 .982 .988 .990 .992 .994 .995
12   .539 .875 .889 .912 .926 .941 .959 .973 .982 .988 .990 .992 .994 .995
13   .525 .882 .895 .917 .931 .944 .962 .975 .983 .988 .991 .993 .994 .995
14   .512 .888 .901 .921 .934 .947 .964 .976 .984 .989 .991 .993 .994 .995
15   .500 .894 .907 .925 .937 .950 .965 .977 .984 .989 .991 .993 .994 .995
16   .489 .899 .912 .928 .940 .952 .967 .978 .985 .989 .991 .993 .994 .995
17   .478 .903 .916 .931 .942 .954 .968 .979 .986 .990 .992 .993 .994 .995
18   .469 .907 .919 .934 .945 .956 .969 .979 .986 .990 .992 .993 .995 .995
19   .460 .909 .923 .937 .947 .958 .971 .980 .987 .990 .992 .993 .995 .995
20   .452 .912 .925 .939 .950 .960 .972 .981 .987 .991 .992 .994 .995 .995
21   .445 .914 .928 .942 .952 .961 .973 .981 .987 .991 .993 .994 .995 .996
22   .437 .918 .930 .944 .954 .962 .974 .982 .988 .991 .993 .994 .995 .996
23   .431 .922 .933 .947 .955 .964 .975 .983 .988 .991 .993 .994 .995 .996
24   .424 .926 .936 .949 .957 .965 .975 .983 .988 .992 .993 .994 .995 .996
25   .418 .928 .937 .950 .958 .966 .976 .984 .989 .992 .993 .994 .995 .996
26   .412 .930 .939 .952 .959 .967 .977 .984 .989 .992 .993 .994 .995 .996
27   .407 .932 .941 .953 .960 .968 .977 .984 .989 .992 .994 .995 .995 .996
28   .402 .934 .943 .955 .962 .969 .978 .985 .990 .992 .994 .995 .995 .996
29   .397 .937 .945 .956 .962 .969 .979 .985 .990 .992 .994 .995 .995 .996
30   .392 .938 .947 .957 .964 .970 .979 .986 .990 .993 .994 .995 .996 .996
31   .388 .939 .948 .958 .965 .971 .980 .986 .990 .993 .994 .995 .996 .996
32   .383 .939 .949 .959 .966 .972 .980 .986 .990 .993 .994 .995 .996 .996
33   .379 .940 .950 .960 .967 .973 .981 .987 .991 .993 .994 .995 .996 .996
34   .375 .941 .951 .960 .967 .973 .981 .987 .991 .993 .994 .995 .996 .996
35   .371 .943 .952 .961 .968 .974 .982 .987 .991 .993 .995 .995 .996 .997
36   .367 .945 .953 .962 .968 .974 .982 .987 .991 .994 .995 .996 .996 .997
37   .364 .947 .955 .962 .969 .975 .982 .988 .991 .994 .995 .996 .996 .997
38   .360 .948 .956 .964 .970 .975 .983 .988 .992 .994 .995 .996 .996 .997
39   .357 .949 .957 .965 .971 .976 .983 .988 .992 .994 .995 .996 .996 .997
40   .354 .949 .958 .966 .972 .977 .983 .988 .992 .994 .995 .996 .996 .997
41   .351 .950 .958 .967 .972 .977 .984 .989 .992 .994 .995 .996 .996 .997
42   .348 .951 .959 .967 .973 .978 .984 .989 .992 .994 .995 .996 .997 .997
43   .345 .953 .959 .967 .973 .978 .984 .989 .992 .994 .995 .996 .997 .997
44   .342 .954 .960 .968 .973 .978 .984 .989 .992 .994 .995 .996 .997 .997
45   .339 .955 .961 .969 .974 .978 .985 .989 .993 .994 .995 .996 .997 .997
46   .336 .956 .962 .969 .974 .979 .985 .990 .993 .995 .995 .996 .997 .997
47   .334 .956 .963 .970 .974 .979 .985 .990 .993 .995 .996 .996 .997 .997
48   .331 .957 .963 .970 .975 .980 .985 .990 .993 .995 .996 .996 .997 .997
49   .329 .957 .964 .971 .975 .980 .986 .990 .993 .995 .996 .996 .997 .997
50   .326 .959 .965 .972 .977 .981 .986 .990 .993 .995 .996 .997 .997 .997
55   .315 .962 .967 .974 .978 .982 .987 .991 .994 .995 .996 .997 .997 .998
60   .305 .965 .970 .976 .980 .983 .988 .991 .994 .995 .996 .997 .998 .998
65   .296 .967 .972 .977 .981 .984 .989 .992 .994 .996 .997 .997 .998 .998
70   .288 .969 .974 .978 .982 .985 .989 .993 .995 .996 .997 .997 .998 .998
75   .281 .971 .975 .979 .983 .986 .990 .993 .995 .996 .997 .998 .998 .998
80   .274 .973 .976 .980 .984 .987 .991 .993 .995 .996 .997 .998 .998 .998
85   .268 .974 .977 .981 .985 .987 .991 .994 .995 .997
90   .263 .976 .978 .982 .985 .988 .991 .994 .996 .997
95   .257 .977 .979 .983 .986 .989 .992 .994 .996 .997
100  .252 .979 .981 .984 .987 .989 .992 .994 .996 .997

Source: J. J. Filliben (1975). "The Probability Plot Correlation Coefficient Test for Normality," Technometrics, 17 (1), 113.
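The correlation coefficient used by this test can be obtained from a normal probability plot. A sketch (assuming scipy; `probplot`'s third fitted value is the plot correlation r):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=20)   # simulated sample of size n = 20

# probplot regresses the ordered data on normal order-statistic medians;
# the third element of the fit tuple is the correlation coefficient r.
(osm, osr), (slope, intercept, r) = stats.probplot(x)

# Reject normality at the 5% level if r falls below the n = 20 entry
# in the .05 column of Table T.5 (.950).
reject = r < 0.950
```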
Table T.6 Simulation Percentiles of b₂

Sample   Percentiles
Size   1    2    2.5  5    10   20   80   90   95   97.5 98   99
7    1.25 1.30 1.34 1.41 1.53  —   2.78 3.20 3.55 3.85 3.93 4.23
8    1.31 1.37 1.40 1.46 1.58 1.75 2.84 3.31 3.70 4.09 4.20 4.53
9    1.35 1.42 1.45 1.53 1.63 1.80 2.98 3.43 3.86 4.28 4.41 4.82
10   1.39 1.45 1.49 1.56 1.68 1.85 3.01 3.53 3.95 4.40 4.55 5.00
12   1.46 1.52 1.56 1.64 1.76 1.93 3.06 3.55 4.05 4.56 4.73 5.20
15   1.55 1.61 1.64 1.72 1.84 2.01 3.13 3.62 4.13 4.66 4.85 5.30
20   1.65 1.71 1.74 1.82 1.95 2.13 3.21 3.68 4.17 4.68 4.87 5.36
25   1.72 1.79 1.83 1.91 2.03 2.20 3.23 3.68 4.16 4.65 4.82 5.30
30   1.79 1.86 1.90 1.98 2.10 2.26 3.25 3.68 4.11 4.59 4.75 5.21
35   1.84 1.91 1.95 2.03 2.14 2.31 3.27 3.68 4.10 4.53 4.68 5.13
40   1.89 1.96 1.98 2.07 2.19 2.34 3.28 3.67 4.06 4.46 4.61 5.04
45   1.93 2.00 2.03 2.11 2.22 2.37 3.28 3.65 4.00 4.39 4.52 4.94
50   1.95 2.03 2.06 2.15 2.25 2.41 3.28 3.62 3.99 4.33 4.45 4.88

Source: R. B. D'Agostino and G. L. Tietjen (1971). "Simulation Probability Points of b₂ for Small Samples," Biometrika, 58 (3), 670.

Table T.7 Simulation Probability Points of √b₁

     Two-sided Test
n    0.20  0.10  0.05  0.02  0.01  0.002
5    0.819 1.058 1.212 1.342 1.396 1.466
6    0.805 1.034 1.238 1.415 1.498 1.642
7    0.787 1.008 1.215 1.431 1.576 1.800
8    0.760 0.991 1.202 1.455 1.601 1.873
9    0.752 0.977 1.189 1.408 1.577 1.866
10   0.722 0.950 1.157 1.397 1.565 1.887
11   0.715 0.929 1.129 1.376 1.540 1.924
13   0.688 0.902 1.099 1.312 1.441 1.783
15   0.648 0.862 1.048 1.275 1.462 1.778
17   0.629 0.820 1.009 1.188 1.358 1.705
20   0.593 0.777 0.951 1.152 1.303 1.614
23   0.562 0.743 0.900 1.119 1.276 1.555
25   0.543 0.714 0.876 1.073 1.218 1.468
30   0.510 0.664 0.804 0.985 1.114 1.410
35   0.474 0.624 0.762 0.932 1.043 1.332

Source: R. B. D'Agostino and G. L. Tietjen (1973). "Approaches to the Null Distribution of √b₁," Biometrika, 60 (1), 172.
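The statistics tabled above are the moment coefficients of skewness and kurtosis: √b₁ = m₃/m₂^(3/2) and b₂ = m₄/m₂², where mₖ is the k-th sample moment about the mean. A sketch (assuming scipy.stats) computes both for a small hand-checkable sample:

```python
from scipy.stats import skew, kurtosis

x = [1, 2, 3, 4, 5]

# sqrt(b1) is the moment coefficient of skewness; b2 = m4 / m2**2 is the
# Pearson moment coefficient of kurtosis, hence fisher=False (no -3 shift).
sqrt_b1 = skew(x)                 # 0.0 for this symmetric sample
b2 = kurtosis(x, fisher=False)    # m4/m2^2 = 6.8/4 = 1.7
```

A computed b₂ (or √b₁) outside the tabled percentile bounds for the given sample size signals departure from normality.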
References

Afifi, A. A. and V. Clark (1984). Computer-Aided Multivariate Analysis, Lifetime Learning Publications, Belmont, CA.
Agresti, A. (1984). Analysis of Ordinal Categorical Data, Wiley, New York.
Agresti, A. (1990). Categorical Data Analysis, Wiley, New York.
Allen, S. J. and R. Hubbard (1986). "Regression Equations of the Latent Roots of Random Data Correlation Matrices with Unities on the Diagonal," Multivariate Behavioral Research, 21, 393-398.
Anderson, T. W. (1984). An Introduction to Multivariate Statistical Analysis, 2nd ed., Wiley, New York.
Andrews, F. M., L. Klem, T. N. Davidson, P. M. O'Malley, and W. L. Rodgers (1981). A Guide for Selecting Statistical Techniques for Analyzing Social Science Data, Institute for Social Research, Univ. of Michigan Press, Ann Arbor.
Bagozzi, R. P. (1980). "Performance and Satisfaction in an Industrial Sales Force: An Examination of Their Antecedents and Simultaneity," Journal of Marketing, 44 (Spring), 65-77.
Bearden, W. O., S. Sharma, and J. E. Teel (1982). "Sample Size Effects on Chi Square and Other Statistics Used in Evaluating Causal Models," Journal of Marketing Research, 19 (November 1982), 425-430.
Bentler, P. M. (1982). Theory and Implementation of EQS, A Structural Equations Program, BMDP Statistical Software, Inc., Los Angeles.
BIOMED (1990). BMDP Statistical Software Manual, vols. 1 & 2, W. J. Dixon (chief ed.), University of California Press, Los Angeles.
Bollen, K. A. (1989). Structural Equations with Latent Variables, Wiley, New York.
Bone, P. F., S. Sharma, and T. A. Shimp (1989). "A Bootstrap Procedure for Evaluating the Goodness-of-Fit Indices of Structural Equation and Confirmatory Factor Models," Journal of Marketing Research (February 1989), 105-111.
Cattell, R. B. (1966). "The Meaning and Strategic Use of Factor Analysis," in R. B. Cattell (ed.), Handbook of Multivariate Experimental Psychology, Rand McNally, Chicago.
Cliff, N. (1988).
\"The Eigenvalue-Greater-than-One Rule and the Reliability of Components.\" Psychological Bulletin, 103 (2), 27&-279. Cohen. 1. (1977). Statistical P()wer Analysis for the Behavioral Sciellces. Academic Press. New York. Costanza. M. C. and A. A. Affifi (1979). \"Comparison of Stopping Rules in Forward Stepwise Discriminant Analysis:' Journal ojrhe American Statistical Association. 74. i77-785. Cox.. D. R. and E. J. Snell (1989). The Analysis of Binary Data. 2nd ed.• Chapman & Hall. London. D'Agostino, R. B. and O. L. Tietjen (19'71). \"Simulation Probability Points of~ in Small Sam- ples,\" Biometrika. 58, 66~72. O'Agostino. R. B. and G. L. Tietjen (1973). \"Approaches to the Null Distribution of bl,\" Biometrika. 60. 169--173. Daniel. C. and F. S. Wood (1980). Fitting Equations to Data. Wiley. New York. Dillon, W. R. and M. Goldstein (l984). Mulrivariare Ana(~·sis. Wiley. New York. Efron, B. (1987). \"Bener Bootstrap Confidence Intervals,\" Journal of the American Statistical Societ}\". 82 (March), 171-185. 469
Etgar, M. (1976). "Channel Domination and Countervailing Power in Distributive Channels," Journal of Marketing Research, 13 (August), 254-262.
Everitt, B. S. (1979). "A Monte Carlo Investigation of the Robustness of Hotelling's One- and Two-Sample T2 Tests," Journal of the American Statistical Association, 74, 48-51.
Filliben, J. J. (1975). "The Probability Plot Correlation Coefficient Test for Normality," Technometrics, 17 (1), 111-117.
Freeman, D. H., Jr. (1987). Applied Categorical Data Analysis, Dekker, New York.
Gilbert, E. S. (1969). "The Effect of Unequal Variance-Covariance Matrices on Fisher's Linear Discriminant Function," Biometrics, 25, 505-516.
Glass, G. V. and K. Hopkins (1984). Statistical Methods in Education and Psychology, Prentice-Hall, Englewood Cliffs, N.J.
Glass, G. V., P. D. Peckham, and J. R. Sanders (1972). "Consequences of Failure to Meet Assumptions Underlying the Fixed Effects Analyses of Variance and Covariance," Review of Educational Research, 42, 237-288.
Glick, N. (1978). "Additive Estimators for Probabilities of Correct Classification," Pattern Recognition, 10, 211-222.
Gnanadesikan, R. (1977). Methods for Statistical Data Analysis of Multivariate Observations, Wiley, New York.
Goldstein, M. and W. R. Dillon (1978). Discrete Discriminant Analysis, Wiley, New York.
Green, P. E. (1976). Mathematical Tools for Applied Multivariate Analysis, Academic Press, New York.
Green, P. E. (1978). Analyzing Multivariate Data, Dryden, Hinsdale, Ill.
Guttman, L. (1953). "Image Theory for the Structure of Quantitative Variates," Psychometrika, 18, 277-296.
Haberman, S. J. (1978). Analysis of Qualitative Data, Academic Press, New York.
Hakstian, A. R., J. C. Roed, and J. C. Lind (1979). "Two-Sample T2 Procedures and the Assumption of Homogeneous Covariance Matrices," Psychological Bulletin, 86, 1255-1263.
Harman, H. H. (1976). Modern Factor Analysis, Univ. of Chicago Press, Chicago.
Hartigan, J. (1975). Clustering Algorithms, Wiley, New York.
Hayduk, L. A. (1987). Structural Equation Modeling with LISREL, Johns Hopkins Press, Baltimore.
Holloway, L. N. and O. J. Dunn (1967). "The Robustness of Hotelling's T2," Journal of the American Statistical Association, 62, 124-136.
Hopkins, J. W. and P. P. F. Clay (1963). "Some Empirical Distributions of Bivariate T2 and Homoscedasticity Criterion M under Unequal Variance and Leptokurtosis," Journal of the American Statistical Association, 58, 1048-1053.
Horn, J. L. (1965). "A Rationale and Test for the Number of Factors in Factor Analysis," Psychometrika, 30, 179-186.
Hosmer, D. W., Jr., and S. Lemeshow (1989). Applied Logistic Regression, Wiley, New York.
Huberty, C. J. (1984). "Issues in the Use and Interpretation of Discriminant Analysis," Psychological Bulletin, 95 (1), 156-171.
Jackson, J. E. (1991). A User's Guide to Principal Components, Wiley, New York.
Johnson, N. and D. Wichern (1988). Applied Multivariate Statistical Analysis, Prentice-Hall, Englewood Cliffs, N.J.
Joreskog, K. G. and D. Sorbom (1989). LISREL 7: A Guide to the Program and Applications, SPSS Inc., Chicago.
Kaiser, H. F. (1970). "A Second Generation Little Jiffy," Psychometrika, 35 (December), 401-415.
Kaiser, H. F. and J. Rice (1974). "Little Jiffy Mark IV," Educational and Psychological Measurement, 34 (Spring), 111-117.
Kenny, D. and C. Judd (1986). "Consequences of Violating the Independence Assumption in Analysis of Variance," Psychological Bulletin, 99, 422-431.
Lachenbruch, P. A. (1967). "An Almost Unbiased Method of Obtaining Confidence Intervals for the Probability of Misclassification in Discriminant Analysis," Biometrics, 23, 639-645.
Lachenbruch, P. A., C. Sneeringer, and L. T. Revo (1973). "Robustness of the Linear and Quadratic Discriminant Function to Certain Types of Non-normality," Communications in Statistics, 1, 39-57.
Long, S. J. (1983). Confirmatory Factor Models, Sage, Beverly Hills, Calif.
Maiti, S. S. and B. N. Mukherjee (1990). "A Note on Distributional Properties of the Joreskog and Sorbom Fit Indices," Psychometrika, 55 (December), 721-726.
Mardia, K. V. (1971). "The Effect of Non-normality on Some Multivariate Tests and Robustness to Non-normality in the Linear Model," Biometrika, 58, 105-121.
Marks, S. and O. J. Dunn (1974). "Discriminant Functions when Covariance Matrices are Unequal," Journal of the American Statistical Association, 69, 555-559.
Marsh, H. W., J. R. Balla, and R. McDonald (1988). "Goodness-of-Fit Indexes in Confirmatory Factor Analysis: The Effects of Sample Size," Psychological Bulletin, 103, 391-410.
McDonald, R. (1985). Factor Analysis and Related Techniques, Lawrence Erlbaum, Hillsdale, N.J.
McDonald, R. and H. W. Marsh (1990). "Choosing a Multivariate Model: Noncentrality and Goodness of Fit," Psychological Bulletin, 105, 430-445.
McDonald, R. and S. A. Mulaik (1979). "Determinacy of Common Factors: A Nontechnical Review," Psychological Bulletin, 86, 297-306.
McIntyre, R. M. and R. K. Blashfield (1980). "A Nearest-Centroid Technique for Evaluating the Minimum-Variance Clustering Procedure," Multivariate Behavioral Research, 15, 225-238.
McLachlan, G. J. (1974). "An Asymptotic Unbiased Technique for Estimating the Error Rates in Discriminant Analysis," Biometrics, 30, 239-249.
Milligan, G. W. (1980). "An Examination of the Effect of Six Types of Error Perturbation on Fifteen Clustering Algorithms," Psychometrika, 45, 325-342.
Milligan, G. W. (1981). "A Monte Carlo Study of Thirty Internal Criterion Measures for Cluster Analysis," Psychometrika, 46, 187-199.
Milligan, G. W. (1985). "An Examination of Procedures for Determining the Number of Clusters in a Data Set," Psychometrika, 50, 159-179.
Olson, C. L. (1974). "Comparative Robustness of Six Tests in Multivariate Analysis of Variance," Journal of the American Statistical Association, 69 (348), 894-907.
Punj, G. and D. W. Stewart (1983). "Cluster Analysis in Marketing Research: Review and Suggestions for Application," Journal of Marketing Research, 20 (May), 134-148.
Rummel, R. J. (1970). Applied Factor Analysis, Northwestern Univ. Press, Evanston, Ill.
SAS Institute Inc. (1993). SAS/STAT User's Guide, Vol. 1, Version 6.
Scariano, S. and J. Davenport (1986). "The Effects of Violations of the Independence Assumption in the One-Way ANOVA," The American Statistician, 41, 123-129.
Sharma, S., S. Durvasula, and W. R. Dillon (1989). "Some Results on the Behavior of Alternate Covariance Structure Estimation Procedures in the Presence of Non-Normal Data," Journal of Marketing Research, 26, 214-221.
Shimp, T. A. and A. Kavas (1984). "The Theory of Reasoned Action Applied to Coupon Usage," Journal of Consumer Research, 11 (December), 795-809.
Shimp, T. A. and S. Sharma (1987). "Consumer Ethnocentrism: Construction and Validation of the CETSCALE," Journal of Marketing Research, 24 (August), 280-289.
Siegel, S. (1956). Nonparametric Statistics for the Behavioral Sciences, McGraw-Hill, New York.
Sneath, P. and R. Sokal (1973). Numerical Taxonomy, Freeman, San Francisco.
Sobel, M. E. and G. W. Bohrnstedt (1985). "Use of Null Models in Evaluating the Fit of Covariance Structure Models," in N. B. Tuma (ed.), Sociological Methodology, 152-178, Jossey-Bass, San Francisco.
SOLO Power Analysis (1992). Version 1.0, BMDP Statistical Software Inc., Los Angeles.
Sparks, D. L. and W. T. Tucker (1971). "A Multivariate Analysis of Personality and Product Use," Journal of Marketing Research, 8 (February), 67-70.
Spearman, C. (1904). "'General Intelligence,' Objectively Determined and Measured," American Journal of Psychology, 15, 201-293.
Stevens, S. S. (1946). "On the Theory of Scales of Measurement," Science, 103, 677-680.
Stewart, D. K. and W. A. Love (1968). "A General Canonical Correlation Index," Psychological Bulletin, 70, 160-163.
Stewart, D. W. (1981). "The Application and Misapplication of Factor Analysis in Marketing Research," Journal of Marketing Research, 18 (February), 51-62.
Toussaint, G. T. (1974). "Bibliography on Estimation of Misclassification," IEEE Transactions on Information Theory, IT-20 (July), 472-479.
Tukey, J. W. (1977). Exploratory Data Analysis, Addison-Wesley, Reading, Mass.
Urban, G. and J. R. Hauser (1993). Design and Marketing of New Products, Prentice-Hall, N.J.
Velleman, P. F. and L. Wilkinson (1993). "Nominal, Ordinal, Interval, and Ratio Typologies Are Misleading," The American Statistician, 47 (1), 65-72.
Werts, C. E., R. L. Linn, and K. G. Joreskog (1974). "Intraclass Reliability Estimates: Testing Structural Assumptions," Educational and Psychological Measurement, 34, 25-33.
Wilk, M. B., S. S. Shapiro, and H. J. Chen (1968). "A Comparative Study of Various Tests for Normality," Journal of the American Statistical Association, 63, 1343-1372.
Winer, B. J. (1982). Statistical Principles in Experimental Design, 2nd ed., McGraw-Hill, New York.
Zwick, W. R. and W. F. Velicer (1986). "Comparison of Five Rules for Determining the Number of Components to Retain," Psychological Bulletin, 99 (3), 432-442.
Tables, Figures, and Exhibits

CHAPTER 1
Tables
1.1 Dependence Statistical Methods 6
1.2 Independent Variables Measured Using Nominal Scale 7
1.3 Attributes and Their Levels for Checking Account Example 9
1.4 Interdependence Statistical Methods 11
1.5 Contingency Table 12
Figures
1.1 Causal model 13
1.2 Causal model for unobservable constructs 13

CHAPTER 2
Figures
2.1 Points represented relative to a reference point 18
2.2 Change in origin and axes 18
2.3 Euclidean distance between two points 19
2.4 Vectors 20
2.5 Relocation or translation of vectors 20
2.6 Scalar multiplication of a vector 21
2.7 Vector addition 21
2.8 Vector subtraction
2.9 Vector projections 22
2.10 Vectors in a Cartesian coordinate system 23
2.11 Trigonometric functions 24
2.12 Length and direction cosines 24
2.13 Standard basis vectors 25
2.14 Linear combinations 26
2.15 Distance and angle between any two vectors 27
2.16 Geometry of vector projections and scalar products 28
2.17 Projection of a vector onto a subspace 29
2.18 Illustrative example 29
2.19 Change in basis 31
2.20 Representing points with respect to new axes 32
CHAPTER 3
Tables
3.1 Hypothetical Financial Data 37
3.2 Contingency Table 37
3.3 Hypothetical Financial Data for Groups 40
3.4 Transposed Mean-Corrected Data 48
Figures
3.1 Distribution for random variable 43
3.2 Hypothetical scatterplot of a bivariate distribution 44
3.3 Plot of data and points as vectors 45
3.4 Mean-corrected data 46
3.5 Plot of standardized data 47
3.6 Plot of data in observation space 49
3.7 Generalized variance 50

CHAPTER 4
Tables
4.1 Original, Mean-Corrected, and Standardized Data 59
4.2 Mean-Corrected Data and New Variable (x1*) for a Rotation of 10°
4.3 Variance Accounted for by the New Variable x1* for Various New Axes 61
4.4 Mean-Corrected Data, and x1* and x2* for the New Axes Making an Angle of 43.261° 62
4.5 SAS Statements 67
4.6 Standardized Principal Components Scores 70
4.7 Food Price Data 71
4.8 Regression Coefficients for the Principal Components 78
A4.1 PROC IML Commands 88
Figures
4.1 Plot of mean-corrected data and projection of points onto x1* 60
4.2 Percent of total variance accounted for by x1* 62
4.3 Plot of mean-corrected data and new axes 63
4.4 Representation of observations in lower-dimensional subspace 65
4.5 Scree plots 77
4.6 Plot of principal components scores 80
Exhibits
4.1 Principal components analysis for data in Table 4.1 69
4.2 Principal components analysis for data in Table 4.7 73
4.3 Principal components analysis on standardized data 74
A4.1 PROC IML output 89

CHAPTER 5
Tables
5.1 Communalities, Pattern and Structure Loadings, and Correlation Matrix for One-Factor Model 93
5.2 Communalities, Pattern and Structure Loadings, and Correlation Matrix for Two-Factor Model 95
5.3 Communalities, Pattern and Structure Loadings, Shared Variances, and Correlation Matrix for Alternative Two-Factor Model 98
5.4 Summary of Principal Components Factor Analysis for the Correlation Matrix of Table 5.2 105
5.5 Reproduced and Residual Correlation Matrices for PCF 106
5.6 Iteration History for Principal Axis Factor Analysis 108
5.7 SAS Commands 109
5.8 List of Attributes 123
5.9 Correlation Matrix for Detergent Study 124
5.10 SPSS Commands 125
A5.1 Varimax Rotation of 350° 139
A5.2 Variance of Loadings for Varimax Rotation 139
A5.3 Varimax Rotation of 320.057° 140
Figures
5.1 Relationship between grades and intelligence 91
5.2 Two-factor model 94
5.3 Two-indicator two-factor model 99
5.4 Indeterminacy due to estimates of communalities 100
5.5 Projection of vectors onto a two-dimensional factor space 101
5.6 Rotation of factor solution 101
5.7 Factor solution 102
5.8 Scree plot and plot of eigenvalues from parallel analysis 104
5.9 Confirmatory factor model for excellence 129
A5.1 Oblique factor model 140
A5.2 Pattern and structure loadings 141
Exhibits
5.1 Principal components analysis for the correlation matrix of Table 5.2 103
5.2 Principal axis factoring for the correlation matrix of Table 5.2 110
5.3 Quartimax rotation 121
5.4 SPSS output for detergent study 126

CHAPTER 6
Tables
6.1 Symbols Used by LISREL To Represent Parameter Matrices 149
6.2 Correlation Matrix 149
6.3 LISREL Commands for the One-Factor Model 150
6.4 LISREL Commands for the Null Model 161
6.5 Computations for NCP, MDN, TLI, and RNI for the One-Factor Model 161
6.6 LISREL Commands for the Two-Factor Model 166
6.7 Computations for NCP, MDN, TLI, and RNI for the Correlated Two-Factor Model 170
6.8 SPSS Commands for Multigroup Analysis 172
6.9 Results of Multigroup Analysis: Testing Factor Structure for Males and Females 173
6.10 Items or Statements for the 10-Item CET Scale 174
Q6.1 Hypothetical Correlation Matrix 178
A6.1 Value of the Likelihood Function for Various Values of p 182
A6.2 Maximum Likelihood Estimate for the Mean of a Normal Distribution 183
Figures
6.1 One-factor model 145
6.2 Two-factor model with correlated constructs 147
6.3 EGFI as a function of the number of indicators and sample size 158
6.4 Two-factor model 165
Q6.1 Model 178
Q6.2 Models 179
A6.1 Maximum likelihood estimation procedure 182
A6.2 Maximum likelihood estimation for mean of a normal distribution 184
Exhibits
6.1 LISREL output for the one-factor model 153
6.2 LISREL output (partial) for the two-factor model 167
6.3 Two-factor model with correlated constructs 168
6.4 LISREL output for the 10-item CETSCALE 175

CHAPTER 7
Tables
7.1 Hypothetical Data 186
7.2 Similarity Matrix Containing Euclidean Distances 188
7.3 Centroid Method: Five Clusters 189
7.4 Centroid Method: Four Clusters 189
7.5 Centroid Method: Three Clusters 190
7.6 Ward's Method 194
7.7 SAS Commands 194
7.8 Within-Group Sum of Squares and Degrees of Freedom for Clusters Formed in Steps 1, 2, 3, 4, and 5 199
7.9 Summary of the Statistics for Evaluating Cluster Solution 201
7.10 Initial Cluster Centroids, Distance from Cluster Centroids, and Initial Assignment of Observations 204
7.11 Centroid of the Three Clusters and Change in Cluster Centroids 204
7.12 Distance from Centroids and First Reassignment of Observations to Clusters 204
7.13 Initial Assignment, Cluster Centroids, and Reassignment 206
7.14 Initial Assignment 206
7.15 Change in ESS Due to Reassignment 207
7.16 SAS Commands for Nonhierarchical Clustering 207
7.17 Observations Selected as Seeds for Various Combinations of Radius and Replace Options 208
7.18 RS and RMSSTD for 2-, 3-, 4-, and 5-Cluster Solutions 210
7.19 Food Nutrient Data 222
7.20 Cluster Membership for the Four-Cluster Solution 227
7.21 Cluster Centers for Hierarchical Clustering of Food Nutrient Data 227
7.22 Commands for FASTCLUS Procedure 228
7.23 Correlation Matrix 232
A7.1 Using a Nonhierarchical Clustering Technique to Refine a Hierarchical Cluster Solution 235
Figures
7.1 Plot of hypothetical data 186
7.2 Dendrogram for hypothetical data 190
7.3 Plots of: (a) SPR and RS and (b) RMSSTD and CD 201
7.4 Hypothetical cluster configurations 217
7.5 City-block distance 218
7.6 Cluster analysis plots: (a) R square; (b) RMSSTD 226
Exhibits
7.1 SAS output for cluster analysis on data in Table 7.1 195
7.2 Nonhierarchical clustering on data in Table 7.1 209
7.3 Empirical comparisons of the performance of clustering algorithms 212
7.4 Hierarchical cluster analysis for food data 223
7.5 Nonhierarchical analysis for food-nutrient data 229

CHAPTER 8
Tables
8.1 Financial Data for Most-Admired and Least-Admired Firms 238
8.2 Summary Statistics for Various Linear Combinations 240
8.3 Discriminant Score and Classification for Most-Admired and Least-Admired Firms (w1 = .934 and w2 = .358) 243
8.4 Means, Standard Deviations, and t-values for Most- and Least-Admired Firms 245
8.5 SPSS Commands for Discriminant Analysis of Data in Table 8.1 246
8.6 Misclassification Costs 257
8.7 Classification Based on Mahalanobis Distance 259
8.8 Discriminant Scores, Classification, and Posterior Probability for Unequal Priors 262
8.9 Financial Data for Most-Admired and Least-Admired Firms 267
8.10 SPSS Commands for Stepwise Discriminant Analysis 268
8.11 Correlation Matrix for Discriminating Variables 272
8.12 SPSS Commands for Holdout Validation 274
A8.1 Misclassification Costs 279
A8.2 Summary of Classification Rules 281
A8.3 Illustrative Example 285
A8.4 TCM for Various Combinations of Misclassification Costs and Priors 286
Figures
8.1 Plot of data in Table 8.1 and new axis 239
8.2 Distributions of financial ratios 239
8.3 Plot of lambda versus theta 241
8.4 Examples of linear combinations 242
8.5 Plot of discriminant scores 244
A8.1 Classification in one-dimensional space 278
A8.2 Classification in two-dimensional space 279
A8.3 Density functions for one discriminating variable 280
A8.4 TCM as a function of cutoff value 286
Exhibits
8.1 Discriminant analysis for most-admired and least-admired firms 247
8.2 Multiple regression approach to discriminant analysis 263
8.3 Stepwise discriminant analysis 269

CHAPTER 9
Tables
9.1 Hypothetical Data for Four Groups 290
9.2 Lambda for Various Angles between Z and X1 291
9.3 SPSS Commands 294
9.4 Cases in Which Wilks' Lambda Is Exactly Distributed as F 298
9.5 SPSS Commands for Range Tests 300
9.6 SPSS Commands for the Beer Example 304
A9.1 Illustrative Example 312
A9.2 Conditions and Equations for Classification Regions 316
Figures
9.1 Hypothetical scatter plot 288
9.2 Plot of data in Table 9.1 290
9.3 Plot of rotation angle versus lambda 291
9.4 Classification in variable space 292
9.5 Classification in discriminant space 292
9.6 Plot of brands in discriminant space 308
9.7 Plot of brands and attributes 308
A9.1 Classification regions for three groups 314
A9.2 Group centroids 315
A9.3 Classification regions R1 to R4 316
Exhibits
9.1 Discriminant analysis for data in Table 9.1 295
9.2 Range tests for data in Table 9.1 301
9.3 SPSS output for the beer example 305
9.4 Range tests for the beer example 306

CHAPTER 10
Tables
10.1 Data for Most-Successful and Least-Successful Financial Institutions 318
10.2 Contingency Table for Type and Size of Financial Institution 318
10.3 SAS Commands for Logistic Regression 321
10.4 Classification Table 326
10.5 SAS Commands for Stepwise Logistic Regression 329
10.6 Classification Table for Cutoff Value of 0.5 332
A10.1 Values of the Maximum Likelihood Function for Different Values of β0 and β1 340
Figure
10.1 The logistic curve 320
Exhibits
10.1 Logistic regression analysis with one categorical variable as the independent variable 322
10.2 Contingency analysis output 328
10.3 Logistic regression for categorical and continuous variables 330
10.4 Discriminant analysis for data in Table 10.1 333
10.5 Logistic regression for mutual fund data 334

CHAPTER 11
Tables
11.1 Cell Means 344
11.2 MANOVA Computations 347
11.3 SPSS Commands 351
11.4 Hypothetical Data To Illustrate the Presence of Multivariate Significance in the Absence of Univariate Significance 353
11.5 Data for Drug Effectiveness Study 355
11.6 SPSS Commands for Drug Study 355
11.7 Coefficients for the Contrasts 358
11.8 SPSS Commands for Helmert Contrasts 360
11.9 Coefficients for Correlated Contrasts 363
11.10 SPSS Commands for Correlated Contrasts 364
11.11 Summary of Significance Tests 364
11.12 Data for the Ad Study 366
11.13 SPSS Commands for the Ad Study 367
11.14 Cell Means for Multivariate GENDER x AD Interaction 369
Figures
11.1 One dependent variable and one independent variable at two levels 343
11.2 Two dependent variables and one independent variable at two levels 343
11.3 More than one independent variable and two dependent variables 345
11.4 Presence of multivariate significance in the absence of univariate significance 354
11.5 GENDER x AD interaction 370
Exhibits
11.1 MANOVA for most-admired and least-admired firms 352
11.2 Multivariate significance, but no univariate significance 354
11.3 MANOVA for drug study 356
11.4 Helmert contrasts for drug study 361
11.5 SPSS output for correlated contrasts using the sequential method 365
11.6 MANOVA for ad study 368

CHAPTER 12
Tables
12.1 Hypothetical Data Simulated from Normal Distribution 376
12.2 Financial Data for Most-Admired and Least-Admired Firms 379
12.3 SPSS Commands 379
12.4 Ordered Squared Mahalanobis Distance and Chi-Square Value 381
12.5 Transformations To Achieve Normality 383
12.6 Data for Purchase Intention Study 385
Figures
12.1 Q-Q plot for data in Table 12.1 377
12.2 Q-Q plot for transformed data 378
12.3 Chi-square plot for total sample 382
12.4 Chi-square plot for ad awareness data 386
Exhibits
12.1 Univariate normality tests for data in Table 12.1 380
12.2 Partial MANOVA output for checking equality of covariance matrices assumption 387
12.3 Partial MANOVA output for checking equality of covariance matrices assumption for transformed data 388

CHAPTER 13
Tables
13.1 Hypothetical Data 392
13.2 Correlation between Various New Variables 394
13.3 Variables W1 and V1 396
13.4 SAS Commands for the Data in Table 13.1 398
Q13.1 Correlation Matrix: Product Use and Personality Trait 411
Q13.2 Results of the Canonical Analysis 412
Q13.3 Indicators of Canonical Association between Measures of Insurers' Power and Insurers' Sources 413
A13.1 PROC IML Commands for Canonical Correlation Analysis 417
Figures
13.1 Plot of predictor and criterion variables 393
13.2 New axes for Y and X variables 395
13.3 Geometrical illustration in subject space 397
Exhibits
13.1 Canonical correlation analysis on data in Table 13.1 399
13.2 Canonical correlation analysis for nutrition information study 407
A13.1 PROC IML output for canonical analysis 418
CHAPTER 14
Tables
14.1 Representation of Parameter Matrices of the Structural Model in LISREL 421
14.2 Hypothetical Covariance Matrix for the Model Given in Figure 14.2 422
14.3 LISREL Commands for the Model Given in Figure 14.2 422
14.4 Summary of Total, Direct, and Indirect Effects 426
14.5 Representation of Parameter Matrices of the Structural Model with Unobservable Constructs in LISREL 428
14.6 LISREL Commands for Structural Model with Unobservable Constructs 429
14.7 Summary of the Results for Structural Model with Unobservable Constructs 434
14.8 Goodness-of-Fit Indices for the Coupon Usage Model 438
14.9 Summary of the Results for the Respecified Coupon Usage Model 438
Figures
14.1 Structural or path model 420
14.2 Structural model for observable constructs 420
14.3 Structural model with unobserved constructs 427
14.4 Coupon usage model 436
A14.1 Structural model with observable constructs 446
A14.2 Structural model with unobservable constructs 447
A14.3 Structural model 450
A14.4 Indirect effects of length three 451
A14.5 Multiple indirect effects 451
Exhibits
14.1 LISREL output for the covariance matrix given in Table 14.2 423
14.2 LISREL output for structural model with unobservable constructs 431
14.3 LISREL output for coupon usage model 437
A14.1 Covariance matrix for structural model with observable constructs 447
A14.2 Covariance matrix for structural model with unobservable constructs 449
Index

A
Adjusted goodness-of-fit index, LISREL, 159
Akaike's information criteria, 324
Alpha factor analysis, 109
Analysis of variance (ANOVA)
  monotonic analysis of variance (MONANOVA), 8-9
  multivariate analysis of variance (MANOVA), 10
  with one dependent/more than one independent variable, 7
  situations for use, 7
ANOVA, see Analysis of variance (ANOVA)
Association coefficients, in cluster analysis, 220
Assumptions
  equality of covariance matrices assumption, 383-386
  independence assumption, 387-388
  normality assumptions, 375
Average-linkage method, hierarchical clustering method, 192-193
Axes, in Cartesian coordinate system, 17-19

B
Backward selection, stepwise discriminant analysis, 265
Bartlett's test, 76
  purpose of, 123
  sensitivity of, 123
Basis vectors, 25, 31
Bayesian theory
  objective of, 256
  posterior probabilities based on, 281
Bernoulli trial, 340
Between-group analysis, 41-42
  sum of squares and cross products matrix, 42
BIOMED
  clustering routines, 220
  structural model estimation, 426
Bootstrap method, discriminant function validation, 274
Box's M
  checking equality of covariance matrices, 384-386
  in multiple-group MANOVA, 351, 356

C
Canonical correlation
  analytic approach to, 397-398
  canonical variates, 401-402, 404
  change in scale, effect of, 415
  computer analysis, 398-406
  examples of use, 406-409, 412-418
  external validity of, 409
  as general technique, 409
  geometric view of, 391-397
  with more than one dependent/one or more independent variables, 9
  practical significance of, 404-406
  situations for use, 9, 391
  statistical significance tests for, 402-404
Canonical discriminant function, 251
  standardized, 253-254
Cartesian coordinate system, 17-19
  change in origin and axes, 18-19
  Euclidean distance, 19
  origin and axes in, 17-19
  rectangular Cartesian axes, 17
  representation of points, 17-18
  vectors in, 23-25
Central tendency measures, mean, 36
Centroid method, hierarchical clustering method, 188-191
Chaining effect, in hierarchical clustering methods, 211, 217
Chi-square difference test, 439
Chi-square goodness of fit test, 378
Chi-square plot, 381-382
  computer program for, 389-390
Classification
  classification function method, 257-258
  classification matrix, 255-256
  classification rate, evaluation of, 258-260
  computer analysis, 256-257, 261
  cutoff-value method, 255-256
  in discriminant analysis, 242-244, 278-284
  as independent procedure, 242, 244
  in logistic regression, 326-327
  Mahalanobis distance method, 258
  misclassification errors, 256-257, 261, 311-312
  for more than two groups, 311-312
  multiple-group discriminant analysis, 293, 303-304, 311-312, 313
  multivariate normal distributions, rules for, 281-283
  practical significance of, 260
  statistical decision theory, 256-257, 279-281
  statistical tests used, 258, 260
  total probability of misclassification, 280
Cluster analysis
  average-linkage method, 192-193
  centroid method, 188-191
  comparison of hierarchical/nonhierarchical methods, 211-217
  complete-linkage or farthest-neighbor method, 192
  computer analysis of, 193-202
  dendrogram in, 190-191
  examples of, 221-232
  external validity of solution, 221
  geometrical view of, 186-187
  hierarchical clustering methods, 188-193
  loss of homogeneity in, 200
  nonhierarchical clustering, 202-211
  objective of, 187
  Q-factor analysis, 187
  reliability of solution, 221
  root-mean-square total-sample standard deviation, 197, 198
  R-squared, 198, 200
  semipartial R-squared, 198, 200
  similarity measures, 187-188, 218-220
  single-linkage or nearest-neighbor method, 191
  situations for use, 12, 185
  Ward's method, 193
Common factors, 96, 108
Communality, 92
Communality estimation problem, and factor analysis, 136
Complete-linkage method, hierarchical clustering method, 192, 217
Computer programs, see BIOMED; Statistical Analysis System (SAS); Statistical Package for the Social Sciences (SPSS)
Concordant pair, 325, 326
Confirmatory factor analysis, 128
  LISREL, 148-177
  objectives of, 148
  situations for use, 144
Confusion matrix, 255-256
Conjoint analysis
  monotonic analysis of variance (MONANOVA), 8-9
  with one dependent/more than one independent variable, 8-9
Constrained analysis, LISREL, 171, 173
Contingency table analysis, in logistic regression, 327-328
Contrasts
  computer analysis, 360-366
  correlated contrasts, 363-366
  Helmert contrasts, 360-361
  multivariate significance tests for, 359-360, 363
  orthogonal contrasts, 357-363
  univariate significance tests for, 357-359, 360, 362-363
Correlated contrasts, in multiple-group MANOVA, 363-366
Correlation coefficient
  in cluster analysis, 220
  for standardized data, 39
Correlation matrix
  in confirmatory factor analysis, use in, 144-145
Correspondence analysis, situations for use, 12
Covariance matrix
  equality of covariance matrices assumption, 383-387
  and factor analysis, 144-145
  one-factor model with, 145-147
Cutoff-value method, 244, 255-256