1 Mean structure

Up to this point in class, we have largely focused on understanding models of covariance structure. In many circumstances, however, we may be interested in understanding the means as well.

Let’s consider a few examples:

  1. What is the mean difference in depression – measured by a multi-item factor – before and after cognitive behavior therapy?
  2. How does the mean level of conscientiousness (in a Big 5 sense) change over the life span in the population?
  3. Does a group who completes a mindfulness training show greater openness to experience than a group who completes a time management training?

In the depression example, notice how we are leveraging the power of SEM by specifying a measurement model for depression. This addresses measurement error formally (i.e., in the model) when testing for mean change over time. This should both increase the precision of our mean change estimate and reduce bias. This holds true for the conscientiousness question, too.

In the third example (mindfulness), we are interested in comparing means on a latent variable (openness) between groups who were randomized to different conditions. Although such questions are often addressed by ANOVA or t-tests, we can capture a similar phenomenon in SEM, with the added advantage of accessing measurement models.

1.1 Notation for mean structure

Here, we use the triangle with a 1 inside to denote the regression of a variable (e.g., the factor scores) on a constant (1).

When we regress any variable on a vector of 1s, the resulting coefficient is an estimate of its mean (assuming there are no other predictors).

When we regress a variable on a vector of 1s, along with predictors that vary between observations (e.g., personality test scores), then the regression on a constant yields the intercept – the expectation of the variable when all predictors are zero.
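As a quick sanity check in plain R (ordinary regression, not SEM), regressing on a constant recovers the mean, and the constant’s coefficient becomes the intercept once a predictor is added:

```r
# Regressing a variable on a constant (intercept-only model) recovers its mean.
set.seed(1)
y <- rnorm(100, mean = 5)
x <- rnorm(100)

m0 <- lm(y ~ 1)  # intercept-only: the single coefficient is mean(y)
m1 <- lm(y ~ x)  # with a predictor: the constant's coefficient is the intercept

all.equal(unname(coef(m0)[1]), mean(y))  # TRUE
```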

Also note my shorthand notation in the diagram to denote intercepts for observed variables – this avoids having to add many paths between the 1 triangle and each variable.

1.2 Additional parameter vectors that are added for means

  1. \(\boldsymbol{\nu}\): contains model-estimated intercepts (endogenous) and means (exogenous) for continuous observed variables.
  2. \(\boldsymbol{\alpha}\): contains model-estimated intercepts (endogenous) and means (exogenous) for continuous latent variables.
  3. \(\boldsymbol{\tau}\): contains model-estimated thresholds for categorical observed variables.

1.3 Identification concerns with mean structure

Kline Rule 15.4. The parameters of a model with a mean structure include the

  1. means of the exogenous variables (except errors);
  2. intercepts of the endogenous variables; and
  3. number of parameters in the covariance structure, counted in the usual way for that type of model.

Up to this point, the number of elements in the covariance matrix (aka ‘observations’) has been: \(p = k(k+1)/2\), where \(k\) is the number of variables.

With mean structure, the number of observations in the model is amended:

\[ \begin{equation} p = \frac{k(k+3)}{2} \end{equation} \]

For example, if we analyze a 5x5 covariance matrix along with the corresponding means, then we have: 10 covariances (\(k[k-1]/2\)), 5 variances (\(k\)), and 5 means (\(k\)) for a total of 20 observations (unique pieces of information about the data structure).
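These counts can be encoded as one-line helpers (a trivial sketch, just to make the bookkeeping concrete):

```r
# Unique observations available for model fitting
n_obs_cov  <- function(k) k * (k + 1) / 2  # covariance structure only
n_obs_mean <- function(k) k * (k + 3) / 2  # adding the k means

n_obs_cov(5)   # 15 (10 covariances + 5 variances)
n_obs_mean(5)  # 20 (plus 5 means)
```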

2 Equality constraints in SEM

An equality constraint specifies that the unstandardized estimates for two parameters in a model must be equal. In essence, instead of fitting two free parameters, the program fits only one. Why might this be useful? Constraints are primarily useful to test specific hypotheses.

2.1 Is the association between violent video games and aggressive behavior equal between men and women?

This is a path analysis in which we analyze men and women separately (dividing into subsamples) and examine whether an association parameter is equal between sexes.

grViz("
digraph sexdiff {
  # a 'graph' statement
  graph [overlap = true, fontsize = 12]

  node [shape = box, fontname = Arial]
  vgames_M; aggress_M; vgames_F; aggress_F;

  { rank = same; vgames_M; aggress_M }
  { rank = same; vgames_F; aggress_F }

  # edges
  vgames_M:s -> aggress_M:s [dir='both', label='a'];
  vgames_F:s -> aggress_F:s [dir='both', label='b'];
}")

In other words, does \(a\) equal \(b\)? From a parameter estimation standpoint, if \(a\) and \(b\) are free, we are estimating two parameters. If we enforce equality, then we are estimating only one: \(a := b\). Therefore, in general, models with equality constraints are nested within models that lack such constraints. This holds true because a free model can capture the case when \(a\) and \(b\) are equal, even if both are free parameters.
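A minimal lavaan sketch of this comparison, using simulated data with hypothetical column names (vgames, aggress, sex). In multiple-group syntax, c() assigns one label per group, so repeating a label enforces the equality:

```r
library(lavaan)

# simulate toy two-group data (all names here are hypothetical)
set.seed(11)
dat <- data.frame(sex = rep(c("M", "F"), each = 200), vgames = rnorm(400))
dat$aggress <- 0.4 * dat$vgames + rnorm(400)

free.syntax <- 'aggress ~ c(a, b) * vgames'  # separate slope per group
eq.syntax   <- 'aggress ~ c(a, a) * vgames'  # same label: slopes constrained equal

m_free <- sem(free.syntax, data = dat, group = "sex")
m_eq   <- sem(eq.syntax,   data = dat, group = "sex")
anova(m_eq, m_free)  # LRT: does enforcing a == b worsen fit?
```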

2.2 Is the relationship of “I worry a lot” with a neuroticism factor similar in the US, Canada, and Mexico?

This is a multiple-group analysis in which we have a covariance matrix for each country in three separate samples (each with \(N = 1000\)). The goal of the analysis is to check whether the factor loading for a specific item (“I worry a lot”) is equivalent across countries. One could imagine also testing a broader family of models in which we equate all loadings across countries to examine measurement invariance, discussed below.

2.3 Is the correlation between Anger and Sadness equal to the correlation between Happiness and Wellbeing?

In a four-factor model, we might be interested in whether the association of Anger and Sadness is equal in magnitude to the association of Happiness and Wellbeing. Remember that we constrain unstandardized estimates, so for interpretability, we might apply a unit variance identification (UVI) strategy so that the variances of the factors are on the same scale; the covariances would then be on a comparable scale as well.

2.4 Take home message about constraints

There are many canonical examples of how constraints are used in the SEM literature. For example, in formal tests of measurement invariance, factor loadings are often equated to enforce similar associations between a factor and its indicators in different groups.

However, constraints are a more general tool for you as a researcher to test a hypothesis about the components of a SEM. Like modification indices, constraints should be well-founded conceptually, but are open to abuse or confusion. It is up to you as a scientist to decide whether testing a nested model in which some parameters are equated is a meaningful alternative account of the data. Simply observing that two parameter estimates are around 0.8 should not motivate you to test a model in which they are equated.

Comparing equality-constrained versus free (nested) models is a test of approximate equivalence. That is, the free model will, by definition, fit at least as well because it has more free parameters. However, does it fit substantially (or statistically significantly) better? If the nested model comparison is not significant, it suggests that the model does not fit substantially worse with the addition of equality constraints. In general, the principle of parsimony should tilt our preference toward models that fit the data well using fewer parameters. Thus, if the constrained model is not inferior, it is often preferred.

2.5 Statistical tests of constraints

The most common approach to examining the effect of constraints on fit is nested model comparison using a likelihood ratio test (LRT). This is also sometimes called a chi-square difference test; the terms are usually interchangeable. As a reminder, the model chi-square difference test is based on the difference of the model \(\chi^2\) for the constrained versus free models and the difference in degrees of freedom (constrained - free). The difference in \(\chi^2\) is itself \(\chi^2\) distributed with degrees of freedom equal to the difference in degrees of freedom between the models.
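The arithmetic of the test is simple enough to sketch by hand (the chi-square and df values passed in below are illustrative):

```r
# Chi-square difference test: the difference in model chi-squares is itself
# chi-square distributed, with df equal to the difference in model dfs.
chisq_diff_test <- function(chisq_con, df_con, chisq_free, df_free) {
  stat <- chisq_con - chisq_free
  df   <- df_con - df_free
  c(stat = stat, df = df, p = pchisq(stat, df = df, lower.tail = FALSE))
}

chisq_diff_test(39.52, 22, 27.26, 19)  # illustrative values
```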

If the LRT/\(\chi^2\) difference is significant (often \(p < .05\)), it indicates that the more constrained model fits significantly worse than the free model. But as we’ve discussed elsewhere, this test can become overpowered in large samples, so examining parameter differences between the free and constrained models is essential to get a sense of the practical differences. For example, if two factor loadings are 0.9 and 0.8 in the free case, but 0.86 in the constrained case, what would we do if the LRT had \(p = .02\)? How bad are the residuals?

Note that the effect of constraints (or dropping paths, for that matter) can also be quantified using a Wald W statistic. This estimates the expected increase in the model \(\chi^2\) if the parameter constraints were introduced to a free model. It is asymptotically equivalent (i.e., as sample size increases) to the LRT/model chi-square difference test under most scenarios. The practical advantage of a Wald test is that one can test the effect of constraints without refitting the model or specifying new syntax. This is particularly handy for models that are slow to estimate and that you therefore don’t want to refit.

Unlike the Wald W statistic, which is usually used to examine how model fit worsens with the addition of constraints, a modification index for a given path estimates the improvement in model \(\chi^2\) if the path were included. Thus, if we constrain a number of paths to be equal, then examine the modification indices, we may learn which paths are unlikely to be equal – as indicated by a large modification index.

Modification indices are based on the Lagrange multiplier test (LMT), which (in this context) is a test of whether a candidate omitted parameter is equal to zero. Remember that if we don’t specify a parameter in the model (e.g., a cross-loading), it is assumed to be zero. Although the LMT is usually computed on individual parameters (e.g., the table of modification indices), one can compute an LMT on a set of tests such as the hypothesis that all five potential cross-loadings are zero. See lavTestScore for details.
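Here is a self-contained sketch of lavTestScore on a toy one-factor model (data and names invented for illustration): fit with all loadings equated via a shared label, then ask the score (LM) test about releasing the equality constraints:

```r
library(lavaan)

# toy data: one factor, three indicators with unequal true loadings
set.seed(22)
f <- rnorm(300)
d <- data.frame(y1 = 0.5 * f + rnorm(300, sd = 0.8),
                y2 = 0.9 * f + rnorm(300, sd = 0.5),
                y3 = 0.7 * f + rnorm(300, sd = 0.7))

# equate all three loadings via the shared label 'a'; UVI for the factor
fit <- cfa('lat =~ NA*y1 + a*y1 + a*y2 + a*y3
            lat ~~ 1*lat', data = d)

lavTestScore(fit)  # score (LM) test of releasing the equality constraints
```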

If you’re curious about the relationships among these parameter equivalence testing approaches, see this: https://stats.idre.ucla.edu/other/mult-pkg/faq/general/faqhow-are-the-likelihood-ratio-wald-and-lagrange-multiplier-score-tests-different-andor-similar/.

2.6 How to test constraints in lavaan

Let’s return to our example from last week with Fear of Global Warming and Concern for Animal Welfare as predictors of individual charitable contributions to environmental groups. For the moment, I’ve left out the donations part to focus only on the measurement model. What if we wanted to test the hypothesis that all of the indicators of FoGWar are equally associated with the construct? (As we’ll see below, this is called tau equivalence.)

Raw correlation matrix:

#We will rig things a bit to have the factor loadings of the 4 indicators vary: .6, .7, .8, and .7.

#this is just the syntax to simulate the data
demo.model <- '

l_FoGWar =~ .6*FoGWar1 + .7*FoGWar2 + .8*FoGWar3 + .7*FoGWar4  #definition of factor FoGWar with loadings on 4 items
l_CoAWe =~ .75*CoAWe5 + .85*CoAWe6 + .8*CoAWe7 + .8*CoAWe8  #definition of factor CoAWe with loadings on 4 items

l_FoGWar ~~ 0.4*l_CoAWe

#error variance in inverse proportion to factor loadings
FoGWar1 ~~ (1-.6^2)*FoGWar1
FoGWar2 ~~ (1-.7^2)*FoGWar2
FoGWar3 ~~ (1-.8^2)*FoGWar3
FoGWar4 ~~ (1-.7^2)*FoGWar4
CoAWe5 ~~ (1-.75^2)*CoAWe5
CoAWe6 ~~ (1-.85^2)*CoAWe6
CoAWe7 ~~ (1-.8^2)*CoAWe7
CoAWe8 ~~ (1-.8^2)*CoAWe8
'

# generate data; note, standardized lv is default
simData <- lavaan::simulateData(demo.model, sample.nobs=500)

# round(cor(simData), 2)
ggcorrplot(cor(simData))

#note: we specify FoGWar1 twice as an indicator 
#the first NA* tells lavaan to free the loading
#the second f1* provides a parameter label for the loading so we can constrain it later

msem.syntax <- '
#definition of factor FoGWar with loadings on 4 items
l_FoGWar =~ NA*FoGWar1 + f1*FoGWar1 + f2*FoGWar2 + f3*FoGWar3 + f4*FoGWar4

#definition of factor CoAWe with loadings on 4 items
l_CoAWe =~ NA*CoAWe5 + CoAWe6 + CoAWe7 + CoAWe8

#unit variance identification (since we want to compare all factor loadings)
l_FoGWar ~~ 1*l_FoGWar
l_CoAWe ~~ 1*l_CoAWe

l_FoGWar ~~ l_CoAWe
'

msem <- sem(msem.syntax, simData) #standardize factor variances to avoid first loading at 1.0.
#summary(msem, fit.measures=TRUE)
fitMeasures(msem)[c("chisq", "df", "pvalue", "rmsea", "srmr", "cfi")]
##   chisq      df  pvalue   rmsea    srmr     cfi 
## 27.2557 19.0000  0.0988  0.0295  0.0290  0.9952
semPaths_default(msem)

inspect(msem)
## $lambda
##         l_FGWr l_CoAW
## FoGWar1      1      0
## FoGWar2      2      0
## FoGWar3      3      0
## FoGWar4      4      0
## CoAWe5       0      5
## CoAWe6       0      6
## CoAWe7       0      7
## CoAWe8       0      8
## 
## $theta
##         FoGWr1 FoGWr2 FoGWr3 FoGWr4 CoAWe5 CoAWe6 CoAWe7 CoAWe8
## FoGWar1 10                                                     
## FoGWar2  0     11                                              
## FoGWar3  0      0     12                                       
## FoGWar4  0      0      0     13                                
## CoAWe5   0      0      0      0     14                         
## CoAWe6   0      0      0      0      0     15                  
## CoAWe7   0      0      0      0      0      0     16           
## CoAWe8   0      0      0      0      0      0      0     17    
## 
## $psi
##          l_FGWr l_CoAW
## l_FoGWar 0            
## l_CoAWe  9      0

2.6.1 Wald test

This is implemented by the lavTestWald function in lavaan:

con <- '
f1==f2
f1==f3
f1==f4
'

lavTestWald(msem, constraints = con)
## $stat
## [1] 12.4
## 
## $df
## [1] 3
## 
## $p.value
## [1] 0.00625
## 
## $se
## [1] "standard"

Note that the constraints must be non-redundant for the Wald test to be estimable. Thus, after we specify f1==f2 and f1==f3, f2==f3 is implied (transitivity), and we should not include it in the syntax.

Here, we find a significant p-value, suggesting that the fit is significantly worse if we force the loadings of FoGWar indicators to be equal.

2.6.2 Likelihood ratio test / model chi-square difference test

The anova function in lavaan will compute the LRT for nested models. It requires the researcher to fit two nested models, then compare them. Here, we can fit the constrained model where the loadings for all items on FoGWar are equal.

msem_eq.syntax <- '
l_FoGWar =~ NA*FoGWar1 + f1*FoGWar1 + f1*FoGWar2 + f1*FoGWar3 + f1*FoGWar4  #equate loadings by using the same parameter label
l_CoAWe =~ NA*CoAWe5 + CoAWe6 + CoAWe7 + CoAWe8

l_FoGWar ~~ 1*l_FoGWar
l_CoAWe ~~ 1*l_CoAWe
l_FoGWar ~~ l_CoAWe
'

#same model, but using unit loading identification (ULI) approach
# msem_eq.syntax <- '
# #equate loadings by using the same parameter label
# l_FoGWar =~ f1*FoGWar1 + f1*FoGWar2 + f1*FoGWar3 + f1*FoGWar4  
# 
# l_CoAWe =~ NA*CoAWe5 + CoAWe6 + CoAWe7 + CoAWe8
# 
# l_FoGWar ~~ l_FoGWar #free factor variance
# l_CoAWe ~~ 1*l_CoAWe
# l_FoGWar ~~ l_CoAWe
# '

msem_eq <- sem(msem_eq.syntax, simData) #standardize factor variances to avoid first loading at 1.0.
#summary(msem_eq, fit.measures=TRUE)
fitMeasures(msem_eq)[c("chisq", "df", "pvalue", "rmsea", "srmr", "cfi")]
##   chisq      df  pvalue   rmsea    srmr     cfi 
## 39.5229 22.0000  0.0123  0.0399  0.0426  0.9897
semPaths_default(msem_eq)

inspect(msem_eq)
## 
## Note: model contains equality constraints:
## 
##   lhs op rhs
## 1   1 ==   2
## 2   1 ==   3
## 3   1 ==   4
## 
## $lambda
##         l_FGWr l_CoAW
## FoGWar1      1      0
## FoGWar2      2      0
## FoGWar3      3      0
## FoGWar4      4      0
## CoAWe5       0      5
## CoAWe6       0      6
## CoAWe7       0      7
## CoAWe8       0      8
## 
## $theta
##         FoGWr1 FoGWr2 FoGWr3 FoGWr4 CoAWe5 CoAWe6 CoAWe7 CoAWe8
## FoGWar1 10                                                     
## FoGWar2  0     11                                              
## FoGWar3  0      0     12                                       
## FoGWar4  0      0      0     13                                
## CoAWe5   0      0      0      0     14                         
## CoAWe6   0      0      0      0      0     15                  
## CoAWe7   0      0      0      0      0      0     16           
## CoAWe8   0      0      0      0      0      0      0     17    
## 
## $psi
##          l_FGWr l_CoAW
## l_FoGWar 0            
## l_CoAWe  9      0

By using the same parameter label f1, lavaan constrains those parameters to be equal. This is a convenient way to specify equality constraints.

anova(msem, msem_eq)
        Df  AIC  BIC Chisq Chisq diff Df diff Pr(>Chisq)
msem    19 9760 9831  27.3         NA      NA         NA
msem_eq 22 9766 9825  39.5       12.3       3      0.007

We see the same effect for the LRT as the Wald test (no surprise there), suggesting worse fit under constrained loadings.

Altogether, note that we achieved two hypothesis tests using different means. In the lavTestWald approach, we compute the decrease in fit (as indicated by higher \(\chi^2\)) when we constrain FoGWar factor loadings to be equal, based on a model in which the loadings are freely estimated. In the conventional LRT or \(\chi^2\) difference test, we fit both the free and constrained models and test the difference in fit based on the difference in likelihoods.

3 Constraints in measurement models: congeneric, tau equivalent, and parallel specifications

In measurement models, there is a useful set of constraints one can apply to understand the similarity of associations between indicators and latent variables. In a typical CFA, we estimate factor loadings for each indicator (excepting the reference if we use the unit loading identification strategy). As we’ve discussed, the standardized loadings provide an interpretable measure of how associated each indicator (item) is with the factor. Items with lower factor loadings reflect the latent construct to a lesser degree.

What if, however, we wish to test the hypothesis that all items measure the construct equally well? Likewise, what if we also wish to know if all items have equal precision in their measurement (i.e., proportion of item variance explained by the factor)? These are hypotheses that can be tested using constraints.

3.1 Congeneric

The first type of factor model is called congeneric. This is what we’ve come to expect of vanilla CFA: we specify that certain items load onto certain factors. Factor loadings and error variances (disturbances) are freely estimated for each indicator (excepting the reference indicator under ULI). A congeneric model has the following properties:

  1. Factor loadings are freely estimated for each indicator
  2. Disturbances (aka measurement error, error variance) are freely estimated for each indicator
  3. Disturbances are independent (conditional independence assumption)
  4. Indicators load onto only one latent variable (no cross-loadings)

Under these assumptions, the observed correlations among indicators reflect the influence of the latent variables, but we do not assume that a) the proportion of true score (factor score) variance is equal across indicators, or b) that the latent construct scales with item responses equally.

Congeneric structure is not guaranteed to hold. For example, if we have ‘complex indicators’ with cross-loadings, or if there are correlated errors among indicators, congenericity is not satisfied.

3.2 Tau equivalent

Congeneric models already have certain constraints on structure, namely that certain items load onto certain factors according to our hypotheses. Unlike EFA, where all items technically load onto all factors, we typically fix many elements in \(\lambda\) to zero to enforce a hypothesis about latent structure. If the congeneric model fails, we would not usually move on to test more restrictive measurement models. But if the congeneric model fits well, we might consider additional measurement tests.

First, we will consider what’s called tau-equivalence, defined as:

  1. Loadings on a given factor are equal
  2. Error variances are free

Note we inherit the other features of the congeneric model, namely independent errors and no cross-loadings.

If loadings are equal, then each indicator has an equivalent association to the underlying construct. Therefore, a one-unit change in the underlying factor is associated with the same increase in the response to each indicator.

Note that measures of internal consistency are implicitly based on a tau-equivalent model. This is part of why coefficient alpha is technically a lower bound on the test’s reliability. If we do not have a tau-equivalent factor model, coefficient alpha will underestimate the reliability of the scale. For details, see Graham 2006 Educational and Psychological Measurement.
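We can illustrate the lower-bound property directly from population covariance matrices (a sketch in base R; the loading values are invented). Under a one-factor model with unit-variance items, the model-implied reliability is the omega-type quantity \((\Sigma\lambda)^2 / [(\Sigma\lambda)^2 + \Sigma\theta]\):

```r
# Coefficient alpha from a covariance matrix (standard formula)
coef_alpha <- function(S) {
  k <- ncol(S)
  (k / (k - 1)) * (1 - sum(diag(S)) / sum(S))
}

# Population covariance matrix and true reliability implied by a
# one-factor model with unit-variance items
pop_cov <- function(lam) tcrossprod(lam) + diag(1 - lam^2)
omega   <- function(lam) sum(lam)^2 / (sum(lam)^2 + sum(1 - lam^2))

lam_eq   <- c(.7, .7, .7, .7)  # tau-equivalent: alpha equals the reliability
lam_uneq <- c(.9, .8, .5, .4)  # not tau-equivalent: alpha underestimates it

c(alpha = coef_alpha(pop_cov(lam_eq)),   omega = omega(lam_eq))
c(alpha = coef_alpha(pop_cov(lam_uneq)), omega = omega(lam_uneq))
```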

3.3 Parallel

If we meet the assumptions of a tau-equivalent model, we might consider a further restriction called parallel structure.

  1. Loadings on a given factor are equal
  2. Error variances for indicators of a factor are equal

The additional requirement of equal error variances (disturbances) tests a model in which each indicator has the same level of precision. This structure has two major implications. First, the amount of item variation attributable to the true score (in a CTT sense) is equal for all items. Second, the expected value of each of the items is the same – i.e., the items are fungible. A parallel indicators model relates in part to the practice of developing parallel forms of a test, where we wish to use subsets of items from a broader pool to assess the same construct. To be confident that we are indeed measuring the same construct, we need to satisfy the parallel constraints. That is, the model with parallel indicators should not fit worse than a tau-equivalent model, which in turn should not fit worse than the congeneric model.
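In lavaan syntax, following the labeling approach used elsewhere in these notes, a parallel specification for FoGWar would add shared labels on the error variances as well (a sketch only; the e1 label is an arbitrary choice):

```r
msem_par.syntax <- '
#equal loadings AND equal error variances for FoGWar indicators
l_FoGWar =~ NA*FoGWar1 + f1*FoGWar1 + f1*FoGWar2 + f1*FoGWar3 + f1*FoGWar4
l_CoAWe =~ NA*CoAWe5 + CoAWe6 + CoAWe7 + CoAWe8

FoGWar1 ~~ e1*FoGWar1
FoGWar2 ~~ e1*FoGWar2
FoGWar3 ~~ e1*FoGWar3
FoGWar4 ~~ e1*FoGWar4

l_FoGWar ~~ 1*l_FoGWar
l_CoAWe ~~ 1*l_CoAWe
l_FoGWar ~~ l_CoAWe
'
#fit and compare to the tau-equivalent model, e.g.:
#msem_par <- sem(msem_par.syntax, simData); anova(msem_eq, msem_par)
```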

3.4 Testing these constraints in practice

Going back to our FoGWar and CoAWe example, we can examine whether we can satisfy congeneric, tau-equivalent, and/or parallel structure. These structural tests should be applied sequentially. For example, if we do not have tau equivalence, we cannot have parallel structure. But, we can evaluate these structural tests for each factor individually – one may be parallel, the other may be only congeneric.

3.4.1 Congeneric

Though we didn’t use the nomenclature, our initial two-factor CFA model with freed loadings is the congeneric case. As a reminder:

summary(msem, standardized=TRUE)
## lavaan 0.6-3 ended normally after 19 iterations
## 
##   Optimization method                           NLMINB
##   Number of free parameters                         17
## 
##   Number of observations                           500
## 
##   Estimator                                         ML
##   Model Fit Test Statistic                      27.256
##   Degrees of freedom                                19
##   P-value (Chi-square)                           0.099
## 
## Parameter Estimates:
## 
##   Information                                 Expected
##   Information saturated (h1) model          Structured
##   Standard Errors                             Standard
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   l_FoGWar =~                                                           
##     FoGWar1   (f1)    0.665    0.046   14.596    0.000    0.665    0.636
##     FoGWar2   (f2)    0.715    0.044   16.433    0.000    0.715    0.699
##     FoGWar3   (f3)    0.843    0.042   20.032    0.000    0.843    0.816
##     FoGWar4   (f4)    0.747    0.041   18.089    0.000    0.747    0.754
##   l_CoAWe =~                                                            
##     CoAWe5            0.737    0.039   18.728    0.000    0.737    0.750
##     CoAWe6            0.857    0.039   22.067    0.000    0.857    0.842
##     CoAWe7            0.801    0.038   20.907    0.000    0.801    0.811
##     CoAWe8            0.780    0.040   19.735    0.000    0.780    0.779
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   l_FoGWar ~~                                                           
##     l_CoAWe           0.375    0.046    8.117    0.000    0.375    0.375
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##     l_FoGWar          1.000                               1.000    1.000
##     l_CoAWe           1.000                               1.000    1.000
##    .FoGWar1           0.650    0.048   13.616    0.000    0.650    0.595
##    .FoGWar2           0.536    0.042   12.676    0.000    0.536    0.511
##    .FoGWar3           0.358    0.039    9.256    0.000    0.358    0.335
##    .FoGWar4           0.424    0.037   11.399    0.000    0.424    0.432
##    .CoAWe5            0.422    0.033   12.920    0.000    0.422    0.437
##    .CoAWe6            0.302    0.029   10.276    0.000    0.302    0.291
##    .CoAWe7            0.333    0.029   11.424    0.000    0.333    0.342
##    .CoAWe8            0.395    0.032   12.324    0.000    0.395    0.393

Here, we have labeled the factor loadings for FoGWar, but we have not constrained them to be equal.

Note that the loadings for FoGWar are similar, but perhaps not equivalent. They range from .64 to .82 (standardized). The question of tau equivalence is whether the loadings can be constrained to equality without reducing fit substantially. As we saw above, by using the same parameter label for a set of indicators, lavaan will enforce an equality constraint.

3.4.2 Tau equivalent

Above, we tested a model in which we equated FoGWar loadings. This is the tau equivalent model (for FoGWar, anyhow). As a reminder, the syntax looks like this:

msem_eq.syntax <- '
  #equate loadings by using the same parameter label
  l_FoGWar =~ NA*FoGWar1 + f1*FoGWar1 + f1*FoGWar2 + f1*FoGWar3 + f1*FoGWar4  
  l_CoAWe =~ NA*CoAWe5 + CoAWe6 + CoAWe7 + CoAWe8
  
  l_FoGWar ~~ 1*l_FoGWar
  l_CoAWe ~~ 1*l_CoAWe
  l_FoGWar ~~ l_CoAWe
'

As noted above, the premultiplication of FoGWar1 by NA instructs lavaan to free the loading for the first indicator (as opposed to fixing it to 1.0). The second premultiplication, by f1, instructs lavaan to constrain all parameters (here, FoGWar loadings) that share the f1 label to be equal. This is the unit variance identification (UVI) approach. Look in the R code above for the unit loading identification (ULI) approach, which is commented out.

Here is the fit of the tau equivalent model:

summary(msem_eq, standardized=TRUE)
## lavaan 0.6-3 ended normally after 17 iterations
## 
##   Optimization method                           NLMINB
##   Number of free parameters                         17
##   Number of equality constraints                     3
## 
##   Number of observations                           500
## 
##   Estimator                                         ML
##   Model Fit Test Statistic                      39.523
##   Degrees of freedom                                22
##   P-value (Chi-square)                           0.012
## 
## Parameter Estimates:
## 
##   Information                                 Expected
##   Information saturated (h1) model          Structured
##   Standard Errors                             Standard
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   l_FoGWar =~                                                           
##     FoGWar1   (f1)    0.750    0.029   25.901    0.000    0.750    0.687
##     FoGWar2   (f1)    0.750    0.029   25.901    0.000    0.750    0.722
##     FoGWar3   (f1)    0.750    0.029   25.901    0.000    0.750    0.759
##     FoGWar4   (f1)    0.750    0.029   25.901    0.000    0.750    0.755
##   l_CoAWe =~                                                            
##     CoAWe5            0.737    0.039   18.731    0.000    0.737    0.750
##     CoAWe6            0.858    0.039   22.080    0.000    0.858    0.842
##     CoAWe7            0.800    0.038   20.898    0.000    0.800    0.811
##     CoAWe8            0.780    0.040   19.731    0.000    0.780    0.779
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   l_FoGWar ~~                                                           
##     l_CoAWe           0.380    0.046    8.217    0.000    0.380    0.380
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##     l_FoGWar          1.000                               1.000    1.000
##     l_CoAWe           1.000                               1.000    1.000
##    .FoGWar1           0.628    0.047   13.323    0.000    0.628    0.528
##    .FoGWar2           0.517    0.040   12.775    0.000    0.517    0.479
##    .FoGWar3           0.414    0.035   12.009    0.000    0.414    0.424
##    .FoGWar4           0.423    0.035   12.088    0.000    0.423    0.429
##    .CoAWe5            0.422    0.033   12.921    0.000    0.422    0.437
##    .CoAWe6            0.302    0.029   10.265    0.000    0.302    0.291
##    .CoAWe7            0.334    0.029   11.435    0.000    0.334    0.343
##    .CoAWe8            0.395    0.032   12.330    0.000    0.395    0.393

And the LRT:

anova(msem, msem_eq)
        Df  AIC  BIC Chisq Chisq diff Df diff Pr(>Chisq)
msem    19 9760 9831  27.3         NA      NA         NA
msem_eq 22 9766 9825  39.5       12.3       3      0.007

This recapitulates what we already reviewed above, but without calling the models congeneric and tau equivalent. As a reminder, the LRT suggested that model fit worsened when FoGWar loadings were equated.

What about testing tau equivalence for CoAWe?

The tests above suggest that tau equivalence does not hold for FoGWar, but perhaps it does for CoAWe.

msem_tau_coawe.syntax <- '
l_FoGWar =~ NA*FoGWar1 + FoGWar2 + FoGWar3 + FoGWar4  #FoGWar loadings free
l_CoAWe =~ NA*CoAWe5 + f1*CoAWe5 + f1*CoAWe6 + f1*CoAWe7 + f1*CoAWe8  #equate CoAWe loadings by using the same parameter label

l_FoGWar ~~ 1*l_FoGWar
l_CoAWe ~~ 1*l_CoAWe
l_FoGWar ~~ l_CoAWe
'

#ULI approach -- same model
# msem_tau_coawe.syntax <- '
# l_FoGWar =~ FoGWar1 + FoGWar2 + FoGWar3 + FoGWar4
# l_CoAWe =~ f1*CoAWe5 + f1*CoAWe6 + f1*CoAWe7 + f1*CoAWe8  #equate CoAWe loadings
# 
# l_FoGWar ~~ l_CoAWe
# '

msem_tau_coawe <- cfa(msem_tau_coawe.syntax, simData)
summary(msem_tau_coawe, fit.measures=TRUE, standardized=TRUE)
## lavaan 0.6-3 ended normally after 15 iterations
## 
##   Optimization method                           NLMINB
##   Number of free parameters                         17
##   Number of equality constraints                     3
## 
##   Number of observations                           500
## 
##   Estimator                                         ML
##   Model Fit Test Statistic                      35.358
##   Degrees of freedom                                22
##   P-value (Chi-square)                           0.036
## 
## Model test baseline model:
## 
##   Minimum Function Test Statistic             1734.552
##   Degrees of freedom                                28
##   P-value                                        0.000
## 
## User model versus baseline model:
## 
##   Comparative Fit Index (CFI)                    0.992
##   Tucker-Lewis Index (TLI)                       0.990
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -4866.928
##   Loglikelihood unrestricted model (H1)      -4849.249
## 
##   Number of free parameters                         14
##   Akaike (AIC)                                9761.856
##   Bayesian (BIC)                              9820.861
##   Sample-size adjusted Bayesian (BIC)         9776.424
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.035
##   90 Percent Confidence Interval          0.009  0.055
##   P-value RMSEA <= 0.05                          0.880
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.038
## 
## Parameter Estimates:
## 
##   Information                                 Expected
##   Information saturated (h1) model          Structured
##   Standard Errors                             Standard
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   l_FoGWar =~                                                           
##     FoGWar1           0.665    0.046   14.596    0.000    0.665    0.636
##     FoGWar2           0.715    0.044   16.427    0.000    0.715    0.699
##     FoGWar3           0.844    0.042   20.039    0.000    0.844    0.816
##     FoGWar4           0.747    0.041   18.089    0.000    0.747    0.754
##   l_CoAWe =~                                                            
##     CoAWe5    (f1)    0.797    0.029   27.608    0.000    0.797    0.780
##     CoAWe6    (f1)    0.797    0.029   27.608    0.000    0.797    0.811
##     CoAWe7    (f1)    0.797    0.029   27.608    0.000    0.797    0.810
##     CoAWe8    (f1)    0.797    0.029   27.608    0.000    0.797    0.788
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   l_FoGWar ~~                                                           
##     l_CoAWe           0.376    0.046    8.124    0.000    0.376    0.376
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##     l_FoGWar          1.000                               1.000    1.000
##     l_CoAWe           1.000                               1.000    1.000
##    .FoGWar1           0.650    0.048   13.616    0.000    0.650    0.595
##    .FoGWar2           0.536    0.042   12.681    0.000    0.536    0.512
##    .FoGWar3           0.358    0.039    9.249    0.000    0.358    0.334
##    .FoGWar4           0.425    0.037   11.400    0.000    0.425    0.432
##    .CoAWe5            0.408    0.032   12.715    0.000    0.408    0.391
##    .CoAWe6            0.329    0.028   11.958    0.000    0.329    0.342
##    .CoAWe7            0.334    0.028   12.008    0.000    0.334    0.345
##    .CoAWe8            0.387    0.031   12.543    0.000    0.387    0.379

Let’s compare to the congeneric model using a likelihood ratio test (LRT). The tau-equivalent model is nested within the congeneric model.

anova(msem, msem_tau_coawe)
##                Df  AIC  BIC Chisq Chisq diff Df diff Pr(>Chisq)
## msem           19 9760 9831  27.3         NA      NA         NA
## msem_tau_coawe 22 9762 9821  35.4        8.1       3      0.044

The LRT indicates that the tau-equivalent model fits significantly, though only marginally, worse than the congeneric model (\(p = .044\)).

aictab(list(congeneric=msem, tau=msem_tau_coawe))
##   Modnames    K AICc Delta_AICc ModelLik AICcWt    LL Cum.Wt
## congeneric   17 9761        0.0    1.000    0.7 -4863    0.7
## tau          14 9763        1.7    0.428    0.3 -4867    1.0
fitMeasures(msem_tau_coawe)[c("chisq", "df", "pvalue", "rmsea", "srmr", "cfi")]
##   chisq      df  pvalue   rmsea    srmr     cfi 
## 35.3582 22.0000  0.0355  0.0348  0.0378  0.9922

With the relative evidence approach, we see something less definitive: the LRT is barely significant (in our sample of 500), suggesting a failure of tau equivalence, but the AICc indicates that the tau-equivalent model is nearly as good (a difference of 1.7 points), and it is more parsimonious. We also find that the global fit measures are all acceptable.

This highlights the problem with relying on null hypothesis significance testing (NHST) alone to judge constrained models. One should make an informed judgment based on a) the LRT, b) changes in global fit measures (especially the change in CFI), c) changes in the residuals under the constrained solution, and d) interpretability.
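As a quick illustration of (b), the change in CFI can be pulled directly from the fitted objects (this sketch assumes `msem` and `msem_tau_coawe` have been fit as above):

```r
# Change in CFI between the congeneric (msem) and tau-equivalent (msem_tau_coawe)
# models; a drop of less than ~.01 is commonly taken as support for the more
# constrained model
fitMeasures(msem, "cfi") - fitMeasures(msem_tau_coawe, "cfi")
```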

With regard to change in residuals, we can examine this in a few ways. I suggest looking at the absolute change in the residuals in correlational units. This captures the absolute difference between the congeneric and tau equivalent models (for CoAWe) in terms of fitting each cell of the covariance matrix. Note that the FoGWar residual differences should be zero because the models being compared are the same for this factor – they only differ in CoAWe.

abs(resid(msem, type="cor")$cov - resid(msem_tau_coawe, type="cor")$cov)
##         FoGWr1 FoGWr2 FoGWr3 FoGWr4 CoAWe5 CoAWe6 CoAWe7 CoAWe8
## FoGWar1 0.000                                                  
## FoGWar2 0.000  0.000                                           
## FoGWar3 0.000  0.000  0.000                                    
## FoGWar4 0.000  0.000  0.000  0.000                             
## CoAWe5  0.007  0.008  0.010  0.009  0.000                      
## CoAWe6  0.007  0.008  0.009  0.008  0.001  0.000               
## CoAWe7  0.000  0.000  0.000  0.000  0.023  0.026  0.000        
## CoAWe8  0.003  0.003  0.003  0.003  0.030  0.016  0.006  0.000

The reduction in fit for the bivariate correlations is small: at most about .03, and under .01 for most cells.
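If a single summary number is preferred, the change matrix can be reduced to a maximum and an average (again, a sketch assuming `msem` and `msem_tau_coawe` from above):

```r
# Summarize the residual changes numerically
delta <- abs(resid(msem, type="cor")$cov - resid(msem_tau_coawe, type="cor")$cov)
max(delta)                      # largest absolute change in any fitted correlation
mean(delta[lower.tri(delta)])   # average absolute change across unique off-diagonal cells
```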

In this case, I would at least consider the tau-equivalent model.

3.4.3 Parallel

In the parallel model, we additionally constrain the indicator disturbances (error variances) to be equal. Let’s apply these constraints to the tau-equivalent CoAWe model, leaving FoGWar congeneric. Remember that we only consider the parallel model if the tau-equivalent step fits well.

msem_parallel_coawe.syntax <- '
l_FoGWar =~ NA*FoGWar1 + FoGWar2 + FoGWar3 + FoGWar4
l_CoAWe =~ NA*CoAWe5 + f1*CoAWe5 + f1*CoAWe6 + f1*CoAWe7 + f1*CoAWe8  #equate loadings by using the same parameter label

l_FoGWar ~~ 1*l_FoGWar
l_CoAWe ~~ 1*l_CoAWe
l_FoGWar ~~ l_CoAWe

#equate residual variances
CoAWe5 ~~ rv1*CoAWe5
CoAWe6 ~~ rv1*CoAWe6
CoAWe7 ~~ rv1*CoAWe7
CoAWe8 ~~ rv1*CoAWe8
'

msem_parallel_coawe <- sem(msem_parallel_coawe.syntax, simData)
summary(msem_parallel_coawe, fit.measures=TRUE, standardized=TRUE)
## lavaan 0.6-3 ended normally after 14 iterations
## 
##   Optimization method                           NLMINB
##   Number of free parameters                         17
##   Number of equality constraints                     6
## 
##   Number of observations                           500
## 
##   Estimator                                         ML
##   Model Fit Test Statistic                      40.234
##   Degrees of freedom                                25
##   P-value (Chi-square)                           0.028
## 
## Model test baseline model:
## 
##   Minimum Function Test Statistic             1734.552
##   Degrees of freedom                                28
##   P-value                                        0.000
## 
## User model versus baseline model:
## 
##   Comparative Fit Index (CFI)                    0.991
##   Tucker-Lewis Index (TLI)                       0.990
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -4869.366
##   Loglikelihood unrestricted model (H1)      -4849.249
## 
##   Number of free parameters                         11
##   Akaike (AIC)                                9760.732
##   Bayesian (BIC)                              9807.093
##   Sample-size adjusted Bayesian (BIC)         9772.178
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.035
##   90 Percent Confidence Interval          0.012  0.054
##   P-value RMSEA <= 0.05                          0.896
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.034
## 
## Parameter Estimates:
## 
##   Information                                 Expected
##   Information saturated (h1) model          Structured
##   Standard Errors                             Standard
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   l_FoGWar =~                                                           
##     FoGWar1           0.665    0.046   14.601    0.000    0.665    0.636
##     FoGWar2           0.715    0.044   16.422    0.000    0.715    0.699
##     FoGWar3           0.844    0.042   20.042    0.000    0.844    0.816
##     FoGWar4           0.747    0.041   18.091    0.000    0.747    0.754
##   l_CoAWe =~                                                            
##     CoAWe5    (f1)    0.793    0.029   27.537    0.000    0.793    0.795
##     CoAWe6    (f1)    0.793    0.029   27.537    0.000    0.793    0.795
##     CoAWe7    (f1)    0.793    0.029   27.537    0.000    0.793    0.795
##     CoAWe8    (f1)    0.793    0.029   27.537    0.000    0.793    0.795
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   l_FoGWar ~~                                                           
##     l_CoAWe           0.378    0.046    8.189    0.000    0.378    0.378
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##     l_FoGWar          1.000                               1.000    1.000
##     l_CoAWe           1.000                               1.000    1.000
##    .CoAWe5   (rv1)    0.366    0.013   27.386    0.000    0.366    0.368
##    .CoAWe6   (rv1)    0.366    0.013   27.386    0.000    0.366    0.368
##    .CoAWe7   (rv1)    0.366    0.013   27.386    0.000    0.366    0.368
##    .CoAWe8   (rv1)    0.366    0.013   27.386    0.000    0.366    0.368
##    .FoGWar1           0.650    0.048   13.616    0.000    0.650    0.595
##    .FoGWar2           0.536    0.042   12.686    0.000    0.536    0.512
##    .FoGWar3           0.357    0.039    9.250    0.000    0.357    0.334
##    .FoGWar4           0.425    0.037   11.402    0.000    0.425    0.432

The f1 parameter label equates CoAWe factor loadings, whereas the rv1 parameter label equates error variances. These parameter labels are arbitrary and chosen by you. I try to use a shorthand that is easy (for me) to remember.

Let’s compare the congeneric, tau-equivalent, and parallel models using LRT:

anova(msem, msem_tau_coawe, msem_parallel_coawe)
##                     Df  AIC  BIC Chisq Chisq diff Df diff Pr(>Chisq)
## msem                19 9760 9831  27.3         NA      NA         NA
## msem_tau_coawe      22 9762 9821  35.4       8.10       3      0.044
## msem_parallel_coawe 25 9761 9807  40.2       4.88       3      0.181

The parallel model does not fit significantly worse than the tau-equivalent model despite having three fewer parameters.

And now AICc and global fit:

aictab(list(congeneric=msem, tau=msem_tau_coawe, parallel=msem_parallel_coawe))
##     Modnames  K AICc Delta_AICc ModelLik AICcWt    LL Cum.Wt
## 1 congeneric 17 9761       0.00    1.000  0.433 -4863  0.433
## 3   parallel 11 9761       0.25    0.883  0.382 -4869  0.815
## 2        tau 14 9763       1.70    0.428  0.185 -4867  1.000
fitMeasures(msem_parallel_coawe)[c("chisq", "df", "pvalue", "rmsea", "srmr", "cfi")]
##   chisq      df  pvalue   rmsea    srmr     cfi 
## 40.2340 25.0000  0.0276  0.0349  0.0343  0.9911

AICc also suggests that the models all fit the data similarly. The parallel model is actually quite close to the congeneric model in the fit-versus-parsimony tradeoff that AICc represents (remember its ‘efficiency’ property).

Altogether, this is a judgment call (as often happens in practice). I would err on the side of parsimony, potentially preferring the parallel model where the loadings and residual variances are equated. But I would want to make sure that I’m not missing any key covariance patterns that the congeneric model captures.

4 Multiple groups

So far, we have primarily considered constraints in the context of a single sample, enforcing equality on different parameters in the model. Oftentimes, however, we are faced with questions in which we a) have multiple distinct groups or samples, or b) hypothesize that a categorical moderator may affect some of the parameters in our model. In either case, this is typically a multiple-groups problem in which we are interested in understanding how well a given SEM fits independent samples of data.

In the simplest case, we could run the SEM separately in each sample and compare the results qualitatively. An equivalent approach is to fit both groups simultaneously in a multiple-groups model, but allow every parameter to be free between groups.

Let’s consider the example from the lavaan tutorial: http://lavaan.ugent.be/tutorial/groups.html. Here, we have data from intelligence tests conducted in two separate schools, Pasteur and Grant-White:

data(HolzingerSwineford1939)
lapply(describeBy(HolzingerSwineford1939, HolzingerSwineford1939$school), function(df) {
  select(df, -trimmed, -vars, -se, -mad)
})
## $`Grant-White`
##           n   mean    sd median    min    max  range  skew kurtosis
## id      145 275.26 43.34 275.00 201.00 351.00 150.00  0.02    -1.21
## sex     145   1.50  0.50   2.00   1.00   2.00   1.00 -0.01    -2.01
## ageyr   145  12.72  0.97  13.00  11.00  16.00   5.00  0.70     0.63
## agemo   145   5.34  3.48   5.00   0.00  11.00  11.00  0.06    -1.23
## school* 145   1.00  0.00   1.00   1.00   1.00   0.00   NaN      NaN
## grade   144   7.45  0.50   7.00   7.00   8.00   1.00  0.19    -1.98
## x1      145   4.93  1.15   5.00   1.83   8.50   6.67 -0.12    -0.13
## x2      145   6.20  1.11   6.25   2.25   9.25   7.00  0.23     0.75
## x3      145   2.00  1.04   1.88   0.38   4.50   4.12  0.61    -0.51
## x4      145   3.32  1.13   3.00   0.33   6.33   6.00  0.40     0.16
## x5      145   4.71  1.16   4.75   1.00   7.00   6.00 -0.54     0.13
## x6      145   2.47  1.14   2.29   0.29   5.86   5.57  0.71     0.14
## x7      145   3.92  1.03   3.87   1.30   6.48   5.17  0.16    -0.42
## x8      145   5.49  1.05   5.50   3.05  10.00   6.95  0.68     2.09
## x9      145   5.33  1.03   5.31   3.11   9.25   6.14  0.20     0.41
## 
## $Pasteur
##           n  mean    sd median   min    max  range  skew kurtosis
## id      156 84.81 48.91  85.50  1.00 168.00 167.00 -0.02    -1.23
## sex     156  1.53  0.50   2.00  1.00   2.00   1.00 -0.10    -2.00
## ageyr   156 13.25  1.06  13.00 12.00  16.00   4.00  0.68    -0.21
## agemo   156  5.40  3.44   5.00  0.00  11.00  11.00  0.11    -1.23
## school* 156  2.00  0.00   2.00  2.00   2.00   0.00   NaN      NaN
## grade   156  7.50  0.50   7.50  7.00   8.00   1.00  0.00    -2.01
## x1      156  4.94  1.19   5.00  0.67   7.50   6.83 -0.37     0.63
## x2      156  5.98  1.23   5.75  3.50   9.25   5.75  0.68     0.17
## x3      156  2.49  1.16   2.38  0.25   4.50   4.25  0.15    -1.10
## x4      156  2.82  1.15   2.67  0.00   6.00   6.00  0.22    -0.13
## x5      156  4.00  1.31   4.00  1.00   6.75   5.75 -0.13    -0.88
## x6      156  1.92  0.99   1.86  0.14   6.14   6.00  0.99     1.87
## x7      156  4.43  1.09   4.37  2.04   7.43   5.39  0.30    -0.45
## x8      156  5.56  0.98   5.47  3.50   8.30   4.80  0.35     0.01
## x9      156  5.42  0.99   5.42  2.78   8.61   5.83  0.22     0.11

The variables x1-x3 are indicators of visual processing, x4-x6 reflect textual processing, and x7-x9 measure processing speed.

4.1 Fitting groups (schools) separately

Let’s start by fitting SEMs to each school separately. We would not normally do this in practice given the advantages of testing multiple groups in one SEM. But it is useful to see how this works.

Here is the fit of a three-factor model to each group.

4.1.1 Pasteur

We subset the data to Pasteur alone, then fit the CFA.

HS.model <- '  visual =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '

fit_pasteur <- cfa(HS.model, data = filter(HolzingerSwineford1939, school=="Pasteur"))
summary(fit_pasteur)
## lavaan 0.6-3 ended normally after 38 iterations
## 
##   Optimization method                           NLMINB
##   Number of free parameters                         21
## 
##   Number of observations                           156
## 
##   Estimator                                         ML
##   Model Fit Test Statistic                      64.309
##   Degrees of freedom                                24
##   P-value (Chi-square)                           0.000
## 
## Parameter Estimates:
## 
##   Information                                 Expected
##   Information saturated (h1) model          Structured
##   Standard Errors                             Standard
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual =~                                           
##     x1                1.000                           
##     x2                0.394    0.122    3.220    0.001
##     x3                0.570    0.140    4.076    0.000
##   textual =~                                          
##     x4                1.000                           
##     x5                1.183    0.102   11.613    0.000
##     x6                0.875    0.077   11.421    0.000
##   speed =~                                            
##     x7                1.000                           
##     x8                1.125    0.277    4.057    0.000
##     x9                0.922    0.225    4.104    0.000
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual ~~                                           
##     textual           0.479    0.106    4.531    0.000
##     speed             0.185    0.077    2.397    0.017
##   textual ~~                                          
##     speed             0.182    0.069    2.628    0.009
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1                0.298    0.232    1.286    0.198
##    .x2                1.334    0.158    8.464    0.000
##    .x3                0.989    0.136    7.271    0.000
##    .x4                0.425    0.069    6.138    0.000
##    .x5                0.456    0.086    5.292    0.000
##    .x6                0.290    0.050    5.780    0.000
##    .x7                0.820    0.125    6.580    0.000
##    .x8                0.510    0.116    4.406    0.000
##    .x9                0.680    0.104    6.516    0.000
##     visual            1.097    0.276    3.967    0.000
##     textual           0.894    0.150    5.963    0.000
##     speed             0.350    0.126    2.778    0.005
semPaths_default(fit_pasteur)

4.1.2 Grant-White

Now the same for Grant-White, fitting the CFA after subsetting the data.

### Grant-White
fit_grantwhite <- cfa(HS.model, data = filter(HolzingerSwineford1939, school=="Grant-White"))
summary(fit_grantwhite)
## lavaan 0.6-3 ended normally after 34 iterations
## 
##   Optimization method                           NLMINB
##   Number of free parameters                         21
## 
##   Number of observations                           145
## 
##   Estimator                                         ML
##   Model Fit Test Statistic                      51.542
##   Degrees of freedom                                24
##   P-value (Chi-square)                           0.001
## 
## Parameter Estimates:
## 
##   Information                                 Expected
##   Information saturated (h1) model          Structured
##   Standard Errors                             Standard
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual =~                                           
##     x1                1.000                           
##     x2                0.736    0.155    4.760    0.000
##     x3                0.925    0.166    5.584    0.000
##   textual =~                                          
##     x4                1.000                           
##     x5                0.990    0.087   11.418    0.000
##     x6                0.963    0.085   11.377    0.000
##   speed =~                                            
##     x7                1.000                           
##     x8                1.226    0.187    6.569    0.000
##     x9                1.058    0.165    6.429    0.000
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual ~~                                           
##     textual           0.408    0.098    4.153    0.000
##     speed             0.276    0.076    3.639    0.000
##   textual ~~                                          
##     speed             0.222    0.073    3.022    0.003
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1                0.715    0.126    5.675    0.000
##    .x2                0.899    0.123    7.339    0.000
##    .x3                0.557    0.103    5.409    0.000
##    .x4                0.315    0.065    4.870    0.000
##    .x5                0.419    0.072    5.812    0.000
##    .x6                0.406    0.069    5.880    0.000
##    .x7                0.600    0.091    6.584    0.000
##    .x8                0.401    0.094    4.248    0.000
##    .x9                0.535    0.089    6.010    0.000
##     visual            0.604    0.160    3.762    0.000
##     textual           0.942    0.152    6.177    0.000
##     speed             0.461    0.118    3.910    0.000

4.2 Fitting groups together

We can fit the groups in a single model in which both contribute to the sample likelihood. Recall that the sample likelihood reflects the probability of the observed data given the model and parameters. If we collect the free parameters for one group into a vector, \(\boldsymbol{\theta}_1\), and the free parameters for the other group into a vector, \(\boldsymbol{\theta}_2\), then the total number of free parameters, \(q\), is simply the number of elements in \(\boldsymbol{\theta}_1\) plus the number of elements in \(\boldsymbol{\theta}_2\). In this way, each sample contributes independently to the sample likelihood function insofar as the parameters are completely separated by group.
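More formally, because no parameters are shared between groups, the joint log-likelihood is just the sum of the two groups’ contributions (writing \(f\) for the model-implied density):

\[
\ell(\boldsymbol{\theta}_1, \boldsymbol{\theta}_2) = \sum_{i \in \textrm{group 1}} \log f(\mathbf{x}_i \mid \boldsymbol{\theta}_1) + \sum_{i \in \textrm{group 2}} \log f(\mathbf{x}_i \mid \boldsymbol{\theta}_2)
\]

Maximizing this function with respect to \(\boldsymbol{\theta}_1\) does not involve \(\boldsymbol{\theta}_2\) at all, which is why the unconstrained multiple-groups model reproduces the single-group estimates exactly.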

We can fit multiple groups in SEM using the group argument in lavaan. Let’s add a grouping structure to the intelligence testing example:

fit_combined <- cfa(HS.model, data = HolzingerSwineford1939, group="school", meanstructure=FALSE) #lavaan adds means/intercepts by default in multiple-group analysis; meanstructure=FALSE suppresses them for now
summary(fit_combined, fit.measures=TRUE, standardized=TRUE)
## lavaan 0.6-3 ended normally after 57 iterations
## 
##   Optimization method                           NLMINB
##   Number of free parameters                         42
## 
##   Number of observations per group         
##   Pasteur                                          156
##   Grant-White                                      145
## 
##   Estimator                                         ML
##   Model Fit Test Statistic                     115.851
##   Degrees of freedom                                48
##   P-value (Chi-square)                           0.000
## 
## Chi-square for each group:
## 
##   Pasteur                                       64.309
##   Grant-White                                   51.542
## 
## Model test baseline model:
## 
##   Minimum Function Test Statistic              957.769
##   Degrees of freedom                                72
##   P-value                                        0.000
## 
## User model versus baseline model:
## 
##   Comparative Fit Index (CFI)                    0.923
##   Tucker-Lewis Index (TLI)                       0.885
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -3682.198
##   Loglikelihood unrestricted model (H1)      -3624.272
## 
##   Number of free parameters                         42
##   Akaike (AIC)                                7448.395
##   Bayesian (BIC)                              7604.094
##   Sample-size adjusted Bayesian (BIC)         7470.894
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.097
##   90 Percent Confidence Interval          0.075  0.120
##   P-value RMSEA <= 0.05                          0.001
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.074
## 
## Parameter Estimates:
## 
##   Information                                 Expected
##   Information saturated (h1) model          Structured
##   Standard Errors                             Standard
## 
## 
## Group 1 [Pasteur]:
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   visual =~                                                             
##     x1                1.000                               1.047    0.887
##     x2                0.394    0.122    3.220    0.001    0.412    0.336
##     x3                0.570    0.140    4.076    0.000    0.597    0.515
##   textual =~                                                            
##     x4                1.000                               0.946    0.823
##     x5                1.183    0.102   11.613    0.000    1.119    0.856
##     x6                0.875    0.077   11.421    0.000    0.827    0.838
##   speed =~                                                              
##     x7                1.000                               0.591    0.547
##     x8                1.125    0.277    4.057    0.000    0.665    0.682
##     x9                0.922    0.225    4.104    0.000    0.545    0.551
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   visual ~~                                                             
##     textual           0.479    0.106    4.531    0.000    0.484    0.484
##     speed             0.185    0.077    2.397    0.017    0.299    0.299
##   textual ~~                                                            
##     speed             0.182    0.069    2.628    0.009    0.325    0.325
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .x1                0.298    0.232    1.286    0.198    0.298    0.214
##    .x2                1.334    0.158    8.464    0.000    1.334    0.887
##    .x3                0.989    0.136    7.271    0.000    0.989    0.735
##    .x4                0.425    0.069    6.138    0.000    0.425    0.322
##    .x5                0.456    0.086    5.292    0.000    0.456    0.267
##    .x6                0.290    0.050    5.780    0.000    0.290    0.297
##    .x7                0.820    0.125    6.580    0.000    0.820    0.701
##    .x8                0.510    0.116    4.406    0.000    0.510    0.535
##    .x9                0.680    0.104    6.516    0.000    0.680    0.696
##     visual            1.097    0.276    3.967    0.000    1.000    1.000
##     textual           0.894    0.150    5.963    0.000    1.000    1.000
##     speed             0.350    0.126    2.778    0.005    1.000    1.000
## 
## 
## Group 2 [Grant-White]:
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   visual =~                                                             
##     x1                1.000                               0.777    0.677
##     x2                0.736    0.155    4.760    0.000    0.572    0.517
##     x3                0.925    0.166    5.583    0.000    0.719    0.694
##   textual =~                                                            
##     x4                1.000                               0.971    0.866
##     x5                0.990    0.087   11.418    0.000    0.961    0.829
##     x6                0.963    0.085   11.377    0.000    0.935    0.826
##   speed =~                                                              
##     x7                1.000                               0.679    0.659
##     x8                1.226    0.187    6.569    0.000    0.833    0.796
##     x9                1.058    0.165    6.429    0.000    0.719    0.701
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   visual ~~                                                             
##     textual           0.408    0.098    4.153    0.000    0.541    0.541
##     speed             0.276    0.076    3.639    0.000    0.523    0.523
##   textual ~~                                                            
##     speed             0.222    0.073    3.022    0.003    0.336    0.336
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .x1                0.715    0.126    5.676    0.000    0.715    0.542
##    .x2                0.899    0.123    7.339    0.000    0.899    0.733
##    .x3                0.557    0.103    5.409    0.000    0.557    0.519
##    .x4                0.315    0.065    4.870    0.000    0.315    0.251
##    .x5                0.419    0.072    5.812    0.000    0.419    0.312
##    .x6                0.406    0.069    5.880    0.000    0.406    0.317
##    .x7                0.600    0.091    6.584    0.000    0.600    0.566
##    .x8                0.401    0.094    4.249    0.000    0.401    0.367
##    .x9                0.535    0.089    6.010    0.000    0.535    0.509
##     visual            0.604    0.160    3.762    0.000    1.000    1.000
##     textual           0.942    0.152    6.177    0.000    1.000    1.000
##     speed             0.461    0.118    3.910    0.000    1.000    1.000

We are now provided with parameter estimates for each group. We also have the \(\chi^2\) contribution of each group to the total model \(\chi^2\). Notice that the \(\chi^2\) for each group is identical to the \(\chi^2\) obtained when we fit the samples independently. Thus, the jumping-off point for most multiple-group analyses is a free model in which all parameters are estimated uniquely in each group.

Also note that the model has 48 degrees of freedom, whereas each individual school fit had 24 df. Thus, in the multiple-groups model with no constraints, the \(\chi^2\) is the sum of the individual group model \(\chi^2\) values. Likewise, the degrees of freedom in the multiple-groups case are the sum of the individual group model dfs.
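We can verify this additivity directly from the fitted objects (a sketch assuming `fit_pasteur`, `fit_grantwhite`, and `fit_combined` from above):

```r
# Chi-square and df of the unconstrained multigroup model equal the sums
# of the single-group fits: 64.309 + 51.542 = 115.851 on 24 + 24 = 48 df
fitMeasures(fit_pasteur, "chisq") + fitMeasures(fit_grantwhite, "chisq")
fitMeasures(fit_combined, c("chisq", "df"))
```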

This model specification essentially looks like this:

Note that the model is partitioned by group. No person was observed in both schools, and thus we would never consider whether the Vis factor in Pasteur correlates with the Vis factor in Grant-White. Rather, in multiple-groups SEM, we are interested in testing how well multiple covariance matrices (one per group) are explained by an SEM that holds as many parameters in common across groups as the data will support.

The free model is not very parsimonious, nor does it usually address our substantive research questions. In fact, we are often interested in testing the hypothesis that the latent structure (or at least part of it) is equivalent across groups. If we translate a measure of personality from English to Dutch, it would be ideal if the latent structure was the same in both populations. This would let us compare results across nations without concern that the tests are measuring different constructs in each nation. Likewise, if we wish to compare rates of change in aggressive behaviors between prison and psychiatric settings, we wish to test a hypothesis about relative equality in change parameters across settings.

inspect(fit_combined)
## $Pasteur
## $Pasteur$lambda
##    visual textul speed
## x1      0      0     0
## x2      1      0     0
## x3      2      0     0
## x4      0      0     0
## x5      0      3     0
## x6      0      4     0
## x7      0      0     0
## x8      0      0     5
## x9      0      0     6
## 
## $Pasteur$theta
##    x1 x2 x3 x4 x5 x6 x7 x8 x9
## x1  7                        
## x2  0  8                     
## x3  0  0  9                  
## x4  0  0  0 10               
## x5  0  0  0  0 11            
## x6  0  0  0  0  0 12         
## x7  0  0  0  0  0  0 13      
## x8  0  0  0  0  0  0  0 14   
## x9  0  0  0  0  0  0  0  0 15
## 
## $Pasteur$psi
##         visual textul speed
## visual  16                 
## textual 19     17          
## speed   20     21     18   
## 
## 
## $`Grant-White`
## $`Grant-White`$lambda
##    visual textul speed
## x1      0      0     0
## x2     22      0     0
## x3     23      0     0
## x4      0      0     0
## x5      0     24     0
## x6      0     25     0
## x7      0      0     0
## x8      0      0    26
## x9      0      0    27
## 
## $`Grant-White`$theta
##    x1 x2 x3 x4 x5 x6 x7 x8 x9
## x1 28                        
## x2  0 29                     
## x3  0  0 30                  
## x4  0  0  0 31               
## x5  0  0  0  0 32            
## x6  0  0  0  0  0 33         
## x7  0  0  0  0  0  0 34      
## x8  0  0  0  0  0  0  0 35   
## x9  0  0  0  0  0  0  0  0 36
## 
## $`Grant-White`$psi
##         visual textul speed
## visual  37                 
## textual 40     38          
## speed   41     42     39
#semPaths_default(fit_combined)

Thus, we can apply the same general idea of applying equality constraints between groups as we did in the single sample case above. For multiple-group analyses, lavaan uses a vector of parameter labels (one element per group) to indicate which equality constraints to apply. In the two-school example, what if we wish to equate factor loadings on the visual factor across groups? We can add a two-element label vector in front of each loading (e.g., c(f1,f1)*x1); because the same label appears in both positions, that loading is constrained to be equal across the two groups.

HS.model_floadings_vis <- '  visual =~ c(f1,f1)*x1 + c(f2,f2)*x2 + c(f3,f3)*x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '

floadings_vis <- cfa(HS.model_floadings_vis, data = HolzingerSwineford1939, group="school")
summary(floadings_vis)
## lavaan 0.6-3 ended normally after 50 iterations
## 
##   Optimization method                           NLMINB
##   Number of free parameters                         60
##   Number of equality constraints                     2
## 
##   Number of observations per group         
##   Pasteur                                          156
##   Grant-White                                      145
## 
##   Estimator                                         ML
##   Model Fit Test Statistic                     118.761
##   Degrees of freedom                                50
##   P-value (Chi-square)                           0.000
## 
## Chi-square for each group:
## 
##   Pasteur                                       66.242
##   Grant-White                                   52.519
## 
## Parameter Estimates:
## 
##   Information                                 Expected
##   Information saturated (h1) model          Structured
##   Standard Errors                             Standard
## 
## 
## Group 1 [Pasteur]:
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual =~                                           
##     x1        (f1)    1.000                           
##     x2        (f2)    0.602    0.101    5.980    0.000
##     x3        (f3)    0.788    0.109    7.255    0.000
##   textual =~                                          
##     x4                1.000                           
##     x5                1.190    0.103   11.570    0.000
##     x6                0.878    0.077   11.370    0.000
##   speed =~                                            
##     x7                1.000                           
##     x8                1.138    0.281    4.053    0.000
##     x9                0.952    0.232    4.100    0.000
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual ~~                                           
##     textual           0.396    0.096    4.105    0.000
##     speed             0.171    0.071    2.412    0.016
##   textual ~~                                          
##     speed             0.178    0.068    2.618    0.009
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1                4.941    0.093   52.980    0.000
##    .x2                5.984    0.100   60.116    0.000
##    .x3                2.487    0.094   26.474    0.000
##    .x4                2.823    0.092   30.689    0.000
##    .x5                3.995    0.105   38.183    0.000
##    .x6                1.922    0.079   24.321    0.000
##    .x7                4.432    0.087   51.181    0.000
##    .x8                5.563    0.078   71.214    0.000
##    .x9                5.418    0.079   68.440    0.000
##     visual            0.000                           
##     textual           0.000                           
##     speed             0.000                           
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1                0.556    0.138    4.026    0.000
##    .x2                1.256    0.155    8.103    0.000
##    .x3                0.879    0.129    6.842    0.000
##    .x4                0.431    0.070    6.174    0.000
##    .x5                0.449    0.086    5.192    0.000
##    .x6                0.289    0.050    5.743    0.000
##    .x7                0.831    0.124    6.696    0.000
##    .x8                0.513    0.115    4.459    0.000
##    .x9                0.670    0.105    6.394    0.000
##     visual            0.801    0.171    4.693    0.000
##     textual           0.889    0.150    5.932    0.000
##     speed             0.339    0.123    2.745    0.006
## 
## 
## Group 2 [Grant-White]:
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual =~                                           
##     x1        (f1)    1.000                           
##     x2        (f2)    0.602    0.101    5.980    0.000
##     x3        (f3)    0.788    0.109    7.255    0.000
##   textual =~                                          
##     x4                1.000                           
##     x5                0.990    0.087   11.424    0.000
##     x6                0.963    0.085   11.374    0.000
##   speed =~                                            
##     x7                1.000                           
##     x8                1.228    0.188    6.546    0.000
##     x9                1.076    0.168    6.420    0.000
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual ~~                                           
##     textual           0.447    0.102    4.408    0.000
##     speed             0.310    0.080    3.877    0.000
##   textual ~~                                          
##     speed             0.222    0.073    3.044    0.002
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1                4.930    0.097   50.782    0.000
##    .x2                6.200    0.091   68.357    0.000
##    .x3                1.996    0.085   23.448    0.000
##    .x4                3.317    0.093   35.625    0.000
##    .x5                4.712    0.096   48.986    0.000
##    .x6                2.469    0.094   26.277    0.000
##    .x7                3.921    0.086   45.819    0.000
##    .x8                5.488    0.087   63.174    0.000
##    .x9                5.327    0.085   62.571    0.000
##     visual            0.000                           
##     textual           0.000                           
##     speed             0.000                           
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1                0.647    0.126    5.125    0.000
##    .x2                0.933    0.121    7.731    0.000
##    .x3                0.604    0.096    6.276    0.000
##    .x4                0.315    0.065    4.869    0.000
##    .x5                0.418    0.072    5.807    0.000
##    .x6                0.407    0.069    5.889    0.000
##    .x7                0.607    0.091    6.645    0.000
##    .x8                0.409    0.094    4.361    0.000
##    .x9                0.524    0.089    5.910    0.000
##     visual            0.719    0.160    4.483    0.000
##     textual           0.942    0.152    6.178    0.000
##     speed             0.454    0.117    3.879    0.000

5 Measurement invariance

For a good summary of how to present measurement invariance tests, see: http://www.lesahoffman.com/CLP948/CLP948_Example07a_CFA_MG_Invariance.pdf

The code is in Mplus, but the description is general.

The measEq.syntax function from the semTools package provides a convenient way to obtain syntax that tests different steps of measurement invariance, ranging from configural through strict invariance, applying the appropriate constraints at each step.

Integrating (blurring?) over a huge swath of literature, I would recommend satisfying at least strong invariance if you want to compare groups on factor means, variances, or covariances. There is less general support for requiring strict invariance, since that essentially forces the uniqueness of each item (i.e., variance not attributable to the factor) to be the same across groups. This also implies equal reliability across groups, assuming equal factor variances. There is not a huge conceptual gain in demonstrating strict MI for most applications, in my opinion.

Increasingly, change in CFI, rather than LRT significance, is seen as a sensitive test of measurement noninvariance (i.e., that the measure functions differently in some way by group). Cheung and Rensvold (2002, Structural Equation Modeling) suggest that a drop in CFI of .01 when moving from a less constrained model (e.g., weak invariance) to a more constrained model (e.g., strong invariance) may indicate measurement noninvariance. Meade, Johnson, and Braddy (2008) found that CFI changes as small as .002 may be indicative of a problem with invariance.
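As a sketch of the delta-CFI criterion in practice (using the weak and strong invariance models that are fit later in this section), the change can be computed directly from fitMeasures():

```r
library(lavaan)

HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '

# weak invariance: loadings equated across schools
m_weak   <- cfa(HS.model, data = HolzingerSwineford1939, group = "school",
                group.equal = "loadings")
# strong invariance: loadings and intercepts equated
m_strong <- cfa(HS.model, data = HolzingerSwineford1939, group = "school",
                group.equal = c("loadings", "intercepts"))

# drop in CFI from the less to the more constrained model; a drop of .01
# (Cheung & Rensvold) or even .002 (Meade et al.) may signal noninvariance
delta_cfi <- fitMeasures(m_weak, "cfi") - fitMeasures(m_strong, "cfi")
unname(delta_cfi)
```

Here the drop is about .04 (CFI goes from .921 to .882 in the compareFit table), well past either cutoff, foreshadowing the intercept noninvariance discussed below.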

Here, we will initially stick to LRT. We will use the compareFit function from semTools. This provides a broader set of summary statistics when comparing the global fit of multiple models. To use it, we must pass in a named list of fitted lavaan models. Also note the use of the nested argument. If TRUE, compareFit will conduct LRTs on our behalf to test chi-square differences.

to_constrain <- c("configural", "loadings", "intercepts", "residuals", "means")
mlist <- list()
for (i in 1:length(to_constrain)) {
  # cumulative constraints: none for configural, then loadings, then loadings + intercepts, ...
  geq <- if (i == 1) { "" } else { to_constrain[2:i] }
  mlist[[ to_constrain[i] ]] <- measEq.syntax(configural.model=HS.model, data = HolzingerSwineford1939, ID.fac="UV", 
                         group = "school", return.fit=TRUE, group.equal=geq)
}
fit_summary <- compareFit(mlist, nested=TRUE)
fit_summary
## ################### Nested Model Comparison #########################
## Chi Square Difference Test
## 
##            Df  AIC  BIC Chisq Chisq diff Df diff Pr(>Chisq)    
## configural 48 7484 7707   116                                  
## loadings   54 7481 7681   124        8.2       6      0.224    
## intercepts 60 7509 7687   164       40.1       6    4.4e-07 ***
## residuals  69 7508 7653   182       17.4       9      0.043 *  
## means      72 7542 7675   221       39.8       3    1.2e-08 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## ####################### Model Fit Indices ###########################
##               chisq df pvalue   cfi   tli       aic       bic rmsea  srmr
## configural 115.851† 48   .000 .923† .885  7484.395  7706.822  .097  .068†
## loadings   124.044  54   .000 .921  .895† 7480.587† 7680.771  .093† .072 
## intercepts 164.103  60   .000 .882  .859  7508.647  7686.588  .107  .082 
## residuals  181.511  69   .000 .873  .867  7508.055  7652.632† .104  .088 
## means      221.335  72   .000 .831  .831  7541.879  7675.335  .117  .114 
## 
## ################## Differences in Fit Indices #######################
##                        df    cfi    tli    aic    bic  rmsea  srmr
## loadings - configural   6 -0.002  0.009 -3.808 -26.05 -0.004 0.004
## intercepts - loadings   6 -0.038 -0.036 28.059   5.82  0.015 0.011
## residuals - intercepts  9 -0.009  0.008 -0.591 -33.95 -0.003 0.006
## means - means           3 -0.042 -0.036 33.824  22.70  0.013 0.026
#deprecated, handy, function that generates equivalent results
#mlist <- measurementInvariance(model=HS.model, data = HolzingerSwineford1939, strict=TRUE, std.lv=TRUE, group = "school")

5.1 Weak invariance

Although the measEq.syntax function is great, it is often useful to have more control over what is equated among groups while avoiding the verbose syntax of needing to label every parameter constraint. Lavaan provides the group.equal argument, which can accept any of the following: "loadings", "intercepts", "means", "thresholds", "regressions", "residuals", "residual.covariances", "lv.variances", or "lv.covariances". See ?lavOptions for details.

For the weak invariance model, we would use group.equal="loadings". Note that we have to be careful about the std.lv argument when fitting multiple groups. There are two ways to identify the weak invariance model:

  1. Unit loading identification (ULI)
    • 1a. The same reference indicator fixed at 1 for each factor in each group.
    • 1b. Factor loading for remaining indicators equated across groups.
    • 1c. Factor variances free in all groups.
  2. Unit variance identification (UVI)
    • 2a. No reference indicator.
    • 2b. All loadings equated across groups.
    • 2c. Factor variances set to 1.0 in one group and free in the others.

Note that passing std.lv=TRUE to sem will set up the UVI properly in a single-group analysis. But under multiple groups, at present it will set factor variances to 1 in all groups, which does not satisfy 2c above. Thus, I would suggest not using std.lv=TRUE and not taking the UVI approach. Instead, the ULI approach (reference indicators) is easy to set up. I include syntax below for UVI, but you’ll see it is a bit tricky.

Small sidebar: the measEq.syntax function above does handle the UVI/ULI challenge in invariance testing internally, so either identification approach yields equivalent results (as it should).

mconfigural <- sem(HS.model, data = HolzingerSwineford1939, std.lv=FALSE, group = "school")

#ULI approach (reference indicator)
mweak <- sem(HS.model, data = HolzingerSwineford1939, std.lv=FALSE, group = "school", group.equal="loadings")

#Manual UVI approach since std.lv=TRUE gives improper (nonequivalent) results
mmod_uvi <-'
  visual  =~ NA*x1 + x2 + x3
  textual =~ NA*x4 + x5 + x6
  speed   =~ NA*x7 + x8 + x9 

#at present, lavaan does not allow parameter labels to be combined with constraints, such as c(1, v1)
visual ~~ c(v1, v2)*visual
textual ~~ c(t1, t2)*textual
speed ~~ c(s1, s2)*speed

#thus, use constraints to enforce the structure of fixing variances to 1.0 in one group, free in others
v1==1
t1==1
s1==1
'

mweak_uvi <- sem(mmod_uvi, data = HolzingerSwineford1939, std.lv=FALSE, group = "school", group.equal="loadings", meanstructure=TRUE)

Let’s check the parameter matrices for the UVI weak invariance model:

inspect(mweak_uvi)
## 
## Note: model contains equality constraints:
## 
##    lhs op rhs
## 1   10 ==   1
## 2   11 ==   1
## 3   12 ==   1
## 4    1 ==  34
## 5    2 ==  35
## 6    3 ==  36
## 7    4 ==  37
## 8    5 ==  38
## 9    6 ==  39
## 10   7 ==  40
## 11   8 ==  41
## 12   9 ==  42
## 
## $Pasteur
## $Pasteur$lambda
##    visual textul speed
## x1      1      0     0
## x2      2      0     0
## x3      3      0     0
## x4      0      4     0
## x5      0      5     0
## x6      0      6     0
## x7      0      0     7
## x8      0      0     8
## x9      0      0     9
## 
## $Pasteur$theta
##    x1 x2 x3 x4 x5 x6 x7 x8 x9
## x1 13                        
## x2  0 14                     
## x3  0  0 15                  
## x4  0  0  0 16               
## x5  0  0  0  0 17            
## x6  0  0  0  0  0 18         
## x7  0  0  0  0  0  0 19      
## x8  0  0  0  0  0  0  0 20   
## x9  0  0  0  0  0  0  0  0 21
## 
## $Pasteur$psi
##         visual textul speed
## visual  10                 
## textual 22     11          
## speed   23     24     12   
## 
## $Pasteur$nu
##    intrcp
## x1     25
## x2     26
## x3     27
## x4     28
## x5     29
## x6     30
## x7     31
## x8     32
## x9     33
## 
## $Pasteur$alpha
##         intrcp
## visual       0
## textual      0
## speed        0
## 
## 
## $`Grant-White`
## $`Grant-White`$lambda
##    visual textul speed
## x1     34      0     0
## x2     35      0     0
## x3     36      0     0
## x4      0     37     0
## x5      0     38     0
## x6      0     39     0
## x7      0      0    40
## x8      0      0    41
## x9      0      0    42
## 
## $`Grant-White`$theta
##    x1 x2 x3 x4 x5 x6 x7 x8 x9
## x1 46                        
## x2  0 47                     
## x3  0  0 48                  
## x4  0  0  0 49               
## x5  0  0  0  0 50            
## x6  0  0  0  0  0 51         
## x7  0  0  0  0  0  0 52      
## x8  0  0  0  0  0  0  0 53   
## x9  0  0  0  0  0  0  0  0 54
## 
## $`Grant-White`$psi
##         visual textul speed
## visual  43                 
## textual 55     44          
## speed   56     57     45   
## 
## $`Grant-White`$nu
##    intrcp
## x1     58
## x2     59
## x3     60
## x4     61
## x5     62
## x6     63
## x7     64
## x8     65
## x9     66
## 
## $`Grant-White`$alpha
##         intrcp
## visual       0
## textual      0
## speed        0

And we can inspect the fit:

summary(mweak_uvi, fit.measures=TRUE)
## lavaan 0.6-3 ended normally after 50 iterations
## 
##   Optimization method                           NLMINB
##   Number of free parameters                         66
##   Number of equality constraints                    12
## 
##   Number of observations per group         
##   Pasteur                                          156
##   Grant-White                                      145
## 
##   Estimator                                         ML
##   Model Fit Test Statistic                     124.044
##   Degrees of freedom                                54
##   P-value (Chi-square)                           0.000
## 
## Chi-square for each group:
## 
##   Pasteur                                       68.825
##   Grant-White                                   55.219
## 
## Model test baseline model:
## 
##   Minimum Function Test Statistic              957.769
##   Degrees of freedom                                72
##   P-value                                        0.000
## 
## User model versus baseline model:
## 
##   Comparative Fit Index (CFI)                    0.921
##   Tucker-Lewis Index (TLI)                       0.895
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -3686.294
##   Loglikelihood unrestricted model (H1)      -3624.272
## 
##   Number of free parameters                         54
##   Akaike (AIC)                                7480.587
##   Bayesian (BIC)                              7680.771
##   Sample-size adjusted Bayesian (BIC)         7509.514
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.093
##   90 Percent Confidence Interval          0.071  0.114
##   P-value RMSEA <= 0.05                          0.001
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.072
## 
## Parameter Estimates:
## 
##   Information                                 Expected
##   Information saturated (h1) model          Structured
##   Standard Errors                             Standard
## 
## 
## Group 1 [Pasteur]:
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual =~                                           
##     x1      (.p1.)    0.897    0.095    9.427    0.000
##     x2      (.p2.)    0.537    0.085    6.329    0.000
##     x3      (.p3.)    0.704    0.085    8.241    0.000
##   textual =~                                          
##     x4      (.p4.)    0.956    0.072   13.302    0.000
##     x5      (.p5.)    1.035    0.078   13.308    0.000
##     x6      (.p6.)    0.871    0.064   13.685    0.000
##   speed =~                                            
##     x7      (.p7.)    0.552    0.070    7.841    0.000
##     x8      (.p8.)    0.663    0.075    8.892    0.000
##     x9      (.p9.)    0.573    0.069    8.298    0.000
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual ~~                                           
##     textual           0.485    0.087    5.601    0.000
##     speed             0.340    0.114    2.994    0.003
##   textual ~~                                          
##     speed             0.333    0.100    3.342    0.001
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1                4.941    0.093   52.991    0.000
##    .x2                5.984    0.100   60.096    0.000
##    .x3                2.487    0.094   26.465    0.000
##    .x4                2.823    0.093   30.371    0.000
##    .x5                3.995    0.101   39.715    0.000
##    .x6                1.922    0.081   23.711    0.000
##    .x7                4.432    0.086   51.540    0.000
##    .x8                5.563    0.078   71.088    0.000
##    .x9                5.418    0.079   68.153    0.000
##     visual            0.000                           
##     textual           0.000                           
##     speed             0.000                           
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##     visual    (v1)    1.000                           
##     textual   (t1)    1.000       NA                  
##     speed     (s1)    1.000                           
##    .x1                0.551    0.137    4.010    0.000
##    .x2                1.258    0.155    8.117    0.000
##    .x3                0.882    0.128    6.884    0.000
##    .x4                0.434    0.070    6.238    0.000
##    .x5                0.508    0.082    6.229    0.000
##    .x6                0.266    0.050    5.294    0.000
##    .x7                0.849    0.114    7.468    0.000
##    .x8                0.515    0.095    5.409    0.000
##    .x9                0.658    0.096    6.865    0.000
## 
## 
## Group 2 [Grant-White]:
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual =~                                           
##     x1      (.p1.)    0.897    0.095    9.427    0.000
##     x2      (.p2.)    0.537    0.085    6.329    0.000
##     x3      (.p3.)    0.704    0.085    8.241    0.000
##   textual =~                                          
##     x4      (.p4.)    0.956    0.072   13.302    0.000
##     x5      (.p5.)    1.035    0.078   13.308    0.000
##     x6      (.p6.)    0.871    0.064   13.685    0.000
##   speed =~                                            
##     x7      (.p7.)    0.552    0.070    7.841    0.000
##     x8      (.p8.)    0.663    0.075    8.892    0.000
##     x9      (.p9.)    0.573    0.069    8.298    0.000
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual ~~                                           
##     textual           0.510    0.125    4.088    0.000
##     speed             0.633    0.167    3.789    0.000
##   textual ~~                                          
##     speed             0.428    0.140    3.071    0.002
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1                4.930    0.097   50.763    0.000
##    .x2                6.200    0.091   68.379    0.000
##    .x3                1.996    0.085   23.455    0.000
##    .x4                3.317    0.092   35.950    0.000
##    .x5                4.712    0.100   47.173    0.000
##    .x6                2.469    0.091   27.248    0.000
##    .x7                3.921    0.086   45.555    0.000
##    .x8                5.488    0.087   63.257    0.000
##    .x9                5.327    0.085   62.786    0.000
##     visual            0.000                           
##     textual           0.000                           
##     speed             0.000                           
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##     visual    (v2)    0.897    0.225    3.990    0.000
##     textual   (t2)    0.992    0.186    5.336    0.000
##     speed     (s2)    1.559    0.379    4.116    0.000
##    .x1                0.645    0.127    5.084    0.000
##    .x2                0.933    0.121    7.732    0.000
##    .x3                0.605    0.096    6.282    0.000
##    .x4                0.329    0.062    5.279    0.000
##    .x5                0.384    0.073    5.270    0.000
##    .x6                0.437    0.067    6.575    0.000
##    .x7                0.599    0.090    6.651    0.000
##    .x8                0.406    0.089    4.541    0.000
##    .x9                0.532    0.086    6.202    0.000
## 
## Constraints:
##                                                |Slack|
##     v1 - (1)                                     0.000
##     t1 - (1)                                     0.000
##     s1 - (1)                                     0.000

5.1.1 Are the weak ULI and weak UVI models equivalent?

Let’s return to the handy net function from semTools:

semTools::net(mweak, mweak_uvi)
## 
##         If cell [R, C] is TRUE, the model in row R is nested within column C.
## 
##         If the models also have the same degrees of freedom, they are equivalent.
## 
##         NA indicates the model in column C did not converge when fit to the
##         implied means and covariance matrix from the model in row R.
## 
##         The hidden diagonal is TRUE because any model is equivalent to itself.
##         The upper triangle is hidden because for models with the same degrees
##         of freedom, cell [C, R] == cell [R, C].  For all models with different
##         degrees of freedom, the upper diagonal is all FALSE because models with
##         fewer degrees of freedom (i.e., more parameters) cannot be nested
##         within models with more degrees of freedom (i.e., fewer parameters).
##         
##                     mweak mweak_uvi
## mweak (df = 54)                    
## mweak_uvi (df = 54) TRUE
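To see the equivalence concretely, the two parameterizations can be refit and their global fit compared; equivalent models must imply the same fitted moments and therefore yield identical chi-square and degrees of freedom. A self-contained sketch of the same two models:

```r
library(lavaan)

HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '

# ULI weak model: reference indicators fixed at 1, remaining loadings equated
mweak <- sem(HS.model, data = HolzingerSwineford1939, group = "school",
             group.equal = "loadings")

# UVI weak model: all loadings free (NA*) and equated; factor variances fixed
# at 1 in group 1 via equality constraints, free in group 2
mmod_uvi <- '
  visual  =~ NA*x1 + x2 + x3
  textual =~ NA*x4 + x5 + x6
  speed   =~ NA*x7 + x8 + x9
  visual  ~~ c(v1, v2)*visual
  textual ~~ c(t1, t2)*textual
  speed   ~~ c(s1, s2)*speed
  v1 == 1
  t1 == 1
  s1 == 1
'
mweak_uvi <- sem(mmod_uvi, data = HolzingerSwineford1939, group = "school",
                 group.equal = "loadings", meanstructure = TRUE)

# identical chi-square (124.044) and df (54): same model, different clothes
fitMeasures(mweak,     c("chisq", "df"))
fitMeasures(mweak_uvi, c("chisq", "df"))
```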

5.2 Strong invariance

  1. Equate loadings and intercepts across groups
  2. Free item residual variances across groups
  3. Fix factor means at zero in one group, free in others (reference group approach)

The UVI/ULI identification steps are the same as in the weak invariance model.

mstrong <- sem(HS.model, data = HolzingerSwineford1939, std.lv=FALSE, group = "school", group.equal=c("loadings", "intercepts"), meanstructure=TRUE)

summary(mstrong) #useful to double check which parameters are equated.
## lavaan 0.6-3 ended normally after 60 iterations
## 
##   Optimization method                           NLMINB
##   Number of free parameters                         63
##   Number of equality constraints                    15
## 
##   Number of observations per group         
##   Pasteur                                          156
##   Grant-White                                      145
## 
##   Estimator                                         ML
##   Model Fit Test Statistic                     164.103
##   Degrees of freedom                                60
##   P-value (Chi-square)                           0.000
## 
## Chi-square for each group:
## 
##   Pasteur                                       90.210
##   Grant-White                                   73.892
## 
## Parameter Estimates:
## 
##   Information                                 Expected
##   Information saturated (h1) model          Structured
##   Standard Errors                             Standard
## 
## 
## Group 1 [Pasteur]:
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual =~                                           
##     x1                1.000                           
##     x2      (.p2.)    0.576    0.101    5.713    0.000
##     x3      (.p3.)    0.798    0.112    7.146    0.000
##   textual =~                                          
##     x4                1.000                           
##     x5      (.p5.)    1.120    0.066   16.965    0.000
##     x6      (.p6.)    0.932    0.056   16.608    0.000
##   speed =~                                            
##     x7                1.000                           
##     x8      (.p8.)    1.130    0.145    7.786    0.000
##     x9      (.p9.)    1.009    0.132    7.667    0.000
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual ~~                                           
##     textual           0.410    0.095    4.293    0.000
##     speed             0.178    0.066    2.687    0.007
##   textual ~~                                          
##     speed             0.180    0.062    2.900    0.004
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1      (.25.)    5.001    0.090   55.760    0.000
##    .x2      (.26.)    6.151    0.077   79.905    0.000
##    .x3      (.27.)    2.271    0.083   27.387    0.000
##    .x4      (.28.)    2.778    0.087   31.953    0.000
##    .x5      (.29.)    4.035    0.096   41.858    0.000
##    .x6      (.30.)    1.926    0.079   24.426    0.000
##    .x7      (.31.)    4.242    0.073   57.975    0.000
##    .x8      (.32.)    5.630    0.072   78.531    0.000
##    .x9      (.33.)    5.465    0.069   79.016    0.000
##     visual            0.000                           
##     textual           0.000                           
##     speed             0.000                           
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1                0.555    0.139    3.983    0.000
##    .x2                1.296    0.158    8.186    0.000
##    .x3                0.944    0.136    6.929    0.000
##    .x4                0.445    0.069    6.430    0.000
##    .x5                0.502    0.082    6.136    0.000
##    .x6                0.263    0.050    5.264    0.000
##    .x7                0.888    0.120    7.416    0.000
##    .x8                0.541    0.095    5.706    0.000
##    .x9                0.654    0.096    6.805    0.000
##     visual            0.796    0.172    4.641    0.000
##     textual           0.879    0.131    6.694    0.000
##     speed             0.322    0.082    3.914    0.000
## 
## 
## Group 2 [Grant-White]:
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual =~                                           
##     x1                1.000                           
##     x2      (.p2.)    0.576    0.101    5.713    0.000
##     x3      (.p3.)    0.798    0.112    7.146    0.000
##   textual =~                                          
##     x4                1.000                           
##     x5      (.p5.)    1.120    0.066   16.965    0.000
##     x6      (.p6.)    0.932    0.056   16.608    0.000
##   speed =~                                            
##     x7                1.000                           
##     x8      (.p8.)    1.130    0.145    7.786    0.000
##     x9      (.p9.)    1.009    0.132    7.667    0.000
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual ~~                                           
##     textual           0.427    0.097    4.417    0.000
##     speed             0.329    0.082    4.006    0.000
##   textual ~~                                          
##     speed             0.236    0.073    3.224    0.001
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1      (.25.)    5.001    0.090   55.760    0.000
##    .x2      (.26.)    6.151    0.077   79.905    0.000
##    .x3      (.27.)    2.271    0.083   27.387    0.000
##    .x4      (.28.)    2.778    0.087   31.953    0.000
##    .x5      (.29.)    4.035    0.096   41.858    0.000
##    .x6      (.30.)    1.926    0.079   24.426    0.000
##    .x7      (.31.)    4.242    0.073   57.975    0.000
##    .x8      (.32.)    5.630    0.072   78.531    0.000
##    .x9      (.33.)    5.465    0.069   79.016    0.000
##     visual           -0.148    0.122   -1.211    0.226
##     textual           0.576    0.117    4.918    0.000
##     speed            -0.177    0.090   -1.968    0.049
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1                0.654    0.128    5.094    0.000
##    .x2                0.964    0.123    7.812    0.000
##    .x3                0.641    0.101    6.316    0.000
##    .x4                0.343    0.062    5.534    0.000
##    .x5                0.376    0.073    5.133    0.000
##    .x6                0.437    0.067    6.559    0.000
##    .x7                0.625    0.095    6.574    0.000
##    .x8                0.434    0.088    4.914    0.000
##    .x9                0.522    0.086    6.102    0.000
##     visual            0.708    0.160    4.417    0.000
##     textual           0.870    0.131    6.659    0.000
##     speed             0.505    0.115    4.379    0.000
anova(mweak, mstrong)
##                 Df  AIC  BIC Chisq Chisq diff Df diff Pr(>Chisq)
## mweak           54 7481 7681   124         NA      NA         NA
## mstrong         60 7509 7687   164       40.1       6          0

Consistent with the above output of measEq.syntax, the LRT indicates that constraining the intercepts to equality worsens fit. Likewise, the AIC is 28 points higher (7509 vs. 7481), a large reduction in relative fit. What might be the problem? By constraining the loadings and intercepts to equality, we are implying that at a given level of the factor, the expected value of each indicator should be equal between groups. This follows because we have already equated the strength of association between the factor and the indicators via the loadings; adding equal intercepts forces the whole item-level regression line to coincide across groups.

5.3 Partial invariance

But if the strong invariance model fits worse than the weak MI model, this implies that the expected value of at least one indicator at a given level of the factor differs between groups. In other words, the average level of an indicator is higher or lower in one group even after accounting for the part explained by the factor.

When mean structure is considered in the common factor model, the model-implied values of the indicator vector \(\textbf{y}_i\) for person \(i\) are:

\[ \textbf{y}_i = \boldsymbol{\nu} + \boldsymbol{\Lambda}_y \boldsymbol{\eta}_i + \boldsymbol{\varepsilon}_i \]

Note that we don’t have any covariates predicting the factor scores or indicators; these would require an extension of the model. And more matrices!

But the main idea here is that we now have a vector of item intercepts \(\boldsymbol{\nu}\) for the indicators \(\textbf{y}_i\). We often add mean structure to an SEM for two reasons:

  • we wish to compare groups on factor means (e.g., does group A have higher neuroticism scores than group B)
  • items have a nonlinear relationship to the factor (more common in variants of IRT)
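To make the role of the intercepts concrete, here is a minimal sketch in base R (with made-up numbers, not estimates from any model in these notes) of how the intercepts and loadings combine to produce model-implied indicator means in one group:

```r
# Hypothetical values: model-implied indicator means are nu + lambda * alpha,
# where alpha is the factor mean in this group.
nu     <- c(x1 = 5.0, x2 = 6.1, x3 = 2.3)  # item intercepts
lambda <- c(1.00, 0.58, 0.80)              # factor loadings
alpha  <- 0.5                              # factor mean for this group

mu <- nu + lambda * alpha                  # expected item means
round(mu, 2)
##   x1   x2   x3
## 5.50 6.39 2.70
```

Under strong invariance, `nu` and `lambda` are shared across groups, so any group difference in the expected item means must flow through `alpha`.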

Circling back to our failure to corroborate strong invariance, we might wish to examine which intercept constraints are causing the greatest reduction in fit. Whereas the modificationIndices function in lavaan gives the change in model \(\chi^2\) if we add parameters, the lavTestScore function examines how model \(\chi^2\) would improve if we freed (removed) existing equality constraints. Note that this function is still based on the Lagrange multiplier test, as are standard modification indices.

# what is the effect of freeing the equality constraints of the strong model relative to the weak?
lavTestScore(mstrong, epc=TRUE)
## $test
## 
## total score test:
## 
##    test X2 df p.value
## 1 score 47 15       0
## 
## $uni
## 
## univariate score tests:
## 
##      lhs op   rhs     X2 df p.value
## 1   .p2. == .p38.  0.306  1   0.580
## 2   .p3. == .p39.  1.636  1   0.201
## 3   .p5. == .p41.  2.744  1   0.098
## 4   .p6. == .p42.  2.627  1   0.105
## 5   .p8. == .p44.  0.027  1   0.871
## 6   .p9. == .p45.  0.004  1   0.952
## 7  .p25. == .p61.  5.847  1   0.016
## 8  .p26. == .p62.  6.863  1   0.009
## 9  .p27. == .p63. 19.193  1   0.000
## 10 .p28. == .p64.  2.139  1   0.144
## 11 .p29. == .p65.  1.563  1   0.211
## 12 .p30. == .p66.  0.032  1   0.857
## 13 .p31. == .p67. 15.021  1   0.000
## 14 .p32. == .p68.  4.710  1   0.030
## 15 .p33. == .p69.  1.498  1   0.221
## 
## $epc
## 
## expected parameter changes (epc) and expected parameter values (epv):
## 
##        lhs op     rhs group free label plabel    est    epc    epv
## 1   visual =~      x1     1    0         .p1.     NA     NA     NA
## 2   visual =~      x2     1    1  .p2.   .p2.  0.576  0.136  0.711
## 3   visual =~      x3     1    2  .p3.   .p3.  0.798  0.216  1.015
## 4  textual =~      x4     1    0         .p4.     NA     NA     NA
## 5  textual =~      x5     1    3  .p5.   .p5.  1.120 -0.074  1.046
## 6  textual =~      x6     1    4  .p6.   .p6.  0.932  0.049  0.981
## 7    speed =~      x7     1    0         .p7.     NA     NA     NA
## 8    speed =~      x8     1    5  .p8.   .p8.  1.130 -0.038  1.091
## 9    speed =~      x9     1    6  .p9.   .p9.  1.009 -0.006  1.003
## 10      x1 ~~      x1     1    7        .p10.  0.555  0.151  0.706
## 11      x2 ~~      x2     1    8        .p11.  1.296 -0.030  1.266
## 12      x3 ~~      x3     1    9        .p12.  0.944 -0.089  0.855
## 13      x4 ~~      x4     1   10        .p13.  0.445  0.003  0.448
## 14      x5 ~~      x5     1   11        .p14.  0.502  0.041  0.544
## 15      x6 ~~      x6     1   12        .p15.  0.263 -0.027  0.236
## 16      x7 ~~      x7     1   13        .p16.  0.888 -0.006  0.883
## 17      x8 ~~      x8     1   14        .p17.  0.541  0.010  0.551
## 18      x9 ~~      x9     1   15        .p18.  0.654 -0.005  0.649
## 19  visual ~~  visual     1   16        .p19.  0.796 -0.198  0.598
## 20 textual ~~ textual     1   17        .p20.  0.879  0.000  0.880
## 21   speed ~~   speed     1   18        .p21.  0.322  0.010  0.333
## 22  visual ~~ textual     1   19        .p22.  0.410 -0.037  0.373
## 23  visual ~~   speed     1   20        .p23.  0.178 -0.013  0.165
## 24 textual ~~   speed     1   21        .p24.  0.180  0.003  0.183
## 25      x1 ~1             1   22 .p25.  .p25.  5.001  0.060  5.062
## 26      x2 ~1             1   23 .p26.  .p26.  6.151  0.167  6.318
## 27      x3 ~1             1   24 .p27.  .p27.  2.271 -0.216  2.055
## 28      x4 ~1             1   25 .p28.  .p28.  2.778 -0.045  2.733
## 29      x5 ~1             1   26 .p29.  .p29.  4.035  0.040  4.074
## 30      x6 ~1             1   27 .p30.  .p30.  1.926  0.003  1.929
## 31      x7 ~1             1   28 .p31.  .p31.  4.242 -0.190  4.052
## 32      x8 ~1             1   29 .p32.  .p32.  5.630  0.067  5.698
## 33      x9 ~1             1   30 .p33.  .p33.  5.465  0.048  5.513
## 34  visual ~1             1    0        .p34.     NA     NA     NA
## 35 textual ~1             1    0        .p35.     NA     NA     NA
## 36   speed ~1             1    0        .p36.     NA     NA     NA
## 37  visual =~      x1     2    0        .p37.     NA     NA     NA
## 38  visual =~      x2     2   31  .p2.  .p38.  0.576 -0.119  0.457
## 39  visual =~      x3     2   32  .p3.  .p39.  0.798 -0.065  0.734
## 40 textual =~      x4     2    0        .p40.     NA     NA     NA
## 41 textual =~      x5     2   33  .p5.  .p41.  1.120  0.142  1.262
## 42 textual =~      x6     2   34  .p6.  .p42.  0.932 -0.031  0.901
## 43   speed =~      x7     2    0        .p43.     NA     NA     NA
## 44   speed =~      x8     2   35  .p8.  .p44.  1.130 -0.118  1.012
## 45   speed =~      x9     2   36  .p9.  .p45.  1.009 -0.097  0.912
## 46      x1 ~~      x1     2   37        .p46.  0.654 -0.048  0.606
## 47      x2 ~~      x2     2   38        .p47.  0.964  0.023  0.987
## 48      x3 ~~      x3     2   39        .p48.  0.641  0.013  0.653
## 49      x4 ~~      x4     2   40        .p49.  0.343  0.027  0.370
## 50      x5 ~~      x5     2   41        .p50.  0.376 -0.061  0.315
## 51      x6 ~~      x6     2   42        .p51.  0.437  0.026  0.463
## 52      x7 ~~      x7     2   43        .p52.  0.625 -0.027  0.598
## 53      x8 ~~      x8     2   44        .p53.  0.434  0.019  0.453
## 54      x9 ~~      x9     2   45        .p54.  0.522  0.006  0.529
## 55  visual ~~  visual     2   46        .p55.  0.708  0.085  0.793
## 56 textual ~~ textual     2   47        .p56.  0.870 -0.073  0.797
## 57   speed ~~   speed     2   48        .p57.  0.505  0.075  0.580
## 58  visual ~~ textual     2   49        .p58.  0.427  0.004  0.431
## 59  visual ~~   speed     2   50        .p59.  0.329  0.042  0.371
## 60 textual ~~   speed     2   51        .p60.  0.236  0.008  0.244
## 61      x1 ~1             2   52 .p25.  .p61.  5.001 -0.061  4.940
## 62      x2 ~1             2   53 .p26.  .p62.  6.151 -0.143  6.009
## 63      x3 ~1             2   54 .p27.  .p63.  2.271  0.160  2.431
## 64      x4 ~1             2   55 .p28.  .p64.  2.778  0.056  2.834
## 65      x5 ~1             2   56 .p29.  .p65.  4.035 -0.092  3.943
## 66      x6 ~1             2   57 .p30.  .p66.  1.926  0.030  1.955
## 67      x7 ~1             2   58 .p31.  .p67.  4.242  0.145  4.387
## 68      x8 ~1             2   59 .p32.  .p68.  5.630 -0.078  5.552
## 69      x9 ~1             2   60 .p33.  .p69.  5.465 -0.057  5.408
## 70  visual ~1             2   61        .p70. -0.148 -0.015 -0.163
## 71 textual ~1             2   62        .p71.  0.576 -0.019  0.557
## 72   speed ~1             2   63        .p72. -0.177 -0.001 -0.178

The epc=TRUE argument requests that lavaan compute the “expected parameter change” that would result from freeing each constraint. The output also gives a readout of the parameter labels lavaan uses to set constraints. You can likewise examine parTable to understand the parameter labels. Note the addition of the group column.

parTable(mstrong)

Examining the output of lavTestScore, we see that two constraints are associated with the lion’s share of the decrement in fit – about 34 of the 47 points in the total score test. These are:

.p27. == .p63. \(\chi^2\) = 19.19

.p31. == .p67. \(\chi^2\) = 15.02
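These worst offenders can also be identified programmatically by sorting the univariate score tests. A small sketch, assuming the mstrong fit from above is in the workspace:

```r
# rank the univariate score tests to find the costliest equality constraints
st <- lavTestScore(mstrong)
head(st$uni[order(st$uni$X2, decreasing = TRUE), ], n = 2)
##      lhs op   rhs     X2 df p.value
## 9  .p27. == .p63. 19.193  1   0.000
## 13 .p31. == .p67. 15.021  1   0.000
```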

From the output, we see that p27 and p63 represent intercept equality for x3, whereas p31 and p67 represent intercept equality for x7. Freeing these between groups would yield partial strong invariance in which all loadings are equated but only 7 of 9 indicators have equal intercepts. That is, one indicator of the visual factor (x3) and one indicator of the speed factor (x7) would show differential item functioning (DIF). Partial invariance is often seen as acceptable (in the sense of permitting group comparisons on factor means and variances/covariances) if only a handful of equality constraints are not satisfied. Here, because we have only three indicators per factor, noninvariance of one indicator may make comparing factor means/variances more dubious than if we had 10 indicators per factor.

Setting aside this complexity, if we wished to allow the intercepts for x3 and x7 to vary by group, while otherwise enforcing strong invariance, we could use the group.partial argument, which frees selected parameters while keeping the remainder equated:

mpartial_strong <- sem(HS.model, data = HolzingerSwineford1939, std.lv=FALSE, group = "school", group.equal=c("loadings", "intercepts"), group.partial=c("x3~1", "x7~1"), meanstructure=TRUE)

summary(mpartial_strong)
## lavaan 0.6-3 ended normally after 61 iterations
## 
##   Optimization method                           NLMINB
##   Number of free parameters                         63
##   Number of equality constraints                    13
## 
##   Number of observations per group         
##   Pasteur                                          156
##   Grant-White                                      145
## 
##   Estimator                                         ML
##   Model Fit Test Statistic                     129.422
##   Degrees of freedom                                58
##   P-value (Chi-square)                           0.000
## 
## Chi-square for each group:
## 
##   Pasteur                                       71.170
##   Grant-White                                   58.253
## 
## Parameter Estimates:
## 
##   Information                                 Expected
##   Information saturated (h1) model          Structured
##   Standard Errors                             Standard
## 
## 
## Group 1 [Pasteur]:
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual =~                                           
##     x1                1.000                           
##     x2      (.p2.)    0.606    0.101    5.988    0.000
##     x3      (.p3.)    0.791    0.109    7.259    0.000
##   textual =~                                          
##     x4                1.000                           
##     x5      (.p5.)    1.120    0.066   16.960    0.000
##     x6      (.p6.)    0.932    0.056   16.606    0.000
##   speed =~                                            
##     x7                1.000                           
##     x8      (.p8.)    1.200    0.155    7.741    0.000
##     x9      (.p9.)    1.041    0.136    7.635    0.000
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual ~~                                           
##     textual           0.404    0.095    4.247    0.000
##     speed             0.168    0.064    2.647    0.008
##   textual ~~                                          
##     speed             0.172    0.060    2.882    0.004
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1      (.25.)    4.914    0.092   53.538    0.000
##    .x2      (.26.)    6.087    0.079   76.999    0.000
##    .x3                2.487    0.094   26.474    0.000
##    .x4      (.28.)    2.778    0.087   31.953    0.000
##    .x5      (.29.)    4.035    0.096   41.861    0.000
##    .x6      (.30.)    1.926    0.079   24.425    0.000
##    .x7                4.432    0.086   51.533    0.000
##    .x8      (.32.)    5.569    0.074   75.328    0.000
##    .x9      (.33.)    5.409    0.070   77.182    0.000
##     visual            0.000                           
##     textual           0.000                           
##     speed             0.000                           
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1                0.560    0.137    4.086    0.000
##    .x2                1.267    0.156    8.105    0.000
##    .x3                0.879    0.128    6.850    0.000
##    .x4                0.446    0.069    6.432    0.000
##    .x5                0.502    0.082    6.132    0.000
##    .x6                0.263    0.050    5.258    0.000
##    .x7                0.850    0.114    7.471    0.000
##    .x8                0.516    0.095    5.429    0.000
##    .x9                0.656    0.096    6.852    0.000
##     visual            0.796    0.170    4.691    0.000
##     textual           0.879    0.131    6.693    0.000
##     speed             0.304    0.078    3.918    0.000
## 
## 
## Group 2 [Grant-White]:
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual =~                                           
##     x1                1.000                           
##     x2      (.p2.)    0.606    0.101    5.988    0.000
##     x3      (.p3.)    0.791    0.109    7.259    0.000
##   textual =~                                          
##     x4                1.000                           
##     x5      (.p5.)    1.120    0.066   16.960    0.000
##     x6      (.p6.)    0.932    0.056   16.606    0.000
##   speed =~                                            
##     x7                1.000                           
##     x8      (.p8.)    1.200    0.155    7.741    0.000
##     x9      (.p9.)    1.041    0.136    7.635    0.000
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual ~~                                           
##     textual           0.426    0.097    4.412    0.000
##     speed             0.312    0.079    3.955    0.000
##   textual ~~                                          
##     speed             0.223    0.071    3.163    0.002
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1      (.25.)    4.914    0.092   53.538    0.000
##    .x2      (.26.)    6.087    0.079   76.999    0.000
##    .x3                1.955    0.108   18.170    0.000
##    .x4      (.28.)    2.778    0.087   31.953    0.000
##    .x5      (.29.)    4.035    0.096   41.861    0.000
##    .x6      (.30.)    1.926    0.079   24.425    0.000
##    .x7                3.992    0.094   42.478    0.000
##    .x8      (.32.)    5.569    0.074   75.328    0.000
##    .x9      (.33.)    5.409    0.070   77.182    0.000
##     visual            0.051    0.129    0.393    0.695
##     textual           0.576    0.117    4.918    0.000
##     speed            -0.071    0.089   -0.800    0.424
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1                0.651    0.127    5.138    0.000
##    .x2                0.939    0.122    7.721    0.000
##    .x3                0.603    0.096    6.248    0.000
##    .x4                0.343    0.062    5.532    0.000
##    .x5                0.377    0.073    5.136    0.000
##    .x6                0.437    0.067    6.556    0.000
##    .x7                0.599    0.090    6.655    0.000
##    .x8                0.407    0.089    4.570    0.000
##    .x9                0.531    0.086    6.186    0.000
##     visual            0.715    0.160    4.473    0.000
##     textual           0.870    0.131    6.659    0.000
##     speed             0.475    0.109    4.344    0.000
anova(mweak, mpartial_strong)
##                 Df  AIC  BIC Chisq Chisq diff Df diff Pr(>Chisq)
## mweak           54 7481 7681   124         NA      NA         NA
## mpartial_strong 58 7478 7663   129       5.38       4      0.251
compareFit(list(weak=mweak, strong_partial=mpartial_strong))
## ################### Nested Model Comparison #########################
## Chi Square Difference Test
## 
##                Df  AIC  BIC Chisq Chisq diff Df diff Pr(>Chisq)
## weak           54 7481 7681   124                              
## strong_partial 58 7478 7663   129       5.38       4       0.25
## 
## ####################### Model Fit Indices ###########################
##                   chisq df pvalue   cfi   tli       aic       bic rmsea
## weak           124.044† 54   .000 .921† .895  7480.587  7680.771  .093 
## strong_partial 129.422  58   .000 .919  .900† 7477.966† 7663.322† .090†
##                 srmr
## weak           .072†
## strong_partial .073 
## 
## ################## Differences in Fit Indices #######################
##                                 df    cfi   tli   aic   bic  rmsea  srmr
## strong_partial - weak            4 -0.002 0.005 -2.62 -17.4 -0.002 0.001

We now see that our partial strong invariance model does not fit worse than the weak model. Examining the parameter estimates, we also see that the intercepts of x3 and x7 are higher in the Pasteur school than in Grant-White. One might then infer that there are school-specific differences in these cognitive test scores that are not attributable to differences in the corresponding latent factors (visual and speed, respectively).
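To view the freed intercepts for the two groups side by side, one could filter the parameter table. A sketch, assuming the mpartial_strong fit is in the workspace:

```r
# pull the x3 and x7 intercepts (op "~1") for both groups
pe <- parameterEstimates(mpartial_strong)
subset(pe, op == "~1" & lhs %in% c("x3", "x7"),
       select = c(lhs, group, est, se))
```

Each indicator appears once per group, making the between-school intercept differences easy to scan.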

5.4 Strict invariance

  1. Factor loadings equated across groups
  2. Item intercepts equated across groups
  3. Residual variances equated across groups
  4. Factor means set at 0 in one group, free in others
  5. Factor variances and covariances freely estimated

We might then entertain a strict variant. Once we allow partial noninvariance at one step of testing, we should typically carry that freedom forward to subsequent steps. Here, we retain the freed (noninvariant) intercepts for x3 and x7 when we test the (partial) strict invariance model.

mstrict <- cfa(HS.model, data = HolzingerSwineford1939, std.lv=FALSE, group = "school", group.equal=c("loadings", "intercepts", "residuals"), group.partial=c("x3~1", "x7~1"))

summary(mstrict)
## lavaan 0.6-3 ended normally after 59 iterations
## 
##   Optimization method                           NLMINB
##   Number of free parameters                         63
##   Number of equality constraints                    22
## 
##   Number of observations per group         
##   Pasteur                                          156
##   Grant-White                                      145
## 
##   Estimator                                         ML
##   Model Fit Test Statistic                     147.260
##   Degrees of freedom                                67
##   P-value (Chi-square)                           0.000
## 
## Chi-square for each group:
## 
##   Pasteur                                       79.438
##   Grant-White                                   67.822
## 
## Parameter Estimates:
## 
##   Information                                 Expected
##   Information saturated (h1) model          Structured
##   Standard Errors                             Standard
## 
## 
## Group 1 [Pasteur]:
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual =~                                           
##     x1                1.000                           
##     x2      (.p2.)    0.624    0.104    5.986    0.000
##     x3      (.p3.)    0.825    0.113    7.288    0.000
##   textual =~                                          
##     x4                1.000                           
##     x5      (.p5.)    1.125    0.066   17.130    0.000
##     x6      (.p6.)    0.934    0.056   16.749    0.000
##   speed =~                                            
##     x7                1.000                           
##     x8      (.p8.)    1.194    0.162    7.355    0.000
##     x9      (.p9.)    1.068    0.146    7.308    0.000
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual ~~                                           
##     textual           0.363    0.094    3.875    0.000
##     speed             0.165    0.063    2.627    0.009
##   textual ~~                                          
##     speed             0.168    0.060    2.807    0.005
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1      (.25.)    4.910    0.093   52.673    0.000
##    .x2      (.26.)    6.072    0.079   76.983    0.000
##    .x3                2.487    0.090   27.772    0.000
##    .x4      (.28.)    2.784    0.086   32.195    0.000
##    .x5      (.29.)    4.029    0.096   41.811    0.000
##    .x6      (.30.)    1.927    0.081   23.746    0.000
##    .x7                4.432    0.082   53.865    0.000
##    .x8      (.32.)    5.568    0.073   75.821    0.000
##    .x9      (.33.)    5.411    0.071   76.698    0.000
##     visual            0.000                           
##     textual           0.000                           
##     speed             0.000                           
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1      (.10.)    0.636    0.101    6.303    0.000
##    .x2      (.11.)    1.101    0.100   10.973    0.000
##    .x3      (.12.)    0.724    0.085    8.529    0.000
##    .x4      (.13.)    0.383    0.047    8.096    0.000
##    .x5      (.14.)    0.435    0.057    7.613    0.000
##    .x6      (.15.)    0.353    0.042    8.336    0.000
##    .x7      (.16.)    0.738    0.076    9.696    0.000
##    .x8      (.17.)    0.479    0.073    6.592    0.000
##    .x9      (.18.)    0.580    0.069    8.386    0.000
##     visual            0.776    0.164    4.737    0.000
##     textual           0.893    0.131    6.826    0.000
##     speed             0.318    0.080    3.990    0.000
## 
## 
## Group 2 [Grant-White]:
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual =~                                           
##     x1                1.000                           
##     x2      (.p2.)    0.624    0.104    5.986    0.000
##     x3      (.p3.)    0.825    0.113    7.288    0.000
##   textual =~                                          
##     x4                1.000                           
##     x5      (.p5.)    1.125    0.066   17.130    0.000
##     x6      (.p6.)    0.934    0.056   16.749    0.000
##   speed =~                                            
##     x7                1.000                           
##     x8      (.p8.)    1.194    0.162    7.355    0.000
##     x9      (.p9.)    1.068    0.146    7.308    0.000
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   visual ~~                                           
##     textual           0.421    0.095    4.444    0.000
##     speed             0.315    0.078    4.029    0.000
##   textual ~~                                          
##     speed             0.223    0.071    3.148    0.002
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1      (.25.)    4.910    0.093   52.673    0.000
##    .x2      (.26.)    6.072    0.079   76.983    0.000
##    .x3                1.951    0.114   17.044    0.000
##    .x4      (.28.)    2.784    0.086   32.195    0.000
##    .x5      (.29.)    4.029    0.096   41.811    0.000
##    .x6      (.30.)    1.927    0.081   23.746    0.000
##    .x7                3.992    0.099   40.135    0.000
##    .x8      (.32.)    5.568    0.073   75.821    0.000
##    .x9      (.33.)    5.411    0.071   76.698    0.000
##     visual            0.054    0.128    0.423    0.672
##     textual           0.575    0.118    4.888    0.000
##     speed            -0.071    0.089   -0.805    0.421
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)
##    .x1      (.10.)    0.636    0.101    6.303    0.000
##    .x2      (.11.)    1.101    0.100   10.973    0.000
##    .x3      (.12.)    0.724    0.085    8.529    0.000
##    .x4      (.13.)    0.383    0.047    8.096    0.000
##    .x5      (.14.)    0.435    0.057    7.613    0.000
##    .x6      (.15.)    0.353    0.042    8.336    0.000
##    .x7      (.16.)    0.738    0.076    9.696    0.000
##    .x8      (.17.)    0.479    0.073    6.592    0.000
##    .x9      (.18.)    0.580    0.069    8.386    0.000
##     visual            0.664    0.150    4.436    0.000
##     textual           0.876    0.132    6.620    0.000
##     speed             0.446    0.109    4.095    0.000
anova(mpartial_strong, mstrict)
##                 Df  AIC  BIC Chisq Chisq diff Df diff Pr(>Chisq)
## mpartial_strong 58 7478 7663   129         NA      NA         NA
## mstrict         67 7478 7630   147       17.8       9      0.037

We do not see a substantial decrement in fit from adding strict invariance: the LRT is significant (bad), but the AIC is essentially unchanged. We might also examine the change in CFI (recall: reductions of .01 or more should be cause for concern).
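A sketch of that \(\Delta\)CFI check, assuming both fits are in the workspace:

```r
# change in CFI from the partial strong model to the (partial) strict model;
# a drop of .01 or more would be cause for concern
fitMeasures(mpartial_strong, "cfi") - fitMeasures(mstrict, "cfi")
```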

5.5 Structural invariance

In all of these models, the variances and covariances among the latent variables are freely estimated across groups. One can also consider structural invariance, in which variances, covariances, and/or structural regressions among the latent variables are equated across groups. For example, one might wish to know whether the association between neuroticism and extraversion is similar in college students compared to Wall Street executives.

Structural invariance is more about testing a substantively interesting hypothesis than about verifying that we can compare the latent variables in the first place. Measurement invariance, by contrast, examines potential problems in the comparability of the factors across groups. If we satisfy at least strong MI, we can proceed to examine structural invariance. And if structural invariance fails, it does not signal a psychometric problem; it simply means the groups genuinely differ in their mean/covariance structure at the factor level.

If we want to equate factor covariances in a model that at least satisfies strong measurement invariance, we can add group.equal="lv.covariances". Although lavaan will implement this for us, note that models equating both variances and covariances between groups are likely to be more interpretable. What does it mean for factor covariances to be identical across groups when the variances differ? Thus, I would generally advocate also adding "lv.variances".

mstructural <- cfa(HS.model, data = HolzingerSwineford1939, std.lv=FALSE, group = "school", group.equal=c("loadings", "intercepts", "residuals", "lv.variances", "lv.covariances"), group.partial=c("x3~1", "x7~1"))

summary(mstructural, standardized=TRUE)
## lavaan 0.6-3 ended normally after 59 iterations
## 
##   Optimization method                           NLMINB
##   Number of free parameters                         63
##   Number of equality constraints                    28
## 
##   Number of observations per group         
##   Pasteur                                          156
##   Grant-White                                      145
## 
##   Estimator                                         ML
##   Model Fit Test Statistic                     153.258
##   Degrees of freedom                                73
##   P-value (Chi-square)                           0.000
## 
## Chi-square for each group:
## 
##   Pasteur                                       81.641
##   Grant-White                                   71.617
## 
## Parameter Estimates:
## 
##   Information                                 Expected
##   Information saturated (h1) model          Structured
##   Standard Errors                             Standard
## 
## 
## Group 1 [Pasteur]:
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   visual =~                                                             
##     x1                1.000                               0.869    0.745
##     x2      (.p2.)    0.599    0.102    5.847    0.000    0.521    0.443
##     x3      (.p3.)    0.789    0.111    7.079    0.000    0.685    0.622
##   textual =~                                                            
##     x4                1.000                               0.941    0.835
##     x5      (.p5.)    1.124    0.066   17.112    0.000    1.057    0.848
##     x6      (.p6.)    0.935    0.056   16.768    0.000    0.879    0.829
##   speed =~                                                              
##     x7                1.000                               0.618    0.585
##     x8      (.p8.)    1.186    0.162    7.303    0.000    0.733    0.726
##     x9      (.p9.)    1.066    0.146    7.280    0.000    0.659    0.655
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   visual ~~                                                             
##     textual (.22.)    0.403    0.070    5.751    0.000    0.493    0.493
##     speed   (.23.)    0.244    0.054    4.528    0.000    0.454    0.454
##   textual ~~                                                            
##     speed   (.24.)    0.196    0.048    4.049    0.000    0.337    0.337
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .x1      (.25.)    4.912    0.092   53.595    0.000    4.912    4.215
##    .x2      (.26.)    6.074    0.077   78.473    0.000    6.074    5.167
##    .x3                2.487    0.088   28.188    0.000    2.487    2.257
##    .x4      (.28.)    2.784    0.086   32.312    0.000    2.784    2.472
##    .x5      (.29.)    4.029    0.096   41.986    0.000    4.029    3.231
##    .x6      (.30.)    1.927    0.081   23.820    0.000    1.927    1.817
##    .x7                4.432    0.085   52.360    0.000    4.432    4.192
##    .x8      (.32.)    5.568    0.077   72.219    0.000    5.568    5.513
##    .x9      (.33.)    5.411    0.074   73.380    0.000    5.411    5.375
##     visual            0.000                               0.000    0.000
##     textual           0.000                               0.000    0.000
##     speed             0.000                               0.000    0.000
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .x1      (.10.)    0.603    0.105    5.732    0.000    0.603    0.444
##    .x2      (.11.)    1.111    0.101   11.020    0.000    1.111    0.804
##    .x3      (.12.)    0.745    0.086    8.701    0.000    0.745    0.614
##    .x4      (.13.)    0.383    0.047    8.095    0.000    0.383    0.302
##    .x5      (.14.)    0.438    0.057    7.650    0.000    0.438    0.281
##    .x6      (.15.)    0.352    0.042    8.305    0.000    0.352    0.313
##    .x7      (.16.)    0.735    0.076    9.625    0.000    0.735    0.658
##    .x8      (.17.)    0.482    0.073    6.574    0.000    0.482    0.473
##    .x9      (.18.)    0.579    0.070    8.300    0.000    0.579    0.571
##     visual  (.19.)    0.754    0.136    5.548    0.000    1.000    1.000
##     textual (.20.)    0.885    0.103    8.621    0.000    1.000    1.000
##     speed   (.21.)    0.383    0.083    4.587    0.000    1.000    1.000
## 
## 
## Group 2 [Grant-White]:
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   visual =~                                                             
##     x1                1.000                               0.869    0.745
##     x2      (.p2.)    0.599    0.102    5.847    0.000    0.521    0.443
##     x3      (.p3.)    0.789    0.111    7.079    0.000    0.685    0.622
##   textual =~                                                            
##     x4                1.000                               0.941    0.835
##     x5      (.p5.)    1.124    0.066   17.112    0.000    1.057    0.848
##     x6      (.p6.)    0.935    0.056   16.768    0.000    0.879    0.829
##   speed =~                                                              
##     x7                1.000                               0.618    0.585
##     x8      (.p8.)    1.186    0.162    7.303    0.000    0.733    0.726
##     x9      (.p9.)    1.066    0.146    7.280    0.000    0.659    0.655
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   visual ~~                                                             
##     textual (.22.)    0.403    0.070    5.751    0.000    0.493    0.493
##     speed   (.23.)    0.244    0.054    4.528    0.000    0.454    0.454
##   textual ~~                                                            
##     speed   (.24.)    0.196    0.048    4.049    0.000    0.337    0.337
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .x1      (.25.)    4.912    0.092   53.595    0.000    4.912    4.215
##    .x2      (.26.)    6.074    0.077   78.473    0.000    6.074    5.167
##    .x3                1.957    0.111   17.606    0.000    1.957    1.776
##    .x4      (.28.)    2.784    0.086   32.312    0.000    2.784    2.472
##    .x5      (.29.)    4.029    0.096   41.986    0.000    4.029    3.231
##    .x6      (.30.)    1.927    0.081   23.820    0.000    1.927    1.817
##    .x7                3.993    0.102   39.290    0.000    3.993    3.776
##    .x8      (.32.)    5.568    0.077   72.219    0.000    5.568    5.513
##    .x9      (.33.)    5.411    0.074   73.380    0.000    5.411    5.375
##     visual            0.049    0.129    0.381    0.703    0.057    0.057
##     textual           0.575    0.118    4.885    0.000    0.611    0.611
##     speed            -0.072    0.089   -0.809    0.418   -0.116   -0.116
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .x1      (.10.)    0.603    0.105    5.732    0.000    0.603    0.444
##    .x2      (.11.)    1.111    0.101   11.020    0.000    1.111    0.804
##    .x3      (.12.)    0.745    0.086    8.701    0.000    0.745    0.614
##    .x4      (.13.)    0.383    0.047    8.095    0.000    0.383    0.302
##    .x5      (.14.)    0.438    0.057    7.650    0.000    0.438    0.281
##    .x6      (.15.)    0.352    0.042    8.305    0.000    0.352    0.313
##    .x7      (.16.)    0.735    0.076    9.625    0.000    0.735    0.658
##    .x8      (.17.)    0.482    0.073    6.574    0.000    0.482    0.473
##    .x9      (.18.)    0.579    0.070    8.300    0.000    0.579    0.571
##     visual  (.19.)    0.754    0.136    5.548    0.000    1.000    1.000
##     textual (.20.)    0.885    0.103    8.621    0.000    1.000    1.000
##     speed   (.21.)    0.383    0.083    4.587    0.000    1.000    1.000
#anova(mstrict, mstructural)
compareFit(list(strict=mstrict, structural=mstructural))
## ################### Nested Model Comparison #########################
## Chi Square Difference Test
## 
##            Df  AIC  BIC Chisq Chisq diff Df diff Pr(>Chisq)
## strict     67 7478 7630   147                              
## structural 73 7472 7602   153          6       6       0.42
## 
## ####################### Model Fit Indices ###########################
##               chisq df pvalue   cfi   tli       aic       bic rmsea  srmr
## strict     147.260† 67   .000 .909  .903  7477.804  7629.796  .089  .079†
## structural 153.258  73   .000 .909† .911† 7471.802† 7601.551† .085† .086 
## 
## ################## Differences in Fit Indices #######################
##                         df cfi   tli aic   bic  rmsea  srmr
## structural - strict      6   0 0.008  -6 -28.2 -0.004 0.007

Here, we see no significant reduction in fit for the structurally invariant model relative to the strict model. Given that this model also satisfies at least partial strong MI, this bolsters our confidence that the factor covariance structure among the cognitive tests is essentially equivalent between schools.

Finally, as part of structural invariance, we might also be interested in mean differences between schools. To equate latent means across groups, we add "means" to group.equal.

mmeandiff <- cfa(HS.model, data = HolzingerSwineford1939, std.lv = FALSE,
                 group = "school", meanstructure = TRUE,
                 group.equal = c("loadings", "intercepts", "residuals",
                                 "lv.variances", "lv.covariances", "means"),
                 group.partial = c("x3~1", "x7~1"))

summary(mmeandiff, standardized=TRUE)
## lavaan 0.6-3 ended normally after 56 iterations
## 
##   Optimization method                           NLMINB
##   Number of free parameters                         60
##   Number of equality constraints                    28
## 
##   Number of observations per group         
##   Pasteur                                          156
##   Grant-White                                      145
## 
##   Estimator                                         ML
##   Model Fit Test Statistic                     181.981
##   Degrees of freedom                                76
##   P-value (Chi-square)                           0.000
## 
## Chi-square for each group:
## 
##   Pasteur                                       96.004
##   Grant-White                                   85.977
## 
## Parameter Estimates:
## 
##   Information                                 Expected
##   Information saturated (h1) model          Structured
##   Standard Errors                             Standard
## 
## 
## Group 1 [Pasteur]:
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   visual =~                                                             
##     x1                1.000                               0.854    0.733
##     x2      (.p2.)    0.622    0.105    5.926    0.000    0.532    0.452
##     x3      (.p3.)    0.814    0.115    7.098    0.000    0.696    0.631
##   textual =~                                                            
##     x4                1.000                               0.989    0.851
##     x5      (.p5.)    1.114    0.065   17.017    0.000    1.101    0.855
##     x6      (.p6.)    0.928    0.055   16.729    0.000    0.918    0.839
##   speed =~                                                              
##     x7                1.000                               0.620    0.586
##     x8      (.p8.)    1.197    0.164    7.276    0.000    0.742    0.734
##     x9      (.p9.)    1.054    0.145    7.269    0.000    0.653    0.649
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   visual ~~                                                             
##     textual (.22.)    0.407    0.072    5.627    0.000    0.482    0.482
##     speed   (.23.)    0.236    0.053    4.448    0.000    0.446    0.446
##   textual ~~                                                            
##     speed   (.24.)    0.187    0.050    3.762    0.000    0.305    0.305
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .x1      (.25.)    4.936    0.067   73.473    0.000    4.936    4.235
##    .x2      (.26.)    6.088    0.068   89.855    0.000    6.088    5.179
##    .x3                2.529    0.083   30.585    0.000    2.529    2.293
##    .x4      (.28.)    3.061    0.067   45.694    0.000    3.061    2.634
##    .x5      (.29.)    4.341    0.074   58.452    0.000    4.341    3.369
##    .x6      (.30.)    2.186    0.063   34.667    0.000    2.186    1.998
##    .x7                4.425    0.080   55.451    0.000    4.425    4.185
##    .x8      (.32.)    5.527    0.058   94.854    0.000    5.527    5.467
##    .x9      (.33.)    5.374    0.058   92.546    0.000    5.374    5.334
##     visual            0.000                               0.000    0.000
##     textual           0.000                               0.000    0.000
##     speed             0.000                               0.000    0.000
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .x1      (.10.)    0.628    0.104    6.048    0.000    0.628    0.463
##    .x2      (.11.)    1.099    0.101   10.930    0.000    1.099    0.795
##    .x3      (.12.)    0.733    0.086    8.500    0.000    0.733    0.602
##    .x4      (.13.)    0.373    0.048    7.831    0.000    0.373    0.276
##    .x5      (.14.)    0.447    0.058    7.674    0.000    0.447    0.269
##    .x6      (.15.)    0.354    0.043    8.253    0.000    0.354    0.296
##    .x7      (.16.)    0.734    0.076    9.592    0.000    0.734    0.656
##    .x8      (.17.)    0.472    0.075    6.321    0.000    0.472    0.462
##    .x9      (.18.)    0.588    0.070    8.403    0.000    0.588    0.579
##     visual  (.19.)    0.730    0.133    5.470    0.000    1.000    1.000
##     textual (.20.)    0.978    0.112    8.729    0.000    1.000    1.000
##     speed   (.21.)    0.384    0.084    4.591    0.000    1.000    1.000
## 
## 
## Group 2 [Grant-White]:
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   visual =~                                                             
##     x1                1.000                               0.854    0.733
##     x2      (.p2.)    0.622    0.105    5.926    0.000    0.532    0.452
##     x3      (.p3.)    0.814    0.115    7.098    0.000    0.696    0.631
##   textual =~                                                            
##     x4                1.000                               0.989    0.851
##     x5      (.p5.)    1.114    0.065   17.017    0.000    1.101    0.855
##     x6      (.p6.)    0.928    0.055   16.729    0.000    0.918    0.839
##   speed =~                                                              
##     x7                1.000                               0.620    0.586
##     x8      (.p8.)    1.197    0.164    7.276    0.000    0.742    0.734
##     x9      (.p9.)    1.054    0.145    7.269    0.000    0.653    0.649
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   visual ~~                                                             
##     textual (.22.)    0.407    0.072    5.627    0.000    0.482    0.482
##     speed   (.23.)    0.236    0.053    4.448    0.000    0.446    0.446
##   textual ~~                                                            
##     speed   (.24.)    0.187    0.050    3.762    0.000    0.305    0.305
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .x1      (.25.)    4.936    0.067   73.473    0.000    4.936    4.235
##    .x2      (.26.)    6.088    0.068   89.855    0.000    6.088    5.179
##    .x3                1.951    0.085   22.865    0.000    1.951    1.769
##    .x4      (.28.)    3.061    0.067   45.694    0.000    3.061    2.634
##    .x5      (.29.)    4.341    0.074   58.452    0.000    4.341    3.369
##    .x6      (.30.)    2.186    0.063   34.667    0.000    2.186    1.998
##    .x7                3.929    0.082   47.704    0.000    3.929    3.716
##    .x8      (.32.)    5.527    0.058   94.854    0.000    5.527    5.467
##    .x9      (.33.)    5.374    0.058   92.546    0.000    5.374    5.334
##     visual            0.000                               0.000    0.000
##     textual           0.000                               0.000    0.000
##     speed             0.000                               0.000    0.000
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .x1      (.10.)    0.628    0.104    6.048    0.000    0.628    0.463
##    .x2      (.11.)    1.099    0.101   10.930    0.000    1.099    0.795
##    .x3      (.12.)    0.733    0.086    8.500    0.000    0.733    0.602
##    .x4      (.13.)    0.373    0.048    7.831    0.000    0.373    0.276
##    .x5      (.14.)    0.447    0.058    7.674    0.000    0.447    0.269
##    .x6      (.15.)    0.354    0.043    8.253    0.000    0.354    0.296
##    .x7      (.16.)    0.734    0.076    9.592    0.000    0.734    0.656
##    .x8      (.17.)    0.472    0.075    6.321    0.000    0.472    0.462
##    .x9      (.18.)    0.588    0.070    8.403    0.000    0.588    0.579
##     visual  (.19.)    0.730    0.133    5.470    0.000    1.000    1.000
##     textual (.20.)    0.978    0.112    8.729    0.000    1.000    1.000
##     speed   (.21.)    0.384    0.084    4.591    0.000    1.000    1.000
anova(mstructural, mmeandiff)
##             Df  AIC  BIC Chisq Chisq diff Df diff Pr(>Chisq)
## mstructural 73 7472 7602   153                              
## mmeandiff   76 7495 7613   182       28.7       3          0

It does not appear that we can equate the means between groups: the LRT is significant and the AIC worsens by about 23 points. Rather, examining the structurally invariant model with equal factor variances/covariances, we see that the mean of the textual factor is about 0.6 SD higher in the Grant-White school, whereas the other factors (visual and speed) have similar means between schools.
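To see those latent mean differences directly, one option (a sketch, assuming `mstructural` has been fit as above with lavaan loaded) is to filter `parameterEstimates()` for the intercept (`~1`) rows of the factors:

```r
# sketch: extract factor means from the fitted structural-invariance model
# (assumes mstructural exists as fit above)
library(lavaan)

pe <- parameterEstimates(mstructural, standardized = TRUE)

# latent intercepts are the '~1' rows for the factor names;
# group 1 (Pasteur) means are fixed at 0, so the group 2 rows
# give the Grant-White mean differences
subset(pe, op == "~1" & lhs %in% c("visual", "textual", "speed"),
       select = c(lhs, group, est, se, pvalue, std.all))
```

Because the group-1 latent means are fixed to zero for identification, the `std.all` column for group 2 can be read as an approximate standardized mean difference.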

5.6 Longitudinal invariance

All of the above considerations (equality on measurement parameters) apply equally to factors that represent the same constructs over time. For example, if we are interested in measuring neuroticism across 10 years based on annual assessments, then we should examine whether the factor structure of the neuroticism measure is stable over that interval. If strong longitudinal invariance does not hold, we should be concerned about interpreting longitudinal changes in the factor scores and means because these may not represent the same construct at all times.
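As a sketch of how such constraints look in raw lavaan syntax (the factor and indicator names below are hypothetical, not from the dataset used above), weak longitudinal invariance can be imposed by reusing parameter labels across waves, and repeated indicators typically get correlated residuals:

```r
# hypothetical two-wave example: the same construct measured at t1 and t2
library(lavaan)

long.model <- '
  # reusing the labels l2 and l3 equates loadings across waves (weak invariance);
  # the marker indicators (y1a, y1b) are each fixed to 1 by default
  neurot1 =~ y1a + l2*y2a + l3*y3a
  neurot2 =~ y1b + l2*y2b + l3*y3b

  # residual covariances between repeated indicators
  y1a ~~ y1b
  y2a ~~ y2b
  y3a ~~ y3b
'
# fit <- cfa(long.model, data = mydata, meanstructure = TRUE)
```

Strong invariance would additionally equate intercepts across waves (e.g., `y2a ~ i2*1` and `y2b ~ i2*1`), freeing the wave-2 latent mean so that change can be estimated on a common scale. In practice, the measEq.syntax function shown below automates these constraints.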

For a useful reference, see Widaman et al., 2010.

Also, see the measEq.syntax function in semTools, as above, focusing on the longFacNames, longIndNames, long.equal, and long.partial arguments. For example, from the documentation, here is a model that tests threshold equality (for categorical data) across both multiple groups and multiple waves:

 mod.cat <- ' FU1 =~ u1 + u2 + u3 + u4
              FU2 =~ u5 + u6 + u7 + u8 '
 ## the 2 factors are actually the same factor (FU) measured twice
 longFacNames <- list(FU = c("FU1","FU2"))

 ## threshold invariance
 syntax.thresh <- measEq.syntax(configural.model = mod.cat, data = datCat,
                                ordered = paste0("u", 1:8),
                                parameterization = "theta",
                                ID.fac = "std.lv", ID.cat = "Wu.Estabrook.2016",
                                group = "g", group.equal = "thresholds",
                                longFacNames = longFacNames,
                                long.equal = "thresholds")
 
 ## notice that constraining 4 thresholds allows intercepts and residual
 ## variances to be freely estimated in all but the first group & occasion
 cat(as.character(syntax.thresh))
 
 ## print a summary of model features
 summary(syntax.thresh)