Code for this demo is adapted from Nick Michalak. Also see https://nickmichalak.com/post/2019-02-14-testing-conditional-indirect-effects-mediation-in-r/testing-conditional-indirect-effects-mediation-in-r/
The dataset for this example contains standardized test scores on reading, writing, math, science, and social studies, along with binary indicators (e.g., hisci) that encode whether someone received a high score on a given test. We also compute centered versions of each variable to aid interpretation in the moderation-related models.
Following Hayes, I will use the convention \(X\) for independent variable/predictor, \(M\) for mediator, \(W\) for moderator, and \(Y\) for outcome.
df <- read_csv("mediation_data.csv")
## Parsed with column specification:
## cols(
## id = col_double(),
## female = col_double(),
## ses = col_double(),
## prog = col_double(),
## read = col_double(),
## write = col_double(),
## math = col_double(),
## science = col_double(),
## socst = col_double(),
## honors = col_double(),
## awards = col_double(),
## cid = col_double(),
## hiread = col_double(),
## hiwrite = col_double(),
## hisci = col_double(),
## himath = col_double()
## )
# create standardized (centered and scaled) versions of each score, suffixed with "_c"
df <- df %>% mutate_at(vars(read, write, math, science, socst), list(c = scale))
Hayes Process model 1
Does the relationship between performance on reading (\(Y\)) and social studies (\(X\)) tests depend on math ability (\(W\))?
library(lavaan)
# create interaction term between centered X (socst) and W (math)
df <- df %>% mutate(socst_x_math = socst_c * math_c)
# parameters
moderation_model <- '
# regressions
read ~ b1*socst_c
read ~ b2*math_c
read ~ b3*socst_x_math
# define mean parameter label for centered math for use in simple slopes
math_c ~ math.mean*1
# define variance parameter label for centered math for use in simple slopes
math_c ~~ math.var*math_c
# simple slopes for condition effect
SD.below := b1 + b3*(math.mean - sqrt(math.var))
mean := b1 + b3*(math.mean)
SD.above := b1 + b3*(math.mean + sqrt(math.var))
'
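The simple-slope definitions above follow from differentiating the moderated regression with respect to the focal predictor:

\[
\hat{Y} = b_0 + b_1 X + b_2 W + b_3 X W
\quad\Rightarrow\quad
\frac{\partial \hat{Y}}{\partial X} = b_1 + b_3 W.
\]

Evaluating this conditional effect at \(W = \bar{W} - SD_W\), \(\bar{W}\), and \(\bar{W} + SD_W\) gives the three defined parameters; because math_c is centered, \(\bar{W} \approx 0\).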
# fit the model using nonparametric bootstrapping (this takes some time)
sem1 <- sem(model = moderation_model,
data = df,
se = "bootstrap",
bootstrap = 1000)
## Warning in lavaan::lavaan(model = moderation_model, data = df, se =
## "bootstrap", : lavaan WARNING: syntax contains parameters involving
## exogenous covariates; switching to fixed.x = FALSE
# fit measures
summary(sem1, fit.measures = TRUE, standardized = TRUE, rsquare = TRUE)
## lavaan 0.6-3 ended normally after 34 iterations
##
## Optimization method NLMINB
## Number of free parameters 12
##
## Number of observations 200
##
## Estimator ML
## Model Fit Test Statistic 76.103
## Degrees of freedom 2
## P-value (Chi-square) 0.000
##
## Model test baseline model:
##
## Minimum Function Test Statistic 234.093
## Degrees of freedom 5
## P-value 0.000
##
## User model versus baseline model:
##
## Comparative Fit Index (CFI) 0.677
## Tucker-Lewis Index (TLI) 0.191
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -1509.766
## Loglikelihood unrestricted model (H1) -1471.714
##
## Number of free parameters 12
## Akaike (AIC) 3043.531
## Bayesian (BIC) 3083.111
## Sample-size adjusted Bayesian (BIC) 3045.094
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.430
## 90 Percent Confidence Interval 0.351 0.516
## P-value RMSEA <= 0.05 0.000
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.180
##
## Parameter Estimates:
##
## Standard Errors Bootstrap
## Number of requested bootstrap draws 1000
## Number of successful bootstrap draws 1000
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## read ~
## socst_c (b1) 4.013 0.571 7.029 0.000 4.013 0.437
## math_c (b2) 4.503 0.603 7.466 0.000 4.503 0.490
## scst_x_mt (b3) 1.135 0.503 2.254 0.024 1.135 0.118
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## socst_c ~~
## socst_x_math -0.082 0.101 -0.807 0.419 -0.082 -0.086
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## math_c (mth.) -0.000 0.070 -0.000 1.000 -0.000 -0.000
## .read 51.615 0.585 88.240 0.000 51.615 5.628
## socst_c -0.000 0.072 -0.000 1.000 -0.000 -0.000
## scst_x_ 0.542 0.069 7.846 0.000 0.542 0.569
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## math_c (mth.) 0.995 0.082 12.090 0.000 0.995 1.000
## .read 47.473 4.698 10.105 0.000 47.473 0.564
## socst_c 0.995 0.091 10.909 0.000 0.995 1.000
## scst_x_ 0.908 0.106 8.567 0.000 0.908 1.000
##
## R-Square:
## Estimate
## read 0.436
##
## Defined Parameters:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## SD.below 2.882 0.724 3.981 0.000 2.882 0.319
## mean 4.013 0.584 6.875 0.000 4.013 0.437
## SD.above 5.145 0.812 6.339 0.000 5.145 0.554
# compute bias-corrected bootstrap confidence intervals
parameterEstimates(sem1, boot.ci.type = "bca.simple",
level = .95, ci = TRUE, standardized = FALSE)
| lhs | op | rhs | label | est | se | z | pvalue | ci.lower | ci.upper |
|---|---|---|---|---|---|---|---|---|---|
| read | ~ | socst_c | b1 | 4.013 | 0.571 | 7.029 | 0.000 | 2.890 | 5.253 |
| read | ~ | math_c | b2 | 4.503 | 0.603 | 7.466 | 0.000 | 3.282 | 5.709 |
| read | ~ | socst_x_math | b3 | 1.135 | 0.503 | 2.254 | 0.024 | 0.115 | 2.143 |
| math_c | ~1 | | math.mean | 0.000 | 0.070 | 0.000 | 1.000 | -0.131 | 0.147 |
| math_c | ~~ | math_c | math.var | 0.995 | 0.082 | 12.090 | 0.000 | 0.846 | 1.168 |
| read | ~~ | read | | 47.473 | 4.698 | 10.105 | 0.000 | 39.123 | 58.338 |
| socst_c | ~~ | socst_c | | 0.995 | 0.091 | 10.909 | 0.000 | 0.827 | 1.194 |
| socst_c | ~~ | socst_x_math | | -0.082 | 0.101 | -0.807 | 0.419 | -0.296 | 0.096 |
| socst_x_math | ~~ | socst_x_math | | 0.908 | 0.106 | 8.567 | 0.000 | 0.713 | 1.130 |
| read | ~1 | | | 51.615 | 0.585 | 88.240 | 0.000 | 50.541 | 52.796 |
| socst_c | ~1 | | | 0.000 | 0.072 | 0.000 | 1.000 | -0.138 | 0.140 |
| socst_x_math | ~1 | | | 0.542 | 0.069 | 7.846 | 0.000 | 0.413 | 0.682 |
| SD.below | := | b1+b3*(math.mean-sqrt(math.var)) | SD.below | 2.882 | 0.724 | 3.981 | 0.000 | 1.523 | 4.310 |
| mean | := | b1+b3*(math.mean) | mean | 4.013 | 0.584 | 6.875 | 0.000 | 2.868 | 5.293 |
| SD.above | := | b1+b3*(math.mean+sqrt(math.var)) | SD.above | 5.145 | 0.812 | 6.339 | 0.000 | 3.530 | 6.693 |
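As a sanity check, the simple slopes can be reproduced by hand from the rounded estimates reported above (centered math has mean ≈ 0):

```r
# reproduce the defined simple slopes from the rounded estimates above
b1 <- 4.013; b3 <- 1.135           # reported path estimates
math.mean <- 0; math.var <- 0.995  # moments of centered math
SD.below <- b1 + b3 * (math.mean - sqrt(math.var))  # ~2.88
SD.above <- b1 + b3 * (math.mean + sqrt(math.var))  # ~5.15
```

These match the reported 2.882 and 5.145 up to rounding of the inputs.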
Hayes Process model 4
Is the relationship between science and math mediated by read?
# parameters
mediation_model <- '
# direct effect
science ~ cp*math_c
direct := cp
# regressions
read_c ~ a*math_c
science ~ b*read_c
# indirect effect (a*b)
indirect := a*b
# total effect
total := cp + (a*b)
'
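The defined parameters encode the standard mediation decomposition: with \(M = aX + e_M\) and \(Y = c'X + bM + e_Y\), substituting the first equation into the second gives

\[
Y = (c' + ab)X + b\,e_M + e_Y,
\]

so the total effect splits into the direct effect \(c'\) and the indirect effect \(ab\).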
# fit model
sem2 <- sem(model = mediation_model, data = df, se = "bootstrap", bootstrap = 1000)
## Warning in lavaan(slotOptions = lavoptions, slotParTable = lavpartable, :
## lavaan WARNING: the optimizer warns that a solution has NOT been found!
## Warning in lavaan(slotOptions = lavoptions, slotParTable = lavpartable, :
## lavaan WARNING: the optimizer warns that a solution has NOT been found!
## Warning in bootstrap.internal(object = NULL, lavmodel. = lavmodel,
## lavsamplestats. = lavsamplestats, : lavaan WARNING: only 997 bootstrap
## draws were successful
# fit measures
summary(sem2, fit.measures = TRUE, standardize = TRUE, rsquare = TRUE)
## lavaan 0.6-3 ended normally after 27 iterations
##
## Optimization method NLMINB
## Number of free parameters 5
##
## Number of observations 200
##
## Estimator ML
## Model Fit Test Statistic 0.000
## Degrees of freedom 0
##
## Model test baseline model:
##
## Minimum Function Test Statistic 245.569
## Degrees of freedom 3
## P-value 0.000
##
## User model versus baseline model:
##
## Comparative Fit Index (CFI) 1.000
## Tucker-Lewis Index (TLI) 1.000
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -902.313
## Loglikelihood unrestricted model (H1) -902.313
##
## Number of free parameters 5
## Akaike (AIC) 1814.627
## Bayesian (BIC) 1831.118
## Sample-size adjusted Bayesian (BIC) 1815.278
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.000
## 90 Percent Confidence Interval 0.000 0.000
## P-value RMSEA <= 0.05 NA
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.000
##
## Parameter Estimates:
##
## Standard Errors Bootstrap
## Number of requested bootstrap draws 1000
## Number of successful bootstrap draws 997
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## science ~
## math_c (cp) 3.763 0.770 4.887 0.000 3.763 0.380
## read_c ~
## math_c (a) 0.662 0.050 13.163 0.000 0.662 0.662
## science ~
## read_c (b) 3.747 0.767 4.885 0.000 3.747 0.378
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .science 50.894 4.935 10.314 0.000 50.894 0.522
## .read_c 0.559 0.054 10.389 0.000 0.559 0.561
##
## R-Square:
## Estimate
## science 0.478
## read_c 0.439
##
## Defined Parameters:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## direct 3.763 0.770 4.885 0.000 3.763 0.380
## indirect 2.481 0.501 4.950 0.000 2.481 0.251
## total 6.245 0.518 12.052 0.000 6.245 0.631
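As a check, the defined effects follow from the rounded path estimates above:

```r
# check the a*b decomposition with the rounded estimates above
a <- 0.662; b <- 3.747; cp <- 3.763
indirect <- a * b        # ~2.48
total <- cp + indirect   # ~6.24
```

These match the reported indirect (2.481) and total (6.245) effects up to rounding.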
Does the indirect effect of math (\(X\)) on science (\(Y\)) via read (\(M\)) depend on write (\(W\))? More specifically, does writing ability moderate the relationship between math and reading? For example, perhaps only people with high writing and math ability tend to score higher on a reading test. If this is true, then the indirect effect of math on science via reading depends on writing as well.
#compute math x writing interaction term
df <- df %>% mutate(math_x_write = math_c*write_c)
moderated_mediation_1 <- '
# regressions
read_c ~ a1*math_c
science ~ b*read_c
read_c ~ a2*write_c
read_c ~ a3*math_x_write
science ~ cdash*math_c
# mean of centered write (for use in simple slopes)
write_c ~ write.mean*1
# variance of centered write (for use in simple slopes)
write_c ~~ write.var*write_c
# index of moderated mediation
imm := a3*b
# indirect effects conditional on the moderator: (a1 + a3*W)*b
indirect.SDbelow := a1*b + a3*-sqrt(write.var)*b
indirect.mean := a1*b + a3*write.mean*b
indirect.SDabove := a1*b + a3*sqrt(write.var)*b
'
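In this first-stage moderated mediation (Hayes model 7), the indirect effect is a linear function of the moderator:

\[
\text{indirect}(W) = (a_1 + a_3 W)\,b = a_1 b + (a_3 b)\,W,
\]

so the slope \(a_3 b\) is the index of moderated mediation (imm), and the three conditional indirect effects evaluate this line at \(W\) one SD below the mean, at the mean, and one SD above.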
# fit model
sem3 <- sem(model = moderated_mediation_1, data = df, se = "bootstrap", bootstrap = 1000)
## Warning in lavaan::lavaan(model = moderated_mediation_1, data = df, se
## = "bootstrap", : lavaan WARNING: syntax contains parameters involving
## exogenous covariates; switching to fixed.x = FALSE
# fit measures
summary(sem3, fit.measures = TRUE, standardize = TRUE, rsquare = TRUE)
## lavaan 0.6-3 ended normally after 32 iterations
##
## Optimization method NLMINB
## Number of free parameters 16
##
## Number of observations 200
##
## Estimator ML
## Model Fit Test Statistic 119.179
## Degrees of freedom 4
## P-value (Chi-square) 0.000
##
## Model test baseline model:
##
## Minimum Function Test Statistic 386.622
## Degrees of freedom 9
## P-value 0.000
##
## User model versus baseline model:
##
## Comparative Fit Index (CFI) 0.695
## Tucker-Lewis Index (TLI) 0.314
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -1718.688
## Loglikelihood unrestricted model (H1) -1659.098
##
## Number of free parameters 16
## Akaike (AIC) 3469.376
## Bayesian (BIC) 3522.149
## Sample-size adjusted Bayesian (BIC) 3471.459
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.379
## 90 Percent Confidence Interval 0.323 0.440
## P-value RMSEA <= 0.05 0.000
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.199
##
## Parameter Estimates:
##
## Standard Errors Bootstrap
## Number of requested bootstrap draws 1000
## Number of successful bootstrap draws 1000
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## read_c ~
## math_c (a1) 0.464 0.068 6.794 0.000 0.464 0.511
## science ~
## read_c (b) 3.747 0.782 4.792 0.000 3.747 0.358
## read_c ~
## write_c (a2) 0.315 0.065 4.841 0.000 0.315 0.347
## mth_x_w (a3) 0.039 0.055 0.713 0.476 0.039 0.039
## science ~
## math_c (cdsh) 3.763 0.786 4.786 0.000 3.763 0.397
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## math_c ~~
## math_x_write 0.110 0.086 1.283 0.200 0.110 0.123
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## write_c (wrt.) 0.000 0.072 0.000 1.000 0.000 0.000
## .read_c -0.024 0.066 -0.363 0.717 -0.024 -0.026
## .science 51.850 0.513 101.080 0.000 51.850 5.477
## math_c -0.000 0.071 -0.000 1.000 -0.000 -0.000
## mth_x_w 0.614 0.063 9.788 0.000 0.614 0.684
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## write_c (wrt.) 0.995 0.080 12.390 0.000 0.995 1.000
## .read_c 0.501 0.048 10.421 0.000 0.501 0.612
## .science 50.894 5.095 9.988 0.000 50.894 0.568
## math_c 0.995 0.080 12.482 0.000 0.995 1.000
## mth_x_w 0.806 0.084 9.574 0.000 0.806 1.000
##
## R-Square:
## Estimate
## read_c 0.388
## science 0.432
##
## Defined Parameters:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## imm 0.146 0.211 0.690 0.490 0.146 0.014
## indirect.SDblw 1.592 0.482 3.300 0.001 1.592 0.169
## indirect.mean 1.737 0.393 4.425 0.000 1.737 0.183
## indirect.SDabv 1.883 0.406 4.631 0.000 1.883 0.197
# compute bias-corrected confidence intervals
parameterEstimates(sem3, boot.ci.type = "bca.simple", level = .95, ci = TRUE, standardized = FALSE)
| lhs | op | rhs | label | est | se | z | pvalue | ci.lower | ci.upper |
|---|---|---|---|---|---|---|---|---|---|
| read_c | ~ | math_c | a1 | 0.464 | 0.068 | 6.794 | 0.000 | 0.327 | 0.587 |
| science | ~ | read_c | b | 3.747 | 0.782 | 4.792 | 0.000 | 2.073 | 5.198 |
| read_c | ~ | write_c | a2 | 0.315 | 0.065 | 4.841 | 0.000 | 0.194 | 0.448 |
| read_c | ~ | math_x_write | a3 | 0.039 | 0.055 | 0.713 | 0.476 | -0.071 | 0.143 |
| science | ~ | math_c | cdash | 3.763 | 0.786 | 4.786 | 0.000 | 2.308 | 5.410 |
| write_c | ~1 | | write.mean | 0.000 | 0.072 | 0.000 | 1.000 | -0.137 | 0.143 |
| write_c | ~~ | write_c | write.var | 0.995 | 0.080 | 12.390 | 0.000 | 0.838 | 1.160 |
| read_c | ~~ | read_c | | 0.501 | 0.048 | 10.421 | 0.000 | 0.418 | 0.603 |
| science | ~~ | science | | 50.894 | 5.095 | 9.988 | 0.000 | 42.058 | 62.922 |
| math_c | ~~ | math_c | | 0.995 | 0.080 | 12.482 | 0.000 | 0.850 | 1.167 |
| math_c | ~~ | math_x_write | | 0.110 | 0.086 | 1.283 | 0.200 | -0.056 | 0.274 |
| math_x_write | ~~ | math_x_write | | 0.806 | 0.084 | 9.574 | 0.000 | 0.648 | 0.976 |
| read_c | ~1 | | | -0.024 | 0.066 | -0.363 | 0.717 | -0.150 | 0.097 |
| science | ~1 | | | 51.850 | 0.513 | 101.080 | 0.000 | 50.838 | 52.796 |
| math_c | ~1 | | | 0.000 | 0.071 | 0.000 | 1.000 | -0.143 | 0.142 |
| math_x_write | ~1 | | | 0.614 | 0.063 | 9.788 | 0.000 | 0.500 | 0.743 |
| imm | := | a3*b | imm | 0.146 | 0.211 | 0.690 | 0.490 | -0.259 | 0.600 |
| indirect.SDbelow | := | a1*b+a3*(-sqrt(write.var))*b | indirect.SDbelow | 1.592 | 0.482 | 3.300 | 0.001 | 0.858 | 2.728 |
| indirect.mean | := | a1*b+a3*write.mean*b | indirect.mean | 1.737 | 0.393 | 4.425 | 0.000 | 1.026 | 2.575 |
| indirect.SDabove | := | a1*b+a3*sqrt(write.var)*b | indirect.SDabove | 1.883 | 0.406 | 4.631 | 0.000 | 1.086 | 2.729 |
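As a check, the conditional indirect effects follow from the rounded estimates above:

```r
# conditional indirect effects (a1 + a3*W)*b at W = -SD, 0, +SD
a1 <- 0.464; a3 <- 0.039; b <- 3.747; write.var <- 0.995
ind_below <- (a1 + a3 * -sqrt(write.var)) * b  # ~1.59
ind_mean  <- a1 * b                            # ~1.74
ind_above <- (a1 + a3 * sqrt(write.var)) * b   # ~1.88
```

These match the reported 1.592, 1.737, and 1.883 up to rounding.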
Mediated effects can also be moderated at what Hayes calls the ‘second stage’ (i.e., the relationship between \(M\) and \(Y\)). In the Hayes Process model world, this is also called Model 14. For example, does the indirect effect of math (\(X\)) on science (\(Y\)) via read (\(M\)) depend on write (\(W\)) because writing moderates the relationship between reading and science? Perhaps only people with high writing and reading ability tend to do well in science. If this is true, then the indirect effect of math on science via reading depends on writing as well.
# compute read x write interaction term
df <- df %>% mutate(read_x_write = read_c*write_c)
moderated_mediation_stage2 <- '
# regressions
read_c ~ a*math_c
science ~ b1*read_c
science ~ b2*write_c
science ~ b3*read_x_write
science ~ cdash*math_c
# mean of centered write (moderator; for use in simple slopes)
write_c ~ write.mean*1
# variance of centered write (moderator; for use in simple slopes)
write_c ~~ write.var*write_c
#index of moderated mediation
imm := a*b3
# indirect effects conditional on the moderator: a*(b1 + b3*W)
indirect.SDbelow := a*b1 + a*-sqrt(write.var)*b3
indirect.mean := a*b1 + a*write.mean*b3
indirect.SDabove := a*b1 + a*sqrt(write.var)*b3
'
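In the second-stage version (model 14), the moderator instead scales the \(M \to Y\) path, so

\[
\text{indirect}(W) = a\,(b_1 + b_3 W) = a b_1 + (a b_3)\,W,
\]

with index of moderated mediation \(a b_3\).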
# fit model
sem4 <- sem(model = moderated_mediation_stage2, data = df, se = "bootstrap", bootstrap = 1000)
## Warning in lavaan::lavaan(model = moderated_mediation_stage2, data = df, :
## lavaan WARNING: syntax contains parameters involving exogenous covariates;
## switching to fixed.x = FALSE
# fit measures
summary(sem4, fit.measures = TRUE, standardize = TRUE, rsquare = TRUE)
## lavaan 0.6-3 ended normally after 43 iterations
##
## Optimization method NLMINB
## Number of free parameters 16
##
## Number of observations 200
##
## Estimator ML
## Model Fit Test Statistic 133.763
## Degrees of freedom 4
## P-value (Chi-square) 0.000
##
## Model test baseline model:
##
## Minimum Function Test Statistic 390.617
## Degrees of freedom 9
## P-value 0.000
##
## User model versus baseline model:
##
## Comparative Fit Index (CFI) 0.660
## Tucker-Lewis Index (TLI) 0.235
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -1729.435
## Loglikelihood unrestricted model (H1) -1662.553
##
## Number of free parameters 16
## Akaike (AIC) 3490.870
## Bayesian (BIC) 3543.643
## Sample-size adjusted Bayesian (BIC) 3492.953
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.403
## 90 Percent Confidence Interval 0.346 0.463
## P-value RMSEA <= 0.05 0.000
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.220
##
## Parameter Estimates:
##
## Standard Errors Bootstrap
## Number of requested bootstrap draws 1000
## Number of successful bootstrap draws 1000
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## read_c ~
## math_c (a) 0.662 0.051 13.097 0.000 0.662 0.662
## science ~
## read_c (b1) 3.207 0.755 4.250 0.000 3.207 0.348
## write_c (b2) 1.636 0.700 2.337 0.019 1.636 0.178
## rd_x_wr (b3) -0.933 0.526 -1.772 0.076 -0.933 -0.093
## math_c (cdsh) 3.157 0.806 3.916 0.000 3.157 0.343
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## math_c ~~
## read_x_write 0.049 0.072 0.675 0.500 0.049 0.053
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## write_c (wrt.) 0.000 0.070 0.000 1.000 0.000 0.000
## .read_c -0.000 0.053 -0.000 1.000 -0.000 -0.000
## .science 52.404 0.589 88.954 0.000 52.404 5.701
## math_c -0.000 0.070 -0.000 1.000 -0.000 -0.000
## rd_x_wr 0.594 0.064 9.296 0.000 0.594 0.647
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## write_c (wrt.) 0.995 0.078 12.725 0.000 0.995 1.000
## .read_c 0.559 0.055 10.249 0.000 0.559 0.561
## .science 48.102 5.201 9.249 0.000 48.102 0.569
## math_c 0.995 0.077 12.992 0.000 0.995 1.000
## rd_x_wr 0.841 0.107 7.853 0.000 0.841 1.000
##
## R-Square:
## Estimate
## read_c 0.439
## science 0.431
##
## Defined Parameters:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## imm -0.618 0.353 -1.751 0.080 -0.618 -0.062
## indirect.SDblw 2.740 0.623 4.395 0.000 2.740 0.292
## indirect.mean 2.124 0.485 4.375 0.000 2.124 0.230
## indirect.SDabv 1.507 0.569 2.647 0.008 1.507 0.169
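Again, the defined parameters can be reproduced from the rounded estimates above:

```r
# conditional indirect effects a*(b1 + b3*W) at W = -SD, 0, +SD
a <- 0.662; b1 <- 3.207; b3 <- -0.933; write.var <- 0.995
imm <- a * b3                                  # ~-0.62
ind_below <- a * (b1 + b3 * -sqrt(write.var))  # ~2.74
ind_above <- a * (b1 + b3 * sqrt(write.var))   # ~1.51
```

These match the reported imm (-0.618) and the conditional indirect effects (2.740, 1.507) up to rounding.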
# compute bias-corrected confidence intervals
parameterEstimates(sem4, boot.ci.type = "bca.simple", level = .95, ci = TRUE, standardized = FALSE)
In the two-group case, treating the moderator as a 0/1 variable is an easy way to handle moderated mediation in SEM. When a moderating variable \(W\) is categorical but has many levels (e.g., 5 groups), setting up dummy codes and computing all of the relevant interactions becomes tedious and error-prone.
The better route is to treat the moderator \(W\) as a grouping variable so that any path in the model can be moderated by group. This also allows one to test categorical moderation of parameters not often considered in regression, such as latent variable variances across groups.
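A minimal sketch of this grouping approach, reusing mediation_model and df from above and taking the binary female column as an illustrative grouping moderator (the substantive choice of moderator here is hypothetical): fit the model freely across groups, then constrain the regression paths to equality and compare fits.

```r
# illustrative: let every path in the mediation model vary by group
fit_free  <- sem(mediation_model, data = df, group = "female")
# constrain the regression paths to be equal across groups
fit_equal <- sem(mediation_model, data = df, group = "female",
                 group.equal = "regressions")
# likelihood-ratio test: do any regression paths differ by group?
anova(fit_free, fit_equal)
```

A significant test statistic suggests at least one path is moderated by the grouping variable; follow-up models can free or constrain individual paths to locate it.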