Does OpenMx give the information that allows us to evaluate structural models?

brauer (Joined: 01/28/2012 - 11:34)

Hi all,

Can I ask you a general question about OpenMx? As a teacher of SEM classes and as a supervisor of graduate students doing research in social psychology, I am looking for an R-based SEM package that lets me generate, with a relatively simple script, the pieces of information that my colleagues, the reviewers of my manuscripts, and I use to evaluate structural models. The checklist I typically give my students is pasted below. Unless I am mistaken, OpenMx does not allow me to obtain this information easily (except for (1), (2), (7), (9) and (12)). If I understand correctly, you (the developers of OpenMx) are mostly interested in advanced features such as parallel runs, simulations, bootstrapping, etc. Does this mean I should use a different package in the future? If yes, do you know of a package that provides the information listed below?

Thanks a million in advance.

-- M

Criteria to evaluate a hypothesized structural model:

1) Is chi-square non-significant?
2) Is RMSEA < .05?
3) Is the lower bound of the 90% CI of the RMSEA < .01?
4) Is the upper bound of the 90% CI of the RMSEA < .10?
5) Is the close-fit test non-significant (PCLOSE > .05)? [Note: PCLOSE tests the null hypothesis that the population RMSEA is ≤ .05]
6) Is the SRMR < .08?
7) Is the CFI > .95?
8) Are other classic fit indices satisfactory (GFI, TLI, etc.)?
9) Are all correlation residuals < .10?
10) Are all standardized residuals < 1.96? [less important in large samples]
11) Does the quantile plot of standardized residuals look OK (do the standardized residuals fall along a diagonal line)?
12) Are the parameter estimates OK: do they make sense? are they significant?
13) Are indirect effects statistically significant? [test with bootstrap method]
14) Do we have sufficient statistical power for the test of the close-fit hypothesis and the test of the not-close-fit hypothesis? [generate script with Preacher's web site]
15) Can we argue against equivalent and near-equivalent models?

Model comparison:
16) Is the difference chi-square significant?
17) Does one of the models have a lower AIC, BIC?

neale (Joined: 07/31/2009 - 15:14)
Depends on the definition of "easily"

Several of the quantities you want are generated automatically, and there are helper functions posted to this site for some of the others. For items 3-5, Classic Mx did produce CIs on the RMSEA, but I don't think we have a function to compute these quantities in OpenMx yet - it seems worth adding to the wish list (http://openmx.psyc.virginia.edu/forums/openmx-help/openmx-wishlist). The other quantities are relatively straightforward to produce if one is analyzing covariance or correlation matrices: write a function that generates the additional quantities from a fitted model. One could even write it so that it prints each question and answers it in English. Writing one's own functions (and sharing them) is a good approach, because if (Heaven forbid!) yet another quantity of interest is added to the list, it too can be incorporated into the function and computed.
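
To give a flavour, here is a minimal sketch of such a helper for items 2-5. It takes the model's chi-square, its degrees of freedom, and the sample size as plain arguments (copy them from summary() of the fitted MxModel; I pass them explicitly because the summary slot names have varied across versions), and it inverts the noncentral chi-square distribution in the usual Steiger-Lind fashion. The function name and defaults are only illustrative.

rmseaCI <- function(chisq, df, N, conf = 0.90, close = 0.05) {
  lowerP <- (1 + conf) / 2                  # 0.95 for a 90% interval
  upperP <- (1 - conf) / 2                  # 0.05

  # find the noncentrality parameter ncp such that
  # P(chi-square with df and ncp <= chisq) equals p
  ncpFor <- function(p) {
    f <- function(ncp) pchisq(chisq, df, ncp = ncp) - p
    if (f(0) < 0) return(0)                 # boundary case: ncp = 0
    uniroot(f, c(0, max(10 * chisq, 100)))$root
  }
  rmseaOf <- function(ncp) sqrt(max(ncp, 0) / (df * (N - 1)))

  c(RMSEA  = rmseaOf(chisq - df),
    lower  = rmseaOf(ncpFor(lowerP)),
    upper  = rmseaOf(ncpFor(upperP)),
    # PCLOSE: probability of a chi-square this large or larger when
    # the population RMSEA equals `close`
    PCLOSE = pchisq(chisq, df, ncp = close^2 * df * (N - 1),
                    lower.tail = FALSE))
}

# example with made-up numbers:
# rmseaCI(chisq = 85.3, df = 48, N = 320)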

Note, however, that if there are missing data then items like 9 & 10 are suspect, because (if the data are not missing completely at random) FIML estimates of the covariances etc. may not match their sample counterparts, and residuals could exceed arbitrarily set thresholds for deviation. In addition, item 1 may be costly to obtain with raw data, because it requires fitting a saturated model with as many free parameters as there are means and covariances. In my experience, most studies have at least some missing data. Analyses that discard or impute data are typically at greater risk of producing biased results than FIML, so giving up certain measures of fit may be a price worth paying for using FIML.
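
For what it is worth, more recent OpenMx versions provide mxRefModels(), which builds and fits the saturated and independence models so that summary() can report the chi-square and the comparative fit indices; check ?mxRefModels in your installed version, as availability may differ. A sketch using the demoOneFactor data that ships with OpenMx:

library(OpenMx)
data(demoOneFactor)                         # example data shipped with OpenMx
manifests <- names(demoOneFactor)

factorModel <- mxModel("OneFactor", type = "RAM",
    manifestVars = manifests, latentVars = "G",
    mxPath(from = "G", to = manifests),                        # loadings
    mxPath(from = manifests, arrows = 2),                      # residual variances
    mxPath(from = "G", arrows = 2, free = FALSE, values = 1),  # fix factor variance
    mxPath(from = "one", to = manifests),                      # means (needed for raw data)
    mxData(demoOneFactor, type = "raw"))

fit  <- mxRun(factorModel)
refs <- mxRefModels(fit, run = TRUE)        # fits the saturated and independence models
summary(fit, refModels = refs)              # now reports chi-square, CFI, TLI, RMSEA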

Item 11: yes. R has great graphical capabilities, and if the missing-data caveat does not apply, generating the QQ plot is a one- or two-liner.
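
For example, here is a sketch for items 9 and 11, reusing the fitted model from the sketch above and assuming complete data. mxGetExpected() (again, newer versions) returns the model-implied covariance matrix; note these are correlation residuals rather than the formally standardized residuals of item 10.

impliedCov  <- mxGetExpected(fit, "covariance")   # model-implied covariances
observedCov <- cov(demoOneFactor)                 # sample covariances

corResid <- cov2cor(observedCov) - cov2cor(impliedCov)
max(abs(corResid[lower.tri(corResid)]))           # item 9: all < .10 ?

qqnorm(corResid[lower.tri(corResid)])             # item 11: quantile plot
qqline(corResid[lower.tri(corResid)])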

I would address question 13 with a likelihood-based confidence interval on the function of parameters of interest; this is very easy to request and faster than the bootstrap (though bootstrapping can usually be done quite easily as well).
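
Concretely, a sketch, assuming the two constituent paths carry the labels "a" and "b" in your model (the labels and the model name yourModel are hypothetical; substitute whatever your mxPath() statements use):

model <- mxModel(yourModel,
    mxAlgebra(a * b, name = "indirect"),    # product of the labelled paths
    mxCI("indirect"))                       # request a likelihood-based CI

fit <- mxRun(model, intervals = TRUE)
summary(fit)                                # the CI section shows the interval;
                                            # the indirect effect is "significant"
                                            # if the interval excludes zero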

Items 16 & 17: yes, there are good model-comparison helper functions - look for mxCompare() in the wiki.
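
A quick sketch, with hypothetical fitted models fitFull (the base model) and fitNested (the constrained comparison):

mxCompare(fitFull, fitNested)   # likelihood-ratio (difference chi-square) test;
                                # the table also reports AIC for each model
summary(fitNested)              # the printed summary lists AIC and BIC as well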

HTH!