Greetings!

Is a total measurement model always needed? I have a model with 14 latent variables, each with 5-9 indicators. The total measurement model would be huge. Can I instead fit an individual measurement model for each construct and then use path analysis, taking the mean of the indicators as the score for each latent variable (that is, treating the latent variables as observed variables)?

Many thanks!

Best regards

Dan

The answer to your question is unfortunately more a matter of opinion than consensus best practice. The general issue of "my model is too big/has too many factors/has too many items" is a classic one, and there have been many different ways to scale down these models, some of which I'll review here before giving my opinion.

Your model reduction method is a version of what is called "factor scoring", a class of methods for estimating scores on latent variables from the observed variables they predict. I can selfishly plug my and Mike Neale's 2013 MBR paper on factor scoring, which gives a brief overview of several factor scoring methods and one new alternative; Stan Mulaik's chapter in the book "Factor Analysis at 100" is a more comprehensive review. The simplest of these methods reduces to a (weighted) sum score, just like the method you're proposing. Relatedly, "item parcelling" combines items that load on the same factor into new variables called parcels, which reduces the dimensionality of your data and makes the factor model more likely to fit well. I'm not as knowledgeable about or as much of a fan of these methods, but some very smart people are.
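To make the distinction concrete, here's a minimal numpy sketch (not OpenMx code; the loadings and responses are made-up numbers for illustration) comparing your proposed unweighted sum score with a regression-based (Thurstone) factor score for a single one-factor construct:

```python
import numpy as np

# Hypothetical loadings for a one-factor model with 4 standardized indicators
loadings = np.array([0.8, 0.7, 0.6, 0.5])
uniquenesses = 1.0 - loadings**2  # residual variances under standardization

# Model-implied covariance of the indicators: Sigma = lambda lambda' + Theta
sigma = np.outer(loadings, loadings) + np.diag(uniquenesses)

# One respondent's (standardized) item responses -- made-up numbers
x = np.array([1.2, 0.9, 0.4, 1.1])

# Unweighted sum score: just add up the items
sum_score = x.sum()

# Regression (Thurstone) factor score: f_hat = lambda' Sigma^{-1} x,
# which weights each item by how informative it is about the factor
weights = np.linalg.solve(sigma, loadings)
factor_score = weights @ x
```

The regression score downweights noisier items, but both are point estimates that discard the uncertainty in the measurement model.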

When you split your model up in the way we're discussing (generate scores for each latent variable, then fit a path model to those scores), you run into a few conceptual and methodological problems. First, your standard errors will be wrong. Factor scores are estimates, not exact values, and they contain error. When you fix them at their estimated values, you ignore all of the error in the measurement model, so you'll get smaller standard errors than you would by fitting the whole thing as a single model. This is a problem with all "two-step" methods: standard errors and fit in the second step are conditional on the errors in the first step, and treating the output of the first step as data ignores those errors.
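A quick simulation makes the point (a sketch with made-up parameters, not your data): even a decent sum score is a noisy proxy for the latent variable, so its correlation with the true factor falls well short of 1, and any second-step analysis that treats the score as error-free inherits that hidden unreliability.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulate a one-factor model: 6 indicators, each x_j = 0.7*f + e_j,
# scaled so every indicator has unit variance
f = rng.standard_normal(n)
loading = 0.7
errors = rng.standard_normal((n, 6)) * np.sqrt(1 - loading**2)
x = loading * f[:, None] + errors

# The "score" here is a simple mean of the items, as in the question
score = x.mean(axis=1)

# Correlation between the score and the true latent variable is < 1;
# the gap is the measurement error a two-step analysis silently ignores
r = np.corrcoef(score, f)[0, 1]
```

With these parameters the correlation comes out around .9, not 1, so path coefficients among such scores are attenuated and their standard errors understated.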

Second, just because the first 10 items are well fit by one factor and the second 10 items are well fit by another, it does not follow that the 20 items together will be well fit by two factors. The simultaneous model can tell you a lot about the structure of your data. You won't necessarily like what you're told, but you'll build better models that way.

The only downside to fitting all of your data at once is computational. If you have more than 20 ordinal variables, OpenMx can't yet handle your model. Processing time for larger models goes up exponentially with the number of variables, so it's possible that the model you want to fit, with dozens or hundreds of variables, isn't feasible.

Were it me, I'd:

- fit each group of items individually to make sure they work as a scale.

- fit a simultaneous model to all variables.

- see how bad the fit is and move from there.

Good luck! Hope this wall of text helps!

Thank you very much Ryne. It is really useful. Thank you for your time and efforts. I will read your article (and may cite it) and follow your suggestions to see how it goes. Thanks again. Have a great weekend!

Dan