
Comparing nested ACE models

henning
Comparing nested ACE models

Hello!

I hope someone can help with a rather simple question:
When you compare the fit of an ACE model to the nested AE, CE and E submodels, should you then compare the E model to the AE (or CE) submodel or to the ACE model?
I would tend to compare the E model to the next-larger submodel to test for a significant deterioration of fit, but I have the impression that others compare all models to the ACE model.

Thanks for your help

Henning

tbates
Compare E to AE or ACE?

All comparisons are simply that: valid comparisons. Each accurately reflects whether two things differ significantly.

When deciding whether there is a significant loss of fit moving from a saturated model through a series of nested submodels, most people compare each model to the model above it, often in blocks corresponding to an important theoretical class of model. So we might move from ACE to AE with ACE as the reference, then successively drop paths within AE with the full AE model as the reference.
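As a concrete illustration of that bookkeeping, here is a minimal sketch in Python (the -2lnL values and parameter counts are hypothetical; in practice they come from your fitted models' output) of comparing each model to the one directly above it with chi-square difference tests:

```python
from scipy.stats import chi2

# Hypothetical fit statistics: (-2 log-likelihood, free parameters),
# ordered so each model is nested in the one before it.
fits = {"ACE": (4100.0, 8), "AE": (4101.2, 7), "E": (4152.9, 6)}

names = list(fits)
for parent, child in zip(names, names[1:]):
    m2ll_p, k_p = fits[parent]
    m2ll_c, k_c = fits[child]
    dchi2 = m2ll_c - m2ll_p  # chi-square difference statistic
    ddf = k_p - k_c          # df = number of parameters dropped
    p = chi2.sf(dchi2, ddf)  # upper-tail p-value
    print(f"{child} vs {parent}: dchi2 = {dchi2:.1f}, df = {ddf}, p = {p:.4g}")
```

With these made-up numbers, dropping C (AE vs ACE) costs nothing, while dropping A as well (E vs AE) fails badly.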

So it can simultaneously be true that, say, your E model is worse than the ACE model AND that it is not worse than the CE model. If you arrive in this hypothetical situation, I would suspect that it was a mistake to drop A from the ACE model.

You can get cases where you can drop any one of three small paths from a reference model, but not any two at once, and yet can drop each in turn against the resulting sequence of reduced reference models... These sorts of cases, I feel, just reflect low power, and it is a mistake to force them all out, or to choose one to drop...

AIC is very helpful across all the steps of model reduction: When it goes in the wrong direction, you know you are doing the wrong thing.

That said, in 90% of cases a meaningful upward movement in AIC corresponds to p < .05, and significant differences in fit nearly always show up no matter which comparison model you choose, i.e., a model which fits worse than the saturated model will usually also fit less well than the next-up nested model.
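Since AIC is just -2lnL + 2k (k = number of free parameters), it falls straight out of the same hypothetical numbers used in the sketch above:

```python
# AIC = -2 log-likelihood + 2 * (free parameters).
# Same hypothetical fit statistics as in the earlier sketch.
fits = {"ACE": (4100.0, 8), "AE": (4101.2, 7), "E": (4152.9, 6)}

for name, (m2ll, k) in fits.items():
    print(f"{name}: AIC = {m2ll + 2 * k:.1f}")
# AIC favours AE here; the jump at E flags that dropping A went too far.
```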

Whether significance testing hits the true model is power-dependent: our almost-always-too-small designs lack the power to retain small paths, though nature probably uses them. Ergo, many journals will now request confidence intervals, so we don't end up saying that there is no A, for instance, when the CI is -.1 to .5. If only the funders would build new, very large extended twin designs, we wouldn't have to fuss over such questions....

Finally, researchers often adopt a reduction strategy of dropping whole columns of paths (from a matrix), starting from the right. Then, when you hit a column that cannot be dropped without significant loss of fit, start dropping single paths. One way to think of this is getting the number of factors right before rotating them to a simple structure by choosing which paths to keep.
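To picture that on a Cholesky-style path matrix, a minimal sketch (hypothetical estimates; in a real analysis you would fix the paths to zero in the model specification and refit, rather than edit estimates):

```python
import numpy as np

# Hypothetical lower-triangular Cholesky A-paths for three variables.
# Columns are factors, so the rightmost column is the first candidate to drop.
a = np.array([[0.6, 0.0, 0.0],
              [0.3, 0.5, 0.0],
              [0.2, 0.1, 0.4]])

a_drop3 = a.copy()
a_drop3[:, 2] = 0.0  # rightmost column fixed to zero -> nested submodel to refit and test
print(a_drop3)
```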

Best, tim

PS: The worst sin, IMHO, is burying the loss of one or more significant parameters in a big block of smaller parameters removed at the same time.

wuhao_osu
Always comparing to the ACE

Always comparing to the ACE model has the advantage that the ACE model is assumed true, so the comparison is valid. If you compare E vs AE, the test is only valid if AE, as the H1, holds at least approximately.

Comparing adjacent models has the advantage that the test statistics are asymptotically independent chi-squares (or their non-central counterparts), so a procedure such as the Benjamini-Hochberg (BH) adjustment for multiple testing can be used, as in the sketch below.
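For instance, a minimal sketch of the BH step-up adjustment in Python (the p-values are hypothetical stand-ins for the adjacent-comparison LRT p-values):

```python
# Benjamini-Hochberg step-up adjustment of a list of p-values.
pvals = [0.27, 0.004, 0.031]

def bh_adjust(p):
    """Return BH-adjusted p-values, enforcing step-up monotonicity."""
    m = len(p)
    order = sorted(range(m), key=lambda i: p[i])  # indices, ascending p
    adj = [0.0] * m
    running_min = 1.0
    for rank, i in zip(range(m, 0, -1), reversed(order)):
        running_min = min(running_min, p[i] * m / rank)
        adj[i] = running_min
    return adj

print(bh_adjust(pvals))  # [0.27, 0.012, 0.0465]
```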

I would prefer testing the adjacent ones. Of course, the test of E vs AE may not be valid if AE has been soundly rejected, but in that case this comparison will not be interpreted anyway.