Hi All,

I ran the ACE model and its submodels, and the AICs are really similar:

ACE 2579

AE 2577.5

CE 2577.3

E 2584

Overall, the differences between the AICs are always only about 1.

The winning model is always consistent with the results of the correlation analysis, so I tend to trust it; however, the difference is really small.

Is there a way to establish what counts as a significant difference between AICs?

Thank you all!

Valentina

Have a look at `omxAkaikeWeights()`.

To add to Valentina's question: is there a correspondence between Akaike weights and p-values for an AIC difference? In other words, can an Akaike weight of .98 be read as the AIC difference being significant at p < .02? If this reasoning is incorrect, is there a way to calculate a p-value for an AIC difference?

The reason I am asking is that sometimes it is not possible to get a p-value when comparing models. For example, when comparing the ACE and ADE models I do not get a p-value, since both models have the same number of parameters.
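For instance (a sketch, assuming a fitted ADE model named `fitADE` alongside `fitACE`), AIC-based comparison still applies where the likelihood-ratio test does not:

```r
# ACE and ADE are not nested and have the same number of free parameters,
# so mxCompare() gives no likelihood-ratio p-value for this pair --
# but omxAkaikeWeights() can still rank them by AIC.
omxAkaikeWeights(models = list(fitACE, fitADE))  # fitADE is hypothetical
```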

A rule of thumb of delta AIC > 2 is sometimes recommended, but perhaps there is a formal test for the significance of an AIC difference.
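For what it's worth, Akaike weights are a deterministic transformation of the AIC differences rather than a test statistic. A minimal sketch of the standard calculation (Burnham & Anderson, 2002), applied to the rounded AICs from Valentina's first post:

```r
# Akaike weights: delta_i = AIC_i - min(AIC);
# w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2)
aic   <- c(ACE = 2579, AE = 2577.5, CE = 2577.3, E = 2584)
delta <- aic - min(aic)
w     <- exp(-delta / 2) / sum(exp(-delta / 2))
round(w, 3)
#>   ACE    AE    CE     E
#> 0.181 0.382 0.422 0.015
```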

Thanks,

Mike

Hi Mike,

So the AIC difference needs to be higher than 2 to be significant?

Are there any references for this?

Best

Valentina

The simple answer is that if your candidate models all have similar AICs, then you don't have enough data to clearly distinguish them from one another in terms of merit. If you need to select a single "best" model, then select the one with the smallest AIC. Part of the point of the confidence set of models reported by `omxAkaikeWeights()` is to highlight which models appear to have merit despite not having the smallest AIC.

The question of a "significant difference in AIC" is ill-posed. Model comparison via AIC is not a statistical test in any sense. Besides, unless you only have two models in your candidate set, you should be simultaneously comparing each model's AIC to all the other models' AICs, and not getting preoccupied with pairwise AIC comparisons.
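One way to read the weights simultaneously is through evidence ratios: w_best / w_i says how many times more support the best model has than model i (Burnham & Anderson, 2002). A quick sketch using the weights from the output further down the thread:

```r
# Evidence ratios from Akaike weights, taken (rounded) from the
# omxAkaikeWeights() output posted below.
w <- c(AE = 0.447, CE = 0.363, ACE = 0.167, E = 0.024)
round(max(w) / w, 1)
#>   AE   CE  ACE    E
#>  1.0  1.2  2.7 18.6
```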

Again, I refer you to the documentation of `omxAkaikeWeights()` and the references it cites.

Thank you for your kind help!

I tried to run it and obtained:

```
omxAkaikeWeights(models = list(fitACE, fitAE, fitCE, fitE))

    model      AIC     delta AkaikeWeight inConfidenceSet
2 oneAEvc 55.85886 0.0000000   0.44719740               *
3 oneCEvc 56.27869 0.4198268   0.36252257               *
1   ACEvc 57.83433 1.9754648   0.16654536               *
4  oneEvc 61.73099 5.8721260   0.02373468
```

But I am not sure how to interpret this. Does it mean that the first three models are all plausible?

Furthermore, the AICs obtained with this function are different from the ones I obtained previously with `mxCompare()`:

```
mxCompare( fitACE, nested <- list(fitAE, fitCE, fitE) )

   base comparison ep minus2LL  df       AIC     diffLL diffdf          p
1 ACEvc             5 47.83433 515 -982.1657         NA     NA         NA
2 ACEvc    oneAEvc  4 47.85886 516 -984.1411 0.02453522      1 0.87553076
3 ACEvc    oneCEvc  4 48.27869 516 -983.7213 0.44436197      1 0.50502460
4 ACEvc     oneEvc  3 55.73099 517 -978.2690 7.89666118      2 0.01928687
```

Basically, yes.

If you use `summary()` on a fitted MxModel object, you will see two values of AIC. One is calculated in terms of a "df Penalty", and the other is calculated in terms of a "Parameters Penalty". The AIC calculated in terms of the "Parameters Penalty" is AIC as originally explicated by Hirotugu Akaike. I don't know the basis for the AIC calculated in terms of the "df Penalty", so I always just ignore it. I coded `omxAkaikeWeights()`, so it always uses the "Parameters Penalty" AIC.
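To make the two penalties concrete: the "Parameters Penalty" AIC is -2lnL + 2k (k = number of free parameters), and OpenMx's "df Penalty" AIC works out to -2lnL - 2df, which reproduces the numbers in this thread:

```r
# Reproducing the ACEvc row from the two outputs above:
minus2LL <- 47.83433  # -2 log-likelihood
ep <- 5               # estimated (free) parameters
df <- 515             # degrees of freedom
minus2LL + 2 * ep  # 57.83433 : "Parameters Penalty" AIC (used by omxAkaikeWeights)
minus2LL - 2 * df  # -982.1657: "df Penalty" AIC (the AIC column from mxCompare)
```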