really similar AIC

valentinav (Joined: 06/15/2020 - 08:45)
really similar AIC

Hi All,
I ran the ACE model and its submodels, and the AICs are really similar:
ACE 2579
AE 2577.5
CE 2577.3
E 2584

Overall, I always get a difference of about 1 between the AICs.
The winning model is always consistent with the results of the correlation analysis, so I tend to trust it; however, the difference is really small.
Is there a way to establish what counts as a significant difference between AICs?

Thank you all!
Valentina

AdminRobK (Joined: 01/24/2014 - 12:15)
confidence set

Have a look at omxAkaikeWeights().
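
For example, with fitted MxModel objects for each of your models (the names below are just placeholders for whatever your fits are called):

omxAkaikeWeights(models=list(fitACE, fitAE, fitCE, fitE))

For each model it reports the AIC, the difference from the smallest AIC in the set, the Akaike weight, and whether the model belongs to the confidence set.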

Micanzach (Joined: 10/05/2020 - 20:37)
Formal test for AIC difference significance

To add to Valentina's question, is there a correspondence between Akaike weights and p-values for AIC differences? In other words, can an Akaike weight of .98 be read as the AIC difference being significant at p < .02? If that reasoning is incorrect, is there a way to calculate a p-value for an AIC difference?

The reason I ask is that it is sometimes impossible to get a p-value when comparing models. For example, when comparing ACE and ADE models I do not get a p-value, since the two models have the same number of parameters.

A delta AIC > 2 is sometimes recommended as a rule of thumb, but perhaps there is a formal test for the significance of an AIC difference.

Thanks,
Mike

valentinav (Joined: 06/15/2020 - 08:45)
Hi Mike,

So the AIC difference needs to be higher than 2 to be significant? Are there any references for this?

Best
Valentina

AdminRobK (Joined: 01/24/2014 - 12:15)
answers

The simple answer is that if your candidate models all have similar AICs, then you don't have enough data to clearly distinguish them from one another in terms of merit. If you need to select a single "best" model, then select the one with the smallest AIC. Part of the point of the confidence set of models reported by omxAkaikeWeights() is to highlight which models appear to have merit despite not having the smallest AIC.

The question of a "significant difference in AIC" is ill-posed: model comparison via AIC is not a statistical test in any sense. Besides, unless you have only two models in your candidate set, you should be simultaneously comparing each model's AIC to all the other models' AICs, not getting preoccupied with pairwise AIC comparisons.

Again, I refer you to the documentation of omxAkaikeWeights() and the references it cites.
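
For reference, the weights are just a renormalization of the AIC differences: with delta_i = AIC_i - min(AIC), model i's weight is exp(-delta_i/2) divided by the sum of exp(-delta_j/2) over the whole candidate set (the Burnham and Anderson formulation). A quick sketch in plain R, using the rounded AICs from the opening post:

aic <- c(ACE = 2579, AE = 2577.5, CE = 2577.3, E = 2584)  # AICs from the first post
delta <- aic - min(aic)                  # differences from the smallest AIC
w <- exp(-delta/2) / sum(exp(-delta/2))  # Akaike weights, which sum to 1
round(w, 3)
#   ACE    AE    CE     E
# 0.181 0.382 0.422 0.015

A weight can be read as the probability, within the candidate set, that the model is the best approximating one; no p-value enters into it.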

valentinav (Joined: 06/15/2020 - 08:45)
thanks!

Thank you for your kind help!

I tried running it and obtained:
omxAkaikeWeights(models=list(fitACE, fitAE, fitCE, fitE))
    model      AIC     delta AkaikeWeight inConfidenceSet
2 oneAEvc 55.85886 0.0000000   0.44719740               *
3 oneCEvc 56.27869 0.4198268   0.36252257               *
1   ACEvc 57.83433 1.9754648   0.16654536               *
4  oneEvc 61.73099 5.8721260   0.02373468

But I am not sure how I should interpret this. Does it mean that the first three models are all plausible?
Furthermore, the AICs obtained with this function are different from the ones I obtained previously:

mxCompare(fitACE, list(fitAE, fitCE, fitE))
   base comparison ep minus2LL  df       AIC     diffLL diffdf          p
1 ACEvc       <NA>  5 47.83433 515 -982.1657         NA     NA         NA
2 ACEvc    oneAEvc  4 47.85886 516 -984.1411 0.02453522      1 0.87553076
3 ACEvc    oneCEvc  4 48.27869 516 -983.7213 0.44436197      1 0.50502460
4 ACEvc     oneEvc  3 55.73099 517 -978.2690 7.89666118      2 0.01928687

AdminRobK (Joined: 01/24/2014 - 12:15)
more
But I am not sure how I should interpret this. Does it mean that the first three models are all plausible?

Basically, yes.

Furthermore, the AICs obtained with this function are different from the ones I obtained previously

If you use summary() on a fitted MxModel object, you will see two values of AIC. One is calculated in terms of a "df Penalty", and the other is calculated in terms of a "Parameters Penalty". The AIC calculated in terms of a "Parameters Penalty" is AIC as originally explicated by Hirotugu Akaike. I don't know the basis for the AIC calculated in terms of the "df Penalty", so I always just ignore it. I coded omxAkaikeWeights(), so it always uses the "Parameters Penalty" AIC.
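
Numerically, the two are easy to reconcile from your own output: the "Parameters Penalty" AIC is minus2LL plus twice the number of free parameters, whereas the "df Penalty" figure is minus2LL minus twice the degrees of freedom. Taking oneAEvc as an example:

# "Parameters Penalty" AIC: minus2LL + 2 * (free parameters)
47.85886 + 2*4    # = 55.85886, as reported by omxAkaikeWeights()
# "df Penalty" AIC: minus2LL - 2 * df
47.85886 - 2*516  # = -984.1411, as reported in the mxCompare() table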

mirusem (Joined: 11/19/2018 - 19:32)
follow-up question

I had a follow-up question to the above. Outside of the omxAkaikeWeights() function, if two models have the same (or similar) AIC, is there a heuristic for picking based on which is conceptually simpler? Say ACE vs. ADE, or CE vs. AE? Or would it depend on the question of interest?