Here's the output of comparing two nested models (just equating var across twin and zygosity...)
> mxCompare(twinSatSub1Fit, c(twinSatSub2Fit))
     base comparison ep minus2LL   df   AIC diffLL diffdf     p
1 twinSat       <NA>  4     1600 2284 -2970     NA     NA    NA
2 twinSat    twinSat  3     1610 2285 -2960   2.67      1 0.102
*NOTE*: minus2LL is shown as 1600 and 1610.
But in the actual models (accessed via summary()) the values are 1602.957 and 1605.626:
> summary(twinSatSub1Fit)
observed statistics:  2288
estimated parameters:  4
degrees of freedom:  2284
-2 log likelihood:  1602.957
and
> summary(twinSatSub2Fit)
observed statistics:  2288
estimated parameters:  3
degrees of freedom:  2285
-2 log likelihood:  1605.626
The diffLL seems correct (2.67, not the 10 that the rounded table values would imply), but the p value is wrong. It should be:
> 1 - pchisq(1605.626 - 1602.957, 1)
[1] 0.1023
All done 2010-07-27 under OpenMx version number 0.3.3-1264.
I wonder if it is a side effect of mxCI leaving the df and fit changed (as mxCompare sees it) compared to the fitted model values (as summary sees them)?
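For what it's worth, the printed values look like plain rounding to 3 significant digits (digits = 3 is mxCompare's default), which can be reproduced in plain R with signif() -- a quick check, just a guess at the mechanism:

```r
# The summary() values, rounded to 3 significant digits, reproduce
# exactly what mxCompare printed:
signif(1602.957, 3)  # 1600 (printed as minus2LL for the base model)
signif(1605.626, 3)  # 1610 (printed for the comparison model)

# whereas the difference taken on the unrounded values is fine:
1605.626 - 1602.957  # 2.669, printed as 2.67
```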
#1
> tableFitStatistics(twinSatSub1Fit, c(twinSatSub2Fit))
             Name ep     -2LL   df      AIC diffLL diffdf    p
Model 1 : twinSat  4  1602.96 2284 -2965.04      -      -    -
Model 2 : twinSat  3  1605.63 2285 -2964.37   2.67      1  0.1
#2
mxCompare gives the AIC as 1050:
> mxCompare(univHetADE5Fit, univHetADE5Fit)
        base comparison ep minus2LL   df  AIC diffLL diffdf  p
1 univHetADE       <NA>  9     8350 3649 1050     NA     NA NA
tableFitStatistics correctly returns 1047.39:
> tableFitStatistics(univHetADE5Fit)
                Name ep     -2LL   df     AIC
Model 1 : univHetADE  9  8345.39 3649 1047.39
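The discrepancy is consistent with the same rounding: with the parsimony AIC reported here (AIC = -2LL - 2*df), the unrounded numbers give tableFitStatistics' value, while 3-significant-digit values give mxCompare's. A quick arithmetic check:

```r
# AIC as reported in these tables: minus2LL - 2 * df
8345.39 - 2 * 3649                         # 1047.39, matching tableFitStatistics

# the same formula applied to 3-significant-digit inputs:
signif(8345.39, 3) - 2 * signif(3649, 3)   # 8350 - 7300 = 1050, matching mxCompare
```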
#3
Something like this would work (when comparison is missing, just tabulate the base models):
mxCompare <- function(base, comparison, digits = 3, all = FALSE) {
    if (missing(base)) {
        stop("'base' argument must be a MxModel object or list of MxModel objects")
    }
    if (length(digits) != 1 || !is.numeric(digits) || is.na(digits)) {
        stop("'digits' argument must be a numeric value")
    }
    if (is.list(base)) {
        base <- unlist(base)
    } else {
        base <- list(base)
    }
    if (!all(sapply(base, is, "MxModel"))) {
        stop("The 'base' argument must consist of MxModel objects")
    }
    baseSummaries <- omxLapply(base, summary)
    if (missing(comparison)) {
        # no comparison models supplied: just tabulate the base models
        # (named arguments so digits/all are not taken for the missing
        # comparison summaries)
        resultsTable <- showFitStatistics(baseSummaries, digits = digits, all = all)
        # stop("'comparison' argument must be a MxModel object or list of MxModel objects")
    } else {
        if (is.list(comparison)) {
            comparison <- unlist(comparison)
        } else {
            comparison <- list(comparison)
        }
        if (!all(sapply(comparison, is, "MxModel"))) {
            stop("The 'comparison' argument must consist of MxModel objects")
        }
        compareSummaries <- omxLapply(comparison, summary)
        resultsTable <- showFitStatistics(baseSummaries, compareSummaries, digits, all)
    }
    return(resultsTable)
}
PS: Is there a way for users to access "showFitStatistics"? It is not exported from the OpenMx namespace...
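Re the PS: unexported objects can usually be reached with R's triple-colon operator, with the obvious caveat that package internals are not part of the public API and may change between releases. A sketch:

```r
# `:::` reads an unexported object from a package namespace.
# Guarded so this runs even where OpenMx is not installed:
if (requireNamespace("OpenMx", quietly = TRUE)) {
    showFit <- OpenMx:::showFitStatistics
}

# the same mechanism works for any package's internals, e.g. the
# unexported print helper in utils:
fmt <- utils:::format.object_size
```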
#7
I think this can be closed.
The feature of allowing a single model was added.
mxCompare() prints raw numbers that can be confusingly precise or unnecessarily reformatted into 1e-n notation, but the resolution of the report was that, instead of adding a digits = 3 argument to the function, users should control printing via options('digits'):
old <- as.numeric(options('digits')); options('digits' = 1); mxCompare(fit1, fit2); options('digits' = old)
The Rd now shows an example of using this to control printing.
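That save/set/restore pattern can be wrapped in a small helper if it comes up often. A hypothetical convenience function (not part of OpenMx), using the fact that options() returns the previous values and on.exit() runs even on error:

```r
# Hypothetical helper: evaluate an expression under a temporary
# options('digits') setting, restoring the old value afterwards.
with_digits <- function(expr, digits = 3) {
    old <- options(digits = digits)    # options() returns the prior settings
    on.exit(options(old), add = TRUE)  # restore on exit, errors included
    force(expr)
}
```

For example, with_digits(mxCompare(fit1, fit2), 3) would print the comparison table at 3 digits and leave the session's digits option untouched.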