Positive Log-likelihood

When fitting a univariate model with a continuous moderator, I keep getting a positive log-likelihood (and, accordingly, a negative -2LL). The main variable is log-transformed BMI. As far as I understand, this is caused by its small SD (SD = 0.13 with mean = 3.13). I saw in one thread (http://openmx.psyc.virginia.edu/thread/329) that it is recommended to avoid variables with small variance. Would it be advisable to use the original BMI variable instead of the log-transformed one, despite its skewness (1.22 vs 0.62 for the log-transformed variable)?
I fit models with BMI as well, and it seems more natural to compare positive -2LL values. Besides, the models with BMI were more stable (with log_BMI, Mx code RED came up quite often). The results were slightly different, though that might be due to model choice. When comparing nested models using log_bmi, in many cases the reduced model was significantly worse than the model above it, yet acceptable in comparison to the saturated model. With BMI, this happened only a few times.
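For what it's worth, a continuous variable contributes a density rather than a probability to the likelihood, and a density can exceed 1 when the SD is small, which is what pushes the log-likelihood positive. A minimal illustration in R, assuming a normal density and plugging in the reported mean and SD of log-BMI (the values in the actual fitted model will differ):

# Density of a normal variable evaluated at its own mean, using the reported SD of log-BMI.
# Densities above 1 are perfectly legal and give a positive log-likelihood.
dnorm(3.13, mean = 3.13, sd = 0.13)        # ~3.07, i.e. > 1
log(dnorm(3.13, mean = 3.13, sd = 0.13))   # ~1.12, i.e. a positive log-likelihood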
I would be very grateful for any advice!
Thank you.
try scale(logBMI)
To scale up the (numerically) small differences between people, just do:
logBMI <- scale(logBMI)
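A quick sketch of what this does, using a hypothetical simulated logBMI vector rather than the real data: scale() centres the variable and divides by its SD, so the rescaled variable has mean 0 and SD 1, well away from the troublesome tiny variance.

# Hypothetical logBMI values with roughly the reported mean and SD
logBMI <- rnorm(500, mean = 3.13, sd = 0.13)

# scale() centres and standardises: the result has mean ~0 and SD ~1
logBMI_z <- as.numeric(scale(logBMI))
c(mean = mean(logBMI_z), sd = sd(logBMI_z))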
In reply to try scale(logBMI) by tbates
Great! Thank you so much!
Although a positive log-likelihood is not a problem in general, it was still confusing me when choosing the best model. The LRT still works with negative -2LL values, but when running multiple models to choose the best starting values, I used to assume that the model with the lowest -2LL is the best. That no longer held here (at the end of the analysis, the reduced models had a smaller -2LL than the saturated model).
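To illustrate that point with made-up -2LL values (purely hypothetical numbers, not my results): the LRT only uses the difference in -2LL between nested models, so the sign of the individual values does not matter.

# Hypothetical -2LL values; both negative because the log-likelihoods are positive
minus2LL_saturated <- -180.4
minus2LL_reduced   <- -172.9
df_diff            <- 3          # difference in number of estimated parameters

# The LRT statistic is the (positive) difference in -2LL, compared to a chi-square
lrt <- minus2LL_reduced - minus2LL_saturated
pchisq(lrt, df = df_diff, lower.tail = FALSE)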
Scaling of log_bmi resolved it :)
So there's nothing inherently wrong with a positive log-likelihood
The issue with low-variance variables is not that positive likelihoods look weird, but that variances can't go below zero, and very low variances run an increased risk of the optimizer picking a negative variance or of your model bumping up against whatever lower bound you enforce.
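A minimal sketch of why rescaling helps, using simulated data rather than the original BMI values: multiplying a variable by a constant k multiplies its variance by k^2, so standardising moves a tiny variance estimate well away from the zero boundary the optimizer has to respect.

# Simulated stand-in for log-BMI: tiny variance, close to the zero boundary
x <- rnorm(200, mean = 3.13, sd = 0.13)
var(x)                       # ~0.017, uncomfortably close to zero for the optimizer
var(10 * x)                  # ~1.7: multiplying by k = 10 scales the variance by k^2
var(as.numeric(scale(x)))    # exactly 1: standardising, as suggested above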
Happy modeling!