Positive Log-likelihood

Julia
Positive Log-likelihood

Hi.

When fitting a univariate model with a continuous moderator, I keep getting a positive log-likelihood (and, naturally, a negative -2LL). The main variable is log-transformed BMI. As far as I understand, this is caused by its small SD (SD = 0.13 with mean = 3.13). I saw in one thread (https://openmx.ssri.psu.edu/thread/329) that it is recommended to avoid using variables with small variance. Would it be advisable to use the original BMI variable instead of the log-transformed one, despite its skewness (1.22 vs. 0.62 for the log-transformed)?
I fit models with BMI as well, and it seems more natural to compare positive -2LL values. Besides, the models with BMI were more stable (with log_BMI, Mx code RED was obtained quite often). The results, though, were slightly different, but this might be due to the model choice. When comparing nested models using log_BMI, in many cases the reduced model was significantly worse than the model above it, yet acceptable in comparison to the saturated model. With BMI, this happened only a few times.
I would be very grateful for any advice!
Thank you.

tbates
try scale(logBMI)

You don't want to leave it (badly) skewed, especially in multivariate analyses.

To scale up the (numerically) small differences between people, just do:

logBMI <- scale(logBMI)
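
For instance (a minimal sketch, assuming the twin data live in a data frame called twinData with a column logBMI; those names are made up, and scale() returns a matrix, so it is wrapped in as.numeric() to keep a plain numeric column):

# standardize log BMI to mean 0, SD 1 before building the model
twinData$logBMI_z <- as.numeric(scale(twinData$logBMI))

# an alternative is a simple linear rescaling, e.g. multiplying by 10;
# this shifts the -2LL by a constant but leaves comparisons between
# models fit to the same rescaled data unchanged
twinData$logBMI_10 <- 10 * twinData$logBMI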

Julia
Great! Thank you so much!

Great! Thank you so much! This indeed helped!
Although a positive log-likelihood is in general not a problem, I was still getting confused when choosing the best models. The LRT still works with a negative -2LL, but when running multiple models in order to choose the best starting values, I used to think that the model with the lowest -2LL is the best. However, it seemed otherwise in this case (at the end of the analysis the reduced models had a smaller -2LL than the saturated model).
Scaling of log_bmi resolved it :)

Ryne
Nothing inherently wrong with positive log-likelihoods

So there's nothing inherently wrong with positive log-likelihoods, because likelihoods aren't, strictly speaking, probabilities; they're densities. When they occur, it is typically in cases with very few variables and very small variances. For raw data, the log-likelihood of a model is the sum, over the observed data rows, of the log of the model-implied multivariate normal density evaluated at each row. If we had three values (0, 1, 2) and fit a model with a mean of 1 and variance of 2/3, we'd get densities of .231, .489 and .231. If we rescale the data to 0, .1 and .2 and fit mean .1 and variance 2/300 instead, those densities become 2.31, 4.89 and 2.31. The likelihoods change with the different scaling, and one scaling yields a positive log-likelihood while the other yields a negative one, but they're the same model.
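
A quick R check of those numbers (a small sketch; note that dnorm() takes the standard deviation, so the variances above are passed through sqrt()):

# three observations and the ML estimates: mean 1, variance 2/3
x  <- c(0, 1, 2)
d1 <- dnorm(x, mean = 1, sd = sqrt(2/3))            # 0.231 0.489 0.231
sum(log(d1))                                        # about -3.65: negative log-likelihood

# the same data divided by 10: mean .1, variance 2/300
d2 <- dnorm(x / 10, mean = 0.1, sd = sqrt(2/300))   # 2.31 4.89 2.31
sum(log(d2))                                        # about +3.26: positive log-likelihood

# the two differ by the constant 3 * log(10), so the rescaling cannot
# change the outcome of comparisons between models fit to the same data
sum(log(d2)) - sum(log(d1))                         # 6.908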

The issue with low-variance variables is not that positive likelihoods look weird, but that variances can't go below zero, and very low variances run an increased risk of the optimizer picking a negative variance, or of your model bumping up against whatever lower bound you enforce.
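
If a variance does need to be kept away from zero, OpenMx lets you put an explicit bound on it (a minimal sketch, assuming a model with an observed variable called bmi; the label is made up):

# a free variance path for bmi with a small positive lower bound
mxPath(from = "bmi", arrows = 2, free = TRUE,
       values = 0.5, lbound = 1e-4, labels = "residVar")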

Happy modeling!