# Output from Mixture Model - How to Determine Latent Scores

Joined: 01/14/2010 - 16:47

I fit a mixture model using mxExpectationMixture and mxFitFunctionML. I estimate means and residual variances for the second class (the values for the first class are arbitrarily fixed). How can I determine how the observations are classified?

I've attached a file with the relevant code.

Thanks.

Rick

Joined: 03/01/2013 - 14:09
Fiddle with the vectors of likelihoods

The individual per-row likelihoods are a bit buried, but they can be accessed this way:

```r
# Per-row likelihoods are stored on each class's fit function after the model runs
likeC1 <- attr(out$class1$fitfunction, 'result')
likeC2 <- attr(out$class2$fitfunction, 'result')

# Posterior class probabilities. Note: if the estimated class weights are
# unequal, each likelihood should be multiplied by its class proportion
# before normalizing.
likeTot <- likeC1 + likeC2
class1Probs <- likeC1 / likeTot
class2Probs <- likeC2 / likeTot
```

In your case, optimization seemed to go haywire, so all the class-2 likelihoods were zero. A different OpenMx optimizer, such as simulated annealing, might work better; or the problem may simply have been the dummy data for x1:x4 that I used.

You referred to observations being 'classified', which often goes hand in hand with mixture modeling. However, class membership is rarely a hard vector of 1's and 0's. When classification is probabilistic, it is usually better to carry that uncertainty in class membership forward when evaluating the validity of the classes against external variables. It turns out that's just another mixture likelihood :).
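As a minimal base-R sketch of the probabilistic view: posterior ("soft") class probabilities for a two-class univariate normal mixture weight each class density by its mixture proportion and then normalize. The weight, means, and sds below are made-up illustrative values, not estimates from the model in this thread.

```r
# Hypothetical observations and parameter values, for illustration only
x  <- c(-1.2, 0.3, 2.8, 3.1)
w1 <- 0.4                              # mixture weight for class 1 (w2 = 1 - w1)
likeC1 <- dnorm(x, mean = 0, sd = 1)   # class-1 density at each observation
likeC2 <- dnorm(x, mean = 3, sd = 1)   # class-2 density at each observation

# Weighted total likelihood, then posterior probability of each class
likeTot <- w1 * likeC1 + (1 - w1) * likeC2
class1Probs <- w1 * likeC1 / likeTot   # P(class 1 | x_i)
class2Probs <- (1 - w1) * likeC2 / likeTot
```

Each row's two posteriors sum to 1, so validation analyses can use `class1Probs` directly as a weight instead of a hard 0/1 assignment.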

Joined: 01/14/2010 - 16:47
Thanks

I wrote code to compute each observation's density under each class using the class means and sds. I suppose you could then classify each observation into the class with the higher density. With my data, the model converges readily, and the results seem reasonable when compared with the observed data. (I have class identifiers, but I'm trying to build a model without using the known classes, since the actual class is very difficult to determine.) I'll compare your code with my results; hopefully they will agree! Thanks again.
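The hard-classification idea described above can be sketched in base R as follows: evaluate each observation's density under each class's mean and sd, then assign the class whose density is higher. The parameter values here are hypothetical, not the poster's actual estimates.

```r
# Hypothetical observations and per-class parameters, for illustration only
x     <- c(-0.5, 1.4, 3.6)
means <- c(class1 = 0, class2 = 3)
sds   <- c(class1 = 1, class2 = 1)

# n-by-2 matrix of densities: one column per class
dens <- sapply(seq_along(means), function(k) dnorm(x, means[k], sds[k]))

# Assign each observation to the class with the higher density
hardClass <- max.col(dens)
```

Note that this discards the uncertainty the previous reply mentioned: an observation with densities 0.51 and 0.49 is assigned just as firmly as one with 0.99 and 0.01.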