Strange result I can't explain

rabil
Strange result I can't explain
Attachment: 5_method 2 eyes_corr_v3.pdf (17.63 KB)

Subjects have 5 measures on each eye. DA is an area measurement and the C's are counts of area size categories. The C's can be linearly combined to closely estimate DA. The idea is that DA and the C's are indicators of the true area. (It's far easier to count size categories using a template than it is to make an accurate area measurement.) I've attached a PDF of the path diagram (OS = left and OD = right; one latent variable is labeled OS when it should be OD). The goal for this part is just to estimate the correlations between the "true" area and the indicators. (I use another SEM to calibrate measured area against estimated area, where the estimate is based on the counts, using data from both eyes; it shows that there is little if any bias between the approaches.)

Typically I constrain the betas and the sigmas to be the same for both eyes. I standardized the data so that each observed variable has a mean of zero and an sd of 1, and I constrained the latent variables to have sigmas equal to one. My understanding is that in this situation the betas are then also standardized, so they are correlations.
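
For concreteness, the setup described above might look something like this in OpenMx (a minimal sketch, not the actual code from this analysis; the variable names, the use of four counts per eye, and the data frame `dat` are all assumptions):

```r
library(OpenMx)

# Hypothetical names: one area measure (DA) plus four counts per eye
vars      <- c("DA", "C1", "C2", "C3", "C4")
manifests <- c(paste0(vars, "_OD"), paste0(vars, "_OS"))
latents   <- c("TrueOD", "TrueOS")

twoEye <- mxModel("twoEye", type = "RAM",
  manifestVars = manifests, latentVars = latents,
  # Loadings, constrained equal across eyes via shared labels
  mxPath(from = "TrueOD", to = paste0(vars, "_OD"), arrows = 1,
         free = TRUE, values = 0.8, labels = paste0("beta_", vars)),
  mxPath(from = "TrueOS", to = paste0(vars, "_OS"), arrows = 1,
         free = TRUE, values = 0.8, labels = paste0("beta_", vars)),
  # Error variances, equal across eyes and bounded at zero
  mxPath(from = manifests, arrows = 2, free = TRUE, values = 0.3,
         lbound = 0, labels = rep(paste0("e_", vars), 2)),
  # Latent variances fixed at 1 so the loadings are correlations
  mxPath(from = latents, arrows = 2, free = FALSE, values = 1),
  # Eye-to-eye correlation rho
  mxPath(from = "TrueOD", to = "TrueOS", arrows = 2, free = TRUE,
         values = 0.5, labels = "rho"),
  # 'dat' is assumed to hold the standardized observed variables
  mxData(observed = cov(dat[, manifests]), type = "cov",
         numObs = nrow(dat))
)
fit <- mxRun(twoEye)
summary(fit)
```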

When I run this, I get good estimates with reasonably narrow confidence intervals, and there is no indication of any problems. The rho between eyes is estimated to be about 0.7 (again with a very narrow CI), which is very typical for ophthalmic data. What is strange is that the beta for DA is estimated as 1.0000 (95% CI: 0.9418 to 1.0657; note that the upper bound for a correlation can't really exceed 1) and the sigma for its error is estimated as 0.0000 (0.0000 to 0.0400). I interpret this to mean that DA (measured area) in this model is close to being a perfectly accurate estimate of the true area, with very little imprecision (understanding that the latent variables are hypothetical and we never really know what the "true" values are).

I was thinking this result (a beta of 1 for DA) occurred because DA is nearly identical to a linear combination of the counts, but I'm not sure this really explains it. However, if I drop one or more of the counts, the beta for DA falls below 1 and the sigma for DA rises above 0.
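
One quick way to check that hypothesis (reusing the hypothetical `dat` from the sketch above): if DA really is close to a linear combination of the counts, an ordinary regression of DA on the counts should have an R-squared of nearly 1.

```r
# If DA is (almost) a linear combination of the counts, R^2 will be ~1
summary(lm(DA_OD ~ C1_OD + C2_OD + C3_OD + C4_OD, data = dat))$r.squared
```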

I've also fit a similar model using the covariance matrix instead of the correlation matrix, fixing the beta for DA at 1 and letting the sigmas for the latent variables be free. This would let me calibrate each count to DA, for example. The sigma for DA is again 0, with a narrow confidence interval, and the results agree, as you would expect, with the correlation-based model.
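
The covariance-metric variant might look something like this (again a sketch, modifying the hypothetical `twoEye` model above; `rawdat` stands in for the unstandardized data):

```r
# Fix the DA loadings at 1 and free the latent variances, putting the
# latent area on the DA scale (these mxPath calls replace the matching
# paths in twoEye). Note that with free latent variances, "rho" is a
# covariance rather than a correlation.
covEye <- mxModel(twoEye, name = "covEye",
  mxPath(from = "TrueOD", to = "DA_OD", arrows = 1, free = FALSE, values = 1),
  mxPath(from = "TrueOS", to = "DA_OS", arrows = 1, free = FALSE, values = 1),
  mxPath(from = latents, arrows = 2, free = TRUE, values = 1,
         labels = c("v_OD", "v_OS")),
  mxData(observed = cov(rawdat[, manifests]), type = "cov",
         numObs = nrow(rawdat))
)
covFit <- mxRun(covEye)
```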

I believe my results are correct and the models reasonable, but I would just like to be able to explain them. I would appreciate the insight of others who are much more knowledgeable than I am. Thanks.

rabil
In thinking some more about

In thinking some more about this, it occurs to me that the error sigma for measured DA is near zero because the errors are negatively correlated and thus tend to cancel, since measured DA can be closely estimated by a particular linear combination of the counts for each area range.
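
A toy simulation illustrates the canceling-errors idea (all numbers invented for illustration):

```r
set.seed(1)
n    <- 500
true <- rnorm(n)              # the "true" area
e    <- rnorm(n, sd = 0.4)    # a shared classification error
C1   <- true + e              # one category counted too high...
C2   <- true - e              # ...the adjacent category too low
DA   <- 0.5 * C1 + 0.5 * C2   # the errors cancel exactly in this toy case
cor(DA, true)                 # essentially 1: no residual error left in DA
cor(C1 - true, C2 - true)     # -1: strongly negative error correlation
```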

rabil
I allowed for correlations

I allowed for correlations among the errors for the counts. The estimates showed substantial negative correlations, which explains why DA has such a high correlation with the latent variable. The estimated correlation for DA dropped from 1 to 0.98, the sd of the DA errors rose from 0 to 0.2, and the RMSEA dropped from 0.15 to 0.05. So I think I answered my own question.
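
In model terms, the change amounts to freeing the within-eye residual covariances among the counts, something like this sketch extending the hypothetical `twoEye` model above (the label ordering assumes mxPath's "unique.bivariate" pairing order):

```r
counts   <- c("C1", "C2", "C3", "C4")
pairLabs <- combn(counts, 2, paste, collapse = "x")   # "C1xC2", "C1xC3", ...
corrErr <- mxModel(twoEye, name = "corrErr",
  # Residual covariances among the counts, labels shared across eyes
  mxPath(from = paste0(counts, "_OD"), to = paste0(counts, "_OD"),
         arrows = 2, connect = "unique.bivariate",
         free = TRUE, values = 0, labels = paste0("rerr_", pairLabs)),
  mxPath(from = paste0(counts, "_OS"), to = paste0(counts, "_OS"),
         arrows = 2, connect = "unique.bivariate",
         free = TRUE, values = 0, labels = paste0("rerr_", pairLabs))
)
corrFit <- mxRun(corrErr)
summary(corrFit)   # compare RMSEA etc. against the original fit
```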

Ryne
This is an interesting

This is an interesting problem. Apologies for not getting back to you sooner.

While I'm not entirely sure what is occurring in your data, I have a theory or two. I suspect that you constrained the error variances to be positive but placed no other bounds in the model. Structural models like the one you presented have no "knowledge" of the bounds for the parameters; the objective function simply finds values of the parameters you specify that best approximate the observed covariance matrix. In your case, the DA variables seem to be more strongly correlated with one another than the count variables are. By forcing all of the eye-to-eye covariances to be modeled through a single factor covariance (rho, when you standardize), you force some misfit. To compensate, the DA factor loading goes up (the implied DA-to-DA correlation is beta_da * rho * beta_da, so beta_da compensates for the constraint) and the count loadings go down. The program doesn't know that the beta coefficient shouldn't go above 1 in this case; it just knows that fit is best when the parameter is right at 1.

When you allow for residual covariances between the count variables, the total eye-to-eye correlation between count variables becomes beta_c * rho * beta_c + cov_resid. That residual covariance will improve fit in the way you described, as it can functionally adjust the eye-to-eye correlation downward for any pair of variables.
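
To put rough numbers on that (the values here are invented purely for illustration):

```r
rho     <- 0.7
r_DA    <- 0.70              # suppose the observed DA_OD-DA_OS correlation
beta_da <- sqrt(r_DA / rho)  # = 1: the loading gets pushed right to 1
r_C     <- 0.40              # a weaker observed count-to-count correlation
beta_c  <- 0.80
beta_c * rho * beta_c                     # implied 0.448: too high on its own
cov_resid <- r_C - beta_c * rho * beta_c  # -0.048
beta_c * rho * beta_c + cov_resid         # 0.40, matching the observation
```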

I'm a little confused about the objective of your model. Are you trying to discover a simpler linear combination of counts that can approximate DA (i.e., is DA functionally the latent factor?), or do you think that the counts add something above and beyond DA? While I don't want you to post code if you're not comfortable, a statement of the constraints at least would be helpful for diagnosis.

ryne

rabil
Yes, I only constrained the

Yes, I only constrained the error variances to be nonnegative, which seems a reasonable restriction (although it's typically not necessary when you have more than three indicators). I realize that the estimated correlations could exceed one, but I was trying to understand what was happening to the beta for DA and why the variance of its disturbance was so small. Obviously, there is no way for the model to know that DA was actually measured directly.

The negative correlations between the count variables make perfect sense: if one count category happens to be too low, the adjacent category will tend to be too high. (A calibration model relating the counts to DA had shown that DA could be reconstructed quite accurately from the counts.) Once I allowed for correlation among the errors for the counts, the fit statistics improved greatly.

I don't think this has anything to do with constraining the variances. In my measurement error models I typically constrain the variances, and it doesn't induce negative correlations among the error terms. In most cases, the error terms are uncorrelated across methods; a positive correlation between error terms occurs when one method is mathematically derived from another, or in other special circumstances.
(Not sure why the path diagram I uploaded is not viewable.)