Hi OpenMx community,

is there a way to measure the reliability of the indicators in a SEM?

Is it correct to standardise the fitted model with the function standardizeRAM(model) and then compute the squares of the path coefficients/factor loadings (squared multiple correlations)?

I found the function standardizeRAM(model) here: http://openmx.psyc.virginia.edu/thread/1095

Thanks in advance!

These squared multiple correlations between items and factors are referred to as communalities. A factor communality is the squared multiple correlation between a factor and all of its items, and is an estimate of the reliability of factor scores. An item communality is the squared correlation of an item with a factor; 1 - communality is that item's uniqueness.

I'm not so sure that I would necessarily call that reliability, though others may disagree. I'd also add that you may want to keep items with lower factor loadings for any number of reasons. Removing items always lowers the factor communality (unless those items' loadings are zero). Including items with lower loadings may be necessary for representing the "whole of the factor space."

For example, say you have a measure of affect that includes three items: sad, angry and upset. Ignore the fact that there are only three items: it's an example. You'd probably expect relatively high loadings for the angry and upset items, and a lower one for sadness, because angry and upset probably have more in common with each other than they do with sadness. However, how you interpret the factor greatly depends on whether the sad item is included. With it, you're measuring what sadness and anger have in common. Without it, your factor represents just anger.

Why do you want to know how reliable your items are? What does reliability mean for your research?

I see your point. Your answer gave me something to think about.

But I don't want to remove any indicators. My adviser said that those squared correlations would help identify errors I made; e.g., if a squared correlation is negative, I probably specified something incorrectly. Furthermore, it seems to me that interpreting communalities is standard procedure.

My starting point was to compute the variance of an indicator explained by a latent variable. For standardised indicators this is the square of the estimated path coefficient. That is the same as computing the reliability/communality, correct?

Can I compute this with non-standardised indicators? Is the approach I suggested in the previous post correct?

The explained variance of an indicator tells you how much of the variance of the indicator is due to the factor, which is not the same thing as reliability. Factor communality will tell you how much of the variance in the factor is explained by the items. If your factor loading matrix is lambda and your expected covariance matrix is sigma, then factor communality (rho^2) is:

rho^2 = lambda %*% solve(sigma) %*% t(lambda)

where solve() is inversion, t() is transposition.
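To make the matrix algebra concrete, here is a small numerical sketch of that formula in Python/NumPy rather than R (the R one-liner above translates directly). All the numbers are invented for illustration: a hypothetical single-factor model with three items and the factor variance fixed to 1, so the model-implied covariance is sigma = lambda' lambda + theta.

```python
import numpy as np

# Hypothetical single-factor model with three items (made-up values).
lam = np.array([[0.8, 0.7, 0.6]])      # 1 x 3 row vector of factor loadings
theta = np.diag([0.36, 0.51, 0.64])    # diagonal matrix of residual variances

# Model-implied (expected) covariance matrix of the items,
# with the factor variance fixed to 1.
sigma = lam.T @ lam + theta

# Factor communality: rho^2 = lambda %*% solve(sigma) %*% t(lambda) in R.
rho2 = (lam @ np.linalg.inv(sigma) @ lam.T).item()
print(rho2)  # a single number between 0 and 1
```

With these made-up loadings and residuals, rho^2 comes out around 0.77, i.e. the three items together account for roughly three quarters of the variance in the factor scores.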

I'll add again that this is not exactly reliability, though it is a very good estimate of the reliability of factor scores.

Finally, you don't need standardized data to get the R^2 for an indicator. loading^2 divided by (loading^2 + residual variance) will yield the explained variance regardless of the scaling/standardization.
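As a quick illustration with invented numbers (and assuming the factor variance is fixed to 1, so loading^2 is the variance due to the factor):

```python
# Hypothetical unstandardised indicator: loading 1.2, residual variance 0.9
# (both values made up; factor variance assumed to be 1).
loading = 1.2
resid_var = 0.9

# Explained variance (R^2) of the indicator, no standardization needed.
r2 = loading**2 / (loading**2 + resid_var)
print(r2)
```

Here r2 = 1.44 / 2.34, about 0.615, and rescaling the indicator changes the loading and residual variance proportionally, so the ratio is unaffected.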

Dear Sophia,

There is material about this in Graham Dunn's book Statistics in Psychiatry which you may find helpful. Not much knowledge of psychiatry, if any, is needed.

Michael