
Reliability of indicators

sbremer

Hi OpenMx community,

Is there a way to measure the reliability of the indicators in a SEM?

Is it correct to standardise the fitted model with the function standardizeRAM(model) and then compute the squares of the path coefficients/factor loadings (the squared multiple correlations)?
I found the function standardizeRAM(model) here: https://openmx.ssri.psu.edu/thread/1095
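
Roughly, this is what I have in mind (just a sketch; standardizeRAM() is the helper from that thread, and I am assuming it returns the model with standardized values filled into its A matrix; fittedModel, manifestVars and latentVars are placeholders for my own names):

stdModel <- standardizeRAM(fittedModel)                   # helper from the linked thread (assumed to return standardized paths)
lambda   <- stdModel$A$values[manifestVars, latentVars]   # standardized factor loadings
lambda^2                                                  # squared loadings = per-indicator explained variance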

Thanks in advance!

Ryne

These squared multiple correlations between items and factors are referred to as communalities. Factor communalities are the multiple correlation between a factor and all items, and are an estimate of the reliability of factor scores. Item communalities are the squared correlations of an item with a factor; 1 - communality is an item's uniqueness.

I'm not so sure that I would necessarily call that reliability, though others may disagree. I'd also add that you may want to keep items with lower factor loadings for any number of reasons. Removing items always lowers the factor communality (unless those items' loadings are zero). Including items with lower loadings may be necessary for representing the "whole of the factor space."

For example, say you have a measure of affect that includes three items: sad, angry and upset. Ignore the fact that there are only three items: it's an example. You'd probably expect relatively high loadings for the angry and upset items, and a lower one for sadness, because angry and upset probably have more in common with each other than they do with sadness. However, how you interpret the factor greatly depends on whether the sad item is included. With it, you're measuring what sadness and anger have in common. Without it, your factor represents just anger.

Why do you want to know how reliable your items are? What does reliability mean for your research?

sbremer

I see your point. Your answer gave me something to think about.
But I don't want to remove any indicators. My adviser said that those squared correlations would help me identify errors I made, e.g. when a squared correlation is negative I have probably specified something incorrectly. Furthermore, it seems to me that interpreting communalities is standard procedure.

My starting point was to compute the variance of an indicator explained by a latent variable. In the case of standardised indicators this is the square of the estimated path coefficient. That is the same as computing the reliability/communality, correct?

Can I compute this with non-standardised indicators? Is the way I suggested in the previous post correct?

Ryne

The explained variance of an indicator tells you how much of the variance of the indicator is due to the factor, which is not the same thing as reliability. Factor communality will tell you how much of the variance in the factor is explained by the items. If your factor loading matrix is lambda and your expected covariance matrix is sigma, then factor communality (rho^2) is:

rho^2 = lambda %*% solve(sigma) %*% t(lambda)

where solve() is matrix inversion and t() is transposition.
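
For a one-factor model, a quick sketch in R (the loadings and residual variances here are made-up numbers, and the factor variance is fixed at 1):

lambda <- matrix(c(0.8, 0.7, 0.6), nrow = 1)       # 1 x p factor loadings
theta  <- diag(c(0.36, 0.51, 0.64))                # residual variances
sigma  <- t(lambda) %*% lambda + theta             # expected (model-implied) covariance
rho2   <- lambda %*% solve(sigma) %*% t(lambda)    # factor communality
drop(rho2)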

I'll add again that this is not exactly reliability, though it is a very good estimate of the reliability of factor scores.

Finally, you don't need standardized data to get the R^2 for an indicator: loading^2 divided by (loading^2 + residual variance) yields the explained variance regardless of the scaling/standardization.
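
For example (made-up raw estimates; this assumes the factor variance is fixed at 1, otherwise multiply loading^2 by the factor variance in both places):

loading <- 1.2                            # raw factor loading
resvar  <- 0.9                            # raw residual variance
loading^2 / (loading^2 + resvar)          # explained variance (R^2) of the indicator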

mdewey
Reliability in SEM

Dear Sophia

There is material about this in Graham Dunn's book Statistics in Psychiatry, which you may find helpful. Not much knowledge of psychiatry, if any, is needed.

Michael