Hello

I want to examine the relationships between three latent variables, namely Emotional Intelligence, Self-Efficacy, and Geography Achievement, using SEM with AMOS. I used three instruments to collect data. Emotional intelligence was measured with a scale consisting of 39 items divided into seven subscales, self-efficacy with a scale consisting of 22 items divided into two subscales, and geography achievement with a test consisting of 40 items divided into three subscales.

First, the validity of these three instruments was tested using the Rasch model. Then the interval score for each subscale was calculated using the WINSTEPS software. Finally, each subscale was used as an indicator of its latent variable (instead of using the individual items) to conduct the CFA and SEM.
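For readers unfamiliar with this parceling step, a minimal sketch of the idea in Python follows. Note the assumptions: the item names and subscale layout are made up for illustration, and simple averaging stands in for the Rasch interval scores that WINSTEPS actually produces.

```python
# Sketch of the parceling step: collapse item responses into subscale
# scores that then serve as indicators in the CFA/SEM.
# In the actual procedure, WINSTEPS Rasch calibration (not a simple
# mean) produces the interval-level subscale scores.

def subscale_scores(responses, subscales):
    """Average one respondent's item scores within each subscale.

    responses: dict mapping item name -> observed score
    subscales: dict mapping subscale name -> list of item names
    """
    return {
        name: sum(responses[item] for item in items) / len(items)
        for name, items in subscales.items()
    }

# Hypothetical layout: 6 of the 39 emotional-intelligence items in 2 subscales.
subscales = {
    "ei_1": ["q1", "q2", "q3"],
    "ei_2": ["q4", "q5", "q6"],
}
person = {"q1": 3, "q2": 4, "q3": 5, "q4": 2, "q5": 2, "q6": 3}

print(subscale_scores(person, subscales))
```

Each respondent then contributes 12 such subscale scores (7 + 2 + 3) rather than 101 item responses, which is what makes the CFA/SEM tractable.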

My question: Is this procedure correct? That is, is it acceptable to use subscales as indicators, especially in this case where there are several latent variables and a large number of items (the model is in the attachment)?

Yours truly,

Al sarmi

Attachment | Size
---|---
Model.doc | 132.5 KB

Your procedure seems defensible; it's not ideal, but it is likely a good compromise given the complexity of your data. I'll give you a few comments/suggestions for you to use as you see fit.

-You may get some criticism on review for the two-stage procedure. Single-stage estimation is preferred in general, because your CFA doesn't reflect that there is some degree of uncertainty around your "observed" measures r1-r14. The error terms on the observed data do help in some way, but the standard errors of the model parameters are probably smaller than they would be if you did this all as one giant model with 111 variables.

-That said, your only real option for a single-step estimation is a weighted least squares optimization, as maximum likelihood estimation is not feasible for your 111 variables. AMOS may be able to do this; OpenMx can't yet, but will in a future release.

-You should be aware of some assumptions of the Rasch model and how they map onto ordinal SEM. The Rasch model holds item discrimination constant across items, which is equivalent to holding factor loadings constant across items. I would be curious as to why all items in a subscale have constant loadings while the loadings on the second-order factor are free to vary. You could try 2PL models instead of Rasch models and compare fit.
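To make the Rasch-vs-2PL contrast concrete, here is a small illustrative sketch (the ability and item parameter values are invented for demonstration). The 2PL item response function has a free discrimination `a` per item; fixing `a = 1` for every item gives the Rasch (1PL) model, which is the IRT analogue of the equal-loadings constraint described above.

```python
import math

def p_correct(theta, b, a=1.0):
    """2PL item response function: P(correct | ability theta).

    b is item difficulty, a is item discrimination. With a fixed at
    1.0 for every item this reduces to the Rasch (1PL) model.
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

theta = 0.5  # one person's ability (illustrative)

# Rasch: both items forced to share the same slope (a = 1).
rasch = [p_correct(theta, b) for b in (-1.0, 1.0)]

# 2PL: discriminations free to vary, like freely estimated loadings.
two_pl = [p_correct(theta, b, a) for b, a in ((-1.0, 0.6), (1.0, 1.8))]

print(rasch, two_pl)
```

Comparing the fit of the two models (e.g., via a likelihood-ratio test, since Rasch is nested in the 2PL) tells you whether the equal-discrimination constraint is tenable for your items.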

-You haven't explained what r13 and r14 are. They're being treated as covariates in your drawn model; make sure you can explain what they are, why each one predicts one and only one factor, and why the arrows don't go the other way.

Thank you for your answer.

I have already validated the measurement model using Rasch. The Rasch analysis confirmed the construct validity of each instrument as well as its reliability. It verified that each instrument measures one construct and, at the same time, showed that the items in each subscale reflect only one dimension of their scale. After this validation, I calculated the score for each subscale, which then served as manifest variables, or indicators, for the latent variables. Here I want to add further evidence of convergent validity and discriminant validity and, at the same time, examine the relationships between these latent variables.

The model attached is a full structural model, which includes the test of the measurement model as well as the test of the structural model. The r13 and r14 are the error terms of the two endogenous variables.

Thank you again, and your suggestions are welcome.

Yours truly,

Al Sarmi