Attachment | Size
---|---
example.R | 2.53 KB
fake.data_.R | 7.95 KB
When I run this code (a threshold model for ordinal data), the standard errors are incredibly large, but the confidence intervals are relatively narrow. Are any of the results trustworthy? I've tried simplifying the thresholds (using labels to equate parameters and reduce their number), but it doesn't help. I'm not sure why these data are so hard to model.
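To be concrete, this is the kind of labeling I mean (a sketch, not my actual script; `item1`–`item7` and the `th*` labels are made-up names):

```r
library(OpenMx)

# Sketch: with 9 threshold labels recycled across the seven items, every item
# shares the same 9 threshold parameters instead of estimating 9 x 7 = 63.
# Variable and label names here are hypothetical stand-ins for my real ones.
thresh <- mxThreshold(vars    = paste0("item", 1:7),
                      nThresh = 9,
                      free    = TRUE,
                      values  = mxNormalQuantiles(9),
                      labels  = paste0("th", 1:9))
```

Even with the parameter count cut down this way, the standard errors stay huge.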
If I treat the ordinal data as if it were quantitative, I have no trouble fitting a common factor model.
I've also noticed that when I use mxGenerateData, the simulated data are noticeably different from the original. The original ordinal data range from 0 to 9; mxGenerateData shifts this to 1 to 10, which I could live with, but the percentage of 1s in the simulated data is MUCH larger than the percentage of 0s in the original, for each of the seven ordinal variables.
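For reference, this is how I'm comparing the category distributions (a base-R sketch; the two vectors below are made-up stand-ins for one ordinal item, not my actual data):

```r
# Sketch: compare category proportions between the original 0-9 coding and
# the 1-10 coding that mxGenerateData returned. The vectors are illustrative
# placeholders only.
origItem <- c(0, 1, 1, 2, 5, 9)    # original 0-9 coding
simItem  <- c(1, 2, 2, 3, 6, 10)   # mxGenerateData-style 1-10 coding
prop.table(table(factor(origItem, levels = 0:9)))
prop.table(table(factor(simItem,  levels = 1:10)))
```

Running this on my real data is what shows the pile-up in the lowest simulated category.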