Heywood case when modelling second-order factor

shaun.hkw
Joined: 07/04/2021 - 11:02

Hi all,

First time poster here.

Query 1 ---

I am using the metaSEM package to fit a measurement model with the following specification:
- 10 indicators
- 4 first-order factors
- 1 second-order (higher-order) factor

All input matrices are correlation matrices.

The issue I am having is a Heywood case on my first factor (F1): the standardised loading of the second-order factor on F1 is 1.062, and its residual variance is -0.128. What is causing this, and is there any way to constrain these estimates to be greater than 0 but less than 1?

I have attached my code and data file (masemcode.R) and a diagram of the measurement model. Any help would be greatly appreciated.

Query 2 ---

This query relates to the same data file and measurement model as above, but without the second-order factor. The code is in masemcode2.R.

For some reason, the upper bound of the confidence interval for the residual variance of AwithA cannot be estimated, even after using rerun(). What might be the reason for this?

Thank you in advance.

Mike Cheung
Joined: 10/08/2009 - 22:37
Hi Shaun,

There are 7 studies in your dataset. Most researchers would agree that conducting a meta-analysis with only 7 studies is already challenging, yet you are fitting a multivariate model with 45 effect sizes (the correlation coefficients among your 10 indicators) to that sample. So it is very likely that these issues are related to your small sample size.
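If you still want to try bounding the estimates, here is a minimal sketch of two common approaches: refitting the stage-2 model with diag.constraints = TRUE so the error variances become explicit parameters, and putting a lower bound of 0 on the offending error variance in the underlying OpenMx model. The object and label names (tssem1.fit, RAM1, "ErrF1") are placeholders for whatever you used in masemcode.R, so adapt them accordingly.

```r
## Sketch only: object names (tssem1.fit, RAM1) and the error-variance
## label ("ErrF1") are placeholders for those in masemcode.R.
library(metaSEM)
library(OpenMx)

## Option 1: add nonlinear constraints on the diagonal of the implied
## correlation matrix, so error variances are estimated as parameters.
fit2 <- tssem2(tssem1.fit,
               Amatrix = RAM1$A, Smatrix = RAM1$S, Fmatrix = RAM1$F,
               diag.constraints = TRUE, intervals.type = "LB")

## Option 2: if the residual variance of F1 still goes negative, impose
## a lower bound of 0 on it in the underlying OpenMx model and rerun.
mx.model <- omxSetParameters(fit2$mx.fit, labels = "ErrF1", lbound = 0)
mx.fit   <- mxRun(mx.model, intervals = TRUE)
summary(mx.fit)
```

Keep in mind that a solution sitting on the boundary (residual variance exactly 0) does not make the underlying problem go away; with 7 studies, the Heywood case is best read as a symptom of too little data.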

Mike

shaun.hkw
Thanks Mike. It's good to receive confirmation of my suspicions.

shaun.hkw
Hi Mike,

I'm just following up on Query 2. I took a step back and did some exploration: when I used only the first 6 studies, the NA in the upper bound for AwithA went away. This surprised me, because why would the estimation problem disappear with fewer studies?

Shaun

Mike Cheung
Hi Shaun,

I don't know the reason. But with only 7 data points, dropping any one of them can have a massive impact on the final results, so I am not surprised that these variants behave differently. I would not, however, jump to the conclusion that having less data is better.
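A quick leave-one-out check can make this concrete. The sketch below refits the stage-1 model dropping one study at a time; my.df (the list of correlation matrices) and my.n (the vector of sample sizes) are placeholders for the objects in masemcode2.R.

```r
## Leave-one-out sensitivity sketch; my.df (list of correlation matrices)
## and my.n (sample sizes) are placeholders for the objects in masemcode2.R.
library(metaSEM)

## Refit the stage-1 random-effects model with each study removed in turn.
loo.fits <- lapply(seq_along(my.df), function(i) {
  tssem1(my.df[-i], my.n[-i], method = "REM", RE.type = "Diag")
})

## Compare the pooled correlations across the leave-one-out fits;
## large swings show how influential each single study is.
sapply(loo.fits, function(f) coef(f, select = "fixed"))
```

If the pooled estimates jump around substantially across the 7 fits, that is further evidence the full-sample results are fragile rather than that the 6-study subset is "better".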

Mike

shaun.hkw
Thanks Mike.