Dear Mike,
I am currently working on a metaSEM study. I get errors even though I have tried different methods (see below). I have also inspected the pattern of missing values. I would like to ask whether, in your opinion, the error messages are related to missing correlations? I have 379 rows, but only 1 or 2 correlations for some variables. Is there a way to continue this study, perhaps by dropping some variables?
All the best,
Robert
stage1fixed <- tssem1(Cov=cordat, n=data$N, method="FEM")
Error in if (!all(isPD)) warning(paste("Group ", (1:no.groups)[!isPD], :
missing value where TRUE/FALSE needed
stage1fixed <- tssem1(Cov=cordat, n=data$N, method="REM")
Error in solve.default(t(X) %*% V_inv %*% X) :
Lapack routine dgesv: system is exactly singular: U[1,1] = 0
> pattern.na(cordat,show.na=FALSE)
v1 v2 v3 v4 v5 v6
v1 1 10 2 8 19 7
v2 10 0 25 30 210 16
v3 2 25 3 7 68 7
v4 8 30 7 4 40 18
v5 19 210 68 40 3 17
v6 7 16 7 18 17 341
Hi Robert,
Could you please post the code and data? Thanks.
Mike
Hi Mike,
I've attached the R code and the data file. I am eager to hear your opinion regarding my data. Is it possible to extract and use a summary matrix?
Robert
Hi Robert,
You may try the following approaches:
Mike
Dear Mike,
This is a great help. Thank you so much!
Best wishes
Robert
Dear Mike,
With your help, I managed to pool the correlations in the first stage of a random-effects TSSEM analysis. While trying to estimate the model-data fit, I encountered two error messages:
1: In .solve(x = object$mx.fit@output$calculatedHessian, parameters = my.name) :
Error in solving the Hessian matrix. Generalized inverse is used. The standard errors may not be trustworthy.
2: In checkRAM(Amatrix = Amatrix, Smatrix = Smatrix, cor.analysis = cor.analysis) :
The variances of the independent variables in 'Smatrix' must be fixed at 1.
I have attached the entire R code, the obtained output, and the data files. I would appreciate your help, time, and consideration!
Regards,
Robert
Hi Robert,
It seems that you are using an old version of metaSEM. Could you please update it and rerun the analysis?
When you post the R code, could you also remove the ">" and "+" symbols so that we can run the code? Thanks.
Best,
Mike
Hi Mike,
Thank you for your suggestion. I have updated metaSEM and now I get output. I encountered some difficulties in running the stage1random <- rerun(stage1random, autofixtau2 = TRUE) code; the warning message "Not all eigenvalues of Hessian are greater than 0" appeared several times (see the attached output). Finally, after specifying the model, I managed to estimate its parameters, but I have some concerns about the last warning I got regarding OpenMx status1 (see below). Sorry for the inconvenient format of the code I sent earlier; I have now removed all the mentioned symbols, and it can be run.
You already helped me a lot, many thanks!
Best,
Robert
"OpenMx status1: 6 ("0" or "1": The optimization is considered fine.
Other values indicate problems.)
Hi Robert,
The stage 2 analysis seems to work better without the diagonal constraints, i.e., diag.constraints=FALSE.
It works fine for me.
Best,
Mike
Dear Mike,
I have to thank you once again for your time and priceless help. However, I would like to ask two more questions that would help me to complete this project.
The first question concerns the possibility of using metaSEM for exploratory purposes; more precisely, is it possible to calculate modification indices in metaSEM?
The second question is about non-independent correlations. I have read a lot about how to handle such dependencies in MASEM, but I am not sure how I should handle them in metaSEM. Can you give me some advice in this regard? For instance, should we use the same strategy when we have multiple correlated dependent variables as when we have multiple correlated independent variables?
Best,
Robert
Dear Robert,
For the first question, you may try mxMI() in OpenMx. As I haven't tried it before, I don't know how good it is. You may try it with the Digman97 example in the metaSEM package.
Regarding the second question, I don't have a good answer yet. There are different types of dependence, for example, repeated measures and nested samples.
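As a sketch of this suggestion, the following code fits the usual two-factor model to the Digman97 data and then requests modification indices. The model setup follows the Digman97 example in the metaSEM documentation; the mxMI() call on the internal mx.fit slot is an assumption on my part and untested.

```r
library(metaSEM)
library(OpenMx)

## Stage 1: random-effects pooling of the Digman97 correlation matrices
stage1 <- tssem1(Digman97$data, Digman97$n, method="REM", RE.type="Diag")

## Two-factor model: Alpha -> (A, C, ES) and Beta -> (E, I)
vars <- c(colnames(Digman97$data[[1]]), "Alpha", "Beta")
Lambda <- matrix(c("0.3*Alpha_A", "0.3*Alpha_C", "0.3*Alpha_ES", 0, 0,
                   0, 0, 0, "0.3*Beta_E", "0.3*Beta_I"),
                 nrow=5, ncol=2)
A1 <- rbind(cbind(matrix(0, 5, 5), Lambda), matrix(0, 2, 7))
S1 <- bdiagMat(list(Diag(paste0("0.2*e", 1:5)),
                    matrix(c(1, "0.3*cor", "0.3*cor", 1), 2, 2)))
dimnames(A1) <- dimnames(S1) <- list(vars, vars)

## Stage 2: fit the CFA on the pooled correlation matrix
stage2 <- tssem2(stage1, Amatrix=A1, Smatrix=S1, model.name="TSSEM2_Digman97")

## Modification indices from the underlying OpenMx model
mxMI(stage2$mx.fit)
```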
Best,
Mike
Dear Mike,
sorry for the delayed response, but I was out of the office for a while. Regarding dependence, I am essentially facing two problems.
The first is linked to the project we have already discussed, which includes two types of effect sizes: (i) correlations between two or more variables obtained from a single study; and (ii) correlations between two or more variables estimated repeatedly in different waves of a longitudinal study. In this way, some correlations are related because they come from the same sample.
The second problem with dependent effect sizes is linked to another, ongoing project in which I have to test the effect of three correlated predictors (different facets of the same construct) on a single outcome variable. In this case, I have to handle the correlation between the predictors, and I think it would not be a good idea to run three univariate meta-analyses. Should I use MASEM to estimate mean direct/indirect effects simultaneously, ignoring the possibility of heterogeneous direct/indirect effects? Or should I choose a multivariate or three-level SEM-based meta-analysis, trying to take such effects into account and model them?
I already feel deeply indebted, even for the opportunity to discuss methodological problems.
Sincerely,
Robert
You may use several approaches to handle the dependent correlation matrices, e.g., a multivariate meta-analysis, a three-level meta-analysis, or robust standard errors. The average correlation matrix with its asymptotic covariance matrix can then be fitted with the wls() function.
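As a minimal, self-contained sketch of that last step: assume a pooled 3x3 correlation matrix and its asymptotic covariance matrix have already been obtained from such a dependence-aware meta-analysis. All numbers below are fabricated for illustration, and the mediation model (x -> m -> y with a direct path) is purely hypothetical.

```r
library(metaSEM)

## Hypothetical pooled correlation matrix among x, m, y
## (in practice this comes from a multivariate/three-level MA)
vars <- c("x", "m", "y")
R.pooled <- matrix(c(1, .3, .2,
                     .3, 1, .4,
                     .2, .4, 1),
                   nrow=3, dimnames=list(vars, vars))

## Hypothetical asymptotic sampling covariance matrix of the three
## unique correlations (order: r_mx, r_yx, r_ym)
aCov <- diag(c(.0005, .0005, .0005))

## Mediation model x -> m -> y with a direct path x -> y
A <- matrix(c(0,      0,      0,
              ".2*a", 0,      0,
              ".1*c", ".2*b", 0),
            nrow=3, byrow=TRUE, dimnames=list(vars, vars))
S <- matrix(c(1, 0, 0,
              0, ".8*e_m", 0,
              0, 0, ".8*e_y"),
            nrow=3, byrow=TRUE, dimnames=list(vars, vars))

## Fit the model by WLS, using the total N behind the pooled matrix
fit <- wls(Cov=R.pooled, aCov=aCov, n=1000, Amatrix=A, Smatrix=S,
           model.name="Mediation on pooled R")
summary(fit)
```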
Dear Mike,
thank you for your response. Following your suggestion, I found the paper by Wilson et al. (2016), "Fitting meta-analytic structural equation models with complex datasets", Research Synthesis Methods, 7, 121-139, which describes a five-step approach: the first four steps use a three-level meta-analysis to reduce unwanted heterogeneity, and then, as you mentioned, a WLS fitting function is used to test model fit. If you can recommend other (preferably worked) examples, that would be helpful.
Meanwhile, I updated my initial data set and tried to rerun all the analyses we have already discussed, but I received an error message whose interpretation is difficult for me. Namely, when I try to run the stage 1 random-effects model, I get the following message:
Error in if (!is.pd(x.new)) stop("x is not positive definite!\n") :
missing value where TRUE/FALSE needed
In addition: Warning message:
In cov2cor(my.x) :
diag(.) had 0 or NA entries; non-finite result is doubtful
Do you have any idea what happened? I did the same thing as before; it is true that the data set has been extended and now includes more rows, but I don't think that should affect my original code.
Best regards,
Robert
Hi Mike!
I worked on my data today and found that I had an NA for one sample size, which caused the reported error message. After eliminating that study, the code now works properly. However, I am encountering another error message while trying to estimate indirect effects. Using the Hunter83 and Becker83 examples, I wrote the following code for the indirect effects:
stage2 <- tssem2(stage1random, Amatrix=A, Smatrix=S,
                 RE.constraints=0, intervals="LB",
                 mx.algebras=list(
                   ind=mxAlgebra(V12V2*V22V5 + V12V2*V22V6 +
                                 V12V4*V42V5 + V12V4*V42V6 +
                                 V32V2*V22V5 + V32V2*V22V6 +
                                 V12V2*V22V4*V42V5 + V12V2*V22V4*V42V6,
                                 name="ind")),
                 model.name="TSSEM2 random effects model")
and I've got this
Error in running the mxModel:
Warning message: mxRun does not accept ... arguments. The first parameter in ... was named 'RE.constraints' with value '0'
What do you think: could the problem be that the labels I used to name the variables include numbers? For example, I defined a specific indirect effect as v22v3*v32v5. Or is this message linked to the constraints imposed on the random effects, as it says?
Many thanks,
Robert
Hi Robert,
Could you post the data and R code that generate the errors?
Mike
The indirect effect problem is solved; sorry for bothering you. Now I am working on the subgroup analysis, using the R code published in Jak (2015), Meta-Analytic Structural Equation Modeling, and then extended in Jak & Cheung (2018), Meta-Analytic Structural Equation Modeling with Moderating Effects on SEM Parameters (https://doi.org/10.31234/osf.io/ce85j). I encountered some difficulties running the stage 1 code
stage1random_lo <- tssem1(cov=cordat_lo, n=N_lo,
method="REM", RE.type="Diag")
stage1random_hi <- tssem1(cov=cordat_hi, n=N_hi,
method="REM", RE.type="Diag")
and got this error message
Error in tssem1REM(Cov = Cov, n = n, cor.analysis = cor.analysis, RE.type = RE.type, :
argument "Cov" is missing, with no default
It is somewhat strange that everything works easily when I run your examples. Maybe I should convert my data into a built-in format (I read something about .rda files); is that possible?
Best regards,
Robert
Hi Robert,
Could you please post the code and data? Thanks.
Mike
Hi Mike,
you have the data and the R code attached. The stage 1 and stage 2 code works fine. I have a problem when I try to run the subgroup section; this is the error message I get:
Error in tssem1REM(Cov = Cov, n = n, cor.analysis = cor.analysis, RE.type = RE.type, :
argument "Cov" is missing, with no default
It is just a guess, but maybe the problem is that when I defined "data", all the columns were included, "n" and the moderators (m1-m6) as well?
Thanks,
Robert
Hi Robert,
There are some syntax errors. For example, there is no data$m1 (should it be data$M1?), and tssem1(cov=cordat_lo, ...) should be tssem1(Cov=cordat_lo, ...), as R is case sensitive. Moreover, there are NA in cordat_lo, N_lo, cordat_hi, and N_hi.
The following is the closest R syntax I have. As you can see, there is no correlation between v3 and v6 in cordat_lo, so it won't work. There are only a few data points in cordat_hi; it may be challenging to fit it.
Mike
Hi Mike,
first of all, I would like to thank you for the detailed feedback and all the corrections and suggestions you offered. The problem is that the predictor variables rarely appear simultaneously in studies, which is why we have collected very few correlations for some of them. Given their small number, we may need to exclude some predictors and simplify the tested model.
However we decide, we remain indebted for the help offered.
Sincerely,
Robert
Hi Mike,
I would like to ask one more question regarding the output of the second stage of the TSSEM approach in metaSEM. Can I obtain the R-square for each endogenous variable of the model, as in other software (e.g., Mplus or AMOS)? Or can I obtain (or compute) a standardized form of the estimated parameters?
Best regards,
Robert
Hi Robert,
R^2 = 1 - error variance. Since the models are fitted on correlation matrices, the parameter estimates are already standardized.
You may refer to the examples for Becker92 in the metaSEM package.
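As a small numeric sketch of this formula (the estimates and labels below are hypothetical; in practice they would come from coef() on the fitted stage 2 object):

```r
## Hypothetical named vector of stage 2 estimates, as one might get
## from coef() on a fitted tssem2 object; e_m and e_y stand for the
## standardized error variances of endogenous variables m and y
est <- c(a = 0.30, b = 0.25, c = 0.10, e_m = 0.91, e_y = 0.88)

## Because the model is fitted on correlations, the estimates are
## standardized, so R^2 = 1 - error variance for each endogenous variable
R2 <- 1 - est[c("e_m", "e_y")]
R2
```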
Mike
Hi Mike,
I would like to ask you something about stage 1 of the TSSEM approach. More precisely, you wrote that the pooled correlation matrix obtained in TSSEM1 is essentially the result of an SEM-based multivariate meta-analysis. As such, is it possible to use/report each element of the pooled correlation matrix obtained in TSSEM1 as the result of a "classic" meta-analysis?
best regards,
Robert
Hi Robert,
It depends on what you mean by a "classic" meta-analysis. TSSEM1 uses a multivariate meta-analysis with a maximum likelihood estimation method. If you mean the traditional univariate meta-analysis with either the Hunter and Schmidt approach or the Hedges and Olkin approach, then no, it is not.
Best,
Mike
Hi Mike,
first of all, I have to thank you once again for your invaluable help. Now I am working on a partial mediation model (stage 2), where the number of correlations from which the stage 1 pooled matrix is obtained is small (see the attached Data.dat). Using a random-effects model, I got the following stage 1 output.
My questions are:
1. There is no Tau2 estimate for the first variable. Is there insufficient information to estimate this component?
2. If I had only one correlation for a given cell, say r12 (for example, only study 3 reports r12 = 0.49), would the stage 1 analysis return this value (r = 0.49) as the pooled estimate?
3. The heterogeneity tests for intercepts 1 and 3 both resulted in the same p-value, 0.0000. Tau2_3_3 was estimated and the corresponding Q statistic can be interpreted, but what about the Q test for intercept 1, which has no Tau2 value?
4. My last question: does this missing information (Tau2 for intercept 1) influence the estimated parameters in stage 2 (the direct and indirect effects) and their estimated standard errors (or confidence intervals)?
Best regards,
Robert
95% confidence intervals:
Coefficients:
              Estimate   Std.Error      lbound      ubound   z value   Pr(>|z|)
Intercept1 -1.5637e-01  3.0256e-02 -2.1567e-01 -9.7071e-02   -5.1683  2.362e-07 ***
Intercept2  1.9145e-01  3.9410e-02  1.1421e-01  2.6870e-01    4.8579  1.186e-06 ***
Intercept3 -4.2213e-01  2.9671e-02 -4.8029e-01 -3.6398e-01  -14.2269  < 2.2e-16 ***
Tau2_2_2    4.6089e-03  5.8208e-03 -6.7997e-03  1.6018e-02    0.7918     0.4285
Tau2_3_3    1.0197e-07  2.4987e-03 -4.8974e-03  4.8974e-03    0.0000     1.0000
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Heterogeneity indices (based on the estimated Tau2):
Estimate
Intercept1: I2 (Q statistic) 0.0000
Intercept2: I2 (Q statistic) 0.5655
Intercept3: I2 (Q statistic) 0.0000
Hi Robert,
Could you post the R code to read the data and generate the errors?
Mike
Hi Mike!
Thanks for your quick response. You have the R code attached.
Best regards,
Robi
Hi Robert,
Regarding your questions,
(1) You used stage1random <- rerun(stage1random, autofixtau2 = TRUE) in your analysis. The autofixtau2 argument attempts to drop variances that are close to zero. You may try stage1random <- rerun(stage1random, autofixtau2 = FALSE).
(2) Yes, if you fix its variance at zero.
(3) See (1).
(4) It could, as the stage 2 estimates depend on the stage 1 estimates. If your findings are questionable in stage 1, so will be the findings in stage 2.
By the way, the numbers of studies (4 to 8) are very small for a typical meta-analysis. MASEM requires much more data as it fits multivariate relationships. There are concerns about the stability of the findings.
Best,
Mike
Hi Mike!
regarding your last comment, I understand that more information gives stability to the estimated parameters. My question is: is there a minimal N (N = number of correlations) that would validate a multivariate MA? Are there simulation studies targeting this aspect of MASEM? Thanks!
best,
Robert
Hi Robert,
There are some simulation studies (see the examples below). However, they were not designed to answer the minimum N or k question. Moreover, their settings are likely different from yours. The best way to answer your question is to conduct a small simulation study using your own settings.
Best,
Mike
Cheung, M. W.-L. (2018). Issues in solving the problem of effect size heterogeneity in meta-analytic structural equation modeling: A commentary and simulation study on Yu, Downes, Carter, and O’Boyle (2016). Journal of Applied Psychology, 103(7), 787–803. https://doi.org/10.1037/apl0000284
Jak, S., & Cheung, M. W.-L. (2019). Meta-analytic structural equation modeling with moderating effects on SEM parameters. Psychological Methods. https://doi.org/10.1037/met0000245
Dear Mike,
I am still trying to deepen my MASEM knowledge and skills, and now I am working on subgroup analysis. I have a moderator variable with two values (low and high) and a mediation model (x, m, and y). Following your guidelines (published in Jak & Cheung, 2018), I managed to understand and test a possible moderator effect on the direct paths. I still have some questions:
1. First, just to check: if the unconstrained model is just-identified, then any comparison with this model (delta chi-square) makes no sense, is that true?
2. In your example, all direct effects were constrained to equality in the constrained model. If I were interested in constraining only a specific effect (perhaps only "a", or only "b"), could I do so with this method? Maybe I should change the S matrix, but how?
3. My last question is whether it is possible to use the same (or a similar) procedure to test a possible moderator effect on an indirect path (a*b). If not, can you recommend another method for comparing the estimated indirect effects, given that I have estimated the indirect effect in both populations (low and high)?
Best regards,
Robert
Dear Robert,
Are you using the subgroup analysis (multiple-group analysis) or the osmasem?
Regarding your questions,
1) When you apply constraints, your model becomes the constrained (over-identified) model. You may test the constraints against the unconstrained model with a likelihood-ratio test.
2) You should be able to test the direct or specific effects via the A matrix.
3) I don't think that moderators can be used to predict a*b. The moderators can be used to predict a and b separately.
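As a rough sketch of point 2, using the Digman97 data split into two arbitrary subgroups: giving a parameter the same label in both groups' A matrices equates it across groups once the two stage 2 models are combined. The split, the labels, and the use of the mx.fit slot with mxFitFunctionMultigroup() are my assumptions, loosely following the multigroup strategy in Jak & Cheung (2018); internals of the fitted metaSEM objects may differ across versions.

```r
library(metaSEM)
library(OpenMx)

## Two hypothetical subgroups: the first 7 and the last 7 studies
stage1_lo <- tssem1(Digman97$data[1:7],  Digman97$n[1:7],  method="REM", RE.type="Diag")
stage1_hi <- tssem1(Digman97$data[8:14], Digman97$n[8:14], method="REM", RE.type="Diag")

vars <- c(colnames(Digman97$data[[1]]), "Alpha", "Beta")

## A matrix of the two-factor model for one group. The loading of A on
## Alpha gets the SAME label ("L_A") in both groups, which equates it
## across groups once the models are combined; the remaining loadings
## get group-specific labels and stay free.
makeA <- function(g) {
  Lambda <- matrix(c("0.3*L_A",
                     paste0("0.3*L_C_", g), paste0("0.3*L_ES_", g), 0, 0,
                     0, 0, 0,
                     paste0("0.3*L_E_", g), paste0("0.3*L_I_", g)),
                   nrow=5, ncol=2)
  A <- rbind(cbind(matrix(0, 5, 5), Lambda), matrix(0, 2, 7))
  dimnames(A) <- list(vars, vars)
  A
}

S <- bdiagMat(list(Diag(paste0("0.2*e", 1:5)),
                   matrix(c(1, "0.3*cor", "0.3*cor", 1), 2, 2)))
dimnames(S) <- list(vars, vars)

fit_lo <- tssem2(stage1_lo, Amatrix=makeA("lo"), Smatrix=S, model.name="lo")
fit_hi <- tssem2(stage1_hi, Amatrix=makeA("hi"), Smatrix=S, model.name="hi")

## Combine the two stage 2 models so the shared label "L_A" becomes an
## across-group equality constraint, then refit and inspect
multi <- mxRun(mxModel("both", fit_lo$mx.fit, fit_hi$mx.fit,
                       mxFitFunctionMultigroup(c("lo", "hi"))))
summary(multi)
```

Comparing this constrained model with an unconstrained one (all labels group-specific) via a likelihood-ratio test then tests moderation of that single path.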
Best,
Mike
Dear Mike,
I am using the TSSEM multi-group approach. I am not familiar with OSMASEM yet; I have read about it, but nothing specific. It will be the next step in my professional development. I wonder whether your answers to my questions would be different if I used OSMASEM?
Best regards
Robert
Dear Robert,
Since your moderator is binary, both the TSSEM multi-group analysis and OSMASEM are the same. If your moderators are continuous, you may try OSMASEM.
Best,
Mike
Dear Mike,
I'm still struggling with my mediation model, more specifically with the moderated indirect effect. Can you recommend a method within MASEM that allows the estimation of a possible moderation of the indirect effect? My research setting is based on correlation matrices. What do you think about the possibility of computing effect sizes for the indirect effect in both subgroups and then comparing them? Or maybe I could compute the difference between the effect sizes and test it, like a t-test?
Best regards,
Robert
Dear Robert,
If you want to model the indirect effect, not the individual coefficients, you may compute the indirect effect as an effect size. Then you can apply a mixed-effects meta-analysis on the indirect effect. This is called the parameter-based MASEM in the following paper.
Cheung, M. W.-L., & Cheung, S. F. (2016). Random-effects models for meta-analytic structural equation modeling: Review, issues, and illustrations. Research Synthesis Methods, 7(2), 140–155. https://doi.org/10.1002/jrsm.1166
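A toy sketch of this parameter-based approach (all numbers are fabricated; the delta-method variance of a*b below is the standard first-order approximation, and meta() is the mixed-effects meta-analysis function in metaSEM):

```r
library(metaSEM)

## Hypothetical per-study path estimates (a: x->m, b: m->y), their
## sampling variances, and a binary moderator (0 = low, 1 = high)
a   <- c(.30, .25, .41, .28, .35)
va  <- c(.010, .012, .008, .015, .009)
b   <- c(.22, .18, .30, .25, .20)
vb  <- c(.011, .014, .009, .013, .010)
mod <- c(0, 0, 1, 1, 1)

## Indirect effect per study and its first-order delta-method variance
ab  <- a * b
vab <- a^2 * vb + b^2 * va

## Mixed-effects meta-analysis of the indirect effect with the
## moderator as a predictor
summary(meta(y=ab, v=vab, x=mod))
```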
Best,
Mike