Hi,
I've been using the rerun() function with the autofixtau2 argument, which is getting our models to converge. While I appreciate that it may be choosing suitable starting values for the estimated parameters, I'm not entirely sure how the models converged (low number of studies and small N). In 'fixing' tau2, could it end up running a fixed-effects model? Are there conditions where autofixtau2 should or should not be used, and how should the results be interpreted?
If you could point me to any reading or explanations, I would be very grateful. I have done some reading on mxTryHard(), but I'm still not clear.
Thanks
Many problems with tssem1() and osmasem() are related to the variance component. A low number of studies and missing data are the primary reasons. The autofixtau2 argument is an ad-hoc approach to handle these problems: it first identifies variance components without SEs (NaN) and then fixes them at 0.
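As a rough base-R sketch of that idea (not metaSEM's actual implementation, which is in rerun.R; the function name fix_tau2 and the estimates below are made up for illustration):

```r
## Minimal sketch of the autofixtau2 idea: variance components whose
## SEs are NaN are fixed at 0, dropping those random effects.
fix_tau2 <- function(tau2, se) {
  bad <- is.nan(se)   # variance components without valid SEs
  tau2[bad] <- 0      # fix them at 0
  tau2
}

## Hypothetical estimates from a RE.type = "Diag" model:
tau2 <- c(Tau2_1_1 = 0.05, Tau2_2_2 = 0.10, Tau2_3_3 = 0.02)
se   <- c(0.01, NaN, NaN)   # two components have no standard error
fix_tau2(tau2, se)
## Tau2_1_1 Tau2_2_2 Tau2_3_3
##     0.05     0.00     0.00
```

Fixing all tau2 elements at 0 in this way would reduce the model to a fixed-effects model.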
You are correct that the problem is solved by gradually converting a random-effects model into a fixed-effects model. I hope to do some simulation work to evaluate its empirical performance. The implementation is short; it is available at https://github.com/mikewlcheung/metasem/blob/master/R/rerun.R (lines 9-50).
Mike
This is very helpful, thank you.
I'm getting an error now that I wasn't getting before, I wonder if you know what it might be?
When I run tssem1(), it runs fine (aside from the optimisation issue).
But when I run rerun(), I get an error:
Model1 <- tssem1(Cov=Model_A, Model_n, method="REM", RE.type="Diag")
Model1 <- rerun(Model1, autofixtau2 = TRUE)
Error in vector("list", .n) : invalid 'length' argument
Can you post the data and the R code, please?
Thanks, data and code attached.
You have loaded purrr after metaSEM. The rerun() being called is the one from purrr. You may use metaSEM::rerun().

By the way, there are only 4 studies. Results of the random-effects models are questionable. You may try to collect more data.