I have a metaSEM model (fit using metaSEM::meta) for which I wish to test constraints on the estimated intercepts. I've been using car::linearHypothesis because it's extremely flexible -- it works on any model that provides a variance-covariance matrix (via vcov()) and a vector of estimated coefficients (via coef()). Given a metaSEM model, car::linearHypothesis seems to allow testing joint hypotheses (e.g., the hypothesis that Intercept1 = 0 AND Intercept2 = 0) and complex hypotheses (e.g., the hypothesis that Intercept1 = Intercept2). It reports chi-square tests of the full versus the restricted model in its output.
My question is this: Is this use of car::linearHypothesis valid? Is there a better way to test these hypotheses?
I believe that car::linearHypothesis uses coef() and vcov() to extract the parameter estimates and their sampling covariance matrix, and then computes a Wald test of the constraints.
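In symbols (this is my understanding of the standard Wald construction, not taken from the car documentation): writing the constraints as R*theta = r, with theta-hat from coef() and V-hat from vcov(), the statistic is

```latex
W = (R\hat{\theta} - r)^{\top} \left( R \hat{V} R^{\top} \right)^{-1} (R\hat{\theta} - r),
```

which is referred to a chi-square distribution with degrees of freedom equal to the number of independent constraints (the rank of R).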
Since metaSEM uses an SEM approach, I prefer to use a likelihood ratio test. Please see the attached example.
Thanks, yes, I think you're right that car::linearHypothesis is doing a Wald test.
Can you say more about why you prefer a likelihood ratio test? Or do you have a reading I can consult that describes the tradeoffs of Wald vs. LR tests? (FYI: I've found car::linearHypothesis() to be great for complex types of linear constraints.)
To give an explicit example of what a "complex linear constraint" might mean --
Say I have three intercepts, "Intercept1", "Intercept2", and "Intercept3". I want to test the hypothesis that
Intercept1 = Intercept2 + Intercept3
This is easy using car::linearHypothesis, but I can't see an easy way to do it using the intercept.constraints argument to meta(), since I can't just apply the same label to Intercept1, Intercept2, and Intercept3.
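For concreteness, here is a runnable sketch of what I mean. The data are simulated purely for illustration (my_data and its variable names are hypothetical, not from my actual analysis); the v columns follow metaSEM's vech (lower-triangle, column-major) ordering.

```r
library(metaSEM)
library(car)

## Simulated multivariate meta-analytic data (illustrative only)
set.seed(1)
k <- 30
my_data <- data.frame(
  y1 = rnorm(k, 0.5, 0.2), y2 = rnorm(k, 0.3, 0.2), y3 = rnorm(k, 0.2, 0.2),
  v11 = 0.04, v21 = 0.01, v31 = 0.01, v22 = 0.04, v32 = 0.01, v33 = 0.04)

## Multivariate random-effects model; metaSEM labels the fixed effects
## Intercept1, Intercept2, Intercept3
fit <- meta(y = cbind(y1, y2, y3),
            v = cbind(v11, v21, v31, v22, v32, v33),
            data = my_data)

## Wald test of the complex linear constraint
lh <- linearHypothesis(fit, "Intercept1 = Intercept2 + Intercept3")
lh
```

The constraint string refers to the coefficient names exactly as they appear in coef(fit), which is what makes this so convenient.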
The Wald test depends on how the model is parameterized, whereas the LR test is invariant to the parameterization. The following are a few references.
Arbitrary constraints can be included with the mxConstraint function (see the attached example).
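Since the original attachment isn't reproduced here, the following is my sketch of this approach, not the attached example itself. It relies on my reading of the docs that meta() passes extra arguments through to mxModel() and that OpenMx lets mxConstraint() refer to free parameters by their labels; the data are simulated for illustration.

```r
library(metaSEM)  # loads OpenMx as well

## Simulated multivariate meta-analytic data (illustrative only)
set.seed(1)
k <- 30
my_data <- data.frame(
  y1 = rnorm(k, 0.5, 0.2), y2 = rnorm(k, 0.3, 0.2), y3 = rnorm(k, 0.2, 0.2),
  v11 = 0.04, v21 = 0.01, v31 = 0.01, v22 = 0.04, v32 = 0.01, v33 = 0.04)

## Unconstrained model
fit0 <- meta(y = cbind(y1, y2, y3),
             v = cbind(v11, v21, v31, v22, v32, v33),
             data = my_data)

## Constrained model: the mxConstraint is passed through meta() to mxModel(),
## referencing the intercepts by their parameter labels
fit1 <- meta(y = cbind(y1, y2, y3),
             v = cbind(v11, v21, v31, v22, v32, v33),
             data = my_data,
             mxConstraint(Intercept1 == Intercept2 + Intercept3,
                          name = "Int1_eq_Int2_plus_Int3"))

## Likelihood ratio test: compare -2 log-likelihoods (1 df here)
anova(fit0, fit1)
```

The constrained fit can only worsen (raise) the -2 log-likelihood, and the difference is referred to a chi-square distribution with df equal to the number of constraints.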
Engle, R. F. (1984). Wald, likelihood ratio, and Lagrange multiplier tests in econometrics. In Z. Griliches & M. D. Intriligator (Eds.), Handbook of Econometrics (Vol. 2, pp. 775–826). Retrieved from http://www.sciencedirect.com/science/article/pii/S1573441284020055
Gonzalez, R., & Griffin, D. (2001). Testing parameters in structural equation modeling: Every “one” matters. Psychological Methods, 6(3), 258–269. https://doi.org/10.1037/1082-989X.6.3.258
Neale, M. C., & Miller, M. B. (1997). The use of likelihood-based confidence intervals in genetic models. Behavior Genetics, 27(2), 113–120. https://doi.org/10.1023/A:1025681223921
Thanks, this is super helpful, as always. :)