lower bounds on slopes of dichotomous and graded item response models
Posted by falkcarl
When estimating item factor analysis models in OpenMx, is there a way to override the lower bound on the item slopes (currently 1e-6, I think) for the dichotomous and graded item response models? For example, one may want to fit a multidimensional model, perhaps with cross-loadings, or have reverse-worded items, in which case some item slopes are expected to go negative. I believe these bounds are set in the rpf package, since rpf.ParamInfo returns this lower bound for the rpf.drm and rpf.grm item models. Currently, when attempting to fit such a model, I get an error such as:
    Error in runHelper(model, frontendStart, intervals, silent, suppressWarnings, :
      Starting value const.item[1,1] -1.000000 less than lower bound 0.000001
If you need code to reproduce this, let me know.
Many thanks!
lower bounds
Sure. Just set the itemMatrix$lbound layer of the mxMatrix before you pass the model to mxRun. rpf.ParamInfo is used as a default if the user does not specify a finite lower bound.
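For example, here is a minimal sketch; the matrix name, dimensions, and values are hypothetical placeholders for whatever your model actually uses:

    library(OpenMx)

    # Hypothetical 2 x 10 item parameter matrix: row 1 holds slopes,
    # row 2 holds intercepts.
    itemMat <- mxMatrix(name = "item", nrow = 2, ncol = 10,
                        free = TRUE, values = 1)

    # Supply a finite lower bound for the slope row; a finite user-specified
    # bound takes precedence over the 1e-6 default from rpf.ParamInfo.
    itemMat$lbound[1, ] <- -10

    # Then build and run the model as usual:
    # model <- mxModel(..., itemMat, ...)
    # fit <- mxRun(model)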
That said, my vague recollection is that the analytic derivatives assume that the slopes are positive. Go ahead and try it, but you may get non-finite derivatives.
> one has reverse worded items and it is expected that some item slopes will go negative
The way I recommend you handle this is to reverse the outcomes when you use mxFactor: mxFactor(..., levels=rev(c("a","b","c"))) instead of the usual order.
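A minimal sketch (the data frame raw and its column name are hypothetical):

    library(OpenMx)

    # Usual order:
    # raw$item1 <- mxFactor(raw$item1, levels = c("a", "b", "c"))

    # Reversed order for a reverse-worded item, so its slope can stay positive:
    raw$item1 <- mxFactor(raw$item1, levels = rev(c("a", "b", "c")))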
> imagine one wants to fit a multidimensional model, perhaps w/ crossloadings
I'm not sure I follow you. Why does this require negative slopes?
In reply to lower bounds by jpritikin
Got it, thanks. Re: the
In reply to Got it, thanks. Re: the by falkcarl
multidimensional model
Oh, hm. I'm not sure. My impression was that you fit a multidimensional model with all slopes constrained positive and then rotate the loadings using your favorite factor rotation; after rotating, some of the slopes can be negative. As far as I know, you don't gain anything from allowing negative slopes during optimization, except unwanted degrees of freedom that leave the model unidentified. I might be wrong, though.
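For instance, the estimated slope matrix from the constrained fit can be rotated afterwards; a sketch assuming the GPArotation package, where slopes is a hypothetical items-by-factors matrix of estimated loadings:

    library(GPArotation)

    # slopes: items x factors matrix estimated with all entries
    # constrained to be positive
    rotated <- oblimin(slopes)
    rotated$loadings  # after rotation, some loadings may be negative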