lower bounds on slopes of dichotomous and graded item response models

Posted on
falkcarl Joined: 10/29/2015
When estimating item factor analysis models in OpenMx, is there a way to override the lower bound on the item slopes (currently 1e-6, I think) for the dichotomous and graded item response models? For example, imagine one wants to fit a multidimensional model, perhaps with cross-loadings, or one has reverse-worded items and expects some item slopes to go negative. I believe these bounds may be set in the rpf package, since rpf.ParamInfo returns this lower bound for the rpf.drm and rpf.grm item models. Currently, when attempting to fit such a model, I get an error such as:

Error in runHelper(model, frontendStart, intervals, silent, suppressWarnings, :
Starting value const.item[1,1] -1.000000 less than lower bound 0.000001

If you need code to reproduce this, I understand, let me know.

Many thanks!

Replied on Sun, 01/17/2016 - 06:28
jpritikin Joined: 05/23/2012

> I'm wondering if there's a way to override the lower boundary on the item slopes (currently 1e-6, I think) for the dichotomous and graded item response models.

Sure. Just set the itemMatrix$lbound layer of the mxMatrix before you pass the model to mxRun. rpf.ParamInfo is used as a default if the user does not specify a finite lower bound.
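A minimal sketch of what that might look like, assuming the item parameter mxMatrix is named "item" and its first two rows hold the slopes (the model, matrix name, and row indices are assumptions, not taken from this thread):

    library(OpenMx)

    # Hypothetical setup: 'model' already contains an item parameter mxMatrix
    # named "item" whose first two rows are the slope (a) parameters.
    imat <- model$item

    # Replace the default 1e-6 bound with a finite negative value; a finite
    # user-specified bound takes the place of the rpf default.
    imat$lbound[1:2, ] <- -1e6

    # Put the modified matrix back into the model and run.
    model <- mxModel(model, imat)
    fit <- mxRun(model)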

That said, my vague recollection is that the analytic derivatives assume the slopes are positive. Go ahead and try it, but you may get non-finite derivatives.

> one has reverse worded items and it is expected that some item slopes will go negative

The way I recommend you handle this is to reverse the outcomes when you use mxFactor: mxFactor(..., levels=rev(c("a","b","c"))) instead of the usual order.
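For example (a sketch; the data frame and column names here are made up):

    # Reverse-code item x3 at the data step rather than allowing a negative slope.
    # Listing the levels in reversed order flips the direction of the item.
    data$x3 <- mxFactor(data$x3, levels=rev(c("a","b","c")))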

> imagine one wants to fit a multidimensional model, perhaps w/ crossloadings

I'm not sure I follow you. Why does this require negative slopes?

Replied on Sun, 01/17/2016 - 09:35
falkcarl Joined: 10/29/2015

In reply to jpritikin

Got it, thanks. Re: the multidimensional model, imagine you have an item that loads on two factors and you expect one slope to be negative and the other to be positive. If I understand correctly, reversing the outcomes may not work in this case. Not yet sure if the currently implemented derivatives work.
Replied on Sun, 01/17/2016 - 09:46
jpritikin Joined: 05/23/2012

In reply to falkcarl

> imagine you have an item that loads on two factors and you expect one slope to be negative and the other to be positive.

Oh, hm. I'm not sure. My impression was that you fit a multidimensional model with all slopes constrained positive and then rotate the loadings afterward using your favorite factor rotation. After rotation, some of the slopes can be negative. My sense is that you don't gain anything from allowing negative slopes during optimization, apart from unwanted degrees of freedom that leave the model unidentified. I might be wrong though.
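Something along these lines, as a sketch (it assumes a fitted two-factor model 'fit' whose item matrix is named "item" with slopes in the first two rows; oblimin from GPArotation is just one choice of rotation):

    # Pull the estimated slopes from the fitted item matrix (items are columns),
    # transpose to an items x factors matrix, and rotate obliquely. Some of the
    # rotated slopes can come out negative even though optimization kept them positive.
    slopes <- t(fit$item$values[1:2, ])
    rotated <- GPArotation::oblimin(slopes)$loadings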