There MAY be a problem with optimization when using some likelihood functions in OpenMx. Specifically, the gradient and the standard errors from the output slots in an MxModel are consistent with minimizing -log(L). The calculatedHessian, however, is consistent with minimizing -2log(L). The attached R code provides an illustration.
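The factor-of-two mismatch is easy to reproduce outside OpenMx. The sketch below (in Python rather than R, purely for illustration; the model, names, and step size are all assumptions, not OpenMx internals) fits the mean of normal data and compares the numerical Hessian of -log(L) against that of -2log(L). The latter is exactly twice the former, so standard errors computed as sqrt(1/H) from the wrong-scaled Hessian would be off by a factor of sqrt(2).

```python
import numpy as np

# Toy stand-in for the issue: ML estimation of the mean of normal data with
# known sigma. We compare the Hessian of -log(L) with that of -2log(L).
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100)

def neg_log_lik(mu, sigma=2.0):
    # -log(L) for x ~ N(mu, sigma^2)
    n = x.size
    return (0.5 * n * np.log(2.0 * np.pi * sigma**2)
            + np.sum((x - mu) ** 2) / (2.0 * sigma**2))

def numeric_hessian(f, mu, h=1e-3):
    # Central second difference for a one-parameter objective.
    return (f(mu + h) - 2.0 * f(mu) + f(mu - h)) / h**2

mu_hat = x.mean()  # ML estimate of mu
H1 = numeric_hessian(neg_log_lik, mu_hat)                    # from -log(L)
H2 = numeric_hessian(lambda m: 2.0 * neg_log_lik(m), mu_hat)  # from -2log(L)

print(H2 / H1)  # ratio is 2: the -2log(L) Hessian is twice the -log(L) one
```

Whichever convention the optimizer uses internally, the output slots should all agree on one of them; here they would disagree by this factor of 2.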
The potential problem involves which function (-log(L) or -2log(L)) NPSOL minimizes. If it is -log(L), then everything is fine with respect to optimization, but OpenMx is incorrectly calculating the Hessian. If it is -2log(L), however, things MAY get gnarly. Each estimated gradient element will be twice as big as it should be: i.e., for a parameter X, d(-2log(L))/dX = 2 d(-log(L))/dX. One of the convergence criteria is based on the norm of the gradient: if g is the gradient, the squared norm equals t(g) %*% g. If the function minimized is -2log(L), then the squared norm of the gradient will be FOUR times its appropriate value. This could lead to convergence problems, particularly with ill-conditioned problems.
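The factor of four in the squared norm can be checked numerically as well. This sketch (again Python standing in for R, with an assumed toy model and an arbitrary evaluation point) differentiates both objectives at a point away from the optimum: every gradient element doubles, so t(g) %*% g quadruples.

```python
import numpy as np

# Same toy normal-mean model, evaluated away from the optimum to mimic an
# optimizer mid-run. Scaling the objective by 2 doubles the gradient and
# multiplies the squared norm t(g) %*% g by 4.
rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=50)

def neg_log_lik(mu, sigma=1.0):
    n = x.size
    return (0.5 * n * np.log(2.0 * np.pi * sigma**2)
            + np.sum((x - mu) ** 2) / (2.0 * sigma**2))

def numeric_grad(f, mu, h=1e-6):
    # Central first difference for a one-parameter objective.
    return (f(mu + h) - f(mu - h)) / (2.0 * h)

mu = 0.3  # an arbitrary point short of convergence
g1 = numeric_grad(neg_log_lik, mu)                     # d(-log L)/d(mu)
g2 = numeric_grad(lambda m: 2.0 * neg_log_lik(m), mu)  # d(-2log L)/d(mu)

sq_norm1 = g1 * g1  # t(g) %*% g under -log(L)
sq_norm2 = g2 * g2  # t(g) %*% g under -2log(L)
print(g2 / g1)            # ratio 2
print(sq_norm2 / sq_norm1)  # ratio 4
```

A gradient-norm convergence test tuned for -log(L) would therefore be applied to a quantity four times too large, which is most likely to matter near the boundary of convergence on ill-conditioned problems.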