mxComputeGradientDescent {OpenMx} | R Documentation |
This optimizer does not require analytic derivatives of the fit function. The open-source version of OpenMx offers a single optimizer, CSOLNP (based on Ye, 1988). The proprietary version of OpenMx offers a choice of two optimizers, CSOLNP and NPSOL.
mxComputeGradientDescent(freeSet = NA_character_, ..., engine = NULL,
  fitfunction = "fitfunction", verbose = 0L, tolerance = NA_real_,
  useGradient = NULL, warmStart = NULL, nudgeZeroStarts = TRUE)
freeSet: names of the matrices containing free parameters
...: not used; forces the remaining arguments to be specified by name
engine: which optimizer engine to use, "NPSOL" or "CSOLNP"
fitfunction: name of the fit function (defaults to "fitfunction")
verbose: level of debugging output
tolerance: how close to the optimum is close enough (also known as the optimality tolerance)
useGradient: whether to use the analytic gradient (if available)
warmStart: a Cholesky-factored Hessian to use as the NPSOL starting Hessian
nudgeZeroStarts: whether to nudge any zero starting values prior to optimization (default TRUE)
Ye, Y. (1988). Interior algorithms for linear, quadratic, and linearly constrained convex programming (Unpublished doctoral dissertation). Stanford University, Stanford, CA.
library(OpenMx)
data(demoOneFactor)
factorModel <- mxModel(
  name = "One Factor",
  mxMatrix(type = "Full", nrow = 5, ncol = 1, free = FALSE, values = 0.2, name = "A"),
  mxMatrix(type = "Symm", nrow = 1, ncol = 1, free = FALSE, values = 1, name = "L"),
  mxMatrix(type = "Diag", nrow = 5, ncol = 5, free = TRUE, values = 1, name = "U"),
  mxAlgebra(expression = A %*% L %*% t(A) + U, name = "R"),
  mxExpectationNormal(covariance = "R", dimnames = names(demoOneFactor)),
  mxFitFunctionML(),
  mxData(observed = cov(demoOneFactor), type = "cov", numObs = 500),
  mxComputeSequence(steps = list(
    mxComputeGradientDescent(),
    mxComputeNumericDeriv(),
    mxComputeStandardError(),
    mxComputeHessianQuality()
  ))
)
factorModelFit <- mxRun(factorModel)
factorModelFit$output$conditionNumber  # 29.5