At the Developers Meeting on 7/13/12 we discussed the following:
- The plan to implement multilevel modeling in OpenMx. The group will meet in person at MPI starting on 8/13, with additional development team members joining via Visimeet. Before the 8/13 development begins, at least two tasks need to be completed:
- 1. The front-end change to OpenMx that will split mx*Objective into mxFit and mxExpectation. Tim is working on a detailed specification for these changes to send to Michael Spiegel.
- 2. Beginning development of the multilevel interface that the group had previously discussed and agreed on.
- The group discussed that analytical gradients have been turned on in the trunk of the OpenMx source code repository. All of the tests included with OpenMx pass with gradients turned on. Currently the biggest speedups are seen in models with many free parameters and large amounts of covariance data; in some cases the gradients achieve a 3x speedup. The group discussed a number of approaches for achieving greater speedups in other cases, as well as approaches to "blend" analytical and numerical gradients.
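One way to picture "blending" analytical and numerical gradients is per parameter: use an analytic partial derivative where one is available, and fall back to a finite difference otherwise. The sketch below is an illustrative assumption of that idea in plain Python, not OpenMx's actual blending scheme.

```python
def blended_gradient(f, x, analytic=None, h=1e-6):
    """Gradient of f at point x, blending two sources per parameter:
    an analytic partial where one is supplied, a central finite
    difference otherwise.

    `analytic` maps a parameter index to a function computing that
    partial; this blending scheme is a sketch, not OpenMx's code.
    """
    analytic = analytic or {}
    grad = []
    for i in range(len(x)):
        if i in analytic:
            grad.append(analytic[i](x))  # exact analytic partial
        else:
            xp = list(x); xp[i] += h     # central difference fallback
            xm = list(x); xm[i] -= h
            grad.append((f(xp) - f(xm)) / (2 * h))
    return grad
```

The analytic entries cost one function evaluation or less each, while every numeric entry costs two, which is one reason analytic gradients help most in models with many free parameters.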
- The group discussed OpenMx's ability to provide users with continuous time modeling capabilities. Mplus claims to support continuous time modeling as well; however, it is not obvious that the matrix exponentiation it has implemented actually meets the required specification.
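The matrix exponentiation in question arises because, in a continuous-time model, the effect over a discrete interval dt is obtained from a drift matrix A as expm(A*dt). The sketch below computes this with a truncated Taylor series in plain Python; the drift values are made-up illustrations, and this is not OpenMx or Mplus code (production implementations use scaling-and-squaring or Padé approximation instead).

```python
import math

def mat_mult(X, Y):
    """Multiply two square matrices given as lists of lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_expm(A, terms=30):
    """Matrix exponential via truncated Taylor series sum(A^k / k!).

    Adequate for small, well-scaled matrices like the one below;
    a real implementation would use scaling-and-squaring.
    """
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]   # term_k = A^k / k!, starting at I
    for k in range(1, terms):
        term = mat_mult(term, A)
        term = [[term[i][j] / k for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# Hypothetical drift matrix and interval (values are illustrative only):
A = [[-0.5, 0.2], [0.1, -0.3]]
dt = 1.0
discrete_effect = mat_expm([[a * dt for a in row] for row in A])
```

For a diagonal drift matrix the result reduces to elementwise exp of the diagonal, which gives a quick sanity check on any implementation.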
- The group discussed the default number of iterations used during optimization, and in particular 'Code 4' errors, which indicate that the optimizer did not converge within the given number of iterations. Two changes were proposed and will be implemented in the trunk:
- 1. The default number of iterations should be determined by a function of the number of parameters and constraints in the model.
- 2. Users are currently able to specify the number of iterations with an option. Users should also be able to supply their own function of the numbers of parameters and constraints that determines the number of iterations.
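The two proposals above can be sketched as a pair of functions of the same two quantities: a built-in default rule and a user-supplied override. The constants and function names below are illustrative assumptions, not the values or API that OpenMx adopted.

```python
def default_major_iterations(n_params, n_constraints,
                             base=50, per_unit=3):
    """Hypothetical default rule: grow the iteration budget with
    model size. `base` and `per_unit` are illustrative constants,
    not OpenMx's actual values."""
    return base + per_unit * (n_params + n_constraints)

def my_iteration_rule(n_params, n_constraints):
    """A user-written override, per the second proposal: any function
    of the numbers of parameters and constraints."""
    return 100 * max(1, n_constraints) + 10 * n_params
```

Either rule would be consulted once before optimization begins, replacing a single fixed default that large models can outgrow.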
- The group discussed several cases where thresholds within a model crossed during optimization. The group began to discuss, but did not fully work through, the ramifications of employing linear constraints and penalty functions to prevent this behavior.
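The penalty-function idea can be sketched as a term added to the fit function that is zero while the thresholds stay in order and grows with the size of any crossing. The quadratic form and the weight below are illustrative assumptions, not a decided design.

```python
def threshold_penalty(thresholds, weight=1000.0):
    """Penalty for out-of-order thresholds.

    Zero when `thresholds` is non-decreasing; otherwise grows with
    the squared size of each violation. `weight` is an illustrative
    tuning constant, not a value chosen by the OpenMx team.
    """
    violations = (max(0.0, lo - hi)
                  for lo, hi in zip(thresholds, thresholds[1:]))
    return weight * sum(v * v for v in violations)
```

A hard linear constraint (t1 <= t2 <= ...) rules crossings out entirely, while a penalty only discourages them; that trade-off is part of what the group left unresolved.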