The developers' meeting began with some talk of migrating the web server and Subversion repository to a Mac mini server, and the logistical issues that migration would involve. The next topic of conversation was designing an interface for the likelihood variables in a FIML objective function.

There are two primary issues on this topic: how does the user gain access to the likelihood vector in OpenMx algebra statements? And what will be the protocol for the front-end and back-end to communicate with respect to the likelihood vector? A further question is how to make this interface scalable, so that a future genetic algorithm function could return a vector of results.

Here is the current proposal for resolving the likelihood topic. We relax the restriction that objective functions must always return a 1 x 1 matrix: instead, an objective function must always return an r x c matrix. If the objective function will be used directly by the optimizer, then it must return a 1 x 1 matrix. The dimensions of the objective function's return matrix cannot change over time, and they must be computable at the start of the call to mxRun(), before the optimizer is invoked. These requirements will allow conformability checking to succeed in the front-end.
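To make the rule concrete, here is a minimal sketch (plain Python, not OpenMx code; the function name and parameters are hypothetical) of the conformability check the front-end could perform: dimensions are declared once before the optimizer runs, and only a 1 x 1 objective may be handed to the optimizer directly.

```python
# Hypothetical front-end conformability check, following the proposal:
# every objective declares fixed r x c return dimensions at the start of
# mxRun(), and an optimizer-facing objective must be 1 x 1.

def check_objective_dims(rows, cols, used_by_optimizer):
    """Validate an objective's declared return dimensions.

    rows, cols: dimensions computed before the optimizer is invoked;
    they may not change during optimization.
    """
    if rows < 1 or cols < 1:
        raise ValueError("objective must return an r x c matrix with r, c >= 1")
    if used_by_optimizer and (rows, cols) != (1, 1):
        raise ValueError(
            "an objective called directly by the optimizer "
            "must return a 1 x 1 matrix"
        )
    return (rows, cols)

# A FIML objective returning an n x 1 likelihood vector is acceptable
# as long as it is consumed by algebra, not by the optimizer directly:
check_objective_dims(500, 1, used_by_optimizer=False)   # OK
# check_objective_dims(500, 1, used_by_optimizer=True)  # would raise
```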

Specifically, the details of this implementation with regard to the likelihood vector in the FIML objective function were not completely discussed. One solution would be to add an argument to mxFIMLObjective() that accepts a boolean value; this argument would need a name (suggestions?). For cases when MxFIMLObjective objects return a likelihood vector, this will be an n x 1 matrix (correct?), where n is the number of rows in the data object (correct?).

Suggestions, comments, and corrections are welcome on this proposal, as implementation will begin shortly.

There are two goals in making the likelihood vector accessible to the front end. One is outlier detection and, to some extent, troubleshooting problematic FIML optimization. A single likelihood vector is returned; this could then be logged, summed, and multiplied by -2 to get the usual -2lnL for an objective function. Obviously we don't want to be shunting this vector around too much (at all, really); it should only be passed to the front end at the end of any optimization, because it can be large if the sample size n is large. Sorry for that glimpse of the blindingly obvious (GOTBO).
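The log-sum-scale step described above is simple arithmetic; this NumPy illustration (not the OpenMx API) shows the single-component case, turning a returned n x 1 likelihood vector into -2lnL:

```python
# Illustration in plain NumPy of converting a per-row likelihood vector
# into the usual -2lnL: log each likelihood, sum, multiply by -2.
import numpy as np

def minus2lnL(likelihoods):
    """likelihoods: n-vector of per-row FIML likelihoods."""
    return -2.0 * np.sum(np.log(likelihoods))

lik = np.array([0.5, 0.25, 0.125])   # toy per-row likelihoods
print(minus2lnL(lik))                # equals -2 * ln(0.5 * 0.25 * 0.125)
```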

The second goal is to enable mixture distributions. I am presently not sure whether this second aim should interfere with the first, but one possibility is that, for a mixture distribution of k components, mxFIMLObjective would return an n x k matrix M: a likelihood for each data vector for each component of the mixture. An open question is whether the likelihood or the weighted likelihood should be returned in the elements of M. If it is the unweighted likelihood, then we should also return an n x k matrix of weights W. It is then relatively straightforward to multiply each row M[i,] elementwise by its corresponding row of weights W[i,], for each i = 1...n. The likelihood of an individual data vector is then the sum of the weighted likelihoods over the components of the mixture, and this can be logged, summed, and multiplied by -2 as in the single-component case. Note that, due to definition variables, the vector of weights W[i,] can be different for each row of the data. However, for some models the weight vectors will be the same for every row of W; in this case some economy could be made by returning just a single weight vector W[1,].
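The mixing arithmetic above can be sketched directly (NumPy, not OpenMx; the matrices M and W follow the definitions in the paragraph):

```python
# M is the n x k matrix of per-row, per-component likelihoods; W is the
# matching n x k matrix of weights (rows of W may differ because of
# definition variables). The row-wise weighted sum gives each data
# vector's likelihood, which is then logged, summed, and scaled by -2.
import numpy as np

def mixture_minus2lnL(M, W):
    row_lik = np.sum(M * W, axis=1)   # sum_j W[i,j] * M[i,j] for each row i
    return -2.0 * np.sum(np.log(row_lik))

M = np.array([[0.4, 0.1],             # n = 2 rows, k = 2 components
              [0.2, 0.3]])
W = np.array([[0.5, 0.5],             # here every row of W is identical,
              [0.5, 0.5]])            # so returning one row would suffice
print(mixture_minus2lnL(M, W))
```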

A couple of things should be noted about checking the validity of likelihoods. It is common to check whether a likelihood is zero or less, to avoid taking its log, and to flag to the optimizer that -2lnL cannot be evaluated for this particular set of trial values of the parameter estimates. In the case of a mixture distribution, it is OK for the weighted likelihood of some of the components to be zero (or indistinguishable from zero given machine precision) as long as the weighted sum over components is greater than zero. Mx1 uses a SAFELOG constant: the smallest number that can be safely logged without numerical problems.
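A sketch of this validity check (plain Python/NumPy; the floor value here is illustrative, not Mx1's actual SAFELOG constant): individual weighted components may be zero, and only a non-positive weighted sum makes a row unevaluable.

```python
# SAFELOG-style guard for a mixture row. A component's weighted
# likelihood may be zero; the row is rejected only if the weighted SUM
# over components is non-positive.
import numpy as np

SAFELOG_FLOOR = np.finfo(float).tiny   # illustrative floor, not Mx1's value

def safe_row_loglik(component_liks, weights):
    """Per-row log-likelihood for a k-component mixture."""
    total = float(np.dot(component_liks, weights))
    if total <= 0.0:
        # Flag to the optimizer that -2lnL cannot be evaluated
        # at these trial parameter values.
        raise FloatingPointError("row likelihood is not positive")
    return np.log(max(total, SAFELOG_FLOOR))

print(safe_row_loglik([0.0, 0.5], [0.5, 0.5]))   # one zero component is OK
```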

I agree there are two issues here. On the one hand, every omxFIMLObjective could already return a vector of likelihoods to the front-end when it finishes optimization, in much the same way that an omxMLObjective returns the saturated model likelihood--via the setFinalReturns() operator of the omxObjective structure. Returning the vector has not yet been implemented, but it would be easy to do.

For the other use, I think the proposal here would be to have mxFIMLObjective return an n x 1 vector of likelihoods if passed a specific option.

A mixture model would have k submodels, one for each component of the mixture. Then an algebra function would calculate the weighting vector and perform the mixing calculations.

An omxFIMLObjective could easily be modified not to return an error to the optimizer on a null value if the options specify that it should return a vector of likelihoods.

There was one more implicit change that doesn't come up in the Michaels' discussions above.

That is: what can call an objective function? Currently, AFAIK, objective functions are called only by the optimizer.

However, this relaxation of the always-return-a-1 x 1-matrix rule reflects the fact that objective functions can now be called by functions other than optimizers, e.g., by other objective functions.

This relaxation enables:

(a) conformability checking by the front end (as an evaluate call to the back end)

(b) returning the raw vector of likelihoods for a given set of parameter estimate values (or starting values) to the front end (again as an evaluate call to the back end). As Mike mentions, this will be a boon for people trying to debug FIML models that get stuck on one outlier data row.

(c) mixture distribution models, where the algebra in one objective function makes a call directly to an objective function, bypassing the optimizer in a submodel.

(d) parallelized FIML evaluations in the backend by subsetting very large data sets and sending them to, e.g., GPUs using OpenCL.

(e) Other uses may be found. For instance, this may be useful in implementing MCMC or GA-type optimizers.

One more thing to mention. We now have a requirement that every optimizer must understand what to do if it is instructed to call an objective function that returns a vector or matrix. In most cases this would be to throw an error, but that is up to the optimizer. This is a form of conformability checking and, I suppose, could be done in the front end.

OK, just one more thing. As I read through this thread, I wonder whether there is a place for a model that has one objective function for the optimizer and multiple objective functions not for the optimizer. Now _that's_ a can of worms, but it might lead to interesting extensions to how we think about optimization.