This is a proposal for a new objective function that performs a generic row-by-row computation on a data set, followed by a final computation that aggregates all the row-by-row results. The objective function takes four arguments:

mxRowObjective(rowAlgebra, rowResults = NA, reduceAlgebra = NA, name = NA)

All four arguments expect character (string) values. rowAlgebra is the name of an MxAlgebra in the model. This algebra is applied to each row of the model's data, and its result must be a (1 x m) matrix. The results of the row-by-row computations are collected into an (n x m) matrix, where n is the number of rows in the data, and that matrix is stored under the name given by the rowResults argument. If reduceAlgebra is NA, then the return value of the objective function is the rowResults matrix. Otherwise, the reduceAlgebra is executed and its value is returned.
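The intended evaluation order can be mimicked in plain R. This is a sketch of the semantics only, not of the OpenMx backend; `evalRowObjective` and its arguments are invented for illustration:

```r
# Plain-R sketch of the proposed evaluation semantics: apply the row
# algebra to each row, collect the (1 x m) results into an (n x m)
# rowResults matrix, then optionally reduce to a final value.
evalRowObjective <- function(data, rowAlgebra, reduceAlgebra = NULL) {
  rowResults <- t(apply(data, 1, rowAlgebra))   # n x m
  if (is.null(reduceAlgebra)) {
    rowResults                 # reduceAlgebra is NA: return the matrix
  } else {
    reduceAlgebra(rowResults)  # otherwise: return the reduced value
  }
}

d <- matrix(1:6, nrow = 3)
evalRowObjective(d, function(row) row^2, function(res) sum(res))  # 91
```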

For those of you who weren't at the meeting on Friday (yes, we did end up meeting after all), replace all occurrences of "mapAlgebra" with the string "rowAlgebra" in the function description above.

Oops. Corrected.

If reduceAlgebra = NA, so that the objective function returns the (n x m) rowResults matrix, then what does the optimizer operate on as an objective function value? Ordinarily, the optimizer expects a single scalar objective value, but I don't see one emerging in this case without additional shenanigans.

OK, so we could set loose a set of parallel, independent optimizations, one for each element of the rowResults matrix, but it seems unlikely that this is what the user would be trying to accomplish, and there are better courses of action for that sort of problem.

If reduceAlgebra = NA, so that the objective evaluates to the (n x m) rowResults matrix, and the MxRowObjective is the top objective function, then an error is thrown. This is similar to what happens when a FIML objective function is the top objective function and its 'vector' argument is TRUE.

The goal of the MxRowObjective is to be as generic as possible. FIML can be implemented with an MxRowObjective by supplying a reduceAlgebra that computes -2.0 * sum(log(likelihood)). By not providing a default reduction algebra, we are asking the user to think explicitly about, and then provide, the correct reduction operation for their model.
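To make that concrete, a FIML-style reduction might be wired up roughly as follows. This is a sketch against the proposed (not yet implemented) interface; the names 'rowLik', 'likelihoods', and 'fimlFit' are invented for illustration:

```r
# Hypothetical sketch of FIML via the proposed mxRowObjective.
# 'rowLik' is assumed to be an algebra yielding each row's likelihood
# (1 x 1), collected into the (n x 1) 'likelihoods' rowResults matrix.
reduceAlg <- mxAlgebra(-2.0 * sum(log(likelihoods)), name = "fimlFit")
objective <- mxRowObjective(rowAlgebra    = "rowLik",
                            rowResults    = "likelihoods",
                            reduceAlgebra = "fimlFit",
                            name          = "objective")
```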

Right, it should be a very useful feature. Such things have been accomplished in the past by using dummy likelihoods: one variable, with value zero for everyone in the sample, expected mean zero, and expected variance 1/(2*pi), yields a likelihood of 1.0 for each row, which can then be weighted by an arbitrary formula. I believe we need some backend equivalent of the is.na and filter commands to handle NA's in a dataset in an efficient fashion. Something to discuss at the developers' meeting, perhaps.
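As a quick sanity check on the dummy-likelihood arithmetic: a normal density with mean 0 and variance 1/(2*pi) evaluates to exactly 1.0 at a data value of 0, since 1/sqrt(2*pi*sigma^2) = 1/sqrt(1) = 1.

```r
# Base-R check of the dummy-likelihood trick described above.
dnorm(0, mean = 0, sd = sqrt(1 / (2 * pi)))   # 1
```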

Yes, I have been thinking about the NA issue as well. I think the solution we discussed at the last developers' meeting was to create a function omxFilterNA(x, y) that returns the value of 'y' wherever 'x' is NA, and 'x' otherwise. But I can't remember for sure.
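A minimal R-level sketch of those semantics, assuming my memory of the discussion is right (the backend version would presumably operate element-wise on matrices):

```r
# Sketch of the omxFilterNA(x, y) semantics described above:
# return y wherever x is NA, and x otherwise.
omxFilterNA <- function(x, y) {
  ifelse(is.na(x), y, x)
}

omxFilterNA(c(1, NA, 3), c(9, 9, 9))   # 1 9 3
```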

We really need some trimming/filtering functions. Imagine programming the raw full information maximum likelihood function for continuous normal data by hand. For each data vector with up to m variables, we have to see which of the m variables are present and filter the expected mean vector to match. Then we have to do the same thing to the rows and columns of the expected covariance matrix. Given the predilection for overloading functions rather than proliferating them, which is pretty common usage in R, I'd suggest

mxTrim(x,y,dim=1)

Suppose that x is made up of some observed values and some NA's. It could self-flatten by removing all the NA's using

mxTrim(x,x,1)

the mean vector could be trimmed by

mxTrim(x,expmean,1)

and the covariance matrix could be trimmed in both rows and columns by

mxTrim(x,expcov,dim=2).
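A base-R sketch of what the suggested mxTrim() could do. The function does not exist; the name, signature, and behavior are only what is proposed above:

```r
# Sketch of the suggested mxTrim(x, y, dim): drop from y the entries
# corresponding to NA's in x, in one dimension or in both.
mxTrim <- function(x, y, dim = 1) {
  keep <- !is.na(x)
  if (dim == 1) {
    y[keep]                       # trim a vector
  } else {
    y[keep, keep, drop = FALSE]   # trim both rows and columns
  }
}

x       <- c(1.2, NA, 0.7)   # one missing value
expmean <- c(0, 0, 0)
expcov  <- diag(3)

mxTrim(x, x, 1)              # 1.2 0.7
mxTrim(x, expmean, 1)        # 0 0
mxTrim(x, expcov, dim = 2)   # 2 x 2 identity
```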

Well, this is just a suggestion. In the ordinary run of things it will be much more efficient to use the built-in FIML function (where much less of the work needs to be done 'on the fly'), but being able to handle data vectors generically in this fashion will open the door to specifying other likelihood or fit functions that involve data vectors with NA's.