Just a thought on how to get parallelizability going between submodels.
This would be a major saving in time for many users of multi-group models, where the submodels have independent data and could be run as stand-alone independent models if they did not also share some parameters (typically located in a top model).
It's straightforward to compute which models a model depends on (just enumerate the algebras etc. that reference a model name other than its own).
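That dependency scan might look roughly like the sketch below. The function name `external_dependencies` and the dotted-reference convention (`top.a`) are illustrative assumptions, not actual OpenMx internals:

```python
import re

# Hypothetical sketch: find which other models a model depends on by
# scanning its algebra strings for names qualified with another model's
# name (e.g. "top.a").  Not real OpenMx code.

def external_dependencies(my_name, algebras):
    deps = set()
    for expr in algebras:
        # Any "name.symbol" reference where name isn't mine is external.
        for ref in re.findall(r"\b(\w+)\.\w+", expr):
            if ref != my_name:
                deps.add(ref)
    return deps

# The mz submodel's expected covariance references "top" three times:
mz_algebras = ["top.a %*% t(top.a) + top.c %*% t(top.c) + top.e %*% t(top.e)"]
print(external_dependencies("mz", mz_algebras))   # {'top'}
```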
A model can't currently tell whether anyone depends on it, but these "master" models need to be able to synchronize the copies of themselves they hand out to models running on other cores/threads.
To get around this (and rather than ask users to manually maintain dependencies), at mxRun time, dependent models could register their requirements with the model upon which they depend.
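A minimal sketch of that registration step, assuming a hypothetical `Model` class with a `register_dependent` method (invented here for illustration; not the OpenMx API):

```python
# Hypothetical sketch of dependency registration at mxRun time.
# Dependents announce themselves to the model they depend on, so the
# user never maintains the dependency list by hand.

class Model:
    def __init__(self, name, free_params=()):
        self.name = name
        self.free_params = set(free_params)
        self.dependents = []   # names of models that consume our parameters

    def register_dependent(self, sub):
        # Called once, automatically, when the supermodel is run.
        self.dependents.append(sub.name)

top = Model("top", free_params=["a", "c", "e"])
mz = Model("mz")
dz = Model("dz")
for sub in (mz, dz):
    top.register_dependent(sub)

print(top.dependents)   # ['mz', 'dz']
```

With this in place, "top" knows exactly how many checkouts to expect per iteration.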
To parallelize the supermodel, take a common example where parameters a, c, and e live in "top" and are used in submodels "mz" and "dz".
On each iteration, mz and dz check out a copy of the parameters in top that they need (or, for simplicity, a copy of the whole "top" model).
"top" sets a flag saying that iteration has been checked out.
When mz finishes an iteration and wants new parameter values to estimate with, it simply requests a fresh copy.
top waits until all dependents have requested new copies, then does its own part of the optimization: proposing new parameter values based on its fit algebra. It then hands out the new (synchronized) copies of its state to the requesting submodels.
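The whole checkout/check-in cycle maps naturally onto a barrier. Below is a minimal sketch using Python threads; the class and function names (`TopModel`, `run_submodel`) and the placeholder parameter "jiggle" are assumptions for illustration, not the real backend optimizer:

```python
import threading

# Sketch of the checkout/check-in cycle.  The barrier's action fires
# exactly once per iteration, after ALL dependents have requested new
# copies -- that is top "doing its own part of the optimization".

class TopModel:
    def __init__(self, params, n_dependents):
        self.params = dict(params)
        self.lock = threading.Lock()
        self.iteration = 0
        self.barrier = threading.Barrier(n_dependents, action=self._update)

    def _update(self):
        # Placeholder for the optimizer step based on the fit algebra.
        self.iteration += 1
        for k in self.params:
            self.params[k] += 0.1   # fake "jiggle" of parameter values

    def checkout(self):
        # Hand out a synchronized snapshot of the current state.
        with self.lock:
            return dict(self.params)

    def request_new(self):
        # Blocks until every dependent has asked, then returns a copy.
        self.barrier.wait()
        return self.checkout()

def run_submodel(name, top, iters, results):
    params = top.checkout()
    for _ in range(iters):
        # ... fit this group's independent data using `params` ...
        params = top.request_new()
    results[name] = params

top = TopModel({"a": 0.5, "c": 0.3, "e": 0.2}, n_dependents=2)
results = {}
threads = [threading.Thread(target=run_submodel, args=(n, top, 3, results))
           for n in ("mz", "dz")]
for t in threads: t.start()
for t in threads: t.join()

# Both submodels finish each iteration with identical synchronized copies.
print(results["mz"] == results["dz"])   # True
print(top.iteration)                    # 3
```

The key design point is that neither submodel ever sees a half-updated "top": updates happen only inside the barrier action, while everyone is blocked waiting.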
If something like this would work in the backend, most twin models would speed up by perhaps 50%?