# Error for testing assumptions of ACE model

Joined: 11/19/2018 - 19:32
Error for testing assumptions of ACE model

One last question, hopefully, for now. I am using one of the scripts by Hermine Maes that uses umx to test the general assumptions of the twin design: oneSATu.pdf

For a given phenotype I've gotten this error:

> All fit attempts resulted in errors - check starting values or model specification
> Error in if (rfu == "r'Wr") { : argument is of length zero

> Warning messages:
> 1: In model 'oneSATcu' Optimizer returned a non-zero status code 6. The model does not satisfy the first-order optimality conditions to the required accuracy, and no improved point for the merit function could be found during the final linesearch (Mx status RED)

My thought is maybe to use mxTryHard() (increasing the number of fit attempts in some manner), but I am now wondering whether there is also a way to alter the starting values in umx, or some other general solution. I am not sure how that works alongside the umxEquate()/umxSuperModel()/umxRAM() parts of the script, as compared to the ACE model.
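For what it's worth, here is a rough sketch of both approaches, assuming the fitted supermodel object is called `fitSATcu` and using a hypothetical parameter label (`"mean_ht"`) -- run `parameters()` on your own model to see the real labels:

```r
library(umx)  # also loads OpenMx

# List free parameters, their labels, and current (start) values
parameters(fitSATcu)

# Option 1: retry fitting many times from jittered starting values
fitSATcu2 <- mxTryHard(fitSATcu, extraTries = 30)

# Option 2: set a specific start value (and bounds) by label, then re-run.
# "mean_ht" is a placeholder label -- substitute one from parameters().
tmp <- omxSetParameters(fitSATcu, labels = "mean_ht", values = 0.5)
fitSATcu2 <- mxRun(tmp)
```

Because umx models are ordinary OpenMx models underneath, `mxTryHard()` and `omxSetParameters()` work on them the same way they would on a hand-built ACE model.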

Thanks as always!

Joined: 11/25/2020 - 13:24
Can't reproduce exactly

Stating the line where the error appears would help.

I tested using SLSQP and the umx GitHub (developer) release, and I get a subscript-out-of-bounds error at this line:

```
fitSATcu <- umxSuperModel('oneSATcu', modelMZc, modelDZc)
Running oneSATcu with 10 parameters
?umxSummary std=T|F', digits, report= 'html', filter= 'NS' & more
Running Saturated oneSATcu with 10 parameters
Running Independence oneSATcu with 8 parameters
Error incurred trying to run umxSummary
subscript out of bounds
```

System:

OpenMx version: 2.19.6 [GIT v2.19.6]
R version: R version 4.1.1 (2021-08-10)
Platform: x86_64-solus-linux-gnu
Default optimizer: SLSQP

Joined: 11/19/2018 - 19:32
Thanks for this

Thanks for posting and looking into that.

I looked into it with the pure OpenMx script version and was able to modify the start values directly from there.

One concern, however, is that by just changing the mean start value from the baseline in the script, I can get very different results for some of the estimates, for example:

```
     base comparison ep   minus2LL  df        AIC    diffLL diffdf          p
1 oneSATc            10 -39.158196 106 -19.158196        NA     NA         NA
2 oneSATc    oneEMOc  8 -36.768216 108 -20.768216 2.3899796      2 0.30270704
3 oneSATc   oneEMVOc  6 -34.070176 110 -22.070176 5.0880203      4 0.27838472
4 oneSATc   oneEMVZc  4 -32.634967 112 -24.634967 6.5232291      6 0.36719360
```

vs

```
     base comparison ep   minus2LL  df        AIC      diffLL diffdf             p
1 oneSATc            10 -39.158196 106 -19.158196          NA     NA            NA
2 oneSATc    oneEMOc  8 -36.767492 108 -20.767492   2.3907044      2 3.0259737e-01
3 oneSATc   oneEMVOc  6 193.829300 110 205.829300 232.9874961      4 3.0020837e-49
4 oneSATc   oneEMVZc  4 193.849603 112 201.849603 233.0077988      6 1.7462459e-47
```

The minus2LL is much higher in the latter two cases (rows 3 and 4) with that start value, which differs from the default in the script. So I am thinking of just keeping the default: specifically, altering only the lower bound of the variance allows convergence for me, whereas changing the mean starting value causes a huge difference in the results.

Given how sensitive these fits are to the start values, how can you be sure the estimate/p-value you get is reliable? This could be off, but is reliability just based on (or related to) the minus2LL? And if that's the case, would you need to run the model multiple times to be certain you got a reasonable minus2LL?
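One way to check this, sketched below (again assuming the model object is `fitSATcu`), is to refit from several jittered starting points and confirm they agree on the minimum -2LL; this is essentially what mxTryHard() automates internally:

```r
library(umx)  # also loads OpenMx

# Refit several times; mxTryHard jitters the start values on each retry
fits <- lapply(1:5, function(i) mxTryHard(fitSATcu, extraTries = 15))

# Extract the final -2LL from each refit
m2LL <- sapply(fits, function(f) f$output$Minus2LogLikelihood)

range(m2LL)                  # a tight range suggests a stable global minimum
best <- fits[[which.min(m2LL)]]  # keep the best-fitting solution
```

If the refits land on clearly different -2LL values, the solution with the lowest -2LL is the one to trust (the others are likely local minima or failed line searches), and it is worth tightening bounds or improving start values until the refits agree.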

Joined: 07/31/2009 - 14:25
umxSummary() for multi group RAM models

Hi, (and thanks @lf-araujo for the precise problem!)
Since the demo by Hermine, I added code to umxSummary to sort the parameters of RAM models by type (residuals, latents etc). This makes understanding model output easier, but I didn't check the code worked with multi-group RAM models. It doesn't :-)

Fixed now in the developer version. Will push to CRAN, perhaps prior to Christmas.
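For anyone following along before the CRAN release, the developer version can be installed from GitHub roughly like this (repo name `tbates/umx` assumed; restart R afterwards):

```r
# Install the development release of umx from GitHub
install.packages("devtools")
devtools::install_github("tbates/umx")

library(umx)
umx_version()  # confirm the installed version
```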

Joined: 11/19/2018 - 19:32
Results reported

Interesting!

I think this may have been a separate issue from my initial one, which, as I understand it, was strictly to do with starting values that didn't allow convergence.

Does this related bug fix you've mentioned (which I guess I didn't notice initially, because I had suppressed warnings) make any changes to the actual values you get from the final umxCompare result? I am able to grab values from that, and I think they make sense; I just wanted to be sure that part is fine.

Thanks a lot!

Joined: 07/31/2009 - 14:25
umxCompare values are all fine

umxCompare values are all fine. It was just the umxSuperModel() model which flummoxed umxSummary when trying to figure out what type (variance, residual, etc.) each path is, because the top model doesn't have any...

The bug fix now inspects the submodels.

Joined: 11/19/2018 - 19:32
Makes sense

Got it--makes sense and sounds good! Glad it's fixed, for sure :)