Lower bound confidence interval NaN from time to time
I was wondering about some lower-bound NaN values I've come across. For example, when the AE model has C set to 0, the C variance component sometimes gets a NaN lower CI bound even though all three values (lower bound, estimate, upper bound) should be 0. More specifically, I have occasionally had NaN values for the lower confidence limit of the standardized A variance component. Does anyone know why this may be? Is there a work-around to adjust it? It affects relatively few estimates, but it does come up.
I am using the latest OpenMx version, and otherwise it seems to run fine in general (on a univariate model, etc.).
I know a CI can also be calculated via bootstrapping, but I am not entirely sure how to do that yet (maybe that would lend itself to resolving this issue).
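For what it's worth, here is a minimal sketch of the bootstrap route in OpenMx, assuming a fitted MxModel object named `fit` (the name is hypothetical): `mxBootstrap()` refits the model to resampled data, after which `summary()` can report quantile-based intervals.

```
library(OpenMx)
# Refit the model to resampled data many times (replications is adjustable).
boot <- mxBootstrap(fit, replications = 200)
# Report 95% bootstrap quantile intervals alongside the usual summary.
summary(boot, boot.quantile = c(0.025, 0.975))
```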
Definitely appreciate any advice!
verbose output?
In reply to verbose output? by AdminJosh
Thanks for the reply! I just
```
OpenMx version: 2.19.5 [GIT v2.19.5]
R version: R version 4.1.0 (2021-05-18)
Platform: x86_64-conda-linux-gnu
Default optimizer: SLSQP
NPSOL-enabled?: No
OpenMP-enabled?: Yes
```
I thought NPSOL was the default optimizer (at some point, at least, though maybe I am mistaken)? I'm not sure whether that may have something to do with it. At times it also gives NaN values for the upper CI bound of the C estimate (when C is fixed to 0).
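For reference, the CRAN build of OpenMx ships without NPSOL (hence `NPSOL-enabled?: No` above), and SLSQP is its default; NPSOL is only bundled with builds from the OpenMx website. Switching among the optimizers available in your build is one line; a sketch:

```
# Set the default optimizer for subsequent mxRun() calls.
mxOption(NULL, "Default optimizer", "CSOLNP")
```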
In reply to Thanks for the reply! I just by mirusem
verbose CI output
```
confidence intervals:
lbound estimate ubound note
common.A[1,1] 0.5566175 6.173024e-01 0.68400870
common.C[1,1] NA 2.406416e-13 0.05269798 !!!
common.E[1,1] 0.1537491 1.730463e-01 0.19563705
```
and here's the detailed output with `summary(model, verbose=TRUE)`:
```
CI details:
parameter side value fit diagnostic statusCode method a c e mean
1 common.A[1,1] lower 0.55661745 4071.507 success OK neale-miller-1997 0.7460680 5.399907e-04 0.4244517 21.39288
2 common.A[1,1] upper 0.68400870 4071.519 success OK neale-miller-1997 0.8270482 4.229033e-06 0.4095962 21.39341
3 common.C[1,1] lower 0.00000000 4067.663 alpha level not reached infeasible non-linear constraint neale-miller-1997 0.7856859 0.000000e+00 0.4159883 21.39293
4 common.C[1,1] upper 0.05269798 4071.549 success infeasible non-linear constraint neale-miller-1997 0.7560895 2.295604e-01 0.4181163 21.39237
5 common.E[1,1] lower 0.15374906 4071.505 success infeasible non-linear constraint neale-miller-1997 0.7968068 2.489554e-08 0.3921085 21.39306
6 common.E[1,1] upper 0.19563705 4071.512 success infeasible non-linear constraint neale-miller-1997 0.7729641 9.786281e-08 0.4423088 21.39289
```
In reply to verbose CI output by AdminJosh
Oh I see, yeah that looks
1) For the C lower-bound NaN case:
```
confidence intervals:
lbound estimate ubound note
MZ.StdVarComp[1,1] 0.07422233 0.3950067 0.6354808
MZ.StdVarComp[2,1] NA 0.0000000 0.0000000 !!!
MZ.StdVarComp[3,1] 0.36451917 0.6049933 0.9257777

CI details:
parameter side value fit diagnostic
1 MZ.StdVarComp[1,1] lower 0.07422233 -37.95743 success
2 MZ.StdVarComp[1,1] upper 0.63548083 -37.95903 success
3 MZ.StdVarComp[2,1] lower 0.00000000 -41.80563 alpha level not reached
4 MZ.StdVarComp[2,1] upper 0.00000000 -37.93677 success
5 MZ.StdVarComp[3,1] lower 0.36451917 -37.95903 success
6 MZ.StdVarComp[3,1] upper 0.92577767 -37.95743 success
statusCode method a e
1 OK neale-miller-1997 0.05779659 0.2041213
2 OK neale-miller-1997 0.18255742 0.1382638
3 infeasible non-linear constraint neale-miller-1997 0.13495746 0.1670206
4 infeasible non-linear constraint neale-miller-1997 0.17279164 0.1425522
5 infeasible non-linear constraint neale-miller-1997 0.18255742 0.1382638
6 infeasible non-linear constraint neale-miller-1997 0.05779659 0.2041213
```
2) For the A component case:
```
confidence intervals:
lbound estimate ubound note
MZ.StdVarComp[1,1] NA 4.752728e-19 0.184599 !!!
MZ.StdVarComp[2,1] 0.0000000 0.000000e+00 0.000000 !!!
MZ.StdVarComp[3,1] 0.8153433 1.000000e+00 1.000000

CI details:
parameter side value fit diagnostic
1 MZ.StdVarComp[1,1] lower 2.842761e-41 -252.6202 alpha level not reached
2 MZ.StdVarComp[1,1] upper 1.845990e-01 -248.7728 success
3 MZ.StdVarComp[2,1] lower 0.000000e+00 -248.7756 success
4 MZ.StdVarComp[2,1] upper 0.000000e+00 -248.7447 success
5 MZ.StdVarComp[3,1] lower 8.153433e-01 -248.7712 success
6 MZ.StdVarComp[3,1] upper 1.000000e+00 -248.8087 success
statusCode method a e
1 infeasible non-linear constraint neale-miller-1997 -5.366488e-22 0.10065144
2 infeasible non-linear constraint neale-miller-1997 -4.407308e-02 0.09262844
3 infeasible non-linear constraint neale-miller-1997 -2.513257e-02 0.10927582
4 infeasible non-linear constraint neale-miller-1997 -2.463694e-02 0.10243448
5 infeasible non-linear constraint neale-miller-1997 -4.407965e-02 0.09262449
6 infeasible non-linear constraint neale-miller-1997 -3.161669e-10 0.09930529
```
In reply to Oh I see, yeah that looks by mirusem
CI interpretation
If the upper bound is zero, then you can probably just regard the lower bound as zero. The algorithm is very particular and wants to find the correct amount of misfit, but the model is already backed into a corner and the optimizer gets stuck.
> 2) for the A component case:
This is similar to the first case. Here, the optimizer got closer to the target fit of -248.77, but didn't quite make it because the parameters got cornered again. 2.842761e-41 can be regarded as zero.
It looks like you're using the ACE parameterization that does not allow variance components to go negative. This model is miscalibrated and will result in biased intervals. For better inference, you should use the model that allows the variance components to go negative. For reporting, you can truncate the intervals to the interpretable [0,1] region.
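A minimal sketch of that direct-variance specification via the umx package's `umxACEv()` (discussed further downthread); the data-frame and variable names here are hypothetical, assuming wide twin data with columns `pheno_T1` and `pheno_T2`:

```
library(umx)
# Direct-variance ACE: A, C, and E are estimated as free variance components
# (allowed to go negative), rather than as squared path coefficients.
m1 <- umxACEv(selDVs = "pheno", sep = "_T", mzData = mzData, dzData = dzData)
```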
In reply to CI interpretation by AdminJosh
Got it. That makes a lot of
Do you have any references on allowing the variance components to go negative? I am not too familiar with this, and have seen some posts, but am not entirely sure where the best place to look would be.
In reply to Got it. That makes a lot of by mirusem
reference
If a model fixes the shared-environmental component to zero, then under that model, the lower confidence limit for the shared-environmental component is trivially zero (as is the upper confidence limit).
See here.
In reply to Got it. That makes a lot of by mirusem
Verhulst & Neale
There's a paper in Behavior Genetics about estimating unbounded A, C, and E variance components, instead of the usual implicitly-bounded path-coefficient specification, which constrains variance components to be non-negative. It's here: https://pubmed.ncbi.nlm.nih.gov/30569348/
Please let us know what you think, and if there are any remaining questions we could answer that would help you further.
In reply to Got it. That makes a lot of by mirusem
Thank you both (and I
As for umx (I am new to it), would it be sufficient to use the umxACEv function along with the umxConfint function (to get the standardized estimates plus standardized CIs)? Or is there some other pointer to look at? That's what I've gathered from one of the tutorials and from the documentation.
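A sketch of that combination, continuing from the hypothetical `m1` fitted with `umxACEv()` above:

```
# Request profile-likelihood CIs on all free parameters and run the searches.
m1_ci <- umxConfint(m1, parm = "all", run = TRUE)
summary(m1_ci)
```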
And if I go with this other approach (the direct-variance approach), in order to limit the range of the bounds, would I just do that after obtaining the estimates and associated CIs?
And, lastly, is it okay to just approximate the bounds that are very close to 0 when the optimizer fails (as was said above, outside of the trivial case where the component is already constrained to 0)? So, for example, the A estimate?
I really appreciate all the great amount of help you all have given!
In reply to Thank you both (and I by mirusem
So, I figured out most of
I appreciate it. If not, I hope there is a way for umx to skip the estimate instead of crashing, though I am not sure that is feasible either.
In reply to So, I figured out most of by mirusem
So it looks like this
If anyone has any advice, I appreciate it as always!
In reply to So, I figured out most of by mirusem
crashing?
In reply to crashing? by jpritikin
Hi jpritikin, thank you for
Hmm, if I am using umx, I am not sure how to get a large enough error variance. By error, do you mean the environmental variance in this case (or, in general, the variance that needs to be large for the MZ/DZ twins)? Or would this be something to explore by multiplying the actual data values by 100 or so (and would that be okay with no other alterations, or would it cause issues elsewhere if the data are not renormalized, so to speak)?
I am not sure whether the freeToStart or value arguments of umxModify are relevant in this instance, or tryHard (which didn't seem to work too well). Maybe xmuValues or xmu_starts would be relevant here?
For reference, here are two errors I am explicitly getting:
```
Error: The job for model 'AE' exited abnormally with the error message: fit is not finite (The continuous part of the model implied covariance (loc2) is not positive definite in data 'MZ.data' row 20. Detail:
covariance = matrix(c(    # 2x2
0.0132678091792278, -0.0151264409931152
, -0.0151264409931152, 0.0132678091792278), byrow=TRUE, nrow=2, ncol=2)
)
```
and
```
Error: The job for model 'CE' exited abnormally with the error message: fit is not finite (The continuous part of the model implied covariance (loc2) is not positive definite in data 'DZ.data' row 53. Detail:
covariance = matrix(c(    # 2x2
0.142828309693162, 0.18385055831094
, 0.18385055831094, 0.142828309693162), byrow=TRUE, nrow=2, ncol=2)
)
```
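Regarding the units question above, a sketch of the rescaling idea, assuming hypothetical wide twin data frames `mzData`/`dzData` with phenotype columns `pheno_T1`/`pheno_T2`:

```
twinVars <- c("pheno_T1", "pheno_T2")
# Express the phenotype in smaller units so its variance is not tiny;
# a pure change of units does not alter the standardized A/C/E estimates.
mzData[twinVars] <- mzData[twinVars] * 100
dzData[twinVars] <- dzData[twinVars] * 100
# umx also offers umx_scale_wide_twin_data() for this purpose.
```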
In reply to Hi jpritikin, thank you for by mirusem
I just used the umxModify
In reply to I just used the umxModify by mirusem
starting values
In reply to starting values by jpritikin
That's actually exactly what
One question that came to mind: does it matter whether one specifies the regex parameter for a given component of interest as A_r.c. vs. A_r1c1? I saw this post https://openmx.ssri.psu.edu/node/4229 where it's used to drop the entire set of free parameters, but I am not sure whether it makes any technical difference.
In reply to That's actually exactly what by mirusem
regex
In reply to regex by jpritikin
I see. I was actually
Also, in the path version in OpenMx, I set the starting values by taking, say, V = sqrt(phenotypic variance across all twin pairs)/3. Should this equivalently be set for all the parameters A, C, and E in umx with the direct-variance approach (is that possible/needed)? Right now I have only directly set the E parameter to 0.5 in umxModify, as needed for the sub-nested AE/CE models, but I am wondering whether there is a more systematic approach (i.e., should the 0.5 just be replaced with the V listed above)? I know the start value can affect the ultimate CI bounds, at the very least.
And can this be set prior to running ACEv? I am not sure whether the xmu_start_value_list() function is relevant.
Finally, is there a way to suppress warnings, or the attempt to print to the browser, for umx?
In reply to I see. I was actually by mirusem
quick follow-up
But does this mean that it's already handled internally by umx? If not, is there a rule of thumb for making E large or small (when fitting the sub-nested models), or for starting values in general when running ACEv?
And the warning suppression + browser suppression may still be helpful (it prints out browser-related information even if I have options(Browser = 'NULL')).
In reply to I see. I was actually by mirusem
starting values
In reply to starting values by jpritikin
Got it, I see. So just
Thanks so much for the quick responses!
And I will try to submit a bug report within the week for sure regarding the output issue.
In reply to I see. I was actually by mirusem
warnings etc
In reply to I see. I was actually by mirusem
umxModify, umx_set_silent, umx_set_auto_plot, umxSetParameters
As the matrix is symmetric, these are equivalent. It's always easy to check what you've done with `m1$top$C`, or `parameters(m1)`, or, for path-based models, try `tmx_show(m1)`; it shows all the matrix properties in nice browser tables with roll-overs for properties.
> Do I need to set start values in umx models?
No, umx takes care of this for you. But if you want to, you can set them directly. They are just parameters, so just set them: for instance, if you wondered about sensitivity to the start value for C, set the C values quite high to start; e.g., see what the parameters are with `parameters(m1)`, and set them with, e.g., `umxSetParameters(m1, "C_r1c1", values=1)`.
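Put together, a sketch of that workflow (`m1` and the label `C_r1c1` as in the example above):

```
parameters(m1)                                             # inspect current labels and values
m1 <- umxSetParameters(m1, labels = "C_r1c1", values = 1)  # raise C's start value
m1 <- mxRun(m1)                                            # refit from the new start
```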
> Finally, is there a way to suppress warnings, or the attempt to print to browser for umx?
Yes:
```Rsplus
umx_set_silent(TRUE)
umx_set_auto_plot(FALSE)
```
In reply to umxModify, umx_set_silent, umx_set_auto_plot, umxSetParameters by tbates
Thanks so much for the
I just figured out the auto_plot one before I saw this; that was exactly what one of them was (and I had umx_set_silent set prior, too). The only thing left at the moment seems to result from the xmu functions (as I found online, matching what I am getting). One of them isn't a warning, but the other is. I am wondering whether it might be possible to disable these kinds of messages, since I am looking into quite a few phenotypes.
The specific popups are from xmu_show_fit_or_comparison, which automatically outputs the log-likelihood estimate (this isn't as major, but everything adds to the computation in terms of printout), and, more noticeably, from xmu_check_variance:
```
Polite note: Variance of variable(s) '' and '' is < 0.1.
You might want to express the variable in smaller units, e.g. multiply to use cm instead of metres.
Alternatively umx_scale() for data already in long-format, or umx_scale_wide_twin_data for wide data might be useful.
```
Given that the phenotypes I am working with are already in their native units, I see this note a lot, and am not sure whether it can be suppressed.
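One possible workaround, assuming the note is emitted via R's message() mechanism (if it is printed with cat() instead, this won't catch it), with the same hypothetical data names as above:

```
# Silence message()-based notes from the model-building call.
m1 <- suppressMessages(
  umxACEv(selDVs = "pheno", sep = "_T", mzData = mzData, dzData = dzData)
)
```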
And thanks a lot for those clarifications--that all makes plenty of sense.
In reply to crashing? by jpritikin
I partially disagree
I partially disagree, in that it's not a bad idea to use a bound to ensure that the _E_ variance is strictly positive.
Won't that cause CIs to have smaller coverage probability than they're supposed to?
In reply to I partially disagree by AdminRobK
coverage probability
> Won't that cause CIs to have smaller coverage probability than they're supposed to?
No, because variance proportions are proportions: the true values are always between 0 and 1. Or you could regard values outside of 0 and 1 as rejections of the model. For example, if DZ twins are more correlated than MZ twins, then there is something else going on besides genetic effects. Hence, it is inappropriate to use the classical ACE model to analyze such data.
Lower/Upper bound for E with direct variance approach
In reply to Lower/Upper bound for E with direct variance approach by mirusem
Example
```
lbound estimate ubound lbound Code ubound Code
top.A_std[1,1] NA 0 0 NA 3
top.C_std[1,1] NA 0 0 NA 3
top.E_std[1,1] 1 1 NA 3 NA
```
Phenotypes where E lower bound gets code 3 in alternative models
I get very few code-3 NaNs for the lower bound of the E estimate in any model in general (AE, etc.) among the ones I select for. Sometimes this is fixable by changing the E starting value (a bit higher than what I had already set, and not too high in certain cases, though this doesn't always work), and sometimes by changing the seed. Are these alterations okay in this circumstance, even though they're not necessarily consistent with what I would be using for the rest of the phenotypes? I definitely appreciate it!
In reply to Phenotypes where E lower bound gets code 3 in alternative models by mirusem
CSOLNP optimizer
The packages are really nice :)
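Assuming the suggestion here is to try the CSOLNP optimizer (switching optimizers is mentioned later in the thread as fixing the failing cases), the switch is one line in either package; a sketch:

```
umx_set_optimizer("CSOLNP")                    # umx
mxOption(NULL, "Default optimizer", "CSOLNP")  # plain OpenMx equivalent
```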
confidence intervals
Yes, in an _E_-only model, the upper and lower limits of the confidence interval for the standardized _E_ variance component are trivially 1 (because the standardized _E_ component is fixed to 1 under that model).
If you're getting different results for your CIs by changing the start values, the RNG seed, and/or the optimizer, then I'm concerned that you're also getting a different solution in the primary optimization (i.e., the one that finds the MLE). The fact that changing the start values apparently affects your CI results is especially concerning, since every confidence-limit search begins at the MLE and not at the initial start values. Have you checked whether or not you're getting substantially equivalent point estimates, standard errors, and -2logL each time you try? You might want to first run your MxModel with `intervals=FALSE` to get a good initial solution, and then use `omxRunCI()` to subsequently get confidence intervals.
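A sketch of that two-step approach, assuming an MxModel object named `model` (the name is hypothetical):

```
fit   <- mxRun(model, intervals = FALSE)  # step 1: primary optimization (MLE only)
fitCI <- omxRunCI(fit)                    # step 2: profile-likelihood CI searches
summary(fitCI, verbose = TRUE)            # includes the 'CI details' table
```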
In reply to confidence intervals by AdminRobK
Thanks a lot for the reply.
From what I can tell, the point estimates look stable (I will look into this more, though). The -2logL seems consistent as well. The only thing that seems to change is that, for example, when changing the E start value, I get a different lower CI bound (not the upper bound) in the cases where the search did not succeed for, say, the AE model. Here is an example where it doesn't succeed for the upper bound of the ACE model (in the info column, the top row is -2logL and the second row is AIC). This is with no change in the starting value.
```
lbound estimate ubound note info
top.A_std[1,1] -1.062900 -0.245001 NaN !!! 139.933203
top.C_std[1,1] -0.182275 0.492349 1.059306 -271.866406
top.E_std[1,1] 0.489673 0.752652 1.078756 0.000000
```
Here is the same case when I change the optimizer to CSOLNP:
```
lbound estimate ubound note info
top.A_std[1,1] -1.064765 -0.245001 0.571626 139.933203
top.C_std[1,1] -0.186698 0.492349 1.060668 -271.866406
top.E_std[1,1] 0.489466 0.752652 1.078679 0.000000
```
When I look into the AE model with the error and change the starting value for E (from 0.5 to 0.8), the result for that estimate goes from this:
```
lbound estimate ubound note info
top.A_std[1,1] 0.179462 0.430153 0.625025 36.693141
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281
top.E_std[1,1] NaN 0.569847 0.820538 !!! 0.000000
```
to this:
```
lbound estimate ubound note info
top.A_std[1,1] 0.179462 0.430153 0.625025 36.693141
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281
top.E_std[1,1] 0.393639 0.569847 0.820538 0.000000
```
and if I switch the same AE model case to the CSOLNP optimizer (keeping the E starting value at 0.5 instead of changing it to 0.8), I get:
```
lbound estimate ubound note info
top.A_std[1,1] 0.178853 0.430153 0.624640 36.693141
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281
top.E_std[1,1] 0.375360 0.569847 0.821147 0.000000
```
I'm curious about your insight on this.
In reply to Thanks a lot for the reply. by mirusem
top row - log likelihood
In reply to confidence intervals by AdminRobK
umx equivalent
Thanks a lot for all of the help--I really appreciate it.
Another phenotype example if perturbed
```
lbound estimate ubound note info
top.A_std[1,1] 0.063975 0.342624 0.565213 50.340513
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026
top.E_std[1,1] 0.434787 0.657376 0.936025 0.000000
#E_start = 0.5, CSOLNP
lbound estimate ubound note info
top.A_std[1,1] 0.063876 0.342624 0.565174 50.340513
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026
top.E_std[1,1] 0.434826 0.657376 0.936090 0.000000
#E_start = 0.5, default optimizer
lbound estimate ubound note info
top.A_std[1,1] 0.063897 0.342624 0.565175 50.340513
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026
top.E_std[1,1] 0.434825 0.657376 0.936082 0.000000
#E_start = 0.8, default optimizer
lbound estimate ubound note info
top.A_std[1,1] 0.063876 0.342624 0.565174 50.340513
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026
top.E_std[1,1] 0.434826 0.657376 0.936090 0.000000
#E_Start = 0.5, default optimizer, different seed
```
In reply to Another phenotype example if perturbed by mirusem
More comprehensive output
Here is an example of an estimate where there is a CI bound NaN error.
```
#Estimate with error (NaN lowerbound, AE).
# seed = first, optimizer = SLSQP, E = 0.5
free parameters:
name matrix row col Estimate Std.Error A
1 expMean_var1 top.expMean means var1 0.60409619 0.018456489
2 A_r1c1 top.A 1 1 0.01609547 0.005227711
3 E_r1c1 top.E 1 1 0.02132252 0.004326436
lbound estimate ubound note info
top.A_std[1,1] 0.179462 0.430153 0.625025 36.693141
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281
top.E_std[1,1] NaN 0.569847 0.820538 !!! 0.000000
# seed = first, optimizer = SLSQP, E = 0.8
name matrix row col Estimate Std.Error A
1 expMean_var1 top.expMean means var1 0.60409620 0.018456490
2 A_r1c1 top.A 1 1 0.01609547 0.005227712
3 E_r1c1 top.E 1 1 0.02132252 0.004326436
lbound estimate ubound note info
top.A_std[1,1] 0.179462 0.430153 0.625025 36.693141
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281
top.E_std[1,1] 0.393639 0.569847 0.820538 0.000000
# seed = second, optimizer = SLSQP, E = 0.5
free parameters:
name matrix row col Estimate Std.Error A
1 expMean_var1 top.expMean means var1 0.60409619 0.018456489
2 A_r1c1 top.A 1 1 0.01609547 0.005227711
3 E_r1c1 top.E 1 1 0.02132252 0.004326436
lbound estimate ubound note info
top.A_std[1,1] 0.179462 0.430153 0.625016 36.693141
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281
top.E_std[1,1] 0.375239 0.569847 0.820538 0.000000
# seed = first, optimizer = CSOLNP, E = 0.5
free parameters:
name matrix row col Estimate Std.Error A
1 expMean_var1 top.expMean means var1 0.60409614 0.018456474
2 A_r1c1 top.A 1 1 0.01609544 0.005227694
3 E_r1c1 top.E 1 1 0.02132248 0.004326420
lbound estimate ubound note info
top.A_std[1,1] 0.179462 0.430153 0.624903 36.693141
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281
top.E_std[1,1] 0.375201 0.569847 0.820538 0.000000
```
Here is an example of the working case:
```
#Working estimate (AE).
# seed = first, optimizer = SLSQP, E = 0.5
free parameters:
name matrix row col Estimate Std.Error A
1 expMean_var1 top.expMean means var1 0.42486799 0.016192053
2 A_r1c1 top.A 1 1 0.01035623 0.004402426
3 E_r1c1 top.E 1 1 0.01986998 0.004057314
lbound estimate ubound note info
top.A_std[1,1] 0.063876 0.342624 0.565174 50.340513
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026
top.E_std[1,1] 0.434826 0.657376 0.936090 0.000000
# seed = first, optimizer = SLSQP, E = 0.8
free parameters:
name matrix row col Estimate Std.Error A
1 expMean_var1 top.expMean means var1 0.42486797 0.016192049
2 A_r1c1 top.A 1 1 0.01035622 0.004402423
3 E_r1c1 top.E 1 1 0.01986997 0.004057314
lbound estimate ubound note info
top.A_std[1,1] 0.063897 0.342624 0.565175 50.340513
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026
top.E_std[1,1] 0.434825 0.657376 0.936082 0.000000
# seed = second, optimizer = SLSQP, E = 0.5
name matrix row col Estimate Std.Error A
1 expMean_var1 top.expMean means var1 0.42486799 0.016192053
2 A_r1c1 top.A 1 1 0.01035623 0.004402426
3 E_r1c1 top.E 1 1 0.01986998 0.004057314
lbound estimate ubound note info
top.A_std[1,1] 0.063876 0.342624 0.565174 50.340513
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026
top.E_std[1,1] 0.434826 0.657376 0.936090 0.000000
# seed = first, optimizer = CSOLNP, E = 0.5
name matrix row col Estimate Std.Error A
1 expMean_var1 top.expMean means var1 0.42486797 0.016192034
2 A_r1c1 top.A 1 1 0.01035620 0.004402407
3 E_r1c1 top.E 1 1 0.01986994 0.004057299
lbound estimate ubound note info
top.A_std[1,1] 0.063819 0.342624 0.565174 50.340513
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026
top.E_std[1,1] 0.434826 0.657376 0.936118 0.000000
```
Based on this, should there be any concerns about switching the optimizer? Or do you have an idea as to what is really going on in the few cases in which one of the CI bounds is not calculated?
One miscellaneous hint I
I do not think there are any concerns about switching the optimizer or changing the RNG seed. Neither of those things is considerably changing the point estimates or standard errors, right? But from what you've included in your posts, I can't really tell what's going on when a confidence limit is reported as `NaN`. I would need to see at least the first seven columns of the 'CI details' table (which prints when you use `summary()` with argument `verbose=TRUE`). I would also need the -2logL at the MLE (which you seem to have intended to include in your posts?). The information in your posts is not very easy to read, either: the tables would be easier to read if they displayed in a fixed-width font, which can be done with Markdown or with HTML tags.
I don't know. Sorry.
In reply to One miscellaneous hint I by AdminRobK
That sounds promising, and it
I've never really typed in HTML; it takes more time, but yeah, I agree it looks much nicer (and I personally also didn't like how it was saving before).
confidence intervals:
CI Details:
This is when it fails.
Also there is this information:
Model Statistics:
And, outside of that, it looks like the std errors are NA in the "free parameters" section, specifically when I run summary(verbose = TRUE) on the umxConfint result (which is what gives the results/CIs above). But when I run summary(verbose = TRUE) on just the model prior to umxConfint, the SEs are stable (albeit with no CIs; see the other post).
And no worries--you've helped me a lot! Hopefully this last bit will give some closure. I will say that switching the optimizer does fix the issue for those few estimates where this occurs.
Optimizer for CI vs optimizer for the model itself including CI
confidence intervals
OK, now I can see what's wrong with the lower confidence limit for _E_. The optimizer was unable to adequately worsen the -2logL. For a 95% CI, the target worsening of fit at each limit is about 3.841. For _E_'s lower limit, the worsening was only about 3.566, which isn't too far off.
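For reference, that 3.841 target is the 95th percentile of the chi-squared distribution with 1 degree of freedom:

```
qchisq(0.95, df = 1)  # 3.841459: target increase in -2logL for a 95% profile CI
```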
It's basically impossible to give a general recommendation about that. Do whatever seems to work best for you.
In reply to confidence intervals by AdminRobK
Got it--that all makes sense.
In reply to Got it--that all makes sense. by mirusem
Glad to be of help.