OpenMx General Help

parallel computation in linux cluster
Hi everyone,
I want to run several 10,000-replication simulation studies for an OpenMx model on a Linux cluster. The model has over 20 parameters, and for each parameter I need a 1,000-draw bootstrap confidence interval to test coverage. I've added the lines shown in the screenshot below, and there are 16 cores available to OpenMx. Even so, at my best guess the simulations may take about a month. Could someone kindly suggest other ways to speed them up? Thanks in advance.
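One option worth trying: since the 10,000 replicates are independent, spreading them across the 16 cores with `parallel::mclapply` often helps more than OpenMx's internal multithreading. A minimal sketch, assuming a hypothetical `run_one_sim()` worker that you would fill in with your own data generation, model fitting, and bootstrap:

```r
library(parallel)

# Hypothetical worker: simulate one dataset, fit the OpenMx model,
# and return the bootstrap CIs (the commented steps are placeholders
# for your own code).
run_one_sim <- function(i) {
  set.seed(i)                       # reproducible per-replicate seed
  # dat <- simulate_data(...)       # your data-generating step
  # fit <- mxRun(make_model(dat))   # your model-fitting step
  # ci  <- bootstrap_cis(fit)       # your 1,000-draw bootstrap
  i                                 # placeholder return value
}

# Run 10,000 replicates across 16 cores. Each worker is a separate R
# process, so give OpenMx a single thread inside each worker, e.g.
# mxOption(NULL, "Number of Threads", 1), to avoid oversubscription.
results <- mclapply(seq_len(10000), run_one_sim, mc.cores = 16)
```

On a cluster with a scheduler, a common alternative is to split the 10,000 replicates into independent array jobs, save each job's results, and combine them afterwards.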

Create a function to generate mxMatrix and mxAlgebra
Hi everyone,
I am writing an R function that creates and runs an OpenMx model. I can do it in a simple way; partial code is shown below. However, I have over 20 definition variables, and each one needs a corresponding mxAlgebra. I would like to write a function or loop that generates the mxMatrix and mxAlgebra objects. I've tried a couple of methods, but none of them worked. The main problem is that, in the mxAlgebra expression, I need to call the corresponding mxMatrix by name. Thank you in advance!
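One way to handle this is to build the names as strings inside the loop and use `mxAlgebraFromString()`, which parses an expression supplied as text. A sketch with hypothetical names (`beta1`, `pred1`, and definition variables `data.def1`, `data.def2`, ... — adapt to your own variables):

```r
library(OpenMx)

n <- 3  # e.g. 3 of the 20+ definition variables, for illustration
objs <- list()
for (i in seq_len(n)) {
  matName <- paste0("beta", i)  # hypothetical matrix name
  algName <- paste0("pred", i)  # hypothetical algebra name
  # One free 1x1 matrix per definition variable
  objs[[length(objs) + 1]] <- mxMatrix(
    type = "Full", nrow = 1, ncol = 1, free = TRUE,
    values = 0, name = matName)
  # Algebra that refers to the matrix by its string-built name
  objs[[length(objs) + 1]] <- mxAlgebraFromString(
    paste0(matName, " * data.def", i), name = algName)
}
# mxModel accepts a list of objects, so the loop output drops straight in
model <- mxModel("loopBuilt", objs)
```

The key point is that `mxAlgebraFromString()` removes the need to hard-code each matrix name inside an `mxAlgebra()` expression.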

Performance issues many algebras
Hey,
I am currently implementing a new model specification language for longitudinal panel models. As "backend", I use OpenMx. Everything works but it is painfully slow.
- Read more about Performance issues many algebras
- 21 comments
- Log in or register to post comments

Single Common Factor Model with Some Residual Errors Correlated
I want to fit a single common factor model to seven different types of measurement of the same construct. However, I expect 6 of the methods to have positively correlated residual errors. Fortunately, I expect the intercorrelations among these 6 methods to be equal, so I want to build in this assumption; there would be too many correlations to estimate separately. So I want to assume rho is the same throughout and constrain, say, cov23 = sigma2*sigma3*rho, cov24 = sigma2*sigma4*rho, etc. But I don't see how to do this using mxConstraint.
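One way to get this structure without mxConstraint at all is to give every off-diagonal cell of a standardized correlation matrix the same label, so OpenMx estimates a single rho, and then build the covariances by algebra. A sketch with hypothetical names, assuming the 6 correlated methods form a 6x6 residual block:

```r
library(OpenMx)

# 6x6 residual correlation matrix: unit diagonal, and one shared free
# parameter "rho" recycled into every off-diagonal cell (the shared
# label equates them, so no explicit constraint is needed)
corMat <- mxMatrix(type = "Stand", nrow = 6, ncol = 6,
                   free = TRUE, values = 0.2,
                   labels = "rho", name = "corMat")

# Free residual standard deviations (hypothetical labels sigma2..sigma7,
# matching the post's numbering of methods 2-7)
sds <- mxMatrix(type = "Diag", nrow = 6, ncol = 6,
                free = TRUE, values = 1,
                labels = paste0("sigma", 2:7), name = "sds")

# Residual covariance block: cov_ij = sigma_i * sigma_j * rho
resCov <- mxAlgebra(sds %*% corMat %*% sds, name = "resCov")
```

Because the label ties the cells together, `resCov` can be placed directly into the model's expected covariance, and the cov23 = sigma2*sigma3*rho structure holds by construction.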

Power estimation for the detection of rG and rE
How can one estimate the power for the detection of significant rG and rE in multivariate Cholesky models?
Specifically, the analysis employed a trivariate Cholesky (AE providing the best fit, with all rGs significant and one significant rE). The CIs have been calculated for all estimates. The sample is on the small side: 200 same-sex pairs (half MZ, half DZ). It may have been underpowered for the other, smaller rEs, but I'm not sure what effect size I had enough power to detect.
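A common approach here is simulation-based power: repeatedly simulate twin data under the fitted (alternative) model with the same n = 200 pairs, refit the Cholesky, and count how often the target rG or rE is significant. Recent OpenMx versions also provide `mxPower()`/`mxPowerSearch()` for a likelihood-based approximation. A sketch of the simulation route, with hypothetical helper functions you would replace with your own data generation and model code:

```r
# Simulation-based power sketch. simulate_twins(), fit_cholesky(), and
# confint_for() are hypothetical placeholders for your own pipeline.
n_reps <- 500
hits <- 0
for (i in seq_len(n_reps)) {
  set.seed(i)
  # dat <- simulate_twins(nMZ = 100, nDZ = 100, ...)  # data under H1
  # fit <- fit_cholesky(dat)                          # refit the AE model
  # ci  <- confint_for(fit, "rE21")                   # CI for the target rE
  # if (ci[1] > 0 || ci[2] < 0) hits <- hits + 1      # CI excludes zero
}
# power <- hits / n_reps   # proportion of replicates detecting the effect
```

Varying the true rE across runs then tells you the smallest effect size detectable with, say, 80% power at n = 200 pairs.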

How to set the value of a computed parameter to that computed value?
Is there a function to put the computed value of all parameters determined by a bracket address (e.g. var[1,1] ) into the appropriate values cell(s) in the model?
Background and use-case: OpenMx allows us to use labels to set the value of a parameter (matrix cell). Here's a RAM model where I insert the value of covXY into row 2, column 1 of the symmetric path (S) matrix:
m1$S
$labels
  x            y
x "x_with_x"   "covXY[1,1]"
y "covXY[1,1]" "y_with_y"

$values
  x        y
x 1.198632 0.000000
y 0.000000 2.018483
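As far as I know there is no single built-in function for this, but one workaround is to evaluate the algebra with `mxEval()` after running the model and then write the number into the values slot directly. The sketch below uses a plain matrix as a stand-in for `m1$S$values` so the mechanics are visible without a fitted model; with a real model you would assign into `m1$S$values` and compute `v` via `mxEval(covXY[1, 1], m1)`:

```r
# Stand-in for m1$S$values (values copied from the post's output).
S_values <- matrix(c(1.198632, 0, 0, 2.018483), 2, 2,
                   dimnames = list(c("x", "y"), c("x", "y")))

v <- 0.5                  # pretend this came from mxEval(covXY[1, 1], m1)
S_values["y", "x"] <- v   # row 2, column 1 (the "covXY[1,1]" cell)
S_values["x", "y"] <- v   # symmetric counterpart
```

With a model object, that becomes `m1$S$values[2, 1] <- mxEval(covXY[1, 1], m1)[1, 1]` (and likewise for the mirror cell), after which the values shown by `m1$S` match the computed algebra.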

Rmpi
I am trying to parallelize OpenMx on a computing cluster at my university. I'm using Rmpi, and I keep getting the same error:
Error in { : task 18 failed - "job.num is at least 2."
Calls: %dopar% ->
Execution halted
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 1077 on
node compute-0-11.local exiting improperly. There are two reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits

Standard Errors versus Confidence Intervals
When I run this code (a threshold model for ordinal data), the standard errors are incredibly large, but the confidence intervals are relatively narrow. Are any of the results trustworthy? I've tried simplifying the thresholds (using labels to reduce the number of parameters), but it doesn't help. I'm not sure why these data are so hard to model.
If I treat the ordinal data as if it were quantitative, I have no trouble fitting a common factor model.

How to use "mxFitFunctionAlgebra" instead of "mxAlgebraObjective"
This code used to work with "mxAlgebraObjective", but OpenMx now says I have to use "mxFitFunctionAlgebra" instead.
###############
models_1_v4 <- mxModel("Models_1",
    Model_1_t1_v4,
    Model_1_t2_v4,
    mxAlgebra(
        Model_1_t1_v4.objective +
        Model_1_t2_v4.objective,   # newer OpenMx: "Model_1_t1_v4.fitfunction", etc.
        name = "multi"),
    # mxAlgebraObjective("multi"),  # deprecated form
    mxFitFunctionAlgebra(algebra = "multi")
)
m1_fit_v4 <- mxRun(models_1_v4)
##################
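For reference, newer OpenMx also offers `mxFitFunctionMultigroup()`, which sums the sub-models' fit functions without a hand-written algebra. A sketch using the sub-model names from the post (assuming both sub-models carry ordinary ML fit functions):

```r
library(OpenMx)

# Sum the two sub-models' fits directly; this replaces both the
# mxAlgebra and the mxFitFunctionAlgebra in the code above.
fitFun <- mxFitFunctionMultigroup(c("Model_1_t1_v4", "Model_1_t2_v4"))

# models_1_v4 <- mxModel("Models_1", Model_1_t1_v4, Model_1_t2_v4, fitFun)
# m1_fit_v4   <- mxRun(models_1_v4)
```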

Output from mxFactorScores - Order of rows?
In the help file for mxFactorScores, it says:
The rows are in the order of the _sorted_ data.
I'm not sure what this means. The data I have is in order by date and it's clear that the estimated factor scores for "ML" are no longer in the same order. There are two correlated factors each with 2 indicators (and nothing else). How can I get the factor scores into the same order as the rest of the data?