# OpenMx vs. SAS V9.2: Standardized Standard Errors Compared

Joined: 04/18/2013 - 12:32
OpenMx vs. SAS V9.2: Standardized Standard Errors Compared

Hello Dr. Neale,

My name is Dr. Brandy Rutledge. I have attempted to duplicate the results from OpenMx in SAS for the first example in the Beginner’s Guide for Mx using the path model approach. I was able to duplicate everything except the -2 log likelihood (slightly lower) and the standardized standard errors (very different). I have done a little research on the web and found forums where the standard errors were being discussed a few years ago (2009). I am assuming that these issues have already been fixed. Do you know why the standardized standard errors in SAS using Proc Calis would be different from those that are produced by OpenMx?

Here is my SAS code for the first example in the Beginner’s Guide:

proc calis data=mxtest nobs=500;
path
x1<---G=theta1,
x2<---G=theta2,
x3<---G=theta3,
x4<---G=theta4,
x5<---G=theta5
;
pvar
G=1.0;
run;

Attached are the results from running this model.

Dr. Brandy Rutledge

Joined: 07/31/2009 - 15:14
OpenMx standardized SEs are scaled to agree with unstandardized

I kicked this problem around offline, as Brandy had emailed me directly. In the OpenMx output,
free parameters:
name matrix row col Estimate Std.Error Std.Estimate Std.SE lbound ubound
1 One Factor.A[1,6] A x1 G 0.39715206 0.015549814 0.89130939 0.034897705
2 One Factor.A[2,6] A x2 G 0.50366105 0.018232572 0.93255466 0.033758556
3 One Factor.A[3,6] A x3 G 0.57724129 0.020448473 0.94384668 0.033435279
4 One Factor.A[4,6] A x4 G 0.70277356 0.024011511 0.96236247 0.032880829
5 One Factor.A[5,6] A x5 G 0.79624981 0.026669562 0.97255560 0.032574742
6 One Factor.S[1,1] S x1 x1 0.04081420 0.002812719 0.20556757 0.014166731
7 One Factor.S[2,2] S x2 x2 0.03801998 0.002805792 0.13034181 0.009618944
8 One Factor.S[3,3] S x3 x3 0.04082717 0.003152309 0.10915345 0.008427851
9 One Factor.S[4,4] S x4 x4 0.03938708 0.003408879 0.07385847 0.006392313
10 One Factor.S[5,5] S x5 x5 0.03628714 0.003678565 0.05413560 0.005487931

dividing Estimate by Std.Error gives the same value as Std.Estimate divided by Std.SE. This is the conventional t-value for testing an estimate against zero. OpenMx has really 'fudged' the Std.SE to agree in this way, because it is known that estimating a function of a parameter g(theta) and its standard error does NOT yield a t-statistic consistent with that of the parameter itself. See http://www.vipbg.vcu.edu/vipbg/Articles/behavior-fitting-1989.pdf pp. 43-44 for an example. This is one of the arguments for avoiding Hessian-based standard errors when judging the significance of parameters. Likelihood-based confidence intervals and the likelihood-ratio test generally do not have this problem, hence the availability of mxCI() and mxCompare() in OpenMx.
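To see the agreement concretely, here is a small Python sketch (purely illustrative) using the first row of the table above, the loading of x1 on G:

```python
# First row of the OpenMx 'free parameters' table above (loading x1 <- G)
est, se = 0.39715206, 0.015549814          # raw estimate and Std.Error
std_est, std_se = 0.89130939, 0.034897705  # Std.Estimate and Std.SE

t_raw = est / se          # t-value from the raw estimate
t_std = std_est / std_se  # t-value from the standardized estimate

# OpenMx 1.x scales Std.SE so that the two t-values coincide (both ~25.54)
assert abs(t_raw - t_std) < 1e-3
```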

In the end, whether you consider the SAS standardized SEs "correct", or those of OpenMx, may be a matter of taste. The t-values OpenMx provides for the standardized estimates agree with those of the unstandardized estimates. I am not sure what is going on with SAS's standardized SEs, but they give t-values that are wildly different, e.g. 11 vs. 14 for a factor loading, or 29 vs. 296 (gasp) for an error variance. Yet this may be correct given the parameter transformation (I've not done the math).

Joined: 07/31/2009 - 15:12
The math for parameter

The math for the parameter transformation is consistent from estimate to standard error. For (co)variance paths (the S matrix), each parameter and its standard error are divided by the product of the model-implied standard deviations of the two variables involved. Regression paths are multiplied by the model-implied standard deviation of the IV, and divided by that of the DV.

While the confidence limits/SEs of linear or non-linear combinations of parameters won't in general be a simple function of those parameters' SEs, simple linear transformations shouldn't affect significance tests: the standard error of X/2 is SE_X divided by 2.
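To make the scaling concrete, here is a Python sketch using the x1 rows of the OpenMx output above. Treating the model-implied SD as a fixed constant when rescaling the SE is exactly the simplification OpenMx 1.x makes:

```python
import math

# Raw estimates and SEs for x1 from the OpenMx output above (Var(G) fixed at 1)
lam, se_lam = 0.39715206, 0.015549814      # loading x1 <- G
theta, se_theta = 0.04081420, 0.002812719  # residual variance of x1

var_x1 = lam**2 + theta    # model-implied variance of x1
sd_x1 = math.sqrt(var_x1)  # model-implied SD of x1

# Standardized loading: multiply by SD(IV) = SD(G) = 1, divide by SD(DV)
std_lam = lam / sd_x1        # ~0.8913, matching Std.Estimate above
# Standardized residual variance: divide by the product of the two SDs
std_theta = theta / var_x1   # ~0.2056

# OpenMx 1.x rescales the SEs by the same constants, preserving the t-values
std_se_lam = se_lam / sd_x1        # ~0.03490, matching Std.SE above
std_se_theta = se_theta / var_x1   # ~0.01417
```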

The standardizeRAM function became what summary() uses to standardize RAM models, and is discussed here:

Joined: 07/31/2009 - 15:14
To be mathematically consistent, some more algebra is needed

I don't think we should override the mathematical properties of standardized parameter estimates that are in common use elsewhere (i.e., other packages). At the very least, we should offer both options.

Judging statistical significance of a parameter by dividing its estimate by its standard error (a t-value) can give you different answers depending on how the model is constructed. It is an unfortunate property, but I don't think we should suggest that it doesn't exist by reporting SE's based on the unstandardized parameter estimates. As you and I discussed offline, there are other awkward properties of standardized estimates, such as the t-value being inconsistent depending on where in the model the estimate lies. For example, the standardized estimate of an autoregression parameter may not be the same thing for occasion 1 to occasion 2 as it is for occasion 2 to occasion 3, even though the (unstandardized) parameter has been constrained to be equal in both instances. Yes, there are lots of problems with standardized estimates.

This means that we would have to make a decision as to which instance of a standardized estimate and its standard error should be reported. I don't think this is a difficult decision - it is already made for cases where a parameter (label) appears multiple times, and this is indexed by matrix row and column (as well as label where available). But there is a little bit of matrix calculus to work out (the chain rule essentially) to get the SE's out. Any takers?
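The chain-rule calculation being asked for here is essentially the delta method. A hypothetical Python sketch for the standardized loading of x1, using numerical derivatives and, as a simplifying assumption, ignoring the covariance between the two parameter estimates (which the full calculation would include):

```python
import math

# Raw estimates and Hessian-based SEs for x1 from the output above
lam, se_lam = 0.39715206, 0.015549814
theta, se_theta = 0.04081420, 0.002812719

def g(lam, theta):
    """Standardized loading of x1 on G (with Var(G) fixed at 1)."""
    return lam / math.sqrt(lam**2 + theta)

# Numerical partial derivatives (central differences)
h = 1e-6
dg_dlam = (g(lam + h, theta) - g(lam - h, theta)) / (2 * h)
dg_dtheta = (g(lam, theta + h) - g(lam, theta - h)) / (2 * h)

# Delta method, assuming (incorrectly, but simply) a diagonal
# covariance matrix of the estimates
var_g = (dg_dlam * se_lam)**2 + (dg_dtheta * se_theta)**2
se_std = math.sqrt(var_g)
```

With the off-diagonal covariance ignored this gives roughly 0.0096, in the same ballpark as, but not identical to, an SE computed from the full covariance matrix of the estimates.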

Joined: 04/19/2011 - 21:00
OpenMx 2.0

OpenMx 2.0 has a new function, mxStandardizeRAMpaths(), which gives standard errors that agree pretty closely with what Dr. Rutledge got in SAS:

                name label matrix row col  Raw.Value  Std.Value      Std.SE
1  One Factor.A[1,6]  <NA>      A  x1   G 0.39715184 0.89130932 0.009723335
2  One Factor.A[2,6]  <NA>      A  x2   G 0.50366062 0.93255456 0.006408558
3  One Factor.A[3,6]  <NA>      A  x3   G 0.57724093 0.94384664 0.005514390
4  One Factor.A[4,6]  <NA>      A  x4   G 0.70277321 0.96236249 0.004007508
5  One Factor.A[5,6]  <NA>      A  x5   G 0.79624933 0.97255562 0.003275473
6  One Factor.S[1,1]  <NA>      S  x1  x1 0.04081418 0.20556769 0.017332999
7  One Factor.S[2,2]  <NA>      S  x2  x2 0.03801998 0.13034199 0.011952660
8  One Factor.S[3,3]  <NA>      S  x3  x3 0.04082716 0.10915353 0.010409477
9  One Factor.S[4,4]  <NA>      S  x4  x4 0.03938702 0.07385844 0.007713351
10 One Factor.S[5,5]  <NA>      S  x5  x5 0.03628708 0.05413557 0.006371159
11 One Factor.S[6,6]  <NA>      S   G   G 1.00000000 1.00000000 0.000000000

The difference is that now, in 2.0, the necessary calculus is being done to obtain theoretically more coherent standard errors for the standardized parameter estimates. The derivatives are evaluated numerically, which is why the numDeriv package also needs to be installed to get these standard errors. Incidentally, no standardized values are reported in the 'free parameters' table of summary() output any longer.

Definitely, though--as I'm sure Ryne and Dr. Neale would agree--for inferential purposes, likelihood-ratio tests and/or profile-likelihood confidence intervals are generally preferable to these standard errors.