
UVa Introduction to SEM - Spring 2010

Week 7: Maximum Likelihood, Fit Functions, and Data Diagnostics

This week's class first covers the basics of maximum likelihood and how various fit functions are calculated and used. This section includes caveats about what ML assumes and what can go wrong when those assumptions are violated. Pay attention to the particularly interesting violation of assumptions that leads to the conclusion that every brown-haired male in the U.S. is President Obama.

We next examine some methods for checking ML assumptions in your data. There are two R example scripts that run some graphical diagnostics and show how data transformations can be accomplished.
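To give a taste of what those scripts do, here is a minimal sketch of the kind of graphical check and transformation involved; the variable name and distribution are made up for illustration, and this is not the course script itself.

  # Hypothetical skewed variable, for illustration only
  set.seed(10)
  reactionTime <- rlnorm(200, meanlog = 0, sdlog = 0.5)

  par(mfrow = c(2, 2))
  hist(reactionTime, main = "Raw data")            # skew is visible in the histogram
  qqnorm(reactionTime); qqline(reactionTime)       # points bend away from the normal line

  logRT <- log(reactionTime)                       # a simple log transformation
  hist(logRT, main = "Log transformed")
  qqnorm(logRT); qqline(logRT)                     # much closer to the line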

Week 6: Latent Structure

Week 6 of the course presents concepts in latent structure and measurement models. We take a look at three possible ways of using latent structure to test theories:

  1. the number of factors problem, i.e., can we collapse two scales into one;
  2. multiple regression using latent variables, i.e., how do we test the prediction of one latent variable from another and what are some limits on our ability to test such theories; and
  3. mediation models with latent variables and some of the limitations on the conclusions that one can draw from fitting mediation models.

Week 5: Confirmatory Factor Models

In the fifth week of the course, we turn our attention to latent variables.

The lecture begins with a review of transformation matrices and principal components from a theoretical standpoint.

We then see how models that include latent predictors can be estimated and gain an understanding of problems of identification.

Next we fit a bunch of OpenMx models to a simulated data set and see how simple structure can emerge.
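To make this concrete, here is a minimal sketch of a one-factor confirmatory factor model in OpenMx, fit to simulated data; the indicator names, sample size, and parameter values are illustrative and not the class data set.

  library(OpenMx)
  set.seed(42)

  # Simulate 500 cases from a one-factor model
  n <- 500
  f <- rnorm(n)
  simData <- data.frame(x1 = 0.8 * f + rnorm(n, sd = 0.6),
                        x2 = 0.7 * f + rnorm(n, sd = 0.7),
                        x3 = 0.6 * f + rnorm(n, sd = 0.8),
                        x4 = 0.5 * f + rnorm(n, sd = 0.9))

  manifests <- names(simData)
  cfa <- mxModel("OneFactorCFA", type = "RAM",
                 manifestVars = manifests, latentVars = "F",
                 mxPath(from = "F", to = manifests, arrows = 1,
                        free = TRUE, values = 0.5),              # factor loadings
                 mxPath(from = manifests, arrows = 2,
                        free = TRUE, values = 1),                # residual variances
                 mxPath(from = "F", arrows = 2,
                        free = FALSE, values = 1),               # factor variance fixed at 1 for identification
                 mxData(observed = cov(simData), type = "cov", numObs = n))

  cfaFit <- mxRun(cfa)
  summary(cfaFit)

Fixing the factor variance at 1 is one way to identify the model; fixing the first loading instead would work just as well.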

Week 4: Manifest Variable Models in OpenMx

This week we start with an introduction to R. Then we introduce the OpenMx data structures and syntax. Next, we work through the same regression models we used last week, this time specified in OpenMx.
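As a preview of the syntax, here is a minimal sketch of one of those regression models written as a RAM-type mxModel; the data are simulated and the labels and starting values are only illustrative.

  library(OpenMx)
  set.seed(1)

  # Made-up data: y regressed on x
  x <- rnorm(100)
  y <- 0.5 * x + rnorm(100)
  regData <- data.frame(x = x, y = y)

  reg <- mxModel("SimpleRegression", type = "RAM",
                 manifestVars = c("x", "y"),
                 mxData(observed = regData, type = "raw"),
                 mxPath(from = "x", to = "y", arrows = 1,
                        free = TRUE, values = 0.1, labels = "b"),   # regression path
                 mxPath(from = c("x", "y"), arrows = 2,
                        free = TRUE, values = 1,
                        labels = c("varX", "resY")),                # variance and residual variance
                 mxPath(from = "one", to = c("x", "y"), arrows = 1,
                        free = TRUE, values = 0))                   # means, needed with raw data

  regFit <- mxRun(reg)
  summary(regFit)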

Week 3: Introduction to Path Analysis

The third week of the class focuses on path analysis. We calculate the model-expected components of covariance for some regression models using both path analysis tracing rules and the RAM-style matrix calculation of the effects matrix. These same simple regression models will be used next week as we introduce the OpenMx software syntax.
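For those who want to check the matrix version by hand, the model-expected covariance in RAM notation is F (I - A)^{-1} S (I - A)^{-1}' F'. The sketch below computes it for a simple regression of y on x with illustrative parameter values, so the result can be compared with the tracing rules.

  # RAM matrices for y = b*x: A holds single-headed paths, S two-headed paths,
  # and Fmat filters down to the manifest variables (all manifest here).
  b <- 0.5; vx <- 2; e <- 1                       # illustrative values
  vars <- c("x", "y")
  A <- matrix(c(0, 0,
                b, 0), nrow = 2, byrow = TRUE,
              dimnames = list(vars, vars))        # row = to, column = from
  S <- diag(c(vx, e)); dimnames(S) <- list(vars, vars)
  Fmat <- diag(2)
  Ident <- diag(2)

  expCov <- Fmat %*% solve(Ident - A) %*% S %*% t(solve(Ident - A)) %*% t(Fmat)
  expCov   # gives cov(x, y) = b*vx and var(y) = b^2*vx + e, matching the tracing rules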

Week 2: Matrix Algebra Review

The second week of the class is dedicated to reviewing concepts and definitions from matrix algebra that are useful in specifying, fitting, and -- most important -- understanding SEM models. These notes are derived from the course in matrix algebra taught by Ledyard Tucker and passed on by John Nesselroade. I adapted them and translated them into LaTeX for the slides you see here.

Week 1: History and Overview

The first week of the class was a historical review of SEM and path analysis from my own (perhaps skewed) viewpoint.

Three critical readings that will help put this class in context are:

Spearman, C. (1904). General intelligence objectively determined and measured. American Journal of Psychology, 15, 201–293.

Wright, S. (1920). The relative importance of heredity and environment in determining the piebald pattern of guinea-pigs. Proceedings of the National Academy of Sciences, 6, 320–332.

Welcome to the Psyc 8501-001 class website

Hi everyone,

We will be using this forum to discuss OpenMx examples and SEM theory from the context of someone just starting to learn SEM. We will be moving fairly fast this semester and I hope to cover enough ground to make it to some advanced models by the end of the class.

If you are not part of the class, you are still welcome to join in the discussions and try out the examples.

I am attaching the class syllabus, which has a week-by-week schedule of what we will be covering.

Cheers,
Steve Boker
