Is there a rationale for bootstrapping 95% CIs for univariate ACE estimates when relying on a relatively small convenience sample of MZ/DZ twins? Any source(s) that would support doing so?

The nonparametric bootstrap's justification is asymptotic. It relies on the fact that the empirical distribution converges to the true data-generating distribution as sample size increases without bound. So no, there's no theoretical reason why bootstrap CIs would work well with small sample sizes.
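To make the resampling scheme concrete, here is a minimal percentile-bootstrap sketch in plain Python (not OpenMx); the data and the statistic are placeholders, not a real ACE estimate. For twin data you would resample whole twin pairs, not individuals, so that within-pair dependence is preserved.

```python
import random

def percentile_bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=0):
    """Nonparametric percentile bootstrap CI.

    Resamples rows of `data` with replacement, recomputes `statistic`
    on each resample, and returns the alpha/2 and 1-alpha/2 percentiles.
    For twin data, each element of `data` should be a twin *pair*.
    """
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(
        statistic([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy illustration: CI for a sample mean (stand-in for an ACE component).
sample = [1.2, 0.8, 1.5, 0.9, 1.1, 1.3, 0.7, 1.0]
lo, hi = percentile_bootstrap_ci(sample, lambda xs: sum(xs) / len(xs))
```

Note that the only inputs are the observed rows themselves, which is exactly why the method's justification rests on the empirical distribution approximating the true one: with a small convenience sample, that approximation can be poor.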

Of course, in general, the justifications for profile-likelihood (via `mxCI()`) and Wald (from standard errors) intervals are also asymptotic, and further assume that the likelihood function being maximized is actually of the same form as the true distribution. So if you have reason to doubt that the distributional assumption is correct, then bootstrap CIs might still be a better choice.

Note that parametric bootstrapping doesn't rely on asymptotic theory, though it does rely on the distributional assumption being correct. Permutation testing, however, doesn't rely on either.
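For contrast, a permutation test conditions on the observed data and only shuffles labels, so it needs neither asymptotics nor a distributional model. A generic sketch in Python, assuming a simple two-group mean comparison (e.g., MZ vs. DZ on some pairwise statistic) rather than a full ACE model:

```python
import random

def permutation_pvalue(group_a, group_b, n_perm=5000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Under the null that group labels are exchangeable, repeatedly
    shuffle the pooled data, split it at the original group sizes,
    and count how often the permuted difference is at least as
    extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a)
        if abs(diff) >= abs(observed):
            count += 1
    # Add-one correction keeps the p-value strictly positive.
    return (count + 1) / (n_perm + 1)

# Clearly separated groups should yield a small p-value.
p_diff = permutation_pvalue([5, 6, 5, 6, 5, 6], [0, 1, 0, 1, 0, 1])
```

The p-value's validity here is exact (up to Monte Carlo error from using a random subset of permutations), which is what makes permutation testing attractive with a small sample, though it yields hypothesis tests rather than confidence intervals directly.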
