---
title: >-
  Preregistering Structural Equation Modeling (SEM) Studies: A Comprehensive
  Guide
categories:
  - Research Methods
tags:
  - SEM
  - Preregistration
  - Open Science
  - Reproducibility
author_profile: false
seo_title: How to Preregister Structural Equation Modeling (SEM) Studies
seo_description: >-
  A comprehensive guide to preregistering SEM studies, including software
  environments, modeling decisions, fit criteria, and contingency plans for
  robust and reproducible analysis.
excerpt: >-
  Learn how to preregister your SEM study by systematically locking down
  modeling and analytic decisions to improve scientific transparency and reduce
  bias.
summary: >-
  This guide explains how to preregister structural equation modeling (SEM)
  studies across seven major decision domains--software, model structure,
  statistical modeling, estimation, fit assessment, contingency planning, and
  robustness checks--ensuring confirmatory analyses remain unbiased and
  reproducible.
keywords:
  - structural equation modeling
  - preregistration
  - open science
  - research reproducibility
  - confirmatory analysis
classes: wide
---

Structural Equation Modeling (SEM) is a powerful analytical tool, capable of modeling complex latent structures and causal relationships between variables. From psychology to marketing, SEM is used in diverse fields to test theoretical models with observed data. Yet the same flexibility that makes SEM attractive also opens the door to excessive researcher degrees of freedom. Without constraints, analysts can tweak specifications post hoc--knowingly or unknowingly--to produce more favorable results.

Preregistration addresses this issue by setting the analysis plan in stone _before_ the data are seen. In the context of frequentist statistics, this step is crucial: data-contingent modeling decisions can invalidate p-values and confidence intervals by inflating the risk of false positives. By formally documenting decisions in advance, researchers commit to a confirmatory route while still retaining space for transparent exploratory analysis.

This article outlines the major decision domains to address in a preregistered SEM study. Each domain includes actionable recommendations to make your research more reproducible and credible.

--------------------------------------------------------------------------------

# 1. Locking the Software Environment

A reproducible analysis begins with a stable computational environment. SEM results are sensitive not only to the software used but also to subtle changes across versions of packages, operating systems, and numerical libraries.

Specify the software and exact version you will use--such as `lavaan 0.6-18` in `R 4.4.1`, `Mplus 8.10`, or `semopy 2.x` in Python. Don't stop at the modeling package; include the operating system, any hardware dependencies, and the versions of supporting math libraries (e.g., BLAS, OpenBLAS, MKL) that may affect floating-point operations.

Use tools like `renv::snapshot()` in R, a `requirements.txt` in Python, or `conda` environments to freeze dependencies. For maximal reproducibility, build a Docker container and share the `Dockerfile` or image.

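For example, a minimal sketch of this workflow in R, assuming an `renv`-based project:

```r
# One-time project setup: create a project-local library and lockfile.
install.packages("renv")
renv::init()

# Install the modeling stack into the project library, then freeze the
# exact versions into renv.lock (commit this file with the preregistration).
install.packages("lavaan")
renv::snapshot()

# Record the full computational context (R version, OS, BLAS/LAPACK)
# so it can be quoted verbatim in the preregistration document.
writeLines(capture.output(sessionInfo()), "session_info.txt")
```

Collaborators (or future-you) can then run `renv::restore()` to rebuild the identical package library from the lockfile.
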
Also include the exact script or notebook you intend to run during analysis. This provides a literal representation of your analysis plan--helpful not only for others but also for future-you.

# 2. Defining the Scientific and Structural Model

Before the data ever enter the picture, you need a clear theoretical model. This involves both conceptual and graphical representations of the expected relationships.

Develop a complete path diagram showing the hypothesized relationships between latent constructs and observed variables. Each latent variable should have a definition rooted in prior literature, and every item should be justified in terms of what it captures. This process helps clarify construct validity and prevents the arbitrary inclusion of items during analysis.

Declare whether your model is directional (i.e., contains causal paths), and specify which variables are exogenous. For studies involving group comparisons or longitudinal designs, indicate your plans for assessing measurement invariance.

To preserve confirmatory integrity, state clearly that no additional paths will be added except in a predefined exploratory phase. If modifications are planned, make them conditional on specified thresholds or theoretical rationales.

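One way to make this concrete is to preregister the path diagram as literal model syntax. A minimal lavaan sketch, using purely hypothetical constructs and indicator names:

```r
library(lavaan)

# Measurement model: each latent construct with its preregistered
# indicators. Structural model: the hypothesized directional paths.
# 'support' is the only exogenous latent variable in this illustration.
model <- '
  # measurement model
  support    =~ sup1 + sup2 + sup3
  engagement =~ eng1 + eng2 + eng3
  wellbeing  =~ wb1 + wb2 + wb3

  # structural paths (directional)
  engagement ~ support
  wellbeing  ~ engagement + support
'
```
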
# 3. Operationalizing the Statistical Model

The theoretical structure must be translated into a formal statistical model. This includes selecting the appropriate SEM framework, specifying assumptions, and settling practical modeling choices.

Indicate the model type: Confirmatory Factor Analysis (CFA), full SEM, latent growth curve models, MIMIC models, multilevel SEM, or network SEM. Each requires a different identification strategy and introduces different assumptions.

Define your assumptions about variable distributions. For instance, will ordinal items be treated as continuous, or will you use polychoric correlations? Will you allow for non-normality or heavy-tailed distributions?

Declare how residuals are treated--are any error covariances theory-justified? Describe the strategy for missing data, whether Full Information Maximum Likelihood (FIML), multiple imputation, or listwise deletion.

Also fix your identification strategy: marker-variable scaling (first loading fixed to 1) or unit-variance scaling (latent variance fixed to 1). Changing these decisions post hoc can affect parameter estimates, so preregistration helps avoid retrofitting the model to the data.

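These decisions translate directly into estimation options, so they can be preregistered as code. A sketch of the continuous-items/FIML route in lavaan, continuing the hypothetical `model` from above (`mydata` is an assumed data frame):

```r
# Preregistered choices made explicit: robust ML estimation, FIML for
# missing data, and unit-variance scaling (std.lv = TRUE) rather than
# marker-variable scaling.
fit <- sem(
  model,
  data      = mydata,
  estimator = "MLR",    # robust ML (see Section 4)
  missing   = "fiml",   # full information maximum likelihood
  std.lv    = TRUE      # latent variances fixed to 1
)
```
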
# 4. Estimation Methods and Robustness Considerations

Choosing an estimator isn't just a technical detail--it affects parameter accuracy, standard errors, and fit indices. Preregister your primary estimation method, such as Maximum Likelihood (ML), Robust ML (MLR), Diagonally Weighted Least Squares (DWLS/WLSMV), Unweighted Least Squares (ULS), or Bayesian estimation.

If you anticipate potential violations of assumptions, specify robustness corrections ahead of time, naming the exact scaled test statistic--for instance, the Satorra-Bentler scaled chi-square (MLM in lavaan) or the Yuan-Bentler correction that accompanies MLR. Document the maximum number of iterations, the convergence threshold, and the planned behavior in case of non-convergence.

For studies involving bootstrapping, state how many samples will be used (e.g., 5,000 BCa resamples), which statistics will be bootstrapped, and how the results will be interpreted.

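As an illustration, the bootstrap plan can be written into the fitting and reporting code in advance, continuing the hypothetical objects above:

```r
# Preregistered bootstrap: 5,000 resamples for standard errors and CIs.
fit_boot <- sem(model, data = mydata, se = "bootstrap", bootstrap = 5000)

# Bias-corrected intervals for all free parameters. Note that lavaan's
# "bca.simple" type is bias-corrected but not accelerated.
parameterEstimates(fit_boot, boot.ci.type = "bca.simple", level = 0.95)
```
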
Robustness checks should be planned--not reactive. They belong in a separate sensitivity analysis tier rather than serving as an opportunistic fix after the primary analyses fail.

# 5. Measurement Model Decisions and Fit Criteria

A major temptation in SEM is to "tune" the model post hoc to improve fit. Preregistration prevents this by locking in the criteria by which model fit will be judged.

Declare your primary fit indices: CFI, TLI, RMSEA, and SRMR are common choices. Specify thresholds for both good and acceptable fit (e.g., CFI > 0.95 for good fit, CFI > 0.90 for acceptable).

State whether any correlated errors or cross-loadings are allowed on theoretical grounds. Describe how (or whether) modification indices (MIs) will be consulted. If MIs are to be used, define a strict rule--for example, both MI > 10 _and_ a theoretical justification must be met.

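A sketch of how the preregistered fit thresholds and MI rule can be checked mechanically in lavaan, using the hypothetical `fit` object from above:

```r
# Compare observed fit against the preregistered thresholds.
fitMeasures(fit, c("cfi", "tli", "rmsea", "srmr"))

# List only modifications clearing the preregistered MI > 10 bar; each
# surviving suggestion still needs a theoretical justification.
modindices(fit, sort. = TRUE, minimum.value = 10)
```
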
A best practice is a two-tiered strategy: analyze the confirmatory model exactly as preregistered, and if fit is poor, conduct a clearly labeled exploratory refinement. Keep the confirmatory and exploratory results separate in both interpretation and reporting.

--------------------------------------------------------------------------------

# 6. Predefined Backup Plans and Contingency Responses

Even well-specified models can fail. Convergence problems, inadmissible solutions, and severe misfit are not uncommon in SEM. Rather than improvising fixes, define contingency plans in advance to preserve the integrity of your confirmatory claims.

Start by specifying a tiered approach to non-convergence. This might involve modifying starting values, switching optimization algorithms (e.g., from `nlminb` to `BFGS` in `lavaan`), or simplifying the model by removing problematic latent variables or paths.

Plan for the appearance of **Heywood cases**, such as negative variance estimates or standardized loadings exceeding 1. You might specify that if a Heywood case is minor (e.g., a standardized loading of 1.01), you will retain the solution, but that beyond a predefined threshold a reduced model will be estimated instead.

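Such admissibility checks can themselves be scripted so they are applied mechanically rather than eyeballed. A lavaan sketch for flagging negative variance estimates, again using the hypothetical `fit`:

```r
# Flag Heywood cases: variance parameters (op == "~~" with lhs == rhs)
# whose estimates are negative trigger the preregistered contingency tier.
est     <- parameterEstimates(fit)
heywood <- est[est$op == "~~" & est$lhs == est$rhs & est$est < 0, ]
if (nrow(heywood) > 0) print(heywood)
```
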
If model fit is poor by your preregistered criteria, define whether and how you will proceed. For example, you could specify that if RMSEA > 0.10, you will estimate a simplified version of the model that excludes certain weakly identified paths, provided the revision is consistent with theoretical expectations.

Contingencies can also cover assumption violations, such as non-normality or outliers. If these are detected using predefined diagnostics (e.g., Mardia's test, Q-Q plots), you may switch to robust estimation or a data transformation--again, only if such actions were laid out in the preregistration.

Finally, use a decision table mapping common problems to specific, predefined responses, as in the example below. This reduces the need for subjective choices once the data are visible.

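One illustration of such a table, assembled from the contingencies above (thresholds and responses are examples, not recommendations):

| Problem detected | Predefined response |
| --- | --- |
| Non-convergence with `nlminb` | Retry with `BFGS`; if still failing, fit the predefined reduced model |
| Heywood case (negative residual variance) | Retain if minor; otherwise fit the predefined reduced model |
| RMSEA > 0.10 | Fit the preregistered simplified model |
| Significant Mardia's test | Switch to robust (MLR) estimation |
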
# 7. Multiverse and Sensitivity Analyses

To demonstrate that your results are not fragile, preregister a **multiverse analysis**--a systematic variation of defensible analytical decisions. This goes beyond ad hoc robustness checks by explicitly modeling the uncertainty introduced by researcher degrees of freedom.

List all plausible alternatives in areas such as:

- Missing data strategy: FIML vs. multiple imputation
- Treatment of ordinal items: as continuous vs. polychoric-based CFA
- Scaling method: marker variable vs. unit variance
- Grouping strategy: multigroup vs. covariate modeling
- Outlier handling: Winsorizing vs. robust Mahalanobis-distance exclusion

Create a plan to fit all combinations of these decisions and extract a key parameter of interest (e.g., the path coefficient from A → B). Then visualize its distribution across specifications with a **specification curve**, as sketched below.

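A compact sketch of such a plan in lavaan: enumerate the decision grid, fit each specification, and collect the focal coefficient. The grid below uses a subset of the alternatives listed above, and the names are the hypothetical ones from Section 2:

```r
# Decision grid: every combination of the preregistered alternatives.
specs <- expand.grid(
  missing   = c("fiml", "listwise"),
  std.lv    = c(TRUE, FALSE),
  estimator = c("ML", "MLR"),
  stringsAsFactors = FALSE
)

# Fit each specification and extract the focal path coefficient.
specs$coef <- NA_real_
for (i in seq_len(nrow(specs))) {
  f  <- sem(model, data = mydata,
            missing   = specs$missing[i],
            std.lv    = specs$std.lv[i],
            estimator = specs$estimator[i])
  pe <- parameterEstimates(f)
  specs$coef[i] <- pe$est[pe$lhs == "engagement" &
                          pe$op  == "~" &
                          pe$rhs == "support"]
}

# Ordering by the coefficient gives the raw material for a
# specification curve plot.
specs[order(specs$coef), ]
```
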
Include robustness checks such as a **leave-one-indicator-out** analysis, in which the measurement model is re-estimated repeatedly while omitting one indicator at a time, to test for over-reliance on specific items.

The goal is not to eliminate all variation but to demonstrate that your conclusions hold across the reasonable decision space. Automate this process before analyzing real data, using scripts or workflows that can be rerun unchanged.

# 8. Additional Preregistration Components

A high-quality preregistration does more than lock analytic decisions--it anticipates every aspect of confirmatory research.

Specify your **sample size** and the method used to determine it. This may involve a Monte Carlo simulation to assess statistical power for detecting your hypothesized effects under specific assumptions about model structure and measurement quality.

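One hedged sketch of such a simulation using lavaan's `simulateData()`, with purely illustrative population values for the loadings and the focal path:

```r
# Population model with assumed parameter values (illustrative only).
pop_model <- '
  support    =~ 0.7*sup1 + 0.7*sup2 + 0.7*sup3
  engagement =~ 0.7*eng1 + 0.7*eng2 + 0.7*eng3
  engagement ~ 0.3*support
'
fit_model <- '
  support    =~ sup1 + sup2 + sup3
  engagement =~ eng1 + eng2 + eng3
  engagement ~ support
'

# Simulate, refit, and count how often the focal path is significant.
set.seed(1234)
power <- mean(replicate(500, {
  d  <- simulateData(pop_model, sample.nobs = 300)
  pe <- parameterEstimates(sem(fit_model, data = d))
  pe$pvalue[pe$lhs == "engagement" & pe$op == "~" &
            pe$rhs == "support"] < 0.05
}))
power  # estimated power at N = 300
```
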
State your **primary outcomes** and hypotheses clearly. A good rule of thumb is one primary effect per hypothesis. Secondary effects should be labeled exploratory unless they are also preregistered with equal rigor.

If testing multiple effects, plan for **multiplicity correction**. This could involve controlling the false discovery rate (FDR), using Bonferroni or Holm corrections, or adopting a Bayesian approach with posterior probabilities.

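For the frequentist corrections, base R covers this directly. A small sketch with illustrative p-values for a hypothetical family of path tests:

```r
# Preregistered family of primary path tests (values illustrative).
pvals <- c(path_ab = 0.012, path_bc = 0.034, path_cd = 0.201)

p.adjust(pvals, method = "holm")  # Holm step-down correction
p.adjust(pvals, method = "BH")    # Benjamini-Hochberg FDR control
```
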
Define your **reporting plan**, including how confirmatory results will be separated from exploratory ones. Make clear which figures, tables, and model variations will appear in the final paper.

Finally, consider uploading the preregistration to a public registry, such as [AsPredicted](https://aspredicted.org), [OSF Registries](https://osf.io/registries), or a journal-specific format if submitting under the Registered Reports model.

# 9. Final Thoughts on Transparency and Rigor

Preregistration does not limit scientific creativity--it clarifies it. By defining your confirmatory analysis plan in advance, you create a clean boundary between hypothesis testing and hypothesis generation. Readers can trust that the results you present as confirmatory were not achieved through post hoc modifications.

A robust SEM preregistration spans more than model syntax. It covers your computational setup, theoretical justifications, modeling assumptions, contingency plans, and robustness checks. It acknowledges the complexity of SEM and uses structure to prevent that complexity from becoming a liability.

Think of your preregistration as a recipe: if another researcher followed it precisely, without ever speaking to you, they should arrive at the same confirmatory results. When that happens, science advances--not just with findings, but with trust.

--------------------------------------------------------------------------------

# Resources and Templates

- [Preregistration Template for SEM Studies (OSF)](https://osf.io/registries)
- [lavaan Model Syntax Documentation](https://lavaan.ugent.be/tutorial/index.html)
- [Mplus User's Guide](https://www.statmodel.com/download/usersguide/MplusUserGuideVer_8.pdf)
- [semopy Documentation](https://semopy.com/)
- [Specifying and Visualizing Specification Curves](https://journals.sagepub.com/doi/10.1177/2515245919864955)