---
title : "ML4 Supplemental Results in rMarkdown"
shorttitle : "ML4 Results"
author:
  - name          : "Richard A. Klein"
    affiliation   : "1"
    corresponding : yes    # Define only one corresponding author
    address       : ""
    email         : "[email protected]"
affiliation:
  - id            : "1"
    institution   : "Université Grenoble Alpes"
authornote: |
  This script generates supplemental analyses not reported in the main ML4 manuscript. To knit this document you must install the papaja package from GitHub.
abstract: |
keywords : "Terror Management Theory, mortality salience, replication, many labs"
wordcount : "X"
bibliography : ["r-references.bib"]
floatsintext : no
figurelist : no
tablelist : no
footnotelist : yes
linenumbers : no
mask : no
draft : no
documentclass : "apa6"
classoption : "man"
output : papaja::apa6_word
---
```{r setup, include = FALSE}
library("papaja")
library("metafor")
library("metaSEM")
library("haven")
library("psych")
library("dplyr")
library("effsize")
library("GPArotation")
library("tidyverse")
```
```{r analysis-preferences}
# Seed for random number generation
set.seed(1)
knitr::opts_chunk$set(cache.extra = knitr::rand_seed)
```
```{r analysis-loaddata, include = FALSE}
# Reading in all necessary data
# Primary data file with replication data aggregated across labs (deidentified should suffice)
# Note: This file is the merged data provided by sites, produced by 001_data_cleaning.R
merged <- readRDS("./data/public/merged_deidentified_subset.rds")
load("Results.RData")
load("supplementary_results.RData")
# Notes: Intercept1 is still the grand mean estimate,
# Slope1_1 represents the effect of expertise
# Tau2_1_1 is the heterogeneity estimate
# for reporting results from object of class "meta" in tidy fashion in-line
report_meta <- function(object, term, label = "*Hedges' g*") {
  # grab coefficients table
  res.df <- summary(object)$coefficients
  # fetch model stats
  g <- res.df[term, "Estimate"]
  g.lbound <- res.df[term, "lbound"]
  g.ubound <- res.df[term, "ubound"]
  se <- res.df[term, "Std.Error"]
  z <- res.df[term, "z value"]
  p <- res.df[term, "Pr(>|z|)"]
  # paste together into report
  paste0(label, " = ", printnum(g),
         ", 95% CI = [", printnum(g.lbound), ", ", printnum(g.ubound),
         "], *SE* = ", printnum(se),
         ", *Z* = ", printnum(z),
         ", *p* = ", printp(p))
}
report_q <- function(object) {
  # grab Q.stat list
  res.df <- summary(object)$Q.stat
  # fetch the numbers
  paste0("*Q*(", res.df$Q.df,
         ") = ", printnum(res.df$Q),
         ", *p* = ", printp(res.df$pval))
}
```
# Supplemental Results
This document presents supplementary analyses that consider ratings of the pro-US and anti-US authors as two separate DVs, rather than as a single difference score DV as in the main text.
The preregistered three-level meta-analyses frequently failed to converge because most sites ran only one study. In a divergence from the preregistration, we fit random-effects meta-analyses instead.
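To make this divergence concrete, the sketch below contrasts the two model specifications. It is illustrative only; the data frame and column names (`dataset`, `es`, `var`, `site`) are placeholders rather than the names used in the analysis scripts.

```{r model-choice-sketch, eval = FALSE}
# Hypothetical sketch, not the analysis code. `dataset` stands in for a
# per-site data frame with effect sizes (es), sampling variances (var),
# and a site identifier (site).
# Preregistered: three-level model with effect sizes nested within sites.
three_level <- metaSEM::meta3(y = es, v = var, cluster = site, data = dataset)
# Used here instead: two-level random-effects model, because most sites
# contributed only one effect size and the three-level model often failed to converge.
random_fx <- metaSEM::meta(y = es, v = var, data = dataset)
```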
```{r analysis-alpha-exclusions, include = FALSE}
alpha_anti <- psych::alpha(select(merged, antius3, antius4, antius5))
alpha_pro <- psych::alpha(select(merged, prous3, prous4, prous5))
```
## Pro-US Author Ratings and Anti-US Author Ratings separately
The primary finding of interest from Greenberg et al. (1994) was that participants who underwent the mortality salience treatment showed greater preference for the pro-US essay author compared to the anti-US essay author. In the main article, we averaged ratings of each author separately and then subtracted anti-US scores from pro-US scores to form a single difference score. In this section we instead treat the two ratings as separate DVs and repeat the primary analyses reported in the manuscript.
### PRO-US RATINGS ONLY:
#### Research Question 1: Meta-analytic results across all labs
A random-effects meta-analysis was conducted using the metaSEM package (Cheung, 2014) in R.[^1] This analysis produces the grand mean effect size across all sites and versions.
[^1]: Sample code to run this analysis is: `meta(y = es, v = var, data = dataset)`. In this sample code, `y = es` directs the program to the column of effect sizes, and `v = var` indicates the variable to be used as the sampling variance for each effect size.
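As a concrete illustration of this step (a minimal sketch using the placeholder data frame `dataset`, not the analysis code itself), the call in the footnote produces a fitted `meta` object whose coefficient table is what the `report_meta()` and `report_q()` helpers defined above format for in-line reporting:

```{r re-meta-sketch, eval = FALSE}
# Hypothetical sketch: fit the random-effects model and report it with the
# helper functions defined in the setup chunks above.
fit <- metaSEM::meta(y = es, v = var, data = dataset)
summary(fit)$coefficients       # rows include Intercept1 (grand mean) and Tau2_1_1 (heterogeneity)
report_meta(fit, "Intercept1")  # formatted Hedges' g with 95% CI, SE, Z, and p
report_q(fit)                   # Q statistic for heterogeneity
```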
First we examine if there was an overall effect of the mortality salience manipulation on ratings of the pro-US author. Regardless of which exclusion criteria were used, we did not observe an effect:
Exclusion set 1: `r report_meta(random1_pro, "Intercept1")`.
Exclusion set 2: `r report_meta(random2_pro, "Intercept1")`.
Exclusion set 3: `r report_meta(random3_pro, "Intercept1")`.
We also examined how much variation was observed among effect sizes (i.e., heterogeneity); for example, there may have been a mortality salience effect at some sites and not others. For all exclusion sets, this variability did not exceed what we would expect by chance:
Exclusion set 1: `r report_q(random1_pro)`;
Exclusion set 2: `r report_q(random2_pro)`;
Exclusion set 3: `r report_q(random3_pro)`.
In sum, we observed no evidence for an overall effect of mortality salience on pro-US author ratings. Additionally, we detected no heterogeneity in effect sizes across sites. This lack of variation suggests it is unlikely that we would observe an effect of Author Advised versus In House protocols, or of other moderators such as differences in samples or TMT knowledge. Even so, moderation by Author Advised/In House protocol is examined in the following section.
#### Research Question 2: Moderation by Author Advised/In House protocol
A covariate of protocol type was added to the random-effects model to create a mixed-effects meta-analysis. This was preregistered as our primary analysis.[^2]
[^2]: The argument `x = expert.ctr` is added to the prior metaSEM R code: `meta(y = es, v = var, x = expert.ctr, data = dataset)`.
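A minimal sketch of this moderated model is below, again with placeholder object names; `expert.ctr` is the protocol indicator named in the footnote.

```{r mixed-meta-sketch, eval = FALSE}
# Hypothetical sketch: add protocol type as a moderator to create the
# mixed-effects model, then report the intercept and the moderator slope.
mixed_fit <- metaSEM::meta(y = es, v = var, x = expert.ctr, data = dataset)
report_meta(mixed_fit, "Intercept1")               # grand mean adjusting for protocol
report_meta(mixed_fit, "Slope1_1", label = "*b*")  # moderation by protocol type
```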
This analysis again produces an overall grand mean effect size, and these estimates were again not statistically significant across all three exclusion sets:
Exclusion set 1: `r report_meta(mixed1_pro, "Intercept1")`.
Exclusion set 2: `r report_meta(mixed2_pro, "Intercept1")`.
Exclusion set 3: `r report_meta(mixed3_pro, "Intercept1")`.
In addition, protocol version did not significantly predict replication effect size. Exclusion set 1: `r report_meta(mixed1_pro, "Slope1_1", label = "*b*")`;
Exclusion set 2: `r report_meta(mixed2_pro, "Slope1_1", label = "*b*")`;
Exclusion set 3: `r report_meta(mixed3_pro, "Slope1_1", label = "*b*")`.
We again did not observe heterogeneity between labs in any of the exclusion sets: exclusion set 1: `r report_q(mixed1_pro)`;
exclusion set 2: `r report_q(mixed2_pro)`;
exclusion set 3: `r report_q(mixed3_pro)`.
#### Research Question 3: Effect of Standardization
Finally, we tested whether In House protocols displayed greater variability in effect size than Author Advised protocols. To test this hypothesis, we refit the mixed-effects models with the random-effects variance constrained to zero, creating fixed-effects models. These models were then compared with a chi-squared difference test to assess whether fit significantly changed. In this case, none of the three models significantly decreased in fit:
Exclusion set 1: $\chi^2$(`r fit_comparison_1_pro$diffdf[2]`) = `r printnum(fit_comparison_1_pro$diffLL[2])`, *p* = `r printp(fit_comparison_1_pro$p[2])`;
Exclusion set 2: $\chi^2$(`r fit_comparison_2_pro$diffdf[2]`) = `r printnum(fit_comparison_2_pro$diffLL[2])`, *p* = `r printp(fit_comparison_2_pro$p[2])`;
Exclusion set 3: $\chi^2$(`r fit_comparison_3_pro$diffdf[2]`) = `r printnum(fit_comparison_3_pro$diffLL[2])`, *p* = `r printp(fit_comparison_3_pro$p[2])`. Overall, there was no evidence that In House protocols elicited greater variability than Author Advised protocols.
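A minimal sketch of this comparison, assuming the same placeholder names as above, constrains the random-effects variance to zero via metaSEM's `RE.constraints` argument and compares the nested models with a likelihood-ratio test:

```{r variance-test-sketch, eval = FALSE}
# Hypothetical sketch: refit the moderated model with the random-effects
# variance (Tau2) fixed at 0, then compare it to the mixed-effects model.
fixed_fit <- metaSEM::meta(y = es, v = var, x = expert.ctr,
                           data = dataset, RE.constraints = 0)
comparison <- anova(mixed_fit, fixed_fit)   # chi-squared difference test of the nested models
comparison[2, c("diffLL", "diffdf", "p")]   # test statistic, df, and p value
```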
### ANTI-US RATINGS ONLY:
#### Research Question 1: Meta-analytic results across all labs
A random-effects meta-analysis was conducted using the metaSEM package (Cheung, 2014) in R.[^3] This analysis produces the grand mean effect size across all sites and versions.
[^3]: Sample code to run this analysis is: `meta(y = es, v = var, data = dataset)`. In this sample code, `y = es` directs the program to the column of effect sizes, and `v = var` indicates the variable to be used as the sampling variance for each effect size.
First we examine if there was an overall effect of the mortality salience manipulation on ratings of the anti-US author. There was no significant effect across exclusion sets:
Exclusion set 1: `r report_meta(random1_anti, "Intercept1")`.
Exclusion set 2: `r report_meta(random2_anti, "Intercept1")`.
Exclusion set 3: `r report_meta(random3_anti, "Intercept1")`.
We also examined how much variation was observed among effect sizes (i.e., heterogeneity); for example, there may have been a mortality salience effect at some sites and not others. We generally observed little evidence for heterogeneity:
exclusion set 1: `r report_q(random1_anti)`;
exclusion set 2: `r report_q(random2_anti)`;
exclusion set 3: `r report_q(random3_anti)`.
In sum, we observed no evidence for an overall effect of mortality salience on anti-US author ratings. Additionally, overall results suggest that there was minimal or no heterogeneity in effect sizes across sites. This lack of variation suggests it is unlikely that we would observe an effect of Author Advised versus In House protocols, or of other moderators such as differences in samples or TMT knowledge. Even so, moderation by Author Advised/In House protocol is examined in the following section.
#### Research Question 2: Moderation by Author Advised/In House protocol
A covariate of protocol type was added to the random-effects model to create a mixed-effects meta-analysis. This was preregistered as our primary analysis.[^4]
[^4]: The argument `x = version` is added to the prior metaSEM R code: `meta(y = es, v = var, x = version, data = dataset)`.
This analysis again produces an overall grand mean effect size, and these estimates were again not statistically significant across all three exclusion sets:
Exclusion set 1: `r report_meta(mixed1_anti, "Intercept1")`.
Exclusion set 2: `r report_meta(mixed2_anti, "Intercept1")`.
Exclusion set 3: `r report_meta(mixed3_anti, "Intercept1")`.
In addition, protocol version did not significantly predict replication effect size. Exclusion set 1: `r report_meta(mixed1_anti, "Slope1_1", "*b*")`;
exclusion set 2: `r report_meta(mixed2_anti, "Slope1_1", "*b*")`;
exclusion set 3: `r report_meta(mixed3_anti, "Slope1_1", "*b*")`.
We again did not observe heterogeneity between labs in any of the exclusion sets:
exclusion set 1: `r report_q(mixed1_anti)`;
exclusion set 2: `r report_q(mixed2_anti)`;
exclusion set 3: `r report_q(mixed3_anti)`.
#### Research Question 3: Effect of Standardization
Finally, we tested whether In House protocols displayed greater variability in effect size than Author Advised protocols. To test this hypothesis, we refit the mixed-effects models with the random-effects variance constrained to zero, creating fixed-effects models. These models were then compared with a chi-squared difference test to assess whether fit significantly changed. In this case, none of the three models significantly decreased in fit:
Exclusion set 1: $\chi^2$(`r fit_comparison_1_anti$diffdf[2]`) = `r printnum(fit_comparison_1_anti$diffLL[2])`, *p* = `r printp(fit_comparison_1_anti$p[2])`;
Exclusion set 2: $\chi^2$(`r fit_comparison_2_anti$diffdf[2]`) = `r printnum(fit_comparison_2_anti$diffLL[2])`, *p* = `r printp(fit_comparison_2_anti$p[2])`;
Exclusion set 3: $\chi^2$(`r fit_comparison_3_anti$diffdf[2]`) = `r printnum(fit_comparison_3_anti$diffLL[2])`, *p* = `r printp(fit_comparison_3_anti$p[2])`. Overall, there was no evidence that In House protocols elicited greater variability than Author Advised protocols.
\begingroup
\setlength{\parindent}{-0.5in}
\setlength{\leftskip}{0.5in}
<div id = "refs"></div>
\endgroup