# Meta-analysis {#sec-meta}
```{r, include = FALSE}
# needed to make the chapter (not visible)
library(patchwork)
library(MASS)
library(MOTE)
library(metafor)
# for students
library(ggplot2)
# add description tau in heterogeneity section
```
Every single study is just a data-point in a future meta-analysis. If you draw small samples from a population, the mean and standard deviation in the sample can differ considerably from the mean and standard deviation in the population. There is great variability in small samples. Parameter estimates from small samples are very imprecise, and therefore the 95% confidence intervals around effect sizes are very wide. Indeed, this led Cohen [-@cohen_earth_1994] to write “I suspect that the main reason [confidence intervals] are not reported is that they are so embarrassingly large!” If we want a more precise estimate of our parameter of interest, such as the mean difference or correlation in the population, we need to either run extremely large single studies, or combine data from several studies by performing a meta-analysis. The most common approach to combining studies is to perform a meta-analysis of effect size estimates.
You can perform a meta-analysis for a set of studies in a single article you plan to publish (often called an internal meta-analysis), or you can search the literature for multiple studies reported in as many different articles as possible, and perform a meta-analysis on all studies others have published. An excellent introduction to meta-analyses is provided in the book by @borenstein_introduction_2009. There is commercial software you can use to perform meta-analyses, but I highly recommend against using such software. Almost all commercial software packages lack transparency, and do not allow you to share your analysis code and data with other researchers. In this chapter, we will be using R to perform a meta-analysis of effect sizes, using the `metafor` package by @viechtbauer_conducting_2010. An important benefit of using `metafor` is that your meta-analysis can be made completely reproducible. If you plan to perform a narrative review, it takes relatively little additional effort to also code the effect sizes and sample sizes and perform an effect size meta-analysis, or to code the statistical tests and *p*-values and perform a *p*-curve or *z*-curve analysis (which will be discussed in the next chapter on [bias detection](#sec-bias)).
## Random Variation
People find it difficult to think about random variation. Our mind is more strongly geared towards recognizing patterns than randomness. In this section, the goal is to learn what random variation looks like, and how the number of observations collected determines the amount of variation.
Intelligence tests have been designed such that the mean Intelligence Quotient of the entire population of adults is 100, with a standard deviation of 15. This will not be true for every sample we draw from the population. Let’s get a feel for what the IQ scores from a sample look like. Which IQ scores will people in our sample have?
We will start by manually calculating the mean and standard deviation of a random sample of 10 individuals. Their IQ scores are: 91.15, 86.52, 75.64, 115.72, 95.83, 105.44, 87.10, 100.81, 82.63, and 106.22. If we sum these 10 scores and divide them by 10, we get the mean of our sample: 94.71. We can also calculate the standard deviation from our sample. First, we subtract the overall mean (94.71) from each individual IQ score. Then, we square these differences and sum them (giving 1374.79). We divide this sum of squared differences by the sample size minus 1 (10-1=9), and finally take the square root of this value, which gives the standard deviation: 12.36. The short sketch that follows checks this by-hand calculation. After that, copy the plotting code below, remove the `set.seed(3190)` line (which makes the code reproducible but creates the same data as in the plot below each time), and run it to randomly simulate 10 IQ scores and plot them.
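A minimal sketch checking the by-hand calculation, using the ten IQ scores as printed above (because the listed scores are rounded to two decimals, the sum of squares can differ very slightly from the value in the text):

```{r}
# By-hand mean and standard deviation for the ten IQ scores from the text
iq <- c(91.15, 86.52, 75.64, 115.72, 95.83, 105.44, 87.10, 100.81, 82.63, 106.22)
m <- sum(iq) / length(iq)    # mean: 94.71
ss <- sum((iq - m)^2)        # sum of squared differences
sqrt(ss / (length(iq) - 1))  # standard deviation: 12.36
c(mean(iq), sd(iq))          # the built-in functions give the same values
```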
```{r fig-plot-hist-iq-1, echo = TRUE}
#| fig-cap: "Simulation of 10 random datapoints with mean = 100 and sd = 15 in the population."
library(ggplot2)
set.seed(3190) # set seed for reproducibility
n <- 10 # set sample size
x <- rnorm(n = n, mean = 100, sd = 15) # simulate data
# plot data adding normal distribution and annotations
ggplot(as.data.frame(x), aes(x)) +
geom_histogram(colour = "black", fill = "grey", aes(y = after_stat(density)), binwidth = 2) +
stat_function(fun = dnorm, args = c(mean = 100, sd = 15), linewidth = 1, color = "red", lty = 2) +
xlab("IQ") +
ylab("number of people") +
theme_bw(base_size = 20) +
geom_vline(xintercept = mean(x), colour = "gray20", linetype = "dashed") +
coord_cartesian(xlim = c(50, 150)) +
scale_x_continuous(breaks = seq(50, 150, 10)) +
annotate("text", x = mean(x), y = 0.02, label = paste("Mean = ", round(mean(x)), "\n", "SD = ", round(sd(x)), sep = ""), size = 8) +
theme(plot.background = element_rect(fill = "#fffafa")) +
theme(panel.background = element_rect(fill = "#fffafa"))
```
The plot above provides one example of a randomly simulated dataset of 10 points drawn from a normal distribution with a mean of 100 and a standard deviation of 15. The grey bars indicate the frequency with which each IQ score was observed. The red dotted line illustrates the normal distribution based on the mean and standard deviation of the population. Both the observed mean (97; thin vertical dashed line) and the observed standard deviation (14) differ from the true population values. If we simulate 4 additional datasets, we see that both the mean and the standard deviation vary.
```{r sim-4-iq, echo = FALSE}
set.seed(3190)
n <- 10 # set sample size
x <- rnorm(n = n, mean = 100, sd = 15) # create sample from normal distribution
# plot data adding normal distribution and annotations
p1 <- ggplot(as.data.frame(x), aes(x)) +
geom_histogram(colour = "black", fill = "grey", aes(y = after_stat(density)), binwidth = 2) +
stat_function(fun = dnorm, args = c(mean = 100, sd = 15), linewidth = 1, color = "red", lty = 2) +
xlab("IQ") +
ylab("number of people") +
theme_bw(base_size = 12) +
geom_vline(xintercept = mean(x), colour = "gray20", linetype = "dashed") +
coord_cartesian(xlim = c(50, 150)) +
scale_x_continuous(breaks = seq(50, 150, 10)) +
annotate("text", x = mean(x), y = 0.02, label = paste("Mean = ", round(mean(x)), "\n", "SD = ", round(sd(x)), sep = ""), size = 5) +
theme(plot.background = element_rect(fill = backgroundcolor)) +
theme(panel.background = element_rect(fill = backgroundcolor))
n <- 10 # set sample size
x <- rnorm(n = n, mean = 100, sd = 15) # create sample from normal distribution
# plot data adding normal distribution and annotations
p2 <- ggplot(as.data.frame(x), aes(x)) +
geom_histogram(colour = "black", fill = "grey", aes(y = after_stat(density)), binwidth = 2) +
stat_function(fun = dnorm, args = c(mean = 100, sd = 15), linewidth = 1, color = "red", lty = 2) +
xlab("IQ") +
ylab("number of people") +
theme_bw(base_size = 12) +
geom_vline(xintercept = mean(x), colour = "gray20", linetype = "dashed") +
coord_cartesian(xlim = c(50, 150)) +
scale_x_continuous(breaks = seq(50, 150, 10)) +
annotate("text", x = mean(x), y = 0.02, label = paste("Mean = ", round(mean(x)), "\n", "SD = ", round(sd(x)), sep = ""), size = 5) +
theme(plot.background = element_rect(fill = backgroundcolor)) +
theme(panel.background = element_rect(fill = backgroundcolor))
n <- 10 # set sample size
x <- rnorm(n = n, mean = 100, sd = 15) # create sample from normal distribution
# plot data adding normal distribution and annotations
p3 <- ggplot(as.data.frame(x), aes(x)) +
geom_histogram(colour = "black", fill = "grey", aes(y = after_stat(density)), binwidth = 2) +
stat_function(fun = dnorm, args = c(mean = 100, sd = 15), linewidth = 1, color = "red", lty = 2) +
xlab("IQ") +
ylab("number of people") +
theme_bw(base_size = 12) +
geom_vline(xintercept = mean(x), colour = "gray20", linetype = "dashed") +
coord_cartesian(xlim = c(50, 150)) +
scale_x_continuous(breaks = seq(50, 150, 10)) +
annotate("text", x = mean(x), y = 0.02, label = paste("Mean = ", round(mean(x)), "\n", "SD = ", round(sd(x)), sep = ""), size = 5) +
theme(plot.background = element_rect(fill = backgroundcolor)) +
theme(panel.background = element_rect(fill = backgroundcolor))
n <- 10 # set sample size
x <- rnorm(n = n, mean = 100, sd = 15) # create sample from normal distribution
# plot data adding normal distribution and annotations
p4 <- ggplot(as.data.frame(x), aes(x)) +
geom_histogram(colour = "black", fill = "grey", aes(y = after_stat(density)), binwidth = 2) +
stat_function(fun = dnorm, args = c(mean = 100, sd = 15), linewidth = 1, color = "red", lty = 2) +
xlab("IQ") +
ylab("number of people") +
theme_bw(base_size = 12) +
geom_vline(xintercept = mean(x), colour = "gray20", linetype = "dashed") +
coord_cartesian(xlim = c(50, 150)) +
scale_x_continuous(breaks = seq(50, 150, 10)) +
annotate("text", x = mean(x), y = 0.02, label = paste("Mean = ", round(mean(x)), "\n", "SD = ", round(sd(x)), sep = ""), size = 5) +
theme(plot.background = element_rect(fill = backgroundcolor)) +
theme(panel.background = element_rect(fill = backgroundcolor))
# Use patchwork to combine and plot only 1 legend without title.
combined <- p1 + p2 + p3 + p4 & theme(
legend.position = "bottom",
legend.title = element_blank()
)
combined + plot_layout(guides = "collect")
```
Imagine we did not yet know what the mean IQ was in our population (where *M* = 100), or the standard deviation (where *SD* = 15), and that we only had access to one dataset. Our estimate might be rather far off. This type of variation is to be expected in small samples of 10 participants, given the true standard deviation. The variability in the mean is determined by the standard deviation of the measurement. In real life, the standard deviation can be reduced by, for example, using multiple reliable measurements (which is why an IQ test does not have just one question, but many different questions). But we can also make sure our sample mean is closer to the population mean by increasing the sample size.
A new simulated sample with 100 participants is plotted below. We are slowly seeing what is known as the **normal distribution** (the frequencies start to resemble the red dotted line illustrating the normal distribution of the population). This is the well-known bell-shaped curve that represents the distribution of many variables in scientific research (although some other types of distributions are quite common as well). The mean and standard deviation are much closer to the true mean and standard deviation, and this is true for most of the simulated samples if you set `n <- 100` in the code above and run additional simulations.
```{r fig-plot-hist-iq-2, fig.margin = FALSE, echo = FALSE}
#| fig-cap: "100 random datapoints with mean = 100 and sd = 15 in the population."
set.seed(519)
n <- 100 # set sample size
x <- rnorm(n = n, mean = 100, sd = 15) # create sample from normal distribution
# plot data adding normal distribution and annotations
ggplot(as.data.frame(x), aes(x)) +
geom_histogram(colour = "black", fill = "grey", aes(y = after_stat(density)), binwidth = 2) +
stat_function(fun = dnorm, args = c(mean = 100, sd = 15), linewidth = 1, color = "red", lty = 2) +
xlab("IQ") +
ylab("number of people") +
theme_bw(base_size = 20) +
theme(plot.background = element_rect(fill = backgroundcolor)) +
theme(panel.background = element_rect(fill = backgroundcolor)) +
geom_vline(xintercept = mean(x), colour = "gray20", linetype = "dashed") +
coord_cartesian(xlim = c(50, 150)) +
scale_x_continuous(breaks = seq(50, 150, 10)) +
annotate("text", x = mean(x), y = 0.02, label = paste("Mean = ", round(mean(x)), "\n", "SD = ", round(sd(x)), sep = ""), size = 8)
```
If we simulate a really large sample of 1000 observations, we will see the benefits of collecting a large sample size in terms of the accuracy of the estimates. Not every simulated study of 1000 people will yield exactly the true mean and standard deviation, but it will happen quite often. Note that, although the distribution is very close to a normal distribution, even with 1000 people it is not perfect.
```{r fig-plot-hist-iq-3, fig.margin = FALSE, echo = FALSE}
#| fig-cap: "1000 random datapoints with mean = 100 and sd = 15 in the population."
set.seed(618)
n <- 1000 # set sample size
x <- rnorm(n = n, mean = 100, sd = 15) # create sample from normal distribution
# plot data adding normal distribution and annotations
ggplot(as.data.frame(x), aes(x)) +
geom_histogram(colour = "black", fill = "grey", aes(y = after_stat(density)), binwidth = 2) +
stat_function(fun = dnorm, args = c(mean = 100, sd = 15), linewidth = 1, color = "red", lty = 2) +
xlab("IQ") +
ylab("number of people") +
theme_bw(base_size = 20) +
theme(plot.background = element_rect(fill = backgroundcolor)) +
theme(panel.background = element_rect(fill = backgroundcolor)) +
geom_vline(xintercept = mean(x), colour = "gray20", linetype = "dashed") +
coord_cartesian(xlim = c(50, 150)) +
scale_x_continuous(breaks = seq(50, 150, 10)) +
annotate("text", x = mean(x), y = 0.02, label = paste("Mean = ", round(mean(x)), "\n", "SD = ", round(sd(x)), sep = ""), size = 8)
```
So far, we have simulated only a single group of observations, but it is also informative to examine the variation we will observe when we compare the means in two independent groups. Assume we have a new IQ training program that will increase people's IQ score by 6 points. People in condition 1 are in the control condition – they do not get IQ training. People in condition 2 get IQ training. Let’s simulate 10 people in each group, assuming mean IQ in the control condition is 100 and in the experimental group is 106 (the SD is still 15 in each group).
```{r fig-plot-group1, warning=FALSE, fig.margin = FALSE, echo = FALSE}
#| fig-cap: "Simulation of 10 observations in two independent groups."
# Set color palette
cbbPalette <- c("#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7")
set.seed(811)
n1 <- 10 # set the sample size in Group 1
n2 <- 10 # set the sample size in Group 2
mx <- 100 # set the mean in Group 1
sdx <- 15 # set the standard deviation in Group 1
my <- 106 # set the mean in Group 2
sdy <- 15 # set the standard deviation in Group 2
# randomly draw data
x <- rnorm(n = n1, mean = mx, sd = sdx)
y <- rnorm(n = n2, mean = my, sd = sdy)
DV <- c(x, y) # combine the two samples into a single variable
IV <- as.factor(c(rep("1", n1), rep("2", n2))) # create the independent variable (1 and 2)
dataset <- as.data.frame(cbind(IV, DV)) # create a dataframe (to make the plot)
# plot graph
ggplot(dataset, aes(DV, fill = as.factor(IV))) +
geom_histogram(alpha = 0.4, binwidth = 2, position = "dodge", colour = "black", aes(y = after_stat(density))) +
scale_fill_manual(values = cbbPalette, name = "Condition") +
stat_function(fun = dnorm, args = c(mean = mx, sd = sdx), linewidth = 1, color = "#E69F00", lty = 2, inherit.aes = FALSE) +
stat_function(fun = dnorm, args = c(mean = my, sd = sdy), linewidth = 1, color = "#56B4E9", lty = 2, inherit.aes = FALSE) +
xlab("IQ") +
ylab("number of people") +
ggtitle("Data") +
theme_bw(base_size = 20) +
theme(plot.background = element_rect(fill = backgroundcolor)) +
theme(panel.background = element_rect(fill = backgroundcolor)) +
theme(legend.key = element_rect(fill = backgroundcolor)) +
theme(panel.grid.major.x = element_blank(), axis.text.y = element_blank(), panel.grid.minor.x = element_blank()) +
geom_vline(xintercept = mean(x), colour = "black", linetype = "dashed", linewidth = 1) +
geom_vline(xintercept = mean(y), colour = "black", linetype = "dashed", linewidth = 1) +
coord_cartesian(xlim = c(50, 150)) +
scale_x_continuous(breaks = c(50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150)) +
annotate("text", x = 70, y = 0.06, label = paste("Mean X = ", round(mean(x)), "\n", "SD = ", round(sd(x)), sep = "")) +
annotate("text", x = 130, y = 0.06, label = paste("Mean Y = ", round(mean(y)), "\n", "SD = ", round(sd(y)), sep = "")) +
theme(plot.title = element_text(hjust = 0.5))
```
The two groups differ in how close they are to their true means, and as a consequence, the difference between the groups varies as well. Note that this difference between the group means is the main variable of interest in statistical analyses that compare two groups, such as a *t*-test. In this specific simulation we got a quite extreme result, with an observed mean of 96 in one group (when the population mean is 100) and an observed mean of 111 in the other (when the population mean is 106). So in this sample, due to random variation, we would compute an effect size estimate that is quite a bit larger than the true effect size. Let's simulate 4 additional datasets to see the variation.
```{r fig-plot-group2, warning=FALSE, fig.margin = FALSE, echo = FALSE}
#| fig-cap: "Four simulated samples of independent groups."
# Set color palette
cbbPalette <- c("#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7")
set.seed(1420)
n1 <- 10 # set the sample size in Group 1
n2 <- 10 # set the sample size in Group 2
mx <- 100 # set the mean in Group 1
sdx <- 15 # set the standard deviation in Group 1
my <- 106 # set the mean in Group 2
sdy <- 15 # set the standard deviation in Group 2
# randomly draw data
x <- rnorm(n = n1, mean = mx, sd = sdx)
y <- rnorm(n = n2, mean = my, sd = sdy)
DV <- c(x, y) # combine the two samples into a single variable
IV <- as.factor(c(rep("1", n1), rep("2", n2))) # create the independent variable (1 and 2)
dataset <- as.data.frame(cbind(IV, DV)) # create a dataframe (to make the plot)
# plot graph
p1 <- ggplot(dataset, aes(DV, fill = as.factor(IV))) +
geom_histogram(alpha = 0.4, binwidth = 2, position = "dodge", colour = "black", aes(y = after_stat(density))) +
scale_fill_manual(values = cbbPalette, name = "Condition") +
stat_function(fun = dnorm, args = c(mean = mx, sd = sdx), linewidth = 1, color = "#E69F00", lty = 2, inherit.aes = FALSE) +
stat_function(fun = dnorm, args = c(mean = my, sd = sdy), linewidth = 1, color = "#56B4E9", lty = 2, inherit.aes = FALSE) +
xlab("IQ") +
ylab("number of people") +
ggtitle("Data") +
theme_bw(base_size = 10) +
theme(plot.background = element_rect(fill = backgroundcolor)) +
theme(panel.background = element_rect(fill = backgroundcolor)) +
theme(legend.key = element_rect(fill = backgroundcolor)) +
theme(panel.grid.major.x = element_blank(), axis.text.y = element_blank(), panel.grid.minor.x = element_blank()) +
geom_vline(xintercept = mean(x), colour = "black", linetype = "dashed", linewidth = 1) +
geom_vline(xintercept = mean(y), colour = "black", linetype = "dashed", linewidth = 1) +
coord_cartesian(xlim = c(50, 150)) +
scale_x_continuous(breaks = c(50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150)) +
annotate("text", x = 70, y = 0.06, label = paste("Mean X = ", round(mean(x)), "\n", "SD = ", round(sd(x)), sep = "")) +
annotate("text", x = 130, y = 0.06, label = paste("Mean Y = ", round(mean(y)), "\n", "SD = ", round(sd(y)), sep = "")) +
theme(plot.title = element_text(hjust = 0.5))
# randomly draw data
x <- rnorm(n = n1, mean = mx, sd = sdx)
y <- rnorm(n = n2, mean = my, sd = sdy)
DV <- c(x, y) # combine the two samples into a single variable
IV <- as.factor(c(rep("1", n1), rep("2", n2))) # create the independent variable (1 and 2)
dataset <- as.data.frame(cbind(IV, DV)) # create a dataframe (to make the plot)
# plot graph
p2 <- ggplot(dataset, aes(DV, fill = as.factor(IV))) +
geom_histogram(alpha = 0.4, binwidth = 2, position = "dodge", colour = "black", aes(y = after_stat(density))) +
scale_fill_manual(values = cbbPalette, name = "Condition") +
stat_function(fun = dnorm, args = c(mean = mx, sd = sdx), linewidth = 1, color = "#E69F00", lty = 2, inherit.aes = FALSE) +
stat_function(fun = dnorm, args = c(mean = my, sd = sdy), linewidth = 1, color = "#56B4E9", lty = 2, inherit.aes = FALSE) +
xlab("IQ") +
ylab("number of people") +
ggtitle("Data") +
theme_bw(base_size = 10) +
theme(plot.background = element_rect(fill = backgroundcolor)) +
theme(panel.background = element_rect(fill = backgroundcolor)) +
theme(panel.grid.major.x = element_blank(), axis.text.y = element_blank(), panel.grid.minor.x = element_blank()) +
geom_vline(xintercept = mean(x), colour = "black", linetype = "dashed", linewidth = 1) +
geom_vline(xintercept = mean(y), colour = "black", linetype = "dashed", linewidth = 1) +
coord_cartesian(xlim = c(50, 150)) +
scale_x_continuous(breaks = c(50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150)) +
annotate("text", x = 70, y = 0.06, label = paste("Mean X = ", round(mean(x)), "\n", "SD = ", round(sd(x)), sep = "")) +
annotate("text", x = 130, y = 0.06, label = paste("Mean Y = ", round(mean(y)), "\n", "SD = ", round(sd(y)), sep = "")) +
theme(plot.title = element_text(hjust = 0.5))
# randomly draw data
x <- rnorm(n = n1, mean = mx, sd = sdx)
y <- rnorm(n = n2, mean = my, sd = sdy)
DV <- c(x, y) # combine the two samples into a single variable
IV <- as.factor(c(rep("1", n1), rep("2", n2))) # create the independent variable (1 and 2)
dataset <- as.data.frame(cbind(IV, DV)) # create a dataframe (to make the plot)
# plot graph
p3 <- ggplot(dataset, aes(DV, fill = as.factor(IV))) +
geom_histogram(alpha = 0.4, binwidth = 2, position = "dodge", colour = "black", aes(y = after_stat(density))) +
scale_fill_manual(values = cbbPalette, name = "Condition") +
stat_function(fun = dnorm, args = c(mean = mx, sd = sdx), linewidth = 1, color = "#E69F00", lty = 2, inherit.aes = FALSE) +
stat_function(fun = dnorm, args = c(mean = my, sd = sdy), linewidth = 1, color = "#56B4E9", lty = 2, inherit.aes = FALSE) +
xlab("IQ") +
ylab("number of people") +
ggtitle("Data") +
theme_bw(base_size = 10) +
theme(plot.background = element_rect(fill = backgroundcolor)) +
theme(panel.background = element_rect(fill = backgroundcolor)) +
theme(panel.grid.major.x = element_blank(), axis.text.y = element_blank(), panel.grid.minor.x = element_blank()) +
geom_vline(xintercept = mean(x), colour = "black", linetype = "dashed", linewidth = 1) +
geom_vline(xintercept = mean(y), colour = "black", linetype = "dashed", linewidth = 1) +
coord_cartesian(xlim = c(50, 150)) +
scale_x_continuous(breaks = c(50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150)) +
annotate("text", x = 70, y = 0.06, label = paste("Mean X = ", round(mean(x)), "\n", "SD = ", round(sd(x)), sep = "")) +
annotate("text", x = 130, y = 0.06, label = paste("Mean Y = ", round(mean(y)), "\n", "SD = ", round(sd(y)), sep = "")) +
theme(plot.title = element_text(hjust = 0.5))
# randomly draw data
x <- rnorm(n = n1, mean = mx, sd = sdx)
y <- rnorm(n = n2, mean = my, sd = sdy)
DV <- c(x, y) # combine the two samples into a single variable
IV <- as.factor(c(rep("1", n1), rep("2", n2))) # create the independent variable (1 and 2)
dataset <- as.data.frame(cbind(IV, DV)) # create a dataframe (to make the plot)
# plot graph
p4 <- ggplot(dataset, aes(DV, fill = as.factor(IV))) +
geom_histogram(alpha = 0.4, binwidth = 2, position = "dodge", colour = "black", aes(y = after_stat(density))) +
scale_fill_manual(values = cbbPalette, name = "Condition") +
stat_function(fun = dnorm, args = c(mean = mx, sd = sdx), linewidth = 1, color = "#E69F00", lty = 2, inherit.aes = FALSE) +
stat_function(fun = dnorm, args = c(mean = my, sd = sdy), linewidth = 1, color = "#56B4E9", lty = 2, inherit.aes = FALSE) +
xlab("IQ") +
ylab("number of people") +
ggtitle("Data") +
theme_bw(base_size = 10) +
theme(plot.background = element_rect(fill = backgroundcolor)) +
theme(panel.background = element_rect(fill = backgroundcolor)) +
theme(panel.grid.major.x = element_blank(), axis.text.y = element_blank(), panel.grid.minor.x = element_blank()) +
geom_vline(xintercept = mean(x), colour = "black", linetype = "dashed", linewidth = 1) +
geom_vline(xintercept = mean(y), colour = "black", linetype = "dashed", linewidth = 1) +
coord_cartesian(xlim = c(50, 150)) +
scale_x_continuous(breaks = c(50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150)) +
annotate("text", x = 70, y = 0.06, label = paste("Mean X = ", round(mean(x)), "\n", "SD = ", round(sd(x)), sep = "")) +
annotate("text", x = 130, y = 0.06, label = paste("Mean Y = ", round(mean(y)), "\n", "SD = ", round(sd(y)), sep = "")) +
theme(plot.title = element_text(hjust = 0.5))
# Use patchwork to combine and plot only 1 legend without title.
combined <- p1 + p2 + p3 + p4 & theme(
legend.position = "bottom",
legend.title = element_blank()
)
combined + plot_layout(guides = "collect")
```
We see that there is quite some variation, so much so that in one simulation the difference between the sample means is even in the opposite direction of the difference between the population means. Again, increasing the sample size will mean that, in the long run, the sample means will get closer to the population means, and that we estimate the difference between conditions more accurately. With 250 observations in each group, a randomly simulated set of observations for the two groups might look like @fig-plotgroup3. Note that this difference might not look impressive. However, the difference would pass a significance test (an independent *t*-test) at a very low alpha level.
```{r fig-plotgroup3, warning=FALSE, fig.margin = FALSE, echo = FALSE}
#| fig-cap: "Simulated sample of 250 independent observations."
# Set color palette
cbbPalette <- c("#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7")
set.seed(811)
n1 <- 250 # set the sample size in Group 1
n2 <- 250 # set the sample size in Group 2
mx <- 100 # set the mean in Group 1
sdx <- 15 # set the standard deviation in Group 1
my <- 106 # set the mean in Group 2
sdy <- 15 # set the standard deviation in Group 2
# randomly draw data
x <- rnorm(n = n1, mean = mx, sd = sdx)
y <- rnorm(n = n2, mean = my, sd = sdy)
DV <- c(x, y) # combine the two samples into a single variable
IV <- as.factor(c(rep("1", n1), rep("2", n2))) # create the independent variable (1 and 2)
dataset <- as.data.frame(cbind(IV, DV)) # create a dataframe (to make the plot)
# plot graph
ggplot(dataset, aes(DV, fill = as.factor(IV))) +
geom_histogram(alpha = 0.4, binwidth = 2, position = "dodge", colour = "black", aes(y = after_stat(density))) +
scale_fill_manual(values = cbbPalette, name = "Condition") +
stat_function(fun = dnorm, args = c(mean = mx, sd = sdx), linewidth = 1, color = "#E69F00", lty = 2, inherit.aes = FALSE) +
stat_function(fun = dnorm, args = c(mean = my, sd = sdy), linewidth = 1, color = "#56B4E9", lty = 2, inherit.aes = FALSE) +
xlab("IQ") +
ylab("number of people") +
ggtitle("Data") +
theme_bw(base_size = 20) +
theme(plot.background = element_rect(fill = backgroundcolor)) +
theme(panel.background = element_rect(fill = backgroundcolor)) +
theme(legend.key = element_rect(fill = backgroundcolor)) +
theme(panel.grid.major.x = element_blank(), axis.text.y = element_blank(), panel.grid.minor.x = element_blank()) +
geom_vline(xintercept = mean(x), colour = "black", linetype = "dashed", linewidth = 1) +
geom_vline(xintercept = mean(y), colour = "black", linetype = "dashed", linewidth = 1) +
coord_cartesian(xlim = c(50, 150)) +
scale_x_continuous(breaks = c(50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150)) +
annotate("text", x = 70, y = 0.04, label = paste("Mean X = ", round(mean(x)), "\n", "SD = ", round(sd(x)), sep = "")) +
annotate("text", x = 130, y = 0.04, label = paste("Mean Y = ", round(mean(y)), "\n", "SD = ", round(sd(y)), sep = "")) +
theme(plot.title = element_text(hjust = 0.5))
```
The variation in the estimate of the mean decreases as the sample size increases. The larger the sample size, the more precise the estimate of the mean becomes. The **standard deviation of the sample** ($\sigma_x$) of single IQ scores is 15, irrespective of the sample size, and the larger the sample size, the more accurately we can measure this true standard deviation. But the **standard deviation of the sampling distribution of the sample mean** ($\sigma_{\overline{x}}$) decreases as the sample size increases, and is referred to as the **standard error (SE)**. The standard error, estimated from the observed standard deviation of the sample ($\sigma_x$), is:
$$SE = \sigma_{\overline{x}} = \frac{\sigma_x}{\sqrt{n}}$$
Based on this formula, and assuming an observed standard deviation of the sample of 15, the standard error of the mean is `r round(15/sqrt(10), 2)` for a sample size of 10, and `r round(15/sqrt(250), 2)` for a sample size of 250. Because estimates with a lower standard error are more precise, the effect size estimates in a meta-analysis are weighed based on the standard error, with the more precise estimates getting more weight.
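A minimal sketch tabulating how quickly the standard error of the mean shrinks as the sample size grows, assuming a standard deviation of 15:

```{r}
# Standard error of the mean (sd = 15) for increasing sample sizes
sample_sizes <- c(10, 100, 250, 1000)
data.frame(n = sample_sizes, se = round(15 / sqrt(sample_sizes), 2))
```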
So far we have seen random variation in means, but correlations will show similar variation as a function of the sample size. We will continue with our example of measuring IQ scores, but now we search for fraternal (so not identical) twins, and measure their IQ. Estimates from the literature suggest the true correlation of IQ scores between fraternal twins is around *r* = 0.55. We find 30 pairs of fraternal twins, measure their IQ scores, and plot the relation between the IQ of one twin and the IQ of the other for each pair. In this simulation, we assume all twins have a mean IQ of 100 with a standard deviation of 15.
The correlation is calculated based on the IQ scores of one twin (x) and the IQ scores of the other twin (y) for each pair, and the total number of pairs (n). In the numerator of the formula, the number of pairs is multiplied by the sum of the products of x and y, and the product of the sum of x and the sum of y is subtracted from this value. The denominator is the square root of the product of two terms: the number of pairs multiplied by the sum of the squared x scores, minus the square of the sum of the x scores, and the same quantity computed for y.
$$r=\frac{n \Sigma x y-(\Sigma x )(\Sigma y)}{\sqrt{[n \Sigma x^{2}-(\Sigma x)^{2}][n \Sigma y^{2}-(\Sigma y)^{2}]}}$$
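A minimal sketch applying this formula to five hypothetical twin pairs (the IQ values are made up for illustration) and comparing the result to R's built-in `cor()` function:

```{r}
# Pearson correlation computed with the formula above, compared to cor()
twin1 <- c(95, 102, 110, 88, 104) # IQ of twin 1 (hypothetical values)
twin2 <- c(98, 105, 104, 90, 110) # IQ of twin 2 (hypothetical values)
n_pairs <- length(twin1)
r_by_hand <- (n_pairs * sum(twin1 * twin2) - sum(twin1) * sum(twin2)) /
  sqrt((n_pairs * sum(twin1^2) - sum(twin1)^2) * (n_pairs * sum(twin2^2) - sum(twin2)^2))
c(by_hand = r_by_hand, built_in = cor(twin1, twin2)) # identical values
```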
When we randomly simulate observations for 30 twins, we get the following result.
```{r, fig-plot-cor1, warning=FALSE, fig.margin = FALSE, echo = FALSE}
#| fig-cap: "Correlation based on 30 pairs."
set.seed(4130)
n <- 30 # set sample size
cor.true <- 0.55 # set true correlation
mx <- 106 # set the mean in Group 1
sdx <- 15 # set the standard deviation in Group 1
my <- 100 # set the mean in Group 2
sdy <- 15 # set the standard deviation in Group 2
# randomly create correlated data using the mvrnorm function in the MASS package
cov.mat <- matrix(c(1.0, cor.true, cor.true, 1.0), nrow = 2, byrow = T)
mu <- c(0, 0)
mat <- MASS::mvrnorm(n, Sigma = cov.mat, mu = mu, empirical = FALSE)
x <- mat[, 1] * sdx + mx
y <- mat[, 2] * sdy + my
dataset <- as.data.frame(cbind(x, y))
# Plot graph
ggplot(dataset, aes(x = x, y = y)) +
geom_point(size = 2) + # Use hollow circles
geom_smooth(method = lm, colour = "#E69F00", size = 1, fill = "#56B4E9", formula = "y ~ x") + # Add linear regression line
geom_abline(intercept = (my - (cor.true * mx)), slope = cor.true, colour = "black", linetype = "dotted", size = 1) +
coord_cartesian(xlim = c(40, 160), ylim = c(40, 160)) +
scale_x_continuous(breaks = c(40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160)) +
scale_y_continuous(breaks = c(40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160)) +
xlab("IQ twin 1") +
ylab("IQ twin 2") +
ggtitle(paste("Correlation = ", round(cor(x, y), digits = 2), sep = "")) +
theme_bw(base_size = 20) +
theme(plot.background = element_rect(fill = backgroundcolor)) +
theme(panel.background = element_rect(fill = backgroundcolor)) +
theme(legend.key = element_rect(fill = backgroundcolor)) +
theme(panel.grid.major.x = element_blank(), panel.grid.minor.x = element_blank()) +
theme(plot.title = element_text(hjust = 0.5))
```
On the x-axis, we see the IQ score of one twin, and on the y-axis we see the IQ score of the second twin, for each pair. The black dotted diagonal line illustrates the true correlation (0.55), while the yellow line shows the observed correlation (in this case, *r* = 0.43). The slope of the yellow line is determined by the observed correlation, but the position of the line is influenced by the mean IQ scores in both groups (in this simulation, the mean on the x-axis is `r round(mean(x),0)`, somewhat above 100, and the mean on the y-axis is `r round(mean(y),0)`, also slightly above 100). The blue area is the 95% confidence interval around the observed correlation. As we saw in the chapter on [confidence intervals](#confint), 95% of the time (in the long run) the blue area will contain the true correlation (the dotted black line). As in the examples based on means, increasing the sample size to 300 narrows the confidence interval considerably, and means that most of the time the correlation in the sample is much closer to the correlation in the population. As the sample size increases, the estimate of the correlation becomes more precise, following the formula for the standard error of a correlation:
$$SE_{r_{xy}} = \frac{1 - r^2_{xy}}{\sqrt{(n - 2)}}$$
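A minimal sketch of how much this standard error shrinks when going from 30 to 300 pairs, plugging the true correlation of 0.55 into the formula:

```{r}
# Standard error of a correlation of r = 0.55 for 30 versus 300 pairs
r_true <- 0.55
(1 - r_true^2) / sqrt(30 - 2)  # 30 pairs
(1 - r_true^2) / sqrt(300 - 2) # 300 pairs
```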
```{r, fig-plot-cor2, warning=FALSE, fig.margin = FALSE, echo = FALSE}
#| fig-cap: "Correlation based on 300 pairs."
set.seed(4130)
n <- 300 # set sample size
cor.true <- 0.55 # set true correlation
mx <- 106 # set the mean in Group 1
sdx <- 15 # set the standard deviation in Group 1
my <- 100 # set the mean in Group 2
sdy <- 15 # set the standard deviation in Group 2
# randomly create correlated data using the mvrnorm function in the MASS package
cov.mat <- matrix(c(1.0, cor.true, cor.true, 1.0), nrow = 2, byrow = T)
mu <- c(0, 0)
mat <- MASS::mvrnorm(n, Sigma = cov.mat, mu = mu, empirical = FALSE)
x <- mat[, 1] * sdx + mx
y <- mat[, 2] * sdy + my
dataset <- as.data.frame(cbind(x, y))
# Plot graph
ggplot(dataset, aes(x = x, y = y)) +
geom_point(size = 2) + # Use hollow circles
geom_smooth(method = lm, colour = "#E69F00", size = 1, fill = "#56B4E9", formula = "y ~ x") + # Add linear regression line
geom_abline(intercept = (my - (cor.true * mx)), slope = cor.true, colour = "black", linetype = "dotted", size = 1) +
coord_cartesian(xlim = c(40, 160), ylim = c(40, 160)) +
scale_x_continuous(breaks = c(40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160)) +
scale_y_continuous(breaks = c(40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160)) +
xlab("IQ twin 1") +
ylab("IQ twin 2") +
ggtitle(paste("Correlation = ", round(cor(x, y), digits = 2), sep = "")) +
theme_bw(base_size = 20) +
theme(plot.background = element_rect(fill = backgroundcolor)) +
theme(panel.background = element_rect(fill = backgroundcolor)) +
theme(panel.grid.major.x = element_blank(), panel.grid.minor.x = element_blank()) +
theme(plot.title = element_text(hjust = 0.5))
```
Because estimates of means, standard deviations, or correlations based on small samples have relatively large uncertainty, it is preferable to collect larger samples. However, this is not always possible, and often the goal of a study is not to provide an accurate estimate, but to test a hypothesis. A study often requires fewer observations to achieve sufficient power for a hypothesis test than are needed to accurately estimate a parameter [@maxwell_sample_2008]. Therefore, scientists often rely on meta-analyses, in which data from multiple studies are combined, to provide accurate estimates.
## A single study meta-analysis
Let’s first begin with something you will hardly ever do in real life: a meta-analysis of a single study. This is a little silly, because a simple *t*-test or correlation will tell you the same thing – but it is educational to compare a *t*-test with a meta-analysis of a single study, before we look at how multiple studies are combined into a meta-analysis.
A difference between an independent *t*-test and a meta-analysis is that a *t*-test is performed on the raw data, while a meta-analysis is typically performed on the effect size(s) of individual studies. The `metafor` R package contains a very useful function called `escalc` that can be used to calculate effect sizes, their variances, and confidence intervals around effect size estimates. So let’s start by calculating the effect size to enter into our meta-analysis. As explained in the chapter on [effect sizes](#sec-effectsize), the two main effect sizes used for meta-analyses of continuous variables are the standardized mean difference (*d*) and the correlation (*r*), although it is of course also possible to perform meta-analyses on dichotomous variables (we will see an example below). The code below calculates the **standardized mean difference** (SMD) for two independent groups from the **means** (specified by m1i and m2i), **standard deviations** (sd1i and sd2i), and the number of observations in each group (n1i and n2i). By default, `metafor` computes ‘**Hedges’ g**’, the unbiased version of Cohen’s *d* (see the section on [Cohen's *d*](#sec-cohend) in the chapter on Effect Sizes).
```{r intrometa1}
library(metafor)
g <- escalc(measure = "SMD",
n1i = 50, # sample size in Group 1
m1i = 5.6, # observed mean in Group 1
sd1i = 1.2, # observed standard deviation in Group 1
n2i = 50, # sample size in Group 2
m2i = 4.9, # observed mean in Group 2
sd2i = 1.3) # observed standard deviation in Group 2
g
```
The output gives you Hedges’ *g* (under the `yi` column, which always contains the effect size, in this case the standardized mean difference) and the variance of the effect size estimate (under `vi`). As explained in formulas 4.18 to 4.24 of @borenstein_introduction_2009, the standardized mean difference Hedges’ *g* is calculated by dividing the difference between means by the pooled standard deviation, and multiplying the result by a correction factor, J:
$$
J = 1 - \frac{3}{4df - 1}
$$
$$
g = J \times \left( \frac{\overline{X}_{1} - \overline{X}_{2}}{S_{\text{within}}} \right)
$$
and a very good approximation of the variance of Hedges’ g is provided by:
$$
Vg = J^{2} \times \left( \frac{n_{1} + n_{2}}{n_{1}n_{2}} + \frac{g^{2}}{2(n_{1} + n_{2})} \right)
$$
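A minimal sketch plugging the summary statistics from the `escalc` example above into these formulas; note that the variance obtained this way is an approximation and can differ slightly from the `vi` value reported by `escalc`, which uses a slightly different variance formula:

```{r}
# By-hand Hedges' g and its approximate variance for the example above
m1 <- 5.6; sd1 <- 1.2; n1 <- 50
m2 <- 4.9; sd2 <- 1.3; n2 <- 50
s_within <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / (n1 + n2 - 2)) # pooled SD
d <- (m1 - m2) / s_within            # Cohen's d
J <- 1 - 3 / (4 * (n1 + n2 - 2) - 1) # correction factor (df = n1 + n2 - 2)
g_hand <- J * d                      # Hedges' g
v_hand <- J^2 * ((n1 + n2) / (n1 * n2) + g_hand^2 / (2 * (n1 + n2))) # approximate variance
c(g = g_hand, v = v_hand)
```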
The variance of the standardized mean difference depends only on the sample sizes (n1 and n2) and the value of the standardized mean difference itself. **To perform the required calculations for a meta-analysis, you need the effect sizes and their variance**. This means that if you have coded the effect sizes and the sample sizes (per group) from studies in the literature, you have the information you need to perform a meta-analysis. You do not need to manually calculate the effect size and its variance using the formulas above – the `escalc` function does this for you. We can now easily perform a single study meta-analysis using the `rma` function in the `metafor` package:
```{r runmeta}
meta_res <- rma(yi, vi, data = g)
meta_res
```
Under 'Model Results' we find the effect size Hedges’ g (`r round(meta_res$b[1], 2)`) and the standard error (`r round(meta_res$se, 2)`), the *Z*-test statistic testing the mean difference against the null-hypothesis (`r round(meta_res$zval, 2)`), and the 95% confidence interval [ci.lb = `r round(meta_res$ci.lb, 2)`; ci.ub = `r round(meta_res$ci.ub,2)`] around the effect size (the interval width can be specified using the ‘level =’ option). We also see the *p*-value for the test of the meta-analytic effect size against 0. In this case we can reject the null-hypothesis (*p* = `r round(meta_res$pval, 3)`).
In a meta-analysis, a *Z*-test is used to examine whether the null-hypothesis can be rejected, which assumes the effect size estimates are normally distributed. Normally, you would analyze data from a single study with two groups using a *t*-test, which, not surprisingly, uses a *t*-distribution. I don't know why statistical computations sometimes care a lot about a small amount of bias (the difference between the effect size *d* and *g*, for example) and sometimes not (the difference between *Z* and *t*), but meta-analysts seem happy with *Z*-scores. In fact, with the large sample sizes that are common in a meta-analysis, the difference between a *Z*-test and a *t*-test is tiny. If we directly compare a single-study meta-analysis based on a *Z*-test with a *t*-test, we will see some tiny differences in the results.
As explained in the chapter on [effect sizes](#sec-effectsize), we can directly calculate the effect size Hedges’ *g* (and its 95% confidence interval) using MOTE [@buchanan_mote_2017]. The MOTE package uses the *t*-distribution when calculating confidence intervals around the effect size (and we can see this makes only a tiny difference compared to using the *Z*-distribution in a meta-analysis with 50 observations in each group).
```{r moteres, message = FALSE, echo = FALSE}
# Calculate Hedges' g and perform t-test with MOTE
MOTE_res <- MOTE::g.ind.t(m1 = 5.6,
                          m2 = 4.9,
                          sd1 = 1.2,
                          sd2 = 1.3,
                          n1 = 50,
                          n2 = 50, # same sample sizes as in the escalc() example above
                          a = 0.05)
```
The *t*-value is `r round(MOTE_res$t, 3)`, and the *p*-value is `r round(MOTE_res$p, 3)`. The results are very similar to those computed when performing a meta-analysis, with *g* = `r round(MOTE_res$d, 2)`, 95% CI[`r round(MOTE_res$dlow, 2)`; `r round(MOTE_res$dhigh, 2)`], where the effect size and the upper bound for the confidence interval differ only 0.01 after rounding.
It is now common to visualize the results of a meta-analysis using a forest plot. According to @cooper_handbook_2009, the first forest plot was published in 1978 [@freiman_importance_1978], with the goal of visualizing a large set of studies that had concluded an effect was absent based on non-significant results in small studies (see @fig-freiman1978). By plotting the width of the confidence interval for each study, it becomes possible to see that even though the studies did not reject an effect size of 0, and thus were all non-significant, many studies also did not reject the presence of a meaningful favorable treatment effect. To make large studies more noticeable in a forest plot, later versions added a square to indicate the estimated effect size, where the size of the square is proportional to the weight the study receives when computing the combined effect.
```{r fig-freiman1978, echo=FALSE, out.width = '100%', fig.height = 10}
#| fig-cap: "First version of a forest plot by Freiman and colleagues, 1978 (image from https://www.jameslindlibrary.org/freiman-ja-chalmers-tc-smith-h-kuebler-rr-1978/)."
knitr::include_graphics("images/freiman1978.jpg")
```
In @fig-metaforest we see a modern version of a forest plot, with the effect size for Study 1 marked by the black square at `r round(meta_res$b[1], 2)`, and the confidence interval visualized by lines extending to `r round(meta_res$ci.lb, 2)` on the left and `r round(meta_res$ci.ub, 2)` on the right. The numbers printed on the right-hand side of the forest plot provide the exact values for the effect size estimate and the lower and upper bound of the confidence interval. On the lower half of the forest plot, we see a stretched-out diamond in a row labeled 'RE Model', for 'Random Effects model'. The diamond summarizes the meta-analytic effect size estimate: it is centered on that estimate, and its left and right endpoints mark the 95% confidence interval of the estimate. Because we only have a single study, the meta-analytic effect size estimate is the same as the effect size estimate for our single study.
```{r fig-metaforest, fig.margin = FALSE, echo = FALSE}
#| fig-cap: "Forest plot for a single study."
par(mar=c(5,4,0,2))
par(bg = backgroundcolor)
metafor::forest(meta_res)
```
## Simulating meta-analyses of standardized mean differences
Meta-analyses get a bit more exciting when we use them to analyze results from multiple studies. When multiple studies are combined in a meta-analysis, effect size estimates are not simply averaged; they are **weighed** by the **precision** of the effect size estimate, which is determined by the standard error, which in turn is largely determined by the sample size of the study. Thus, the larger the sample size of an individual study, the more weight it gets in the meta-analysis, meaning that it has more influence on the meta-analytic effect size estimate.
One intuitive way to learn about meta-analyses is to simulate studies and meta-analyze them. The code below simulates 12 studies. There is a true effect in the simulated studies, as the difference in means in the population is 0.4 (and given the standard deviation of 1, Cohen's *d* = 0.4 as well). The studies vary in their sample size between 30 observations and 100 observations per condition. The meta-analysis is performed, and a forest plot is created.
```{r fig-meta-sim, echo = TRUE}
#| fig-cap: "Forest plot for 12 simulated studies."
set.seed(94)
nSims <- 12 # number of simulated studies
m1 <- 0.4 # population mean Group 1
sd1 <- 1 # standard deviation Group 1
m2 <- 0 # population mean Group 2
sd2 <- 1 # standard deviation Group 2
metadata <- data.frame(yi = numeric(0), vi = numeric(0)) # create dataframe
for (i in 1:nSims) { # for each simulated study
n <- sample(30:100, 1) # pick a sample size per group
x <- rnorm(n = n, mean = m1, sd = sd1)
y <- rnorm(n = n, mean = m2, sd = sd2)
metadata[i,1:2] <- metafor::escalc(n1i = n, n2i = n, m1i = mean(x),
m2i = mean(y), sd1i = sd(x), sd2i = sd(y), measure = "SMD")
}
result <- metafor::rma(yi, vi, data = metadata, method = "FE")
par(bg = "#fffafa")
metafor::forest(result)
```
We see 12 rows, one for each study, each with its own effect size and confidence interval. If you look closely, you can see that the squares indicating the effect size estimate of each study differ in size. The larger the sample size, the bigger the square. Study 5 had a relatively small sample size, which can be seen from both the small square and the relatively wide confidence interval. Study 9 had a larger sample size, and thus a slightly larger square and a narrower confidence interval. At the bottom of the graph we find the meta-analytic effect size estimate and its confidence interval, visualized by a diamond and also given numerically. The model is referred to as an FE Model, or **Fixed Effect (FE) model**. The alternative approach is an RE Model, or **Random Effects (RE) model** (the difference is discussed below).
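Because the fixed effect estimate is simply an inverse-variance weighted average of the study effect sizes, it can be reproduced by hand from the `yi` and `vi` columns of the simulated `metadata` data frame; a minimal check, whose values should match the estimate and standard error reported by `rma` above (up to rounding):

```{r}
# Fixed effect meta-analysis by hand: inverse-variance weighted average
wi <- 1 / metadata$vi           # weight of each study (inverse of its variance)
sum(wi * metadata$yi) / sum(wi) # combined effect size estimate
sqrt(1 / sum(wi))               # standard error of the combined estimate
```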
You might notice that the first two studies in the meta-analysis were not statistically significant. Take a moment to think for yourself whether you would have continued this research line after not finding an effect twice in a row. If you feel like it, run the code above several times (first remove the `set.seed` argument used to make the simulation reproducible, or you will get the same result each time) and see how often this happens with the population effect size and the range of sample sizes in this simulation. As should be clear from the discussion of mixed results in the chapter on likelihoods, it is important to think meta-analytically. In the long run, there will be situations where you will find one or two non-significant results early in a research line, even when there is a true effect.
Let's also look at the statistical results of the meta-analysis, which is a bit more interesting now that we have 12 studies:
```{r printmeta, echo = FALSE}
result
```
We see a test for **heterogeneity**, a topic we will return to [below](#sec-heterogeneity). We see the model results, which in this specific simulation yielded a meta-analytic effect size estimate of `r round(result$b[1], 2)`. The confidence interval around the effect size estimate [`r round(result$ci.lb, 2)` ; `r round(result$ci.ub, 2)`] is much narrower than we saw before for a single study. This is because the 12 simulated studies together have quite a large sample size, and the larger the sample size, the smaller the standard error, and thus the narrower the confidence interval. The meta-analytic effect size estimate is statistically different from 0 (*p* \< 0.0001), so we can reject the null hypothesis even if we use a stringent alpha level, such as 0.001. Note that, as discussed in the chapter on sample size justification, and in line with the section on justifying error rates in the chapter on error control, it seems sensible to use a much lower alpha level than 5% in meta-analyses. It is possible to set the confidence level in `metafor`, e.g. using `level = 0.999` (corresponding to an alpha level of 0.001), but this adjusts all confidence intervals, including those of the individual studies, which will mostly have used an alpha level of 0.05. It is therefore easier to manually check whether the test is significant at your chosen alpha level (e.g., 0.001).
## Fixed Effect vs Random Effects
There are two possible models when performing a meta-analysis. One model, known as a fixed effect model, assumes there is one effect size that generates the data in all studies in the meta-analysis. This model assumes there is no variation between individual studies – all have exactly the same true effect size. The author of the `metafor` package we used in this chapter prefers to use the term [equal-effects model](https://wviechtb.github.io/metafor/reference/misc-models.html) instead of fixed effect model. The perfect example of this is the simulations we have done so far. We specified a single true effect in the population, and generated random samples from this population effect.
Alternatively, one can use a model where the true effect differs in some way in each individual study. We don’t have a single true effect in the population, but a range of **randomly distributed** true effect sizes (hence the ‘random effects’ model). Studies differ in some way from each other (or some sets of studies differ from other sets), and their true effect sizes differ as well. Note the difference between a fixed effect model and a random effect**s** model, in that the plural ‘effects’ is used only in the latter. Borenstein et al. [-@borenstein_introduction_2009] state there are two reasons to use a fixed effect model: when all studies are functionally equivalent, and when your goal is *not* to generalize to other populations. This makes the random effects model generally the better choice, although some people have raised the concern that random effects models give more weight to smaller studies, which can be more biased. By default, `metafor` will use a random effects model. We used the `method = "FE"` argument to explicitly ask for a fixed effect model. In the meta-analyses that we will simulate in the rest of this chapter, we will leave out this argument and fit random effects meta-analyses, as this is the better choice in many real-life meta-analyses.
## Simulating meta-analyses for dichotomous outcomes
Although meta-analyses of mean differences are very common, a meta-analysis can be performed on different effect size measures. To show a slightly less common example, let’s simulate a meta-analysis based on odds ratios. Sometimes the main outcome in an experiment is a dichotomous variable, such as success or failure on a task. In such study designs we can calculate risk ratios, odds ratios, or risk differences as the effect size measure. Risk differences are sometimes judged easiest to interpret, but odds ratios are most often used for a meta-analysis because they have attractive statistical properties. An **odds ratio** is a ratio of two odds. To illustrate how an odds ratio is calculated, it is useful to consider the four possible outcomes in a 2 x 2 table of outcomes:
| | Success | Failure | N |
|:-------------|:-------:|:-------:|:------:|
| Experimental | *A* | *B* | *n1* |
| Control | *C* | *D* | *n2* |
The odds ratio is calculated as:
$$OR = \frac{AD}{BC}$$
The meta-analysis is performed on log-transformed odds ratios, because the odds ratio itself is not symmetric around 1 (it is bounded by 0 on one side but unbounded on the other), whereas the log of the odds ratio is symmetric around 0 [@borenstein_introduction_2009]. The variance of the log odds ratio is approximated by:
$$\text{Var}\left( \log\text{OR} \right) = \frac{1}{A} + \frac{1}{B} + \frac{1}{C} + \frac{1}{D}$$
Let’s assume that we train students in using a spaced learning strategy (they work through a textbook every week instead of cramming the week before the exam). Without such training, 70 out of 100 students succeed in passing the course after the first exam, but with this training, 80 out of 100 students pass.
| | Success | Failure | N |
|--------------|---------|---------|-----|
| Experimental | 80 | 20 | 100 |
| Control | 70 | 30 | 100 |
The odds of passing in the experimental group are 80/20, or 4, while the odds in the control condition are 70/30, or 2.333. The ratio of these two odds is then 4/2.333 = 1.714, or:
$$
OR = \frac{80 \times 30}{20 \times 70} = 1.714
$$
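A minimal sketch computing the log odds ratio and its variance for this 2 x 2 table by hand with the formulas above, and checking the result against `escalc`:

```{r}
# Log odds ratio and its variance for the worked example (A = 80, B = 20, C = 70, D = 30)
ai <- 80; bi <- 20; ci <- 70; di <- 30
log_or <- log((ai * di) / (bi * ci))    # log(1.714) = 0.54
var_log_or <- 1/ai + 1/bi + 1/ci + 1/di # approximate variance of the log odds ratio
c(log_or = log_or, var_log_or = var_log_or)
# escalc returns the same values (yi = log odds ratio, vi = its variance)
metafor::escalc(measure = "OR", ai = ai, bi = bi, ci = ci, di = di)
```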
We can simulate studies with dichotomous outcomes, where we set the percentage of successes and failures in the experimental and control condition. In the script below, by default the percentage of success in the experimental condition is 70%, and in the control condition it is 50%.
```{r}
library(metafor)
set.seed(5333)
nSims <- 12 # Number of simulated experiments
pr1 <- 0.7 # Set percentage of successes in Group 1
pr2 <- 0.5 # Set percentage of successes in Group 2
ai <- numeric(nSims) # set up empty vector for successes Group 1
bi <- numeric(nSims) # set up empty vector for failures Group 1
ci <- numeric(nSims) # set up empty vector for successes Group 2
di <- numeric(nSims) # set up empty vector for failures Group 2
for (i in 1:nSims) { # for each simulated experiment
n <- sample(30:80, 1)
x <- rbinom(n, 1, pr1) # participants (1 = success, 0 = failure)
y <- rbinom(n, 1, pr2) # participants (1 = success, 0 = failure)
ai[i] <- sum(x == 1) # Successes Group 1
bi[i] <- sum(x == 0) # Failures Group 1
ci[i] <- sum(y == 1) # Successes Group 2
di[i] <- sum(y == 0) # Failures Group 2
}
# Combine data into a data frame
metadata <- data.frame(ai, bi, ci, di)
# Create escalc object from metadata dataframe
metadata <- escalc(measure = "OR",
ai = ai, bi = bi, ci = ci, di = di,
data = metadata)
# Perform Meta-analysis
result <- rma(yi, vi, data = metadata)
# Create forest plot. Using ilab and ilab.xpos arguments to add counts
par(mar=c(5, 4, 0, 2))
par(bg = "#fffafa")
forest(result,
ilab = cbind(metadata$ai, metadata$bi, metadata$ci, metadata$di),
xlim = c(-10, 8),
ilab.xpos = c(-7, -6, -5, -4))
text(c(-7, -6, -5, -4), 14.7, c("E+", "E-", "C+", "C-"), font = 2, cex = .8)
```
The forest plot presents the studies and four columns of data after the study label, which contain the number of successes and failures in the experimental groups (E+ and E-), and the number of successes and failures in the control group (C+ and C-). Imagine we study the percentage of people who get a job within 6 months after a job training program, compared to a control condition. In Study 1, which had 50 participants in each condition, 29 people in the job training condition got a job within 6 months, and 21 did not get a job. In the control condition, 23 people got a job, but 27 did not. The effect size estimate for the random effects model is 0.65. Feel free to play around with the script, adjusting the number of studies, or the sample sizes in each study, to examine the effect it has on the meta-analytic effect size estimate.
We can also get the meta-analytic test results by printing the test output. We see that there was no heterogeneity in this meta-analysis. This is true (we simulated studies with the same underlying true effect size), but complete homogeneity is highly unlikely to ever happen in real life, where variation in effect sizes between the studies included in a meta-analysis is a much more realistic scenario.
```{r}
# Print result meta-analysis
result
```
## Heterogeneity {#sec-heterogeneity}
Although researchers often primarily use meta-analysis to compute a meta-analytic effect size estimate, and test whether this effect is statistically different from zero, **an arguably much more important use of meta-analyses is to explain variation between (sets of) studies**. This variation among (sets of) studies is referred to as **heterogeneity**. One goal of meta-analyses is not just to code effect sizes and estimate the meta-analytic effect size, but to code factors in studies that can explain heterogeneity, and examine which of these factors account for heterogeneity. This can help in theory evaluation or theory development. Tests have been developed to examine whether the studies included in a meta-analysis vary more than would be expected if the underlying true effect size in all studies was the same, and measures have been developed to quantify this variation.
If all studies have the same true population effect size, the only source of variation is random error. If there are real differences between (sets of) studies, there are two sources of variation, namely random variation from study to study *and* real differences in effect sizes in (sets of) studies.
A classical measure of heterogeneity is Cochran’s $Q$ statistic, which is the weighted sum of the squared differences between effect size estimates in each study, and the meta-analytic effect size estimate. The $Q$ statistic can be used to test whether the absence of heterogeneity can be statistically rejected (by comparing it to the expected amount of variation, which is the degrees of freedom, *df*, or the number of studies -1, see Borenstein et al., 2009), but it can have low power if the number of studies in the meta-analysis is small [@huedo-medina_assessing_2006].
On theoretical grounds one might argue that some heterogeneity will always be present in a meta-analysis, and therefore it is more interesting to quantify the extent of the heterogeneity. The $I^2$ index aims to quantify statistical heterogeneity. It is calculated as follows: $$I^{2} = \frac{Q - (k - 1)}{Q} \times 100\%$$ where $k$ is the number of studies (and $k-1$ is the degrees of freedom). $I^2$ ranges from 0% to 100% and can be interpreted as the percentage of the total variability in a set of effect sizes that is due to heterogeneity. When $I^2$ = 0 all variability in the effect size estimates can be explained by within-study error, and when $I^2$ = 50 half of the total variability can be explained by true heterogeneity. $I^2$ values of 25%, 50%, and 75% can be interpreted as low, medium, and high heterogeneity. Finally, in a random effects meta-analysis, $\tau^2$ estimates the variance of the true effects, and $\tau$ is the estimated standard deviation of the true effects, expressed on the same scale as the effect size. A benefit of $\tau^2$ is that it does not depend on the precision of the included studies, as $I^2$ does ($I^2$ tends to 100% if the studies included in the meta-analysis are very large [@rucker_undue_2008]), but a downside is that $\tau^2$ is more difficult to interpret [@harrer_doing_2021].
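To make the relation between $Q$, the degrees of freedom, and $I^2$ concrete, here is a minimal sketch (not evaluated) that assumes vectors `yi` (effect size estimates) and `vi` (their sampling variances):

```{r, eval = FALSE}
# Sketch: Cochran's Q and I^2 computed by hand, assuming vectors
# yi (effect size estimates) and vi (their sampling variances)
wi <- 1 / vi                       # inverse-variance weights
theta_fe <- sum(wi * yi) / sum(wi) # common (fixed) effect estimate
Q <- sum(wi * (yi - theta_fe)^2)   # weighted sum of squared deviations from the common effect
df <- length(yi) - 1               # degrees of freedom: k - 1
I2 <- max(0, (Q - df) / Q) * 100   # I^2, truncated at 0%
```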
The script below simulates a meta-analysis similar to the example for standardized mean differences above, but with a small variation. Of the 30 studies, 15 are generated based on a true mean difference of 0.2, while the other 15 studies are based on a true effect size of 0.5. Thus, in this set of studies the true effect size varies, and there is true heterogeneity. We use the `confint` function in the metafor package to report $I^2$ and $\tau^2$, and their confidence intervals, and we see the test for heterogeneity is statistically significant. In other words, we would conclude that the meta-analytic effect size is statistically different from 0, but we would also conclude that there is unexplained variability in effect sizes in this set of studies, and the effect size is not the same for all studies in the meta-analysis.
```{r meta-heterogeneity}
library(metafor)
set.seed(3)
nSims <- 30 # Number of simulated experiments (needs to be divisible by 2)
metadata <- data.frame(yi = numeric(0), vi = numeric(0)) # create dataframe
true_es <- numeric(nSims) # set up empty vector for true effect sizes
study <- numeric(nSims) # set up empty vector for study numbers
group <- numeric(nSims) # set up empty vector for group
m1 <- 0.5 # population mean Group 1
sd1 <- 1 # standard deviation Group 1
m2 <- 0 # population mean Group 2
sd2 <- 1 # standard deviation Group 2
for (i in 1:(nSims/2)) { # first half of the simulated studies: true effect size 0.5
n <- sample(30:100, 1)
x <- rnorm(n = n, mean = m1, sd = sd1) # simulate random normally distributed data
y <- rnorm(n = n, mean = m2, sd = sd2) # simulate random normally distributed data
metadata[i,1:2] <- metafor::escalc(n1i = n, n2i = n, m1i = mean(x), m2i = mean(y), sd1i = sd(x), sd2i = sd(y), measure = "SMD") # compute the effect size
true_es[i] <- paste("Study", i, "Effect = 0.5") # true effect size
group[i] <- paste("Manipulation A") # Group label
study[i] <- paste("Study",i) # Study
}
m1 <- 0.2 # population mean Group 1
sd1 <- 1 # standard deviation Group 1
m2 <- 0 # population mean Group 2
sd2 <- 1 # standard deviation Group 2
for (i in (nSims/2+1):nSims) { # second half of the simulated studies: true effect size 0.2
n <- sample(30:100, 1)
x <- rnorm(n = n, mean = m1, sd = sd1) # simulate random normally distributed data
y <- rnorm(n = n, mean = m2, sd = sd2) # simulate random normally distributed data
metadata[i,1:2] <- metafor::escalc(n1i = n, n2i = n, m1i = mean(x), m2i = mean(y), sd1i = sd(x), sd2i = sd(y), measure = "SMD") # compute the effect size
true_es[i] <- paste("Study", i, "Effect = 0.2") # true effect size
group[i] <- paste("Manipulation B") # Group label
study[i] <- paste("Study",i) # Study
}
# Combine data into dataframe
metadata <- cbind.data.frame(metadata, true_es, study, group)
# Shuffle rows to make it difficult to see which effect size comes from which group
metadata <- metadata[sample(nrow(metadata)),]
# Perform Meta-analysis
result <- rma(yi, vi, data = metadata, slab = paste(true_es))
# Print result meta-analysis
result
confint(result) # Get confidence interval for indices of heterogeneity
```
The forest plot in @fig-plot-heterogeneity shows there is more variation around the meta-analytic effect size estimate than expected based on random variation alone. In addition to the reference line at 0, the plot has two vertical lines, one at 0.2 and one at 0.5. Of course, in a real meta-analysis we would not know the true effect sizes of the subsets, but these lines help us to see that one subset of the effects varies randomly around 0.2, and a second subset varies randomly around 0.5.
```{r, fig-plot-heterogeneity, warning=FALSE, fig.margin = FALSE, fig.height = 10, echo = FALSE}
#| fig-cap: "Forest plot of 30 studies with 2 subsets of studies, one set with a true effect size of 0.2, and one set with a true effect size of 0.5."
par(mar=c(4,0,0,2), bg = backgroundcolor)
metafor::forest(result,
efac=c(0,3),
lty=c(1,1,0),
cex=0.8,
shade = TRUE,
rowadj=0,
psize = 1,
colout = "#333333",
xlim = c(-2, 2.5),
alim = c(-1, 1.5),
textpos = c(-2, 2.5),
at = c(-1, -0.5, 0, 0.5, 1, 1.5),
refline = c(0, 0.2, 0.5),
header = TRUE
)
```
Based on the test for heterogeneity, we can reject the null hypothesis that there is no heterogeneity in the meta-analysis. Tests for heterogeneity themselves have Type 1 and Type 2 error rates, and with a small number of studies (such as in our example) tests for heterogeneity can have low power. If you remove the set.seed command and run the code multiple times, you will see that the test for heterogeneity will often not be significant, even though there is true heterogeneity in the simulation. In large meta-analyses, power can be so high that the test always yields a *p*-value small enough to reject the null hypothesis, but then it is important to look at the $I^2$ estimate.
Recently there has been considerable attention to the possibility that effect sizes within research lines have substantial heterogeneity [@bryan_behavioural_2021]. Large heterogeneity can impact the power of studies, and therefore has consequences for how studies are planned [@kenny_unappreciated_2019]. Although heterogeneity seems to be low in direct replication studies [@olsson-collentine_heterogeneity_2020] it is high in most meta-analyses, which has been argued to reflect a lack of understanding of the effects in those research lines [@linden_heterogeneity_2021].
## Exploring heterogeneity through subgroup analyses
Let's imagine that while coding our meta-analysis we noticed that two different manipulations were used in the total set of studies. Manipulation A was stronger, and therefore we would theoretically expect a stronger effect, while manipulation B was more subtle. We have coded which manipulation was used in each of the studies included in our meta-analysis. This allows us to explore whether the heterogeneity observed above can be explained by the type of manipulation. In other words, we are testing whether the effect size is moderated by the type of manipulation. This is a subgroup analysis. It is conceptually very similar to an ANOVA where we test whether there are differences between groups. We see the test of moderators yields a statistically significant result, which means we can reject the null hypothesis that the two groups do not differ in their effect size. Furthermore, there is no statistically significant amount of heterogeneity left after splitting up the studies into these two groups.
```{r}
# Based on https://www.metafor-project.org/doku.php/tips:comp_two_independent_estimates
# We take original dataset and run a meta-regression using the "mods" argument
# This pools the estimate of tau^2 across the subgroups, which is often a reasonable approach
rma(yi, vi, mods = ~ group, data = metadata, digits = 3)
```
If we plot the two subgroups, we see in @fig-plot-subgroup that effect sizes vary randomly across the true effect sizes. Again, in real meta-analyses these true effect sizes would not be known.
```{r, fig-plot-subgroup, warning=FALSE, fig.margin = FALSE, echo = FALSE, fig.height = 10}
#| fig-cap: "Two forest plots for the 2 different subgroups with Manipulation A or B."
#Perform meta-analyses for each subgroup
res1 <- rma(yi, vi, data = metadata, subset=group=="Manipulation A")
res2 <- rma(yi, vi, data = metadata, subset=group=="Manipulation B")
par(mfrow=c(2,1))
par(mar=c(4,0,0,2), bg = backgroundcolor)
metafor::forest(res1,
efac=c(0,3),
lty=c(1,1,0),
cex=0.8,
shade = TRUE,
rowadj=0,
psize = 1,
colout = "#333333",
xlim = c(-2, 2.5),
alim = c(-1, 1.5),
at = c(-1, -0.5, 0, 0.5, 1, 1.5),
textpos = c(-2, 2.5),
refline = c(0, 0.5),
header = "Studies with Manipulation A"
)
par(mar=c(4,0,0,2), bg = backgroundcolor)
metafor::forest(res2,
efac=c(0,3),
lty=c(1,1,0),
cex=0.8,
shade = TRUE,
rowadj=0,
psize = 1,
colout = "#333333",
xlim = c(-2, 2.5),
alim = c(-1, 1.5),
at = c(-1, -0.5, 0, 0.5, 1, 1.5),
textpos = c(-2, 2.5),
refline = c(0, 0.2),
header = "Studies with Manipulation B"
)
par(mfrow=c(1,1))
```
## Strengths and weaknesses of meta-analysis
The conclusions from meta-analyses have been debated from the very first meta-analyses that were performed. It's ironic that, as far as I can find, the 'garbage in-garbage out' criticism of meta-analysis originates from Eysenck [-@eysenck_exercise_1978], because although it is a valid criticism, Eysenck himself published literal garbage: he was found [guilty of scientific misconduct](https://www.science.org/content/article/misconduct-allegations-push-psychology-hero-his-pedestal), which has led to a large number of [retractions](http://retractiondatabase.org/RetractionSearch.aspx?AspxAutoDetectCookieSupport=1#?AspxAutoDetectCookieSupport%3d1%26auth%3dEysenck%252c%2bHans%2bJ) and expressions of concern. Eysenck wrote about a meta-analysis that yielded results he did not like:
> The most surprising feature of Smith and Glass's (1977) exercise in mega-silliness is their advocacy of low standards of judgment. More, they advocate and practice the abandonment of critical judgments of any kind. A mass of reports—good, bad, and indifferent—are fed into the computer in the hope that people will cease caring about the quality of the material on which the conclusions are based. If their abandonment of scholarship were to be taken seriously, a daunting but improbable likelihood, it would mark the beginning of a passage into the dark age of scientific psychology.
>
> The notion that one can distill scientific knowledge from a compilation of studies mostly of poor design, relying on subjective, unvalidated, and certainly unreliable clinical judgments, and dissimilar with respect to nearly all the vital parameters, dies hard. This article, it is to be hoped, is the final death rattle of such hopes. "Garbage in—garbage out" is a well-known axiom of computer specialists; it applies here with equal force.
The problem of 'garbage in, garbage out' remains one of the most common, and difficult to deal with, criticisms of meta-analysis. It is true that a meta-analysis cannot turn low quality data into a good effect size estimate, or highly heterogeneous effect sizes into a useful estimate of an effect size that generalizes to all studies included in the meta-analysis. The decision of which studies to include in a meta-analysis is a difficult one, and often leads to disagreements in the conclusions of meta-analyses performed on the same set of studies [@goodyear-smith_analysis_2012; @ferguson_comment_2014]. Finally, meta-analyses can be biased, in the same way individual studies are biased, which is a topic explored in more detail in the chapter on [bias detection](#sec-bias).
A strength of meta-analysis is that combining highly similar studies into a single analysis increases the statistical power of the test, as well as the accuracy of the effect size estimate. Whenever it is not possible, or not efficient, to perform studies with a large number of observations in each study, an unbiased meta-analysis can provide better statistical inferences. Furthermore, including an **internal meta-analysis** in a multi-study paper (when all studies are sufficiently similar) can be a way to reduce the file-drawer problem, by allowing researchers to publish mixed results. At the same time, researchers have raised the concern that if researchers selectively report studies when they perform an internal meta-analysis, they simply increase the flexibility in the data analysis, and are more likely to erroneously claim support for their hypothesis [@vosgerau_99_2019]. Researchers should publish all well-designed studies they have performed in a research line, and if the studies are similar and unbiased, a meta-analysis will improve inferences. At the same time, the result of a meta-analysis may be biased, and should not be interpreted as the final answer. For this reason, an analysis of the heterogeneity of effect size estimates, and the use of statistical techniques to detect bias, are an essential part of any meta-analysis.
## Which results should you report to be included in a future meta-analysis? {#sec-reportmeta}
It would be a useful educational exercise for any researcher who publishes quantitative studies to code a dozen studies for a meta-analysis. A notorious problem when performing a meta-analysis is that researchers do not report all the results a meta-analyst needs in order to include the study in their meta-analysis. Sometimes the original researcher can be contacted and the missing information can be provided, but as every single study is just a data-point in a future meta-analysis, it is best to report all the required results to be included in a future meta-analysis.
The single best approach to guarantee that all required information for a future meta-analysis is available to meta-analysts is to share the (anonymized) data and analysis code with the manuscript. This will enable meta-analysts to compute any statistical information they need. Access to individual observations allows meta-analysts to perform analyses on subgroups, and makes it possible to perform more advanced statistical tests [@stewart_ipd_2002]. Finally, access to the raw data, instead of only access to the summary statistics, makes it easier to find flawed individual studies that should not be included in a meta-analysis [@lawrence_lesson_2021]. As open data becomes the norm, efforts to standardize measures and develop specifications for datasets will facilitate the availability of raw data as input for meta-analyses. This will also facilitate the re-use of data, and allow researchers to perform meta-analyses unrelated to the main research question. If you want to share the raw data you will collect, make sure you address this in your [informed consent form](https://www.uu.nl/en/research/research-data-management/guides/informed-consent-for-data-sharing).
When summarizing data in a scientific article, report the number of observations associated with each statistical test. Most articles will mention the total sample size, but if some observations are removed while cleaning the data, also report the final number of observations included in a test. When a statistical test is based on multiple conditions (e.g., a *t*-test), report the sample size in each independent group. If this information is missing, meta-analysts will often have to assume that the total number of observations is distributed equally across conditions, which is not always correct. Report full test results for significant and non-significant results (e.g., never write *F* < 1, *ns*). Write out the full test result, including an effect size estimate, regardless of the *p*-value, as non-significant results are especially important to include in meta-analyses. When reporting effect sizes, report how they were computed (e.g., when reporting standardized mean differences, state whether you computed Cohen's *d* or Hedges' *g*). Report exact *p*-values for each test, or the full test statistics that can be used to recompute the *p*-value. Report means and standard deviations for each group of observations, and for within-subject designs, report the correlation between dependent variables (which is currently almost never reported, but is needed to compute [Cohen’s $d_{av}$](#sec-cohend) and perform simulation-based power analyses based on the predicted data pattern). It might be useful to use a table to summarize all statistical tests if many tests are reported, but the raw data cannot be shared (for example in the supplemental material).
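As an illustration of why these summary statistics matter, the sketch below (with hypothetical values, not evaluated) shows what `metafor::escalc` needs to compute a standardized mean difference for a between-subjects design, and why the correlation between dependent variables is required for a within-subjects design:

```{r, eval = FALSE}
# Sketch with hypothetical summary statistics: what a meta-analyst needs
# to compute effect sizes from a published article with metafor::escalc

# Between-subjects design: means, standard deviations, and sample size per group
metafor::escalc(measure = "SMD",
                m1i = 5.2, sd1i = 1.1, n1i = 48,
                m2i = 4.7, sd2i = 1.3, n2i = 50)

# Within-subjects design: the correlation between the two measurements (ri)
# is also needed, which is why it should be reported in the original article
metafor::escalc(measure = "SMCC",
                m1i = 5.2, sd1i = 1.1,
                m2i = 4.7, sd2i = 1.3,
                ni = 48, ri = 0.6)
```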
## Improving the reproducibility of meta-analyses {#sec-metareporting}
Although meta-analyses do not provide definitive conclusions, they are typically interpreted as state-of-the-art empirical knowledge about a specific effect or research area. Large-scale meta-analyses often accumulate a massive number of citations and influence future research and theory development. It is therefore essential that published meta-analyses are of the highest possible quality.
At the same time, the conclusions from meta-analyses are often open for debate and are subject to change as new data becomes available. We recently proposed practical recommendations to increase the reproducibility of meta-analyses to facilitate quality control, improve reporting guidelines, allow researchers to re-analyze meta-analyses based on alternative inclusion criteria, and future-proof meta-analyses by making sure the collected meta-analytic data is shared so that continuously accumulating meta-analyses can be performed, and so that novel statistical techniques can be applied on the collected data as they become available [@lakens_reproducibility_2016]. The need to improve the reproducibility of meta-analyses is clear: a recent review of 150 meta-analyses in Psychological Bulletin revealed that only 1 meta-analysis shared the statistical code [@polanin_transparency_2020]. This is unacceptable in the current day and age. In addition to inspecting how well your meta-analysis adheres to the [JARS Quantitative Meta-Analysis Reporting Standards](https://apastyle.apa.org/jars/quant-table-9.pdf), following the recommendations summarized in @tbl-table-rec1 should substantially improve the state-of-the-art in meta-analyses.
```{r tbl-table-rec1, echo = FALSE}
#| tbl-cap: "Six practical recommendations to improve the quality and reproducibility of meta-analyses."
table_rec <- data.frame(c("Facilitate cumulative science", "Facilitate quality control", "Use reporting guidelines", "Preregister", "Facilitate reproducibility", "Recruit expertise"), c("Disclose all meta-analytic data (effect sizes, sample sizes for each condition, test statistics and degrees of freedom, means, standard deviations, and correlations between dependent observations) for each data point. Quote relevant text from studies that describe the meta-analytic data to prevent confusion, such as when one effect size is selected from a large number of tests reported in a study. When analyzing subgroups, include quotes from the original study that underlie this classification, and specify any subjective decisions.", "Specify which effect size calculations are used and which assumptions are made for missing data (e.g., assuming equal sample sizes in each condition, imputed values for unreported effect sizes), if necessary for each effect size extracted from the literature. Specify who extracted and coded the data, knowing it is preferable that two researchers independently extract effect sizes from the literature.", "A minimal requirement when reporting meta-analyses is to adhere to one of the reporting standards (e.g., PRISMA). The reporting guidelines ask authors of meta-analyses to report essential information that should be made available either in the main text of the article, or by providing a completed checklist as supplementary material during review and after publication.", "Whenever possible, pre-register the meta-analysis research protocol (e.g., using PROSPERO) to distinguish between confirmatory and exploratory analyses. Perform a prospective meta-analysis where possible.", "Allow others to re-analyze the data to examine how sensitive the results are to subjective choices such as inclusion criteria. Always include a link to data files that can be directly analyzed with statistical software, either by providing completely reproducible scripts containing both the data and the reported analyses in free software (e.g., R), or at the very minimum a spreadsheet that contains all meta-analytic data that can easily analyzed in any statistical program.", "Consider consulting a librarian before you start the literature search, and a statistician before coding the effect sizes, for advice on how to make the literature search and effect size calculations reproducible."))
colnames(table_rec) <- c("What?", "How?")
table_print <- knitr::kable(table_rec)
table_print <- kableExtra::column_spec(table_print, 1, width = "5cm")
table_print <- kableExtra::column_spec(table_print, 2, width = "10cm")
table_print
```
For another open educational resource on meta-analysis in R, see [Doing Meta-Analysis in R](https://bookdown.org/MathiasHarrer/Doing_Meta_Analysis_in_R).
## Test Yourself
::: {.webex-check .webex-box}
**Q1**: What is true about the standard deviation of the sample, and the standard deviation of the mean (or the standard error)?
```{r, echo = FALSE, results = 'asis'}
opts_p <- c(
"As the sample size increases, the standard deviation of the sample becomes smaller, and the standard deviation of the mean (or standard error) becomes smaller.",
answer = "As the sample size increases, the standard deviation of the sample becomes more accurate, and the standard deviation of the mean (or standard error) becomes smaller. ",
"As the sample size increases, the standard deviation of the sample becomes smaller, and the standard deviation of the mean (or standard error) becomes more accurate.",
"As the sample size increases, the standard deviation of the sample becomes more accurate, and the standard deviation of the mean (or standard error) becomes more accurate. "
)
cat(longmcq(opts_p))
```
**Q2**: If we would perform a meta-analysis by just averaging all observed effect sizes, an effect size of *d* = 0.7 from a small study with 20 observations would influence the meta-analytic effect size estimate just as much as a *d* = 0.3 from a study with 2000 observations. How is the meta-analytic effect size estimate computed instead?
```{r, echo = FALSE, results = 'asis'}
opts_p <- c(
"Effect sizes estimates from small studies undergo a small study correction before being included.",
"Effect size estimates from small studies are ignored when computing a meta-analytic effect size estimate.",
answer = "Effect sizes are weighed based on the precision of their estimate, determined by the standard error. ",
"Effect sizes are weighed based on how close they are to the meta-analytic effect size estimate, with studies further removed receiving less weight. "
)
cat(longmcq(opts_p))
```
**Q3**: The size of the squares indicating effect sizes in a forest plot varies as a function of the:
```{r, echo = FALSE, results = 'asis'}
opts_p <- c(
"Power of the study",
"Size of the effect",
answer = "Sample size",
"Type 1 error rate"
)
cat(longmcq(opts_p))
```
**Q4**: One can compute a ‘fixed effect model’ or a ‘random effects model’ when performing a meta-analysis on studies in the scientific literature. Which statement is true?
```{r, echo = FALSE, results = 'asis'}
opts_p <- c(
"It is generally recommended to compute a **fixed effect** model, mainly because not all studies included in a meta-analysis will be functionally similar. ",
answer = "It is generally recommended to compute a **random effects** model, mainly because not all studies included in a meta-analysis will be functionally similar.",
"It is generally recommended to compute a **fixed effect** model, as this reduces the heterogeneity in the set of studies. ",
"It is generally recommended to compute a **random effects** model, as this reduces the heterogeneity in the set of studies. "
)
cat(longmcq(opts_p))
```
**Q5**: When there is no heterogeneity in the effect size estimates included in a meta-analysis, a fixed effect and random effects model will yield similar conclusions. If there is variability in the effect size estimates, the two models can yield different results. Below, we see two forest plots based on the same 5 simulated studies. The top plot is based on a random effects meta-analysis, the bottom plot based on a fixed effect meta-analysis. A random effects meta-analysis incorporates uncertainty about the variability of effect size estimates into the final meta-analytic estimate. How does this translate into a difference between the two plots?
```{r fig-meta-sim-rand, echo = FALSE}
#| fig-cap: "Simulated studies under a random effects model."
set.seed(199)
nSims <- 5 # number of simulated studies
m1 <- 0.4 # population mean Group 1
sd1 <- 1 # standard deviation Group 1
m2 <- 0 # population mean Group 2
sd2 <- 1 # standard deviation Group 2
metadata <- data.frame(yi = numeric(0), vi = numeric(0)) # create empty dataframe
for (i in 1:nSims) { # for each simulated study
n <- sample(30:100, 1) # pick a sample size per group
x <- rnorm(n = n, mean = m1, sd = sd1)
y <- rnorm(n = n, mean = m2, sd = sd2)
metadata[i,1:2] <- metafor::escalc(n1i = n, n2i = n, m1i = mean(x),
m2i = mean(y), sd1i = sd(x), sd2i = sd(y), measure = "SMD")
}
result <- metafor::rma(yi, vi, data = metadata)
par(bg = backgroundcolor)
metafor::forest(result)
```
```{r fig-meta-sim-fixed, echo = FALSE}
#| fig-cap: "Simulated studies under a fixed effect model."
set.seed(199)
nSims <- 5 # number of simulated studies
m1 <- 0.4 # population mean Group 1