{{< include _setup.qmd >}}

::: {.content-visible when-format="pdf"}
# Conclusion {#sec-conclusion}
:::

::: {.content-visible when-format="html"}
# Conclusion {#sec-conclusion .unnumbered}
:::

You've made it to the end of *Experimentology*, our (sometimes opinionated) guide to how to run good psychology experiments. In this book we've tried to present a unified approach to the why and how of running experiments, centered on the goal of doing experiments:

> Experiments are intended to make maximally unbiased, generalizable, and precise estimates of specific causal effects.

This formulation isn't exactly how experiments are talked about in the broader field, but we hope you've started to see some of the rationale behind this approach. In this final chapter, we will briefly discuss some aspects of our approach, as well as how it connects with our four themes: [transparency]{.smallcaps}, [measurement precision]{.smallcaps}, [bias reduction]{.smallcaps}, and [generalizability]{.smallcaps}. We'll end by mentioning some exciting new trends in the field that give us hope about the future of experimentology and psychology more broadly.

## Summarizing our approach

The *Experimentology* approach is grounded both in an appreciation of the power of experiments to reveal important aspects of human psychology and in an understanding of the many ways that experiments can fail. In particular, the "replication crisis" (@sec-replication) has revealed that small samples, a focus on dichotomous statistical inference, and a lack of transparency around data analysis can lead to a literature that is often neither reproducible nor replicable. Our approach is designed to avoid these pitfalls.

We focus on [measurement precision]{.smallcaps} in service of measuring causal effects. The emphasis on causal effects stems from an acknowledgement of the key role of experiments in establishing causal inferences (@sec-experiments) and the importance of causal relationships to theories (@sec-theories). In our statistical approach, we emphasize estimation (@sec-estimation) and modeling (@sec-models), helping us to avoid some of the fallacies that come along with dichotomous inference (@sec-inference). We choose measures to maximize reliability (@sec-measurement). We prefer simple, within-participant experimental designs because they typically result in more precise estimates (@sec-design). And we think meta-analytically about the overall evidence for a particular effect beyond our individual experiment (@sec-meta).

Further, we recognize the presence of many potential sources of bias in our estimates, leading us to focus on [bias reduction]{.smallcaps}. In our measurements, we identify arguments for the validity of our measures, decreasing bias in estimation of the key constructs of interest (@sec-measurement); in our designs we seek to minimize bias due to confounding or experimenter effects (@sec-design). We also try to minimize the possibility of bias in our decisions about data collection (@sec-collection) and data analysis (@sec-prereg). Finally, we recognize the possibility of bias in literatures as a whole and consider ways to compensate in our estimates (@sec-meta).

Finally, we consider [generalizability]{.smallcaps} throughout the process. We theorize with respect to a particular population (@sec-theories) and select our sample in order to maximize the generalizability of our findings to that target population (@sec-sampling). In our statistical analysis, we take into account multiple dimensions of generalizability, including across participants and experimental stimulus items (@sec-models). And in our reporting, we contextualize our findings with respect to limits on their generalizability (@sec-writing).

Woven throughout this narrative is the hope that embracing [transparency]{.smallcaps} throughout the experimental process will help you maximize the value of your work. Not only is sharing your work openly an ethical responsibility (@sec-ethics), but it's also a great way to minimize errors while creating valuable products that both advance scientific progress and accelerate your own career (@sec-management).

## Forward the field

We have focused especially on reproducibility\ and replicability\ issues, but we've learned at various points in this book that there's a replication crisis [@osc2015], a theory crisis [@oberauer2019], and a generalizability crisis [@yarkoni2020] in psychology. Based on all these crises, you might think that we are pessimistic about the future of psychology. Not so.

There have been tremendous changes in psychological methods since we started teaching Experimental Methods in 2012. When we began, it was common for incoming graduate students to describe the rampant $p$-hacking they had been encouraged to do in their undergraduate labs. Now, students join the class aware of new practices like preregistration and cognizant of problems of generalizability and theory building. It takes a long time for a field to change, but we have seen substantial progress at every level---from government policies requiring transparency in the sciences all the way down to individual researchers' adoption of tools and practices that increase the efficiency of their work and decrease the chances of error.

One of the most exciting trends has been the rise of metascience, in which researchers use the tools of science to understand how to make science better [@hardwicke2020b]. Reproducibility\ and replicability\ projects (reviewed in @sec-replication) can help us measure the robustness of the scientific literature. In addition, studies that evaluate the impacts of new policies [e.g., @hardwicke2018b] can help stakeholders like journal editors and funders make informed choices about how to push the field toward more robust science.

In addition to changes that correct methodological issues, the last ten years have seen the rise of "big team science" efforts that advance the field in new ways [@coles2022]. Collaborations such as the Psychological Science Accelerator [@moshontz2018] and ManyBabies [@frank2017b] allow hundreds of researchers from around the world to come together to run shared projects. These projects are enabled by open science practices like data and code sharing, and they provide a way for researchers to learn best practices by participating. Moreover, by including broader and more diverse samples, they can help address challenges around generalizability [@klein2018b].

Finally, the last ten years have seen huge progress in the use of statistical models both for understanding data [@mcelreath2018] and for describing specific psychological mechanisms [@ma2022]. In our own work, we have used these models extensively, and we believe that they provide an exciting toolkit for building quantitative theories that allow us to explain and to predict the human mind.

## Final thoughts

Doing experiments is a craft, one that requires practice and attention. The first experiment you run will have limitations and issues. So will the 100th. But as you refine your skills, the quality of the studies you design will get better. Further, your own ability to judge others' experiments will improve as well, making you a more discerning consumer of empirical results. We hope you enjoy this journey!