Rinse and Repeat
Replication and Research Ethics
Clinical Impact Statement: Professional competence requires regular self-assessment and self-reflection on acquisition and maintenance of the skills, abilities, and training needed to perform effectively as a psychologist. What happens when a trainee’s or psychologist’s competence is compromised? What happens when psychological science is inaccurately or falsely communicated to the public? This article discusses areas of professional competence and considerations for self-assessment and maintenance of competence over the lifespan.
Scandals involving psychological research have been making headlines since World War II (see Adair, 2001, for a review). These public critiques make individuals skeptical of the veracity of psychological science. Recently, participants from Philip Zimbardo’s Stanford Prison Experiment (Haney, Banks, & Zimbardo, 1973) were interviewed and revealed potential ethical violations, including feeling obligated to remain in the experiment (Blum, 2018). Although Zimbardo has maintained that the participants’ accusations are unfounded, questions of research ethics and the need for replicability continue to be points of discussion.
Beyond the public discourse regarding the authenticity of psychological science, there have been concerns among psychologists as well. In recent years, researchers have noted a crisis of confidence in scientific research, including psychological research (Giofrè, Cumming, Fresc, Boedker, & Tressoldi, 2017). Commonly cited studies sometimes go without critique or replication within the field or its subspecialties. Indeed, many introductory psychology textbooks expose students to inaccurate information or report information that is not mainstream (Ferguson, Brown, & Torres, 2018; Warne, Astle, & Hill, 2018). Therefore, there is a need within psychology to continuously monitor research ethics.
Beyond an Institutional Review Board: Research Ethics
Ethical guidelines, such as the American Psychological Association’s (APA, 2017) Ethical Principles of Psychologists and Code of Conduct, provide a foundational understanding of research ethics, but interpretation of those guidelines remains ambiguous. Given the necessity of providing accurate information to both the field and the public, an evaluation of current research ethics and the limitations of current practice is warranted. Awareness and application of ethics in professional activities is a core competency of psychologists and an expected component of graduate and postgraduate training in psychology (Rodríguez et al., 2014).
Although these ethical guidelines may be globally informative, some argue they provide little guidance for subspecialty areas of research, especially in social and experimental psychology (Adair, 2001; Sieber, 1994). Thus, specialty-specific guidelines have emerged. For instance, individuals in psychology and law consult the Specialty Guidelines for Forensic Psychology (American Psychological Association, 2013). Evaluation of research ethics within subspecialties or with specific populations is needed.
Reproducibility and replicability in research become increasingly important when considering the implications of research findings. Alter and Gonzalez (2018) defined reproducible as “the ability to verify published findings using the same data set” and replicable as “the ability to find similar results in a new study” (p. 146). Reports have revealed that many published studies in psychology cannot be replicated or reproduced (Open Science Collaboration, 2015). The Open Science Collaboration attempted to replicate 100 studies from three well-regarded journals: only 36% of the replications yielded statistically significant results, and the effect sizes were, on average, about half as strong as those reported in the original studies. Other critics have noted the presence of “p-hacking,” the practice of making post-hoc adjustments to data or data analyses in order to obtain a significant p-value, in psychological research. To discourage such conduct, data reporting practices need to be reviewed and enhanced.
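One common form of p-hacking, repeatedly testing the data as participants accrue and stopping the moment p < .05, can be illustrated with a short simulation. This is a hedged sketch: the two-group design, sample sizes, and stopping rule are hypothetical choices for illustration, not drawn from any study cited here. Even when the null hypothesis is true by construction, peeking at the data inflates the false-positive rate well beyond the nominal 5%.

```python
import math
import random

random.seed(1)

def z_test_p(a, b):
    """Two-tailed p-value from a two-sample z-test (normal
    approximation; adequate for the group sizes used here)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))

def run_study(peek, n_max=100, step=10):
    """Simulate one study in which the null is TRUE (both groups
    drawn from the same distribution). If peek is True, the analyst
    tests after every `step` participants per group and stops at the
    first p < .05 -- the optional-stopping form of p-hacking."""
    a, b = [], []
    for _ in range(n_max):
        a.append(random.gauss(0, 1))
        b.append(random.gauss(0, 1))
        if peek and len(a) >= 20 and len(a) % step == 0:
            if z_test_p(a, b) < .05:
                return True  # "significant" result, stop and publish
    return z_test_p(a, b) < .05  # single planned test at full sample

trials = 2000
honest = sum(run_study(peek=False) for _ in range(trials)) / trials
hacked = sum(run_study(peek=True) for _ in range(trials)) / trials
print(f"False-positive rate, one planned test:   {honest:.3f}")
print(f"False-positive rate, optional stopping:  {hacked:.3f}")
```

The honest analysis hovers near the nominal .05, while the peeking analyst finds a spurious "effect" in a markedly larger share of studies, which is why preregistered stopping rules matter.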
Although the American Psychological Association’s (APA) publication manual establishes a baseline for what should be reported in published works, other guidelines provide more extensive detail on the content that should be included for the purposes of replication. For instance, the STROBE checklists help researchers and authors report thorough information regarding their methodology, data analyses, and study limitations (STROBE, 2007). Additionally, some journals and professional organizations have adopted their own data or submission reporting guidelines (see Giofrè, Cumming, Fresc, Boedker, & Tressoldi, 2017).
In addition to replicability, generalizability of results is another important component of psychological research, as many studies lack representative samples. However, there continues to be discussion regarding conducting research with vulnerable and marginalized populations, including, but not limited to, children and adolescents, individuals who are undocumented, minority groups (racial, ethnic, sexual), individuals who are incarcerated, individuals with physical health conditions, and students. For instance, there has been a great deal of debate on whether researchers can or should ask about participants’ abuse histories, with some Institutional Review Boards (IRBs) prohibiting the practice (Becker-Blease & Freyd, 2006). Additionally, given that racial-ethnic minorities often experience a disproportionate number of psychosocial stressors, researchers should be especially mindful of the well-being of minority populations as research participants; this includes awareness of the lack of diversity among researchers, the use of culturally sensitive practices, and avoidance of exploitation (Gil & Bob, 1999). Addressing cultural considerations in research ethics, beyond issues of deception, is crucial to maintaining beneficence.
Emerging technology, such as the use of social media, presents additional ethical challenges. Golder, Ahmed, Norman, and Booth (2017) reviewed 17 studies evaluating attitudes toward social media research. Their thematic analysis identified ethical concerns such as conducting research with vulnerable groups on social media, risks to users, privacy, and the validity of such research. There was a lack of consensus regarding social media research ethics in general, especially in major areas such as confidentiality, informed consent, and assessing the risks and benefits of conducting social media studies. Although some researchers have identified these challenges in psychological research (see Keller & Lee, 2003, for a review), official guidelines have not been developed. In sum, social media research raises challenging ethical factors that need further exploration, and, as emerging technologies develop, further dialogue will be warranted.
Lastly, in working with vulnerable populations, researchers should be mindful of unintentionally coercive approaches. Festinger et al. (2009) found that monetary incentives improved participants’ recall of informed consent information, which may seemingly improve the quality of a study but may influence vulnerable populations differently than other participants. Currently, many researchers use Amazon’s Mechanical Turk (MTurk) to recruit participants for their studies. MTurk is a means of crowdsourcing research and can result in a more diverse sample than the traditional college student convenience sample. However, several researchers have noted ethical concerns with these samples, including economic exploitation, the comprehensiveness of sampling, and concerns regarding effort, attention, and the validity of the research (Goodman, Cryder, & Cheema, 2013; McInroy, 2016). Further, other data collection sources, such as message boards and social media, have also been critiqued. Though these approaches might increase access to important subpopulations (e.g., racial, ethnic, and/or sexual minorities), researchers should critically evaluate the use of emerging technologies in data collection.
Several recommendations have been made to improve research ethics in the field of psychology. Although some approaches have long been in place, there are also innovative ideas for improving accountability in research ethics.
Training and continuing education. Training and continuing education remain valuable venues for discussions of research ethics. Many continuing education workshops in ethics focus on broad clinical and professional ethical concerns but rarely discuss research ethics. As stated above, specialty guidelines and new research considerations are frequently discussed in the literature, and these venues allow trainees and professionals to dialogue about emerging areas of research and maintain professional competence. Further, nonlicensed professionals, such as academic researchers, may not be required to complete continuing education, so this is an important training gap for their institutions to address.
Open access. Given the lack of replication of many studies, the need for greater transparency of methodology and data analysis is apparent. There has been a call for more transparency in data reporting practices, as well as access to data for the purposes of replication (Alter & Gonzalez, 2018; Giofrè et al., 2017). Researchers have suggested that all results be “open to challenge through reexamination, reanalysis, reproducibility, and replication” (Alter & Gonzalez, 2018, p. 146). Data repositories (i.e., organizations that maintain and distribute data) contain data files for use by other researchers, and some funding agencies require data sharing plans for research grants (DuBois, Strait, & Walsh, 2017). Materials commonly archived in these repositories include interview guides, raw data, data codebooks, IRB approval documents, transformed data, research method protocols, and references (DuBois, Strait, & Walsh, 2017). Repositories can also assist should unexpected incidents occur, such as computer or server failure, natural disasters, or the death of a principal investigator. In sum, having data readily available for other researchers may streamline the data sharing process and facilitate more replication studies.
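As a minimal sketch of the kind of documentation repositories expect, the snippet below archives a data file alongside a machine-readable codebook so that another researcher could interpret the data without the original team. All variable names, values, and file names here are invented for illustration and are not taken from any study discussed above.

```python
import csv
import json

# Hypothetical study data -- variables and values are illustrative only.
rows = [
    {"participant_id": 1, "condition": "control", "anxiety_score": 14},
    {"participant_id": 2, "condition": "treatment", "anxiety_score": 9},
]

# Codebook: one entry per variable, documenting type, meaning, and
# allowed values, so the archived file is self-describing.
codebook = {
    "participant_id": {"type": "integer",
                       "description": "Anonymized participant ID"},
    "condition": {"type": "string",
                  "description": "Study arm",
                  "values": ["control", "treatment"]},
    "anxiety_score": {"type": "integer",
                      "description": "Total score on a hypothetical "
                                     "0-40 self-report anxiety scale"},
}

# Write the raw data as CSV and the codebook as JSON, side by side,
# ready to upload to a repository.
with open("study_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)

with open("codebook.json", "w") as f:
    json.dump(codebook, f, indent=2)

print("Archived study_data.csv and codebook.json")
```

Pairing every archived data file with a codebook like this is one concrete way to make results “open to challenge through reexamination, reanalysis, reproducibility, and replication.”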
Badges. Back to the Scouts we go! The Center for Open Science (COS) has developed three “badges” for published journal articles to designate which articles meet criteria for research openness. An Open Data badge is awarded to articles whose collected data are stored on an open-access online site. An Open Materials badge indicates the researchers have uploaded the surveys, tests, and other materials used in the collection of the data. A Preregistered badge indicates the researchers clearly articulated important aspects of their methodology prior to collecting data and saved their research plan on a public website. Several psychological research journals, including Psychological Science, Journal of Social Psychology, and Journal of Research in Personality, have supported the COS and adopted the badges (Grahe, 2014; Rouse, 2017). The Psi Chi Journal of Psychological Research adopted the COS badges and added one of its own (Rouse, 2017): a Replication badge that identifies replication studies.
Findings from research have major implications for the public and the field. Many trust that such results were obtained in an ethical manner and that the peer review process serves as gatekeeper to what is published in peer-reviewed journals. Accurate data reporting and replicability of research are crucial to the scientific process and to determining the generalizability of results. Researchers should discuss the limitations of their results and report areas in need of future study. Further, emerging approaches, such as open access repositories, can help facilitate collaboration and replication efforts.
Cite This Article
Alexander, A. (2018). Rinse and repeat: Replication and research ethics. Psychotherapy Bulletin, 53(3), 43-47.
References
Adair, J. G. (2001). Ethics of psychological research: New policies; continuing issues; new concerns. Canadian Psychology, 42(1), 25-37.
Alter, G., & Gonzalez, R. (2018). Responsible practices for data sharing. American Psychologist, 73(2), 146-156. doi: 10.1037/amp0000258
American Psychological Association. (2013). Specialty guidelines for forensic psychology. Retrieved from http://www.apa.org/pubs/journals/features/forensic-psychology.pdf
American Psychological Association. (2017). Ethical principles of psychologists and code of conduct. Retrieved from http://www.apa.org/ethics/code/
Becker-Blease, K. A., & Freyd, J. J. (2006). Research participants telling the truth about their lives: The ethics of asking and not asking about abuse. American Psychologist, 61(3), 218-226. doi: 10.1037/0003-066X.61.3.218
Blum, B. (2018). The life span of a lie. Retrieved from https://medium.com/s/trustissues/the-lifespan-of-a-lie-d869212b1f62
DuBois, J. M., Strait, M., & Walsh, H. (2017). Is it time to share qualitative research data? Qualitative Psychology. Advance online publication. doi: 10.1037/qup0000076
Festinger, D. S., Marlowe, D. B., Croft, J. R., Dugosh, K. L., Arabia, P. L., & Benasutti, K. M. (2009). Monetary incentives improve recall of research consent information: It pays to remember. Experimental and Clinical Psychopharmacology, 17(2), 99-104. doi: 10.1037/a0015421
Ferguson, C. J., Brown, J. M., & Torres, A. V. (2018). Education or indoctrination? The accuracy of introductory psychology textbooks in covering controversial topics and urban legends about psychology. Current Psychology, 37(3), 574-582. doi: 10.1007/s12144-016-9539-7
Gil, E. F., & Bob, S. (1999). Culturally competent research: An ethical perspective. Clinical Psychology Review, 19(1), 45-55.
Giofrè, D., Cumming, G., Fresc, L., Boedker, I., & Tressoldi, P. (2017). The influence of journal submission guidelines on authors’ reporting of statistics and use of open research practices. PLoS One, 12(4), e0175583. doi: 10.1371/journal.pone.0175583
Golder, S., Ahmed, S., Norman, G., & Booth, A. (2017). Attitudes toward the ethics of research using social media: A systematic review. Journal of Medical Internet Research, 19(6), e195.
Goodman, J. K., Cryder, C. E., & Cheema, A. (2013). Data collection in a flat world: The strengths and weaknesses of Mechanical Turk samples. Journal of Behavioral Decision Making, 26, 213-224. doi: 10.1002/bdm.1753
Grahe, J. E. (2014). Announcing open science badges and reaching for the sky. The Journal of Social Psychology, 154, 1-3. doi: 10.1080/00224545.2014.853582
Haney, C., Banks, C., & Zimbardo, P. (1973). Interpersonal dynamics in a simulated prison. International Journal of Criminology and Penology, 1, 69-97.
Keller, H. E., & Lee, S. (2003). Ethical issues surrounding human participants research using the internet. Ethics & Behavior, 13(3), 211-219.
McInroy, L. B. (2016). Pitfalls, potentials, and ethics of online survey research: LGBTQ and other marginalized and hard-to-access youths. Social Work Research. 40(2), 83-94. doi: 10.1093/swr/svw005
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349, aac4716. doi: 10.1126/science.aac4716
Rodríguez, M. M. D., Cornish, J. A. E., Thomas, J. T., Forrest, L., Anderson, A., & Bow, J. N. (2014). Ethics education in professional psychology: A survey of American Psychological Association accredited programs. Training and Education in Professional Psychology, 8(4), 241-247. doi: 10.1037/tep0000043
Rouse, S. V. (2017). The red badge of research (and the yellow, blue, and green badges, too). Psi Chi Journal of Psychological Research, 22(1), 2-9.
Sieber, J. E. (1994). Will the new code help researchers to be more ethical? Professional Psychology: Research and Practice, 25(4), 369-375.
STROBE. (2007). STROBE checklists (Version 4). Retrieved from https://www.strobe-statement.org/index.php?id=available-checklists
Warne, R. T., Astle, A. C., & Hill, J. C. (2018). What do undergraduates learn about human intelligence? An analysis of introductory psychology textbooks. Archives of Scientific Psychology, 6, 32-50. doi: 10.1037/arc0000038