A Horse Race …
Psychological treatments that are intended to be fully therapeutic and that are provided by trained professionals (bona fide psychotherapy; Wampold & Imel, 2015; Wampold et al., 2011) have been found to be effective compared to no-treatment and treatment-as-usual for individuals who suffer from a number of disorders, including anxiety and depression (Cuijpers et al., 2013; Wampold et al., 2011). Psychotherapy has also shown enduring efficacy for anxiety disorders in comparison to pharmacotherapy (e.g., Cuijpers et al., 2013; Roshanaei-Moghaddam et al., 2011).
However, the same meta-analyses of studies with direct treatment comparisons using randomized controlled trial (RCT) designs have also indicated that the differences between two bona fide psychotherapies are usually small. In particular, there is little evidence that specific treatment orientations, such as psychodynamic or cognitive-behavioral treatments, are more sustainable than other bona fide psychotherapies at follow-up assessments (the so-called sleeper effect; Flückiger et al., 2015). Overall, these results indicate that the differences between treatments are generally small in comparison to the variability of more and less successful components within treatment packages, especially from a long-term perspective.
Given the lack of evidence that selecting the “right” treatment package (selective indication) provides the hoped-for explanatory power, research on more fine-grained adaptations during treatment is required (Campbell, Norcross, Vasquez, & Kaslow, 2013). Rather than creating an ever-increasing number of novel treatment packages to be tested in comparative RCTs, an additional strategy may lie in developing research designs that can be used to formulate and test a more adaptive approach to psychotherapy.
… And a Bouquet of Designs
Beyond the traditional RCT design, in which two or more distinct psychotherapy approaches or treatment components are typically compared to each other, there are a number of experimental designs that are appropriate for investigating psychotherapy processes and outcomes.
Looking at the therapist
A landmark study that used an innovative design to test for therapist effects was conducted by Strupp and Hadley (1979). In this study, the researchers asked university professors who had a reputation for being especially warm, understanding, and empathic to participate as a control group of therapists. College students who met criteria for depression and psychasthenia were then assigned, depending on availability, either to therapists with actual training and experience or to this control group. Both groups of “therapists” were allowed to use whatever treatment approach they wanted with their clients for a maximum of 25 sessions.
Interestingly, the degree of client improvement did not differ between the trained and control therapists. Although the professors achieved comparable client outcomes, the authors noted that the lay therapists had more difficulty working toward specific treatment goals, ran out of topics to discuss with their clients, and for the most part did not want to continue participating as therapists.
In a more recent, but similarly innovative, study of therapist effects, Lutz et al. (2015) examined the impact of feedback systems on outcome and treatment length. In this naturalistic randomized controlled study, a feedback condition was compared to a non-feedback condition. Therapists in the feedback condition received feedback on their patients’ progress after a certain number of sessions.
To investigate the effect of attitudes toward feedback, therapists were asked to rate their satisfaction with the institutionalized feedback at the end of treatment. Multilevel analyses indicated that therapists’ attitudes toward the feedback accounted for 5.4% of the variability in treatment outcome. This study provides evidence not only that therapists differ in client outcomes, but also that a specific therapist variable (attitude toward outcome feedback) may partially explain these differences.
Looking at the patient
As an example of another innovative RCT, this time examining the impact of a patient variable (patient preferences) on treatment outcome, Raue, Schulberg, Heo, Klimstra, and Bruce (2009) asked patients about their preferred treatment method, and had them rate how strong that preference was, before randomly assigning them to one of two treatment conditions. Because of the random assignment, some patients were naturally allocated to preference-congruent and others to preference-incongruent treatment conditions.
A comparison of initial preferences revealed that 70% of the patients favored psychotherapy over medication and that the preference for psychotherapy was, on average, stronger. More importantly, preference-congruent treatment led to a higher rate of treatment initiation (100%) than preference-incongruent treatment (74%), and preference strength was associated with treatment adherence; however, client outcome was related to neither preference congruence nor preference strength.
A further selective adaptation was tested by Cheavens, Strunk, Lazarus, and Goldstein (2012). In this study, an idiographic ranking of strengths and weaknesses was developed for each client participant based on an intake interview. Patients were then randomized to either a compensation treatment selection or a capitalization treatment selection.
In the compensation selection, treatment packages were selected on the basis of patients’ relative weaknesses in order to build up skills. In the capitalization selection, treatment packages were chosen based on patients’ strengths in order to activate resources and thereby foster their competencies. Interestingly, patients in the capitalization condition experienced greater symptom reduction than patients in the compensation condition. This effect occurred especially early in treatment, and the differences were maintained over the course of therapy.
As another example of a study focused on the patient, Flückiger et al. (2012) investigated whether patients’ evaluations of the therapeutic alliance at the start of the remediation phase changed in response to a brief adjunctive instruction. Demonstrating the use of a minimal intervention paradigm, patients in a university outpatient clinic received treatment as usual in both conditions, but were randomized either to receive a personal one-page letter inviting and encouraging them to give direct feedback about the perceived therapeutic relationship and goal consensus with their therapist, or to a control condition in which no letter was sent. Therapists were blind to the condition to which their patients had been randomized. In accordance with the authors’ hypotheses, global alliance ratings increased faster in the adjunctive condition than in the control condition.
Looking at the sessions
Examining changes at the session level, Dunn, Callahan, Swift, and Ivanovic (2013) tested the effect of a brief mindfulness centering exercise for therapists. In this study, therapists randomly received different exercises (centering or control) to engage in before starting a session, so that the effect of centering could be investigated between therapists as well as within different sessions of the same therapist. To ensure familiarity with the concept of mindfulness centering, therapists first engaged in five short manualized mindfulness training sessions. In the control condition, therapists were allowed to engage in the typical pre-session activities of the participating clinics, such as chatting with colleagues, checking email, or using the restroom.
Rather than randomizing therapists to a single condition at the start of the study, therapist activities were randomized prior to the start of every session. A comparison of the session impacts of these conditions showed that 5 minutes of a centering exercise resulted in therapists perceiving themselves as more present in the subsequent session. Furthermore, when therapists engaged in the centering exercise rather than the control activities, patients perceived the subsequent session as more effective.
Further evidence for the relevance of session-level decisions comes from an implementation-trial design conducted by Flückiger et al. (2016; see also Flückiger & Grosse Holtforth, 2008). The authors contrasted an established treatment for generalized anxiety disorder (the mastery-of-your-anxiety package, MAW) across three randomized implementation conditions. In the adherence priming condition, five sequences of 10-minute peer-tutoring supervision immediately before the start of sessions were used to focus therapists’ attention on patients’ individual symptoms and on how these symptoms could be addressed within the MAW package.
Two comparable conditions derived from a capitalization model were used to focus therapists’ attention on patients’ pre-existing strengths and functional coping skills and on how these individual strengths could be used to engage the patient in the MAW package (resource priming conditions). The two resource priming conditions differed in whether therapists were allowed to invite a patient’s helpful other (usually a husband or wife) into the psychotherapy sessions. The results indicated that both resource priming implementations led to faster symptom reduction than the adherence priming condition.
It is in the nature of psychotherapy, and perhaps of human interventions more generally, that data on treatment processes and outcomes have deeply nested structures at multiple levels, including the in-session level, the session-by-session level, the therapy-phase level, the patient level, the therapist level, the institution level, and so on (Orlinsky, Rønnestad, & Willutzki, 2004). At all of these levels, clinical decisions have to be made, resulting in a stream of interdependent frames, decisions, and outcomes.
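Nested structures like these are commonly analyzed with multilevel (mixed-effects) models, as in the Lutz et al. (2015) study described above. The sketch below is a minimal, purely illustrative example (not part of the original article): it simulates sessions nested within patients nested within therapists, with invented variable names and effect sizes, and fits a linear mixed model using the statsmodels library.

```python
# Illustrative sketch only: simulated data with a nested structure
# (sessions within patients within therapists); all names and
# effect sizes here are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for t in range(10):                      # therapists
    t_eff = rng.normal(0, 0.5)           # therapist-level random intercept
    for p in range(8):                   # patients within each therapist
        p_eff = rng.normal(0, 1.0)       # patient-level random intercept
        for s in range(12):              # sessions within each patient
            score = 5.0 + t_eff + p_eff - 0.2 * s + rng.normal(0, 1.0)
            rows.append({"therapist": t, "patient": p,
                         "session": s, "symptom_score": score})
df = pd.DataFrame(rows)

# Random intercepts for therapists (grouping factor) and for patients
# nested within therapists (variance-component formula); the fixed
# effect of interest is the session-by-session change in symptoms.
model = smf.mixedlm("symptom_score ~ session", df, groups="therapist",
                    vc_formula={"patient": "0 + C(patient)"})
result = model.fit()
print(result.summary())
```

Such a model partitions outcome variance across levels, which is how a figure like the 5.4% therapist-attitude effect mentioned earlier can be estimated from naturalistic data.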
Perhaps one of the most challenging tasks for psychotherapy practitioners and researchers is to obtain a coordinated view of all these levels and to carefully consider the trees as well as the forest. Classical RCT designs try to tackle this clinical complexity by precisely conceptualizing, describing, and delivering overall treatment packages at the patient level. Moving forward, psychotherapy research should attempt to provide additional knowledge that includes all levels (from the institutional to the in-session level) in order to understand what makes psychotherapy as effective as it is (e.g., Norcross, 2011). Further development of intervention designs, including experimental as well as repeated-measures correlational designs, is required to address these various levels of clinical decision making.
Cite This Article
Wolfer, C., & Flückiger, C. (2016). A bouquet of experimental designs in psychotherapy research. Psychotherapy Bulletin, 51(4), 13-16.
Campbell, L. F., Norcross, J. C., Vasquez, M. J., & Kaslow, N. J. (2013). Recognition of psychotherapy effectiveness: The APA resolution. Psychotherapy, 50, 98-101. doi:10.1037/a0031817
Cheavens, J. S., Strunk, D. R., Lazarus, S. A., & Goldstein, L. A. (2012). The compensation and capitalization models: A test of two approaches to individualizing the treatment of depression. Behaviour Research and Therapy, 50(11), 699-706. doi:10.1016/j.brat.2012.08.002
Cuijpers, P., Sijbrandij, M., Koole, S., Andersson, G., Beekman, A. T., & Reynolds, C. F. (2013). The efficacy of psychotherapy and pharmacotherapy in treating depressive and anxiety disorders: A meta-analysis of direct comparisons. World Psychiatry, 12(2), 137-148. doi:10.1002/wps.20038
Dunn, R., Callahan, J. L., Swift, J. K., & Ivanovic, M. (2013). Effects of pre-session centering for therapists on session presence and effectiveness. Psychotherapy Research, 23(1), 78-85. doi:10.1080/10503307.2012.731713
Flückiger, C., Del Re, A. C., & Wampold, B. E. (2015). The sleeper effect: Artifact or phenomenon—A brief comment on Bell, Marcus, & Goodlad, 2013. Journal of Consulting and Clinical Psychology, 83, 438-442. doi: 10.1037/a0037220
Flückiger, C., Del Re, A. C., Wampold, B. E., Znoj, H., Caspar, F., & Jörg, U. (2012). Valuing clients’ perspective and the effects on the therapeutic alliance: A randomized controlled study of an adjunctive instruction. Journal of Counseling Psychology, 59(1), 18-26. doi:10.1037/a0023648
Flückiger, C., Forrer, L., Schnider, B., Bättig, I., Bodenmann, G., & Zinbarg, R. E. (2016). A single-blinded, randomized clinical trial of how to implement an evidence-based cognitive-behavioural therapy for generalised anxiety disorder [IMPLEMENT]: Effects of three different strategies of implementation. EBioMedicine, 3, 163-171. doi: 10.1016/j.ebiom.2015.11.049
Flückiger, C., & Grosse Holtforth, M. (2008). Focusing the therapist’s attention on the patient’s strengths: A preliminary study to foster a mechanism of change in outpatient psychotherapy. Journal of Clinical Psychology, 64(7), 876-890. doi: 10.1002/jclp.20493
Lutz, W., Rubel, J., Schiefele, A. K., Zimmermann, D., Bohnke, J. R., & Wittmann, W. W. (2015). Feedback and therapist effects in the context of treatment outcome and treatment length. Psychotherapy Research, 25(6), 647-660. doi:10.1080/10503307.2015.1053553
Norcross, J. C. (2011). Psychotherapy relationships that work (2nd ed.). New York, NY: Oxford University Press.
Orlinsky, D. E., Rønnestad, M. H., & Willutzki, U. (2004). Fifty years of process-outcome research: Continuity and change. In M. J. Lambert (Ed.), Bergin and Garfield’s handbook of psychotherapy and behavior change (5th ed., pp. 307-390). New York, NY: Wiley.
Raue, P. J., Schulberg, H. C., Heo, M., Klimstra, S., & Bruce, M. L. (2009). Patients’ depression treatment preferences and initiation, adherence, and outcome: A randomized primary care study. Psychiatric Services, 60(3), 337-343. doi: 10.1176/appi.ps.60.3.337
Roshanaei-Moghaddam, B., Pauly, M. C., Atkins, D. C., Baldwin, S. A., Stein, M. S., & Roy-Byrne, P. (2011). Relative effects of CBT and pharmacotherapy in depression versus anxiety: Is medication somewhat better for depression and CBT somewhat better for anxiety? Depression and Anxiety, 28(7), 560-567. doi: 10.1002/da.20829
Strupp, H. H., & Hadley, S. W. (1979). Specific vs nonspecific factors in psychotherapy: A controlled study of outcome. Archives of General Psychiatry, 36(10), 1125-1136. doi:10.1001/archpsyc.1979.01780100095009
Wampold, B. E., Budge, S., Laska, K., Del Re, A. C., Baardseth, T. P., Flückiger, C., . . . Gunn, W. (2011). Evidence-based treatments for depression and anxiety versus treatment-as-usual: A meta-analysis of direct comparisons. Clinical Psychology Review, 31(8), 1304-1312. doi:10.1016/j.cpr.2011.07.012
Wampold, B. E., & Imel, Z. E. (2015). The great psychotherapy debate: The evidence for what makes psychotherapy work. New York, NY: Routledge.