Psychotherapy Bulletin

When is Quantitative Evidence Actually Useful for Day-to-Day Psychotherapy Practice? Why Unsystematic Qualitative Evidence Reigns Supreme

In this article, I will argue that quantitative evidence is not very useful to the practicing psychotherapist and that most day-to-day clinical decisions are instead based on unsystematic qualitative evidence. I imagine this argument will be obvious to some in clinical practice and considered blasphemy against clinical science by others. It is a realization I have come to after practicing psychotherapy part-time for nine years while also receiving doctoral and post-doctoral training in clinical science. I spent much time and energy trying to marry the two, psychotherapy and clinical science, and have concluded they simply aren't that compatible.

A pivotal moment in my development as a psychotherapist occurred on my pre-doctoral internship. I told my supervisor in a rather whiny tone, "But I want to practice psychotherapy based on quantitative evidence!" Imagine the girl from Willy Wonka and the Chocolate Factory ("But I want an Oompa-Loompa now!"), but more professional. He empathetically responded, "I know you do, David, I know." I was realizing that practicing psychotherapy based primarily on quantitative evidence was a fantasy rather than a reality. I used to think the infamous "research-practice" gap in psychotherapy (Teachman et al., 2012) was due to psychotherapists not practicing based on quantitative research (Lilienfeld et al., 2013). Now I think the gap exists because quantitative research is simply not that useful to the practicing psychotherapist. Day-to-day practice requires too many granular decisions for which quantitative evidence is too coarse to be applicable. I will grant that a few forms of quantitative evidence relate directly to day-to-day practice; however, they inform only a minority of the clinical decisions a psychotherapist makes. Although this is not a new argument (Beutler, 2009; Levant, 2004), I believe it is worth restating in our current age of "evidence-based psychotherapies."

What do I mean by day-to-day psychotherapy practice?

By day-to-day psychotherapy practice, I mean the moment-to-moment decisions a psychotherapist makes during a session: the comments, questions, and listening (i.e., intentionally not saying anything and letting the client talk) a psychotherapist decides on at each minute of a psychotherapy session.

As I progressed as a psychotherapist, thinking in terms of psychotherapy "types" or "modalities" became less and less useful. Even the level of "techniques" or "skills" is often too coarse. I remember one early supervisor telling me, "Do cognitive restructuring" – as if that were instruction clear enough for a psychotherapist. There are hundreds of ways to "do cognitive restructuring" with a client. For example, you may not want to literally read verbatim from a CBT manual (Owen & Hilsenroth, 2014). Instead, to "do cognitive restructuring," you need to figure out what comments and questions to say to your client, in what order, and how to respond to your client's replies. The level of "techniques" or "skills" is not granular enough for the practicing psychotherapist. A psychotherapist is not really deciding what broad "intervention" to do, but instead continually deciding what comment, question, or listening to offer in the next minute with their client. After several years of training, I would argue this is the level of granularity at which a psychotherapist wants to be thinking and practicing.

What forms of quantitative evidence will I be talking about?

By quantitative evidence, I mean empirical data involving numbers systematically collected within the field of psychological science (broadly construed). This usually involves research studies on psychotherapy done by clinical scientists (McFall, Treat, & Simons, 2015). I will limit my discussion to two forms of quantitative evidence: 1) basic clinical science and 2) psychotherapy outcome research. I acknowledge there are other forms of quantitative evidence potentially more relevant to day-to-day psychotherapy practice (e.g., psychotherapy process research, routine outcomes monitoring, psychotherapy training research). However, these are beyond the scope of this article.

1) Basic clinical science: Why being in the ballpark is just not good enough

People craft elaborate narratives loosely connecting quantitative evidence to some intervention they plan to do with a client. If there is any quantitative evidence in the ballpark of their claim or intervention, people will call it up like a divine spirit. For example, while attending a psychotherapy conference, I heard a panelist state that we needed to focus on improving clients' grit. He cited Angela Duckworth's research showing grit is associated with not only elite performance (Duckworth et al., 2007) but also psychological well-being (Duckworth, 2016). He suggested doing grit interventions with clients. The problem is that there are no grit interventions supported by quantitative evidence, let alone any framed explicitly within the context of psychotherapy (e.g., what comments/questions/listening a psychotherapist would use to increase a client's grit). The scientific evidence on grit was too far removed from day-to-day psychotherapy practice to be clinically useful, and yet the panelist claimed this was a new, innovative way to do "evidence-based practice."

Quantitative evidence is most useful to practicing psychotherapists when it satisfies all of the following: 1) it comes from human subjects research, 2) it examines within-person change, and 3) it tests an intervention. First, psychotherapy usually involves higher-order cognitive processes, such as language and meta-cognition, unique to human beings. Research on animals, including primates, is not an appropriate analog for understanding psychotherapy. Second, psychotherapy is innately a within-person process involving change over time. Within-person studies seek to understand why a person has different levels of a construct at time A vs. time B – for example, why a person has high depression before psychotherapy and low depression after psychotherapy. Between-person studies investigate why person A has different levels of a construct than person B – for example, why one person has clinical depression and another person has fewer symptoms of depression. While between-person studies are relevant to the causes and correlates of mental health, I assert that they are not relevant to treatments like psychotherapy. Finally, intervention studies testing psychotherapist comments/questions/listening are needed. Knowing that when meaning in life increases for a person, their depression goes down is not very useful for day-to-day psychotherapy practice; the research does not indicate what comments/questions/listening by a psychotherapist can increase meaning in life. It is the difference between a naturalistic study and an intervention study. Just because a construct predicts mental health at the within-person level does not mean a "common sense" intervention based on that construct will change mental health (e.g., grit: telling a client it will be worth it if they don't give up; meaning in life: asking a client to identify a purpose in life for their next session; self-acceptance: telling a client to mindfully recognize their "common humanity" next time they criticize themselves).

I learned this first-hand in group supervision with licensed psychotherapists. I was a student psychotherapist who had just completed four years of quantitative research training. One of the licensed psychotherapists talked about a client with social anxiety and a hostile interpersonal style. I got very excited because my graduate school mentor had done research on this. He found that ~20% of people who meet diagnostic criteria for Social Anxiety Disorder have elevated aggression and anger compared with quiet, meek, and inhibited socially anxious clients (Kashdan et al., 2009; Kashdan & Hofmann, 2008). I proudly informed the psychotherapist of my mentor's research. He asked what my mentor recommended for treating these types of clients, as a round of traditional cognitive behavioral therapy (CBT) for Social Anxiety Disorder was not working (e.g., Heimberg & Becker, 2002). I told him that my mentor did not do research on interventions for this population. His reply was, "I don't need your mentor's study to know my client has social anxiety with anger problems. You can come to my office at 3:00pm on Tuesdays and I will show you the evidence! Instead, I need to know how to treat this client." Indeed, my mentor's research did not inform his day-to-day psychotherapy practice.

2) Psychotherapy outcome research: After selecting a black box, you leave the quantitative evidence behind

My critique of basic clinical science naturally brings us to the psychotherapy outcome literature. Perhaps the most common form of quantitative evidence used to argue that day-to-day psychotherapy practice can be based on science is the study of psychotherapy modalities. These studies tend to be randomized controlled trials (RCTs) that are 1) research on humans that 2) examines within-person change 3) in response to interventions. The three aspects of quantitative evidence I outlined above are satisfied. In this case, the intervention is the psychotherapy modality. By psychotherapy modality, I am referring to the type of psychotherapy that might be a treatment arm of an RCT: Beckian cognitive therapy, intensive short-term dynamic therapy, exposure and response prevention, emotion-focused psychotherapy, etc. There are many psychotherapy modalities with evidence from RCTs supporting their efficacy (Wampold & Imel, 2015). This quantitative evidence is useful for selecting a type of psychotherapy to use with a client. After the psychotherapy modality is selected, however, the psychotherapist must decide how to implement it: what comments or questions to communicate to their clients, in what order, and how to respond to their clients' replies. These clinical decisions, which number in the hundreds each week, are not based on quantitative evidence.

What I am referring to arises in the most recent psychotherapy book I read, Cognitive therapy for suicidal patients: Scientific and clinical applications (Wenzel et al., 2009), which asserts that it is based on "scientific evidence." While the book cites RCTs that empirically support Beckian cognitive therapy for reducing suicidality (e.g., Brown et al., 2005), there is no scientific evidence for any given recommendation or suggestion by the authors. Presumably, the recommendations in the book are things the RCT study psychotherapists did, but there was no quantitative assessment of that. For example, the authors recommend homework assignments that contain only a single component (rather than multi-component homework assignments). The authors did not present evidence correlating the use of single-component homework assignments with therapy outcomes. Ideally, an RCT would be conducted that randomizes suicidal clients to receive either single-component or multi-component homework assignments. As presented in the book, cognitive therapy was a black box for which all the moment-to-moment clinical decisions were lost to the abyss.

If clinical scientists attempted to map out the black box of cognitive therapy for suicidal patients, it would be an impossible task. An untenable number of RCTs would be required for the thousands of day-to-day clinical decisions to be based on the psychotherapy outcome literature. It would take an immense amount of time and money; it is simply not a feasible program of research. Even if thousands of researchers agreed to do these RCTs, you would still have what I have heard some refer to as the "infinite moderator problem." This is the idea that human psychology is so complex, with so many variables at play, that almost any main effect has a seemingly infinite number of moderators potentially impacting its magnitude. Yes, RCTs might show that, on average, clinical decision X leads to better psychotherapy outcomes (i.e., a main effect). However, there are inevitably individual differences in that effect depending on client, therapist, relationship, and other factors (i.e., moderators). This is the spirit behind the famous Gordon Paul (1969) quote: "What treatment, by whom, is most effective for this individual with that specific problem, and under which set of circumstances?" (p. 44). After conducting thousands of RCTs to determine the best clinical decisions to make on average, researchers would need to conduct thousands more RCTs with substantially larger sample sizes to detect the plethora of potential moderators at play, because detecting an interaction effect typically requires several times the sample needed for the corresponding main effect. Again, it is simply not a feasible program of research.
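To make the sample-size point concrete, here is a minimal power-analysis sketch in Python using the statsmodels library. The effect sizes are illustrative assumptions, not figures from any study cited here: a medium treatment main effect (Cohen's d = 0.5) and a moderator (treatment-by-subgroup interaction) of the same raw magnitude, which, as a difference of differences, carries roughly double the standard error and so behaves like an effect half that size.

```python
# A minimal sketch of why moderators demand larger samples.
# Assumptions (illustrative only): a medium treatment main effect of
# d = 0.5, and a treatment-by-subgroup interaction of the same raw
# magnitude. Because the interaction is a difference of differences,
# its standard error roughly doubles, so it is tested like d = 0.25.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()

# Per-group n for 80% power at alpha = .05 (two-sided)
n_main = power.solve_power(effect_size=0.50, alpha=0.05, power=0.80)
n_moderator = power.solve_power(effect_size=0.25, alpha=0.05, power=0.80)

print(f"Per-group n, main effect:      {n_main:.0f}")       # ~64
print(f"Per-group n, moderator effect: {n_moderator:.0f}")  # ~252
```

Roughly a fourfold increase per group for a single, optimistically large moderator; with dozens of plausible moderators per clinical decision, the required program of research balloons accordingly.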

Therefore, it is my position that Paul Meehl and colleagues' research on clinical vs. actuarial prediction (Dawes, Faust, & Meehl, 1989; Meehl, 1954/1996) is not very useful to practicing psychotherapists. Many psychotherapy outcome studies manipulate the modality (i.e., the black box). Actuarial prediction applies when selecting which black box to open. After that, the thousands of psychotherapist decisions for each client cannot be based on actuarial prediction, because the thousands upon thousands of RCTs needed to build actuarial formulas have not been done. The practicing psychotherapist is forced to use clinical prediction because actuarial prediction is not available. I do not doubt that Meehl understood this idea, and that if he were around today, he would likely agree. Yet I hear some clinical scientists talk about Meehl's research on clinical vs. actuarial prediction as if psychotherapists should be basing their moment-to-moment clinical decisions on quantitative evidence. For example, Lilienfeld et al. (2013) cited two clinical vs. actuarial prediction meta-analyses when arguing for the use of quantitative evidence in psychotherapy practice. However, the first meta-analysis primarily examined predictions of future behavior (e.g., violent offense), performance (e.g., academic achievement), or prognosis (e.g., length of hospital stay) (Egisdottir et al., 2006), and the second primarily examined predictions of medical diagnosis (e.g., throat infection), job performance (e.g., military training), and mental health treatment outcome (e.g., response to a psychotherapy modality) (Grove et al., 2000). The clinical vs. actuarial prediction literature supports psychotherapy in general as a treatment (Wampold & Imel, 2015) and supports using the psychotherapy outcome literature to select a type of psychotherapy, but nothing more. Anyone who says otherwise either hasn't been a full-time psychotherapist or simply does not understand how little actuarial prediction is available for day-to-day clinical decision making.

Revenge of the psychotherapy arts: The important role of unsystematic qualitative evidence

I have argued that RCTs and actuarial prediction cannot tell you what comment/question/listening to say next to your client. What does a psychotherapist use instead? I contend that the answer is unsystematic qualitative evidence. By unsystematic qualitative evidence, I mean empirical data that are not collected in any systematic way and that do not involve assigning numbers to the data. Clinical experience from working with clients in psychotherapy is a main source of unsystematic qualitative evidence. I added the adjective "unsystematic" to distinguish the clinical experience I am talking about from qualitative research studies that collect evidence in a systematic way (e.g., Morrison et al., 2017; Maxwell & Levitt, in press). Psychotherapists are continually absorbing unsystematic qualitative evidence from their clients. They are seeing how clients respond to comments/questions/listening during sessions, whether clients find a technique helpful, inert, or harmful, and whether clients' distress and impairment go down after several weeks of an approach. Most psychotherapists want to help their clients and are motivated to do so. Psychotherapists are continually reinforced and punished for their therapeutic behavior depending on whether their clients get better. Unsystematic qualitative evidence includes more than clinical observation, though. Many specific recommendations in treatment manuals and clinical books are based on unsystematic qualitative evidence from the authors (e.g., the complexity of homework assignments in cognitive therapy for suicidal patients; Wenzel et al., 2009). In addition, much of the clinical wisdom I have received from supervisors was based on unsystematic qualitative evidence. I had one supervisor who authored a well-known treatment manual. I asked if he follows his own treatment manual when doing psychotherapy with his clients. He replied, "No," and emphasized the importance of flexibility and "tailoring treatment to the individual client." Learning how to "tailor treatment to the individual client" – what some call the art of psychotherapy – is based on unsystematic qualitative evidence.

Unsystematic qualitative evidence – not quantitative evidence – is how new psychotherapies have developed. We learn in Psych 101 that Freud used his clinical experience to develop psychoanalysis. The same is true for Beck with Cognitive Therapy and Linehan with Dialectical Behavior Therapy (DBT). Beck used his clients' thoughts (initially dreams as well) as qualitative data and essentially conducted thematic analysis (a qualitative research method) on them. I have not read of any quantitative data being involved until after Beck had fully developed cognitive therapy for depression and conducted his first RCT with John Rush (Rush et al., 1977). DBT had a similar trajectory. It is reported that Linehan started out doing conventional behavior therapy with suicidal clients. Through clinical experience, she realized that suicidal clients felt invalidated by conventional behavior therapy interventions, while client-centered therapy alone was not helpful. Linehan determined that she needed to balance the two approaches (Linehan, 2020). Quantitative evidence entered Linehan's research only to test the already-developed DBT modality.

Thus, many of the sentences in treatment manuals and clinical books – like the cognitive therapy for suicidal patients book I referenced above – are based on unsystematic qualitative evidence. They can nevertheless be useful reading for a practicing psychotherapist. There is a reason many psychotherapists do not reference basic clinical science, RCTs, and other empirical journal articles (Morrow-Bradley & Elliott, 1986). It is more useful to hear a case study of one of the clients from an RCT than to interpret the statistical results from the RCT's full sample. I have met people who know the science of psychotherapy research very well but who, from what my colleagues and I could tell, were not effective psychotherapists. My moment-to-moment clinical decisions as a psychotherapist won't be based on quantitative evidence, because they can't be. As I aim to become a better psychotherapist, I will be reading treatment manuals and clinical books by expert psychotherapists who have gathered massive amounts of unsystematic qualitative evidence from seeing hundreds – if not thousands – of clients. I still feel the way I felt on internship, that "I want to practice psychotherapy based on quantitative evidence," but I have now accepted it is not possible. I hear my supervisor's empathetic voice in my head – "I know you do, David, I know" – and then I turn to the next clinical book on my list.

David Disabato is an Assistant Professor of Psychology at Baldwin Wallace University in Berea, OH. He completed his PhD in Clinical Psychology at George Mason University. He teaches Psychological Disorders, Practicum in Psychology, Research Methods, and Statistics. David also supervises clinical psychology doctoral students at Kent State University who provide individual psychotherapy.

Cite This Article

Disabato, D. (2023). When is quantitative evidence actually useful for day-to-day psychotherapy practice? Why unsystematic qualitative evidence reigns supreme. Psychotherapy Bulletin, 58(2,3), 51-57. 

References

Beutler, L. E. (1997). The psychotherapist as a neglected variable in psychotherapy: An illustration by reference to the role of therapist experience and training. Clinical Psychology: Science and Practice, 4(1), 44–52. https://doi.org/10.1111/j.1468-2850.1997.tb00098.x

Beutler, L. E. (2009). Making science matter in clinical practice: Redefining psychotherapy. Clinical Psychology: Science and Practice, 16(3), 301–317. https://doi.org/10.1111/j.1468-2850.2009.01168.x

Brown, G. K., Tenhave, T., Henriques, G. R., Xie, S. X., Hollander, J. E., & Beck, A. T. (2005). Cognitive therapy for the prevention of suicide attempts: A randomized controlled trial. JAMA, 294, 563-570. 

Cronbach, L. J. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 30, 116-127. 

Dawes, R. M., Faust, D., & Meehl, P. E. (1989). Clinical versus actuarial judgment. Science, 243(4899), 1668-1674. 

Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). Grit: Perseverance and passion for long-term goals. Journal of Personality and Social Psychology, 92(6), 1087-1101.

Duckworth, A. L. (2016). Grit: The power of passion and perseverance. Simon & Schuster, Inc. 

Egisdottir, S., White, M. J., Spengler, P. M., Maugherman, A. S., Anderson, L. A., Cook, R. S., et al. (2006). The meta-analysis of clinical judgment project: Fifty-six years of accumulated research on clinical versus statistical prediction. The Counseling Psychologist, 34, 341-382.

Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., & Nelson, C. (2000). Clinical versus mechanical prediction: A meta-analysis. Psychological Assessment, 12, 19-30.

Heimberg, R. G., & Becker, R. E. (2002). Cognitive-behavioral group therapy for social phobia: Basic mechanisms and clinical strategies. Guilford Press. 

Kashdan, T. B., & Hofmann, S. G. (2008). The high-novelty-seeking, impulsive subtype of generalized social anxiety disorder. Depression & Anxiety, 25, 535-541. doi:10.1002/da.20382 

Kashdan, T. B., McKnight, P. E., Richey, J. A., & Hofmann, S. G. (2009). When social anxiety disorder co-exists with risk-prone, approach behavior: Investigating a neglected, meaningful subset of people in the National Comorbidity Survey Replication. Behaviour Research and Therapy, 47, 559-568. 

Krumboltz, J. D. (1966). Promoting adaptive behavior: New answers to familiar questions. In J. D. Krumboltz (Ed.), Revolution in counseling (pp. 1-26). Boston MA: Houghton Mifflin. 

Levant, R. (2004). The empirically validated treatments movement: A practitioner perspective. Clinical Psychology: Science and Practice, 11(2), 219-224. 

Lilienfeld, S. O., Ritschel, L. A., Lynn, S. J., Cautin, R. L., & Latzman, R. D. (2013). Why many clinical psychologists are resistant to evidence-based practice: Root causes and constructive remedies. Clinical Psychology Review, 33(7), 883-900. 

Linehan, M. M. (2020). Building a life worth living: A memoir. Random House. 

Maxwell, J. A., & Levitt, H. M. (in press). How qualitative methods advance the study of causation in psychotherapy research. Psychotherapy Research. 

McFall, R. M., Treat, T. A., & Simons, R. F. (2015). Clinical science model. In R. L. Cautin & S. O. Lilienfeld (Eds.), The encyclopedia of clinical psychology (1st ed.). Wiley.

Meehl, P. E. (1996). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. Northvale, NJ: Jason Aronson. (Original work published 1954) 

Morrison, N. R., Constantino, M. J., Westra, H. A., Kertes, A., Goodwin, B. J., & Antony, M. M. (2017). Using interpersonal process recall to compare patients’ accounts of resistance in two psychotherapies for generalized anxiety disorder. Journal of Clinical Psychology, 73(11), 1523-1533. 

Morrow-Bradley, C., & Elliott, R. (1986). Utilization of psychotherapy research by practicing psychotherapists. American Psychologist, 41(2), 188-197. 

Owen, J., & Hilsenroth, M. J. (2014). Treatment adherence: the importance of therapist flexibility in relation to therapy outcomes. Journal of Counseling Psychology, 61(2), 280-288. 

Paul, G. L. (1969). Behavior modification research: Design and tactics. In C. M. Franks (Ed.) Behavior therapy: Appraisal and status (pp. 29-62). New York, NY: McGraw-Hill. 

Rush, A. J., Beck, A. T., Kovacs, M., & Hollon, S. (1977). Comparative efficacy of cognitive therapy and pharmacotherapy in the treatment of depressed outpatients. Cognitive Therapy and Research, 1, 17-37. 

Teachman, B. A., Drabick, D. A. G., Hershenberg, R., Vivian, D., Wolfe, B. E., & Goldfried, M. R. (2012). Bridging the gap between clinical research and clinical practice: Introduction to the special section. Psychotherapy, 49(2), 97–100. https://doi.org/10.1037/a0027346 

Wampold, B. E., & Imel, Z. E. (2015). The great psychotherapy debate: The evidence for what makes psychotherapy work. Routledge. 

Wenzel, A., Brown, G. K., & Beck, A. T. (2009). Cognitive therapy for suicidal patients: Scientific and clinical applications. American Psychological Association. 
