Web-only Feature

Impact of Coaching on Rates of Utilization and Clinical Change for Digital Self-Care Modules Based on Cognitive Behavioral Therapy

Internet Editor’s Note: Dr. Jeb Brown and colleagues published an article titled “Effectiveness for Online Cognitive Behavioral Therapy Versus Outpatient Treatment” in Psychotherapy.

The purpose of the current study is to explore whether the addition of personalized coaching improves outcomes of the iCBT program reported on by Brown et al. (2020). The Learn to Live iCBT program offers several modes of enhanced personal coaching support, optionally available to the user: individuals can receive coaching via phone, email, or text communication while completing the iCBT lessons. The coaches generally hold advanced graduate degrees in psychology but are not licensed for independent practice; they are supervised by a licensed clinician. Twenty-one percent of users took advantage of the personalized coaching, divided roughly equally among phone, email, and text contact. No significant differences were found based on the mode of contact.

The results of this study can be used to inform end users, clinicians and other customers regarding the benefits of the program overall and the potential added benefits of taking advantage of the support elements of the program.



Learn to Live (iCBT program) uses three different questionnaires depending on the lesson module: depression is measured by the Patient Health Questionnaire-9 (PHQ-9; Kroenke & Spitzer, 2002); social anxiety by the Social Phobia Inventory-17 (SPIN-17; Connor et al., 2000); and general stress and worry by the Generalized Anxiety Disorder-7 (GAD-7; Spitzer et al., 2006).

At the time of the initial assessment, before the client is exposed to the clinical material, all questionnaire items from the three scales are administered concurrently as a comprehensive assessment. This allows a factor analysis of the items from all three questionnaires to determine whether the three questionnaires are distinct measures or represent a common psychological factor (Brown et al., 2020). The analysis reveals that all the items load on a common factor, with factor loadings ranging from .47 to .77 and a mean loading of .63. The determination of a common factor is a statistical process that does not connect the factor with any specific clinical construct.
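The idea of items loading on a single common factor can be illustrated with a small simulation. The sketch below is not the study's analysis; it assumes simulated responses to 33 items (the PHQ-9, SPIN-17, and GAD-7 item counts) driven by one latent factor, then recovers loadings from the first principal component of the item correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated item responses driven by one latent "global distress" factor.
# Illustrative data only: 33 items = PHQ-9 (9) + SPIN-17 (17) + GAD-7 (7).
n_respondents, n_items = 1000, 33
latent = rng.normal(size=(n_respondents, 1))            # common factor score
true_loadings = rng.uniform(0.47, 0.77, size=(1, n_items))
noise = rng.normal(size=(n_respondents, n_items))
items = latent @ true_loadings + noise * np.sqrt(1 - true_loadings**2)

# Estimate loadings on the first factor from the item correlation matrix.
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)                  # ascending eigenvalues
first = np.abs(eigvecs[:, -1] * np.sqrt(eigvals[-1]))    # first-component loadings

print(round(first.mean(), 2))  # close to the simulated mean loading of ~.62
```

With a single dominant eigenvalue, every item loads substantially on the first component, which is the pattern Brown et al. (2020) report for the combined questionnaire items.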

The existence of a common factor among items commonly used on outcome questionnaires has been apparent for some years (Brophy, Norvell, & Kiluk, 1988; Enns, Cox, Parker, & Guertin, 1998; Lo Coco et al., 2008; Brown et al., 2015). The underlying construct has been referred to by different terms. Within the ACORN collaboration it is referred to as the global distress factor. The ACORN (A Collaborative Outcomes Resource Network) collaboration is a 13-year-old project that has facilitated data collection and feedback for thousands of clinicians (Brown et al., 2015). Clinicians participating in the ACORN collaboration typically administer a client-completed questionnaire at every session. While item content (face validity) varies somewhat depending on the population and measurement needs, the underlying psychometrics of the various versions are very similar. Also, almost 30% of the adult clients within the ACORN collaboration completed either the PHQ-9, the GAD-7, or both concurrently. The ACORN data provide the basis for comparing outcomes of face-to-face psychotherapy in a typical outpatient setting with the outcome data collected within the iCBT program.

Benchmarking Outcomes

Change on the outcome questionnaires is evaluated and compared employing the benchmarking methodology developed by various researchers participating in the ACORN collaboration (Minami et al., 2007; Minami et al., 2008a; Minami et al., 2008b; Minami et al., 2009; Minami et al., 2012). This methodology was employed in the prior article comparing results for iCBT to outpatient therapy (Brown et al., 2020).

A critical component of the benchmarking methodology is the use of effect size. An effect size of one means that the client improved by one standard deviation. The use of the effect size statistic creates a standardized metric with which to compare results. The ACORN methodology limits calculation of effect size to those cases with intake scores in a clinical range, similar to subjects who would be enrolled in clinical trials of various methods of psychotherapy. Using this methodology, it is possible to compare results obtained in outpatient treatment as usual in the community with those observed in clinical trials. Effect sizes of .8 or greater are considered evidence of highly effective treatment (Wampold & Imel, 2015).
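The calculation behind this metric can be sketched in a few lines. The scores, cutoff, and reference standard deviation below are hypothetical values chosen for illustration, not Learn to Live or ACORN data; the sketch simply shows the two steps the text describes: restrict to clinical-range intakes, then divide mean change by a standard deviation.

```python
import numpy as np

# Hypothetical intake and final distress scores (higher = more distress).
intake = np.array([18., 15., 20., 12., 16., 19.])
final  = np.array([10.,  9., 14., 11.,  8., 12.])

CLINICAL_CUTOFF = 10.0   # assumed clinical-range threshold for this scale
INTAKE_SD = 5.0          # assumed reference standard deviation of intake scores

# Restrict to cases starting in the clinical range, as the ACORN method does.
in_range = intake >= CLINICAL_CUTOFF
change = intake[in_range] - final[in_range]

# Standardized effect size: mean improvement in standard-deviation units.
effect_size = change.mean() / INTAKE_SD
print(round(effect_size, 2))  # → 1.2
```

Because the result is expressed in standard-deviation units, an effect size computed this way for a naturalistic sample can be compared directly with benchmarks from clinical trials.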



The ACORN sample of 116,854 participants was drawn from those adults initiating treatment on or after 1/1/2019 and who completed treatment between two and seven sessions. A total of 4,040 Learn to Live (iCBT program) users met the criteria for the calculation of effect size and therefore inclusion in this study. Of the Learn to Live users, 21% chose to receive some type of personalized coaching.

The results will be reported using two different criteria for completion. Treatment completers are those who completed all seven lessons. The second, larger group, referred to as the intent-to-treat group, comprises those who began the program or outpatient therapy and completed at least two lessons/sessions but did not complete all seven. For this group, the calculation of effect size is based on the last lesson completed.

Among the iCBT users with no coaching who completed all seven lessons (18% of the no-coaching sample) the effect size was .82. For those receiving coaching, the effect size at lesson seven (32% of the coaching sample) was 1.28 (p<.01; one-tailed t-test for comparison of effect size). When results are examined using the intent-to-treat method of evaluation, a more complete picture emerges. The iCBT program without coaching had an effect size of .47 with an average final lesson of 3.86, compared to .84 for the coaching sample with an average final lesson of 4.75 (p<.01 for comparison of both effect size and last lesson). By way of comparison, the effect size for the ACORN sample was .74, with an average final session of 3.6. Differences in effect size between the different modes of the coaching condition (phone, email, text messaging) did not reach statistical significance (p>.05). Therefore, differences are noted only at the aggregate level for all modes of coaching.

Graph 1 displays the percentage of users/clients stopping treatment after each lesson/session. Those iCBT users receiving coaching displayed a much stronger tendency to continue to the next lesson than either the other iCBT users or the ACORN clients. This is particularly evident in the early lessons.

Graph 1

Graph 2 displays the effect size depending on the last lesson/session completed. Again, it is apparent that the coaching condition is associated with greater change per session earlier in the program, even for those electing not to use all seven lessons.

Graph 2


These results reconfirm the overall effectiveness of the iCBT program, particularly for users who completed six lessons or more. Those taking advantage of the coaching option displayed significantly greater improvement in earlier lessons as well as a higher retention rate after each lesson. The dip in effect size at lesson six for the coaching condition represents a relatively small percentage of cases and may reflect those users becoming discouraged with their results.

These results provide support for the assertion that personal contact in the form of coaching and support may significantly enhance the effectiveness of iCBT programs. Of course, it is important to keep in mind that correlation does not equal causation in a naturalistic study such as this. While it is possible that those users likely to exhibit more change were more inclined to choose the coaching option, the findings are suggestive and call for more research on how to optimize the benefits of iCBT programs.

Cite This Article

Brown, J.S., & Jones, E. (2020, December). Impact of coaching on rates of utilization and clinical change for digital self-care modules based on cognitive behavioral therapy. [Web article]. Retrieved from http://www.societyforpsychotherapy.org/impact-of-coaching-on-rates-of-utilization-and-clinical-change-for-digital-self-care-modules-based-on-cognitive-behavioral-therapy

References
Andersson, G. (2018). Internet interventions: Past, present and future. Internet Interventions, 12, 181-188. doi:10.1016/j.invent.2018.03.008

Attridge, M. D. (2020, January). Internet-based cognitive-behavioral therapy for employees with anxiety, depression, social phobia, or insomnia: Clinical and work outcomes. Sage Open. Retrieved from https://journals.sagepub.com/doi/full/10.1177/2158244020914398#articleCitationDownloadContainer

Attridge, M. D., Morfitt, R. C., Roseborough, D. J., & Jones, E. R. (2020). Impact of internet-delivered cognitive behavioral therapy on clinical and academic outcomes for college students with anxiety, depression, social anxiety, or insomnia: Four longitudinal studies of archival operational data and follow-up surveys. JMIR Preprints. 06/01/2020:17712. doi:10.2196/preprints.17712

Brophy, C. J., Norvell, N. K., & Kiluk, D. J. (1988). An examination of the factor structure and convergent and discriminant validity of the SCL-90R in an outpatient clinic population. Journal of Personality Assessment, 52, 334. doi:10.1207/s15327752jpa5202_14

Enns, M. W., Cox, B. J., Parker, J. D., & Guertin, J. E. (1998). Confirmatory factor analysis of the Beck Anxiety and Depression Inventories in patients with major depression. Journal of Affective Disorders, 47, 195-200. doi:10.1016/S0165-0327(97)00103-1

Brown, J. S., Jones, E., & Cazauvieilh, C. (May, 2020). Effectiveness for online cognitive behavioral therapy versus outpatient treatment: A session by session analysis. Society for the Advancement of Psychotherapy. Retrieved from http://www.societyforpsychotherapy.org/effectiveness-for-online-cognitive-behavioral-therapy-versus-outpatient-treatment

Brown, G. S., Simon, A., Cameron, J., & Minami, T. (2015). A collaborative outcome resource network (ACORN): Tools for increasing the value of psychotherapy. Psychotherapy, 52, 412–421. doi:10.1037/pst0000033

Connor, K. M., Davidson, J. R., Churchill, L. E., Sherwood, A., Foa, E., & Weisler, R. H. (2000). Psychometric properties of the social phobia inventory (SPIN). The British Journal of Psychiatry, 176(4), 379-386. doi: 10.1192/bjp.176.4.379

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155-159. doi:10.1037//0033-2909.112.1.155

Davies, E. B., Morriss, R., & Glazebrook, C. (2014). Computer-delivered and web-based interventions to improve depression, anxiety, and psychological well-being of university students: A systematic review and meta-analysis. Journal of Medical Internet Research, 16(5), e130. doi:10.2196/jmir.3142

Kroenke, K., & Spitzer, R. L. (2002). The PHQ-9: A new depression diagnostic and severity measure. Psychiatric Annals, 32(9), 509-515. doi:10.1155/2012/309094

Lattie, E. G., Adkins, E. C., Winquist, N., Stiles-Shields, C., Wafford, Q. E., & Graham, A. K. (2019). Digital mental health interventions for depression, anxiety, and enhancement of psychological well-being among college students: Systematic review. Journal of Medical Internet Research, 21(7), e12869. doi:10.2196/12869

Lo Coco, G., Chiappelli, M., Bensi, L., Gullo, S., Prestano, C., & Lambert, M. J. (2008). The factorial structure of the Outcome Questionnaire-45: A study with an Italian sample. Clinical Psychology and Psychotherapy, 15, 418-423. doi:10.1002/cpp.601

Minami T., Brown G. S., McCulloch, J., & Bolstrom, B. J. (2012). Benchmarking therapists: Furthering the benchmarking method in its application to clinical practice. Quality and Quantity, 46, 699-708. doi:10.1007/s11135-011-9548-4

Minami, T., Davies, D. R., Tierney, S. C., Bettmann, J. E., McAward, S. M., Averill, L. A., … & Wampold, B. E. (2009). Preliminary evidence on the effectiveness of psychological treatments delivered at a university counseling center. Journal of Counseling Psychology, 56, 309-320. doi:10.1037/a0015398

Minami, T., Serlin, R. C., Wampold, B. E., Kircher, J. C., & Brown, G. S. (2008a). Using clinical trials to benchmark effects produced in clinical practice. Quality and Quantity, 42, 513-525. doi:10.1007/s11135-006-9057-z

Minami, T., Wampold, B. E., Serlin, R. C., Kircher, J. C., & Brown, G. S. (2007). Benchmarks for psychotherapy efficacy in adult major depression. Journal of Consulting and Clinical Psychology, 75, 232-243. doi:10.1037/0022-006X.75.2.232

Minami, T., Wampold, B. E., Serlin, R. C., Hamilton, E. G., Brown, G. S., & Kircher, J. C. (2008b). Benchmarking the effectiveness of psychotherapy treatment for adult depression in a managed care environment: A preliminary study. Journal of Consulting and Clinical Psychology, 76, 116-124. doi:10.1037/0022-006X.76.1.116

Spitzer, R. L., Kroenke, K., Williams, J. B., & Löwe, B. (2006). A brief measure for assessing generalized anxiety disorder: The GAD-7. Archives of Internal Medicine, 166(10), 1092-1097. doi:10.1001/archinte.166.10.1092

Tavakol, M., & Dennick, R. (2011). Making sense of Cronbach’s alpha. International Journal of Medical Education, 2, 53-55. doi:10.5116/ijme.4dfb.8dfd

Wampold, B. E., & Imel, Z. E. (2015). The great psychotherapy debate: The evidence for what makes psychotherapy work (2nd ed.). New York, NY: Routledge.

