Web-only Feature

Abstract

This paper builds upon prior analyses of the effectiveness of an online, self-guided cognitive behavioral therapy (iCBT) program. Learn to Live, Inc. provides a digital behavioral healthcare platform that offers all users, at no cost, personalized guidance and support from a non-licensed coach. Coaching was found to keep users engaged with the self-care tools longer and to be associated with better clinical outcomes. The current research explores the development of algorithms to identify users at risk of premature termination of engagement with the program due to a poor response; these at-risk users might then be offered additional support. This work builds upon lessons from psychotherapy research that developed algorithms for a similar purpose.

Method

The authors employed a methodology to identify at-risk users based on prior research on measurement and feedback in behavioral healthcare. Users of self-care tools who report worsening of symptoms at a rate that significantly exceeds the measurement error of outcome questionnaires can be targeted as at-risk for premature dropout from the intervention.

Results

The optimal point for intervention with coaching support was identified. Clinical results were compared for users of the platform with and without coaching support. Users receiving coaching support reported significantly greater improvement at the second lesson than users without a coach, and they were significantly more likely to continue to the next lesson. When the at-risk criteria are applied to users not receiving personalized coaching, 17% of users are targeted for a proactive intervention inviting them to reconsider coaching.

Discussion

The authors recommend implementing a process to reach out to users within this high-risk group to offer coaching services. It is hypothesized that this intervention will result in higher retention rates and correspondingly better outcomes for these at-risk users. The proposed next steps are to implement this intervention, track the results over time, validate the assumptions, and fine-tune the intervention as needed.

Purpose

The purpose of the current investigation is to develop algorithms to guide interventions that improve user retention and outcomes for a self-care online cognitive behavioral therapy (iCBT) program developed by Learn to Live, Inc. The authors have published previous work on this commercially available iCBT product, which collects user-completed outcome questionnaires at each lesson.

This granular, user-level data on the measured benefits of the program provides a unique opportunity for quality-improvement interventions that might improve the user’s experience and reported improvement across a variety of symptoms and problems.

Analyses of outcomes for this program have been reported in this journal (Brown et al., 2020; Brown & Jones, 2020), finding evidence that the addition of personalized coaching via phone, text, or email was associated with remaining engaged in the program and with greater improvement per lesson. This suggests that increasing the percentage of users who take advantage of this coaching option (available to all users at no cost) will increase retention and improvement.

Of course, another option is to continuously encourage all users to accept personalized coaching. The problem is that users have already opted not to take advantage of this option.  Outreach to all users, regardless of clinical status, with encouragement to reconsider coaching may be perceived as overly intrusive.  In addition, a percentage of users will go on to experience benefits without coaching support.

Therefore, the purpose of this investigation is to use the existing data to develop algorithms for a targeted intervention designed to maximize the probability of user benefit, while at the same time optimizing the use of coaching resources within the company.

Method

User improvement was assessed with user-completed outcome questionnaires at every lesson. Learn to Live uses three different questionnaires depending on the lesson module: depression (PHQ-9; Kroenke & Spitzer, 2002), social anxiety (SPIN; Connor et al., 2000), and general stress and worry (GAD-7; Spitzer et al., 2006). Factor analyses revealed that the items on these questionnaires load on a common factor. This permits conversion to a standard score and the use of the effect size statistic to report change (Brown et al., 2020).
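To make this concrete, here is a minimal sketch of the standardization step. The normative means and standard deviations below are hypothetical placeholders, not the actual Learn to Live parameters, which are not published in this article:

```python
# Minimal sketch: convert raw questionnaire scores to a common standard
# score and express change as an effect size. The norms below are
# hypothetical placeholders, not the actual Learn to Live parameters.

INTAKE_NORMS = {
    "PHQ-9": {"mean": 12.0, "sd": 6.0},
    "SPIN":  {"mean": 30.0, "sd": 12.0},
    "GAD-7": {"mean": 10.0, "sd": 5.0},
}

def to_standard_score(raw: float, measure: str) -> float:
    """Convert a raw questionnaire score to a z-score on a common metric."""
    norms = INTAKE_NORMS[measure]
    return (raw - norms["mean"]) / norms["sd"]

def change_effect_size(raw_first: float, raw_later: float, measure: str) -> float:
    """Effect size of change; positive values indicate improvement,
    since higher raw scores mean more severe symptoms."""
    return to_standard_score(raw_first, measure) - to_standard_score(raw_later, measure)

# Example: a PHQ-9 drop from 15 to 9 is a 1.0 effect-size improvement
# under the placeholder norms above.
print(change_effect_size(15, 9, "PHQ-9"))  # 1.0
```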

A significant body of psychotherapy research supports the practice of routine outcome measurement combined with algorithm-driven feedback to therapists to improve outcomes (Amble et al., 2014; Bickman et al., 2011; Brown et al., 2001; Brown et al., 2015; Goodman et al., 2013; Hannan et al., 2004; Lambert, 2010a; Lambert, 2010b). The algorithms essentially track individual client improvement from session to session and compare actual change to an expected rate of improvement. Clients who deviate significantly from this expected rate of improvement are much more likely to terminate treatment prematurely.

Therapy clients who report greater improvement than expected tend to self-terminate earlier in treatment. This would seem to be a rational decision, as they likely feel their needs have been met. Yet clients displaying much less improvement than expected are more likely to self-terminate due to discouragement. Statistically derived algorithms can be employed to bring these discouraged clients to the attention of the therapist, permitting a discussion of client concerns and adjustment of treatment methods to keep the client engaged in treatment.
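The feedback logic in this body of research can be sketched in a few lines. The expected-change benchmarks and tolerance band below are illustrative placeholders, not parameters taken from the studies cited above:

```python
# Illustrative sketch of algorithm-driven feedback: compare a client's
# observed cumulative change (in effect-size units) against an expected
# trajectory and flag large negative deviations. All values hypothetical.

EXPECTED_CHANGE_BY_SESSION = {2: 0.20, 3: 0.35, 4: 0.45, 5: 0.55}
TOLERANCE = 0.50  # how far below expectation before flagging

def flag_off_track(session: int, observed_change_es: float) -> bool:
    """Return True if the client falls far enough below the expected
    improvement curve to warrant the therapist's attention."""
    expected = EXPECTED_CHANGE_BY_SESSION.get(session)
    if expected is None:
        return False  # no benchmark defined for this session
    return observed_change_es < expected - TOLERANCE

print(flag_off_track(2, -0.4))  # True: well below the expected 0.20
print(flag_off_track(2, 0.1))   # False: within tolerance
```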

Prior analyses of results for the Learn to Live iCBT program reveal that people completing all seven lessons demonstrate improvement comparable to clients who complete seven sessions of psychotherapy (Brown et al., 2020). While the Learn to Live iCBT digital platform is intended to be self-guided, the company offers all users the option of individualized coaching delivered via phone, email, or text messaging. Brown and Jones (2020) identified two key findings for users accepting the option of personalized coaching: they reported significantly greater improvement in early lessons and a significantly greater probability of continuing for additional lessons, as compared with those declining individualized coaching.

The present study used a dataset similar to the one employed in the Brown and Jones (2020) article. The sample is larger, as it has been updated with additional users of the platform since that earlier article. This analysis starts by focusing on those users completing two lessons, permitting the calculation of an initial change score. The earlier study established that the risk of premature termination is highest for those users not engaged in coaching. The goal is to create algorithms to target the 15%-20% of users at the second lesson who are at greatest risk for a poor outcome and might benefit from an intervention inviting them to reconsider coaching.

Failure to improve has been defined statistically as deterioration on the outcome questionnaire beyond what might be expected by chance (p < .05) given the reliability of the questionnaire. This is referred to as the Reliable Change Index (Jacobson & Truax, 1991; Speer, 1992). The Reliable Change Index (RCI) was calculated in this case to be a -.8 effect size, using blended results from the three questionnaires. The goal is to target the intervention to those users not engaged in coaching who show an initial degree of deterioration at lesson two that is greater than a chance event.
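For readers who want the arithmetic, the Jacobson and Truax (1991) approach divides observed change by the standard error of a difference score implied by the measure's reliability. A minimal sketch follows; the baseline standard deviation and reliability are hypothetical inputs, and only the -.8 effect-size threshold comes from this article:

```python
import math

def reliable_change_index(score_pre: float, score_post: float,
                          sd_baseline: float, reliability: float) -> float:
    """Jacobson & Truax (1991) Reliable Change Index.
    |RCI| > 1.96 marks change too large to attribute to measurement
    error alone (p < .05)."""
    se_measurement = sd_baseline * math.sqrt(1.0 - reliability)
    se_difference = math.sqrt(2.0) * se_measurement
    return (score_post - score_pre) / se_difference

# Hypothetical example: with SD = 6 and reliability = .85, the standard
# error of the difference is about 3.29, so a 7-point worsening on a
# symptom scale (higher = worse) exceeds the 1.96 threshold.
print(reliable_change_index(10, 17, sd_baseline=6.0, reliability=0.85))  # ~2.13
```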

Results

Table 1 presents the effect sizes for four groups: users with coaching and users with no coaching as part of their use of the program, with each group broken down further into those terminating and those continuing use of the program beyond lesson two.

Table 1. Effect sizes by coaching status and by termination versus continuation after lesson two. [Table not reproduced in this version.]

Graph 1 provides a visual representation of the effect sizes in Table 1.

Table 2 displays the total percentage of users in the no-coaching condition with a level of deterioration that meets or exceeds the RCI, which is to say those who deteriorate by a -.8 effect size or more.

Table 2. Percentage of no-coaching users whose deterioration meets or exceeds the RCI. [Table not reproduced in this version.]

Targeting users without prior coaching would result in reaching out to approximately 18% of users after lesson two. Overall, 74% of users without coaching terminated at lesson two, compared with 53% of users receiving coaching. Also, those receiving coaching who did terminate at lesson two reported over twice as much improvement as the no-coaching group (a .41 effect size versus a .18 effect size). This suggests the coaching group may be terminating based more on a sense of goal achievement than on discouragement, though this remains speculative.

Those no-coaching users targeted as at risk at lesson two had an average effect size of -1.22, compared to a .36 effect size for no-coaching users who were not targeted. This group of targeted at-risk users clearly represents a strong opportunity for a cost-effective intervention. Getting these at-risk users into coaching could reduce the likelihood of termination and increase the average improvement per lesson.

Summary

These results provide the basis for developing a simple algorithm, sketched below, to identify users who are candidates for a targeted intervention. The optimal method for outreach remains to be determined. Prior analysis of the impact of the method of coaching (phone, text, or email) revealed no meaningful differences among contact methods. This suggests that the initial outreach could be a text or email offering additional assistance and explaining the potential benefits to the user. It will be important to track the impact of this initiative. The criteria for targeting at-risk users may be modified in the future depending on the success of the initial trial. Likewise, the method and content of outreach may be modified based on the lessons from the initial trial.
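A minimal sketch of such a targeting rule, combining the two criteria described above (no coaching, and lesson-two deterioration at or beyond the -.8 effect-size RCI threshold); the record fields are hypothetical:

```python
# Minimal sketch of the proposed targeting rule. Field names are
# hypothetical; the -0.8 cutoff is the RCI-derived threshold reported above.

RCI_THRESHOLD_ES = -0.8

def is_at_risk(has_coach: bool, lesson_two_change_es: float) -> bool:
    """Flag non-coached users whose lesson-two change shows reliable
    deterioration (change at or below a -0.8 effect size)."""
    return (not has_coach) and lesson_two_change_es <= RCI_THRESHOLD_ES

users = [
    {"id": 1, "has_coach": False, "change_es": -1.1},  # flagged for outreach
    {"id": 2, "has_coach": False, "change_es": 0.3},   # improving on their own
    {"id": 3, "has_coach": True,  "change_es": -1.4},  # already has a coach
]
targets = [u["id"] for u in users if is_at_risk(u["has_coach"], u["change_es"])]
print(targets)  # [1]
```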

 

Cite This Article

Brown, G. S., & Jones, E. (2021, March). Improving clinical outcomes for digital self-care. [Web article]. Retrieved from http://www.societyforpsychotherapy.org/improving-clinical-outcomes-for-digital-self-care

References

Amble, I., Gude, T., Stubdal, S., Andersen, B. J., & Wampold, B. E. (2014). The effect of implementing the Outcome Questionnaire-45.2 feedback system in Norway: A multisite randomized clinical trial in a naturalistic setting. Psychotherapy Research. doi:10.1080/10503307.2014.928756

Bickman, L., Kelley, S. D., Breda, C., de Andrade, A. R., & Riemer, M. (2011). Effects of routine feedback to clinicians on mental health outcomes of youths: Results of a randomized trial. Psychiatric Services, 62, 1423-1429. https://ps.psychiatryonline.org/doi/pdf/10.1176/appi.ps.002052011

Brown, G. S., Burlingame, G. M., Lambert, M. J., Jones, E., & Vaccaro, M. D. (2001). Pushing the quality envelope: A new outcomes management system. Psychiatric Services, 52, 925-934. https://doi.org/10.1176/appi.ps.52.7.925

Brown, G. S., Jones, E., & Cazauvieilh, C. (2020, May). Effectiveness for online cognitive behavioral therapy versus outpatient treatment: A session by session analysis. [Web article]. Retrieved from http://www.societyforpsychotherapy.org/effectiveness-for-online-cognitive-behavioral-therapy-versus-outpatient-treatment

Brown, G. S., & Jones, E. (2020, December). Impact of coaching on rates of utilization and clinical change for digital self-care modules based on cognitive behavioral therapy. [Web article]. Retrieved from http://www.societyforpsychotherapy.org/impact-of-coaching-on-rates-of-utilization-and-clinical-change-for-digital-self-care-modules-based-on-cognitive-behavioral-therapy

Brown, G. S., Simon, A., Cameron, J., & Minami, T. (2015). A collaborative outcome resource network (ACORN): Tools for increasing the value of psychotherapy. Psychotherapy, 52, 412–421. doi:10.1037/pst0000033

Connor, K. M., Davidson, J. R., Churchill, L. E., Sherwood, A., Foa, E., & Weisler, R. H. (2000). Psychometric properties of the Social Phobia Inventory (SPIN): New self-rating scale. British Journal of Psychiatry, 176, 379-386. doi:10.1192/bjp.176.4.379

Goodman, J. D., McKay, J. R. & Dephilippis, D. (2013). Progress monitoring in mental health and addiction treatment: A means of improving care. Professional Psychology: Research and Practice, 44, 231-246. doi:10.1037/a0032605

Hannan, C., Lambert, M. J., Harmon, C., Nielsen, S. L., Smart, D. W., Shimokawa, K., & Sutton, S. W. (2004). A lab test and algorithms for identifying clients at-risk for treatment failure. Journal of Clinical Psychology, 61, 155-163. doi:10.1002/jclp.20108

Jacobson, N. S., & Truax, P. (1991). Clinical significance: A statistical approach to defining meaningful change in psychotherapy research. Journal of Consulting and Clinical Psychology, 59, 12-19. doi:10.1037/0022-006X.59.1.12

Kroenke, K., & Spitzer, R. L. (2002). The PHQ-9: A new depression diagnostic and severity measure. Psychiatric Annals, 32(9), 509-515.

Lambert, M. J. (2010a). Prevention of treatment failure: The use of measuring, monitoring, and feedback in clinical practice. Washington, DC: American Psychological Association.

Lambert, M. J. (2010b). Yes, it is time for clinicians to routinely monitor treatment outcome. In  B.  Duncan, S.  Miller, B.  Wampold, & M. Hubble (Eds.), The heart and soul of change: Delivering what works in therapy (2nd ed.; pp. 239-266). Washington, DC: American Psychological Association. doi:10.1037/12075-009

Speer, D. C. (1992). Clinically significant change: Jacobson and Truax (1991) revisited. Journal of Consulting and Clinical Psychology, 60, 402-408. doi:10.1037/0022-006X.60.3.402

Spitzer, R. L., Kroenke, K., Williams, J. B., & Löwe, B. (2006). A brief measure for assessing generalized anxiety disorder: The GAD-7. Archives of Internal Medicine, 166(10), 1092-1097. doi:10.1001/archinte.166.10.1092

 

 
