Web-only Feature

The past decade has witnessed the rise of digital therapeutics, a multi-billion-dollar healthcare segment that provides consumer self-help tools, with behavioral health as a dominant focus. A number of companies offer these programs, including Silver Cloud, Ginger, and Spring Health. The company studied here, Learn to Live, offers self-paced digital modules for stress, depression, anxiety, and other conditions. In addition, coaching and other supportive personal contact are available while users complete their digital lessons.

Previous analyses of the effectiveness of Learn to Live’s programs, based on user-completed outcome questionnaires, showed significant clinical change. This study addresses whether those clinical results have been maintained or have changed for a subsequent cohort of users. Brown et al. (2020) reported results for users of the Learn to Live program between January 1, 2019, and March 15, 2020 (n = 4,242), including a comparison with clients receiving outpatient psychotherapy (ACORN collaboration sample, n = 120,671). This sample will be referred to as the baseline cohort.

That study showed that digital users completing seven lessons reported greater improvement than those completing seven sessions of outpatient therapy. However, the dropout rate for digital users was higher, and reported change in early lessons was smaller than that found for psychotherapy clients. Results from this period serve as a baseline against which to evaluate results for a subsequent period in which the company sought to improve outcomes. Improving the results of digital therapeutics is a new area of study, since these tools have been prominent only during the past decade.

Learn to Live supports an active program of continuing evaluation and initiatives designed to improve results. For example, Brown & Jones (2021) propose an algorithm to identify Learn to Live users at risk for a poor outcome by the second lesson, in order to encourage those at-risk users to accept additional personal support. Data are currently being collected to evaluate the effects of this particular intervention.

Digital therapeutics companies may report positive outcomes, but these reports rest on naturalistic studies without the rigor of randomized, controlled designs. Follow-up validation studies therefore help assess the extent to which real-world studies are producing chance findings. The current study of the Learn to Live program evaluates results for users of the platform between July 1, 2020, and December 31, 2021. This second group of users will be referred to as the quality improvement (QI) follow-up cohort. The sample of 1,128 includes people who used the digital platform without any other support, those who chose to receive supportive coaching, and those who elected to receive automated text messages encouraging mindfulness to promote emotional health.

Method

The magnitude of improvement is reported as an effect size, specifically Cohen’s d. Effect size is important for benchmarking results since it provides a common metric independent of the questionnaire used (Cohen, 1988). For purposes of this paper, effect size is calculated as pre-post change divided by the standard deviation of the outcome measure at intake.
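As a minimal sketch of this calculation (the scores below are hypothetical, and the code assumes a symptom measure on which lower scores indicate improvement):

```python
import numpy as np

def effect_size(intake_scores, final_scores):
    """Effect size as defined above: mean pre-post change divided by
    the standard deviation of the outcome measure at intake."""
    intake = np.asarray(intake_scores, dtype=float)
    final = np.asarray(final_scores, dtype=float)
    change = intake - final  # lower score = fewer symptoms, so positive change = improvement
    return change.mean() / intake.std(ddof=1)

# Hypothetical intake and last-observed scores for five users
print(round(effect_size([22, 18, 25, 30, 20], [15, 16, 17, 22, 14]), 2))
```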

Most journals today require effect sizes when reporting results. Many decades of research on psychotherapy outcomes yield an average effect size for psychotherapy of approximately 0.8. For this reason, we have classified effect sizes of .8 or greater as indicative of “highly effective” treatment. However, it bears noting that there is no evidence that effect sizes have increased over decades of research on evidence-based treatments. In contrast, evidence mounts that the therapist’s ability to form a positive relationship with the client is far more important than the method of therapy (Wampold & Imel, 2015; Minami et al., 2012; Brown et al., 2015a). This line of evidence points strongly to the importance of human contact in delivering therapy, and the results from this study further support that conclusion.

The methodology for benchmarking outcomes has been refined over the past decade by participants in the ACORN collaboration, notably Minami and Brown (Minami et al., 2007; Minami et al., 2008a; Minami et al., 2008b). The ACORN benchmarking methodology has been well documented and validated across thousands of clinicians working in a variety of settings, with services funded through multiple types of payers. The methodology employs multivariate predictive modeling to account for differences in case mix (Brown et al., 2015b). Brown et al. (2020) provide an in-depth discussion of how effect size was calculated for the Learn to Live samples.
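The full ACORN models are described in the papers cited above. As a loose, hypothetical illustration of the general idea of case-mix adjustment, a regression fit on a benchmark sample can supply a severity-adjusted expectation against which a cohort’s observed change is compared (all data below are simulated):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Benchmark sample: intake severity and observed pre-post change
bench_intake = rng.normal(25, 5, 5000)
bench_change = 0.4 * bench_intake - 4 + rng.normal(0, 3, 5000)
model = LinearRegression().fit(bench_intake.reshape(-1, 1), bench_change)

# Cohort under evaluation, with a somewhat more severe case mix
cohort_intake = rng.normal(27, 5, 500)
cohort_change = 0.4 * cohort_intake - 2 + rng.normal(0, 3, 500)

# Change beyond what the benchmark model predicts for this case mix
excess = cohort_change - model.predict(cohort_intake.reshape(-1, 1))
print(round(excess.mean() / cohort_intake.std(ddof=1), 2))  # severity-adjusted effect size
```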

Treatment outcomes can be evaluated for those completing treatment (seven lessons for Learn to Live modules) or on an “intent-to-treat” basis using the last lesson completed. The intent-to-treat method yields smaller effect sizes because many users do not complete all lessons, but it more accurately reflects the experience of most users. For this reason, the intent-to-treat method is employed here, including reporting effect size for those ending treatment at each lesson.
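A minimal sketch of the intent-to-treat calculation, grouping users by the last lesson completed (the records and column names below are hypothetical):

```python
import pandas as pd

# Hypothetical per-user records: last lesson completed, intake score,
# and last-observed score (lower scores = fewer symptoms)
df = pd.DataFrame({
    "last_lesson": [2, 7, 4, 7, 3, 5, 7, 1],
    "intake":      [24, 28, 22, 30, 26, 25, 27, 23],
    "last_score":  [21, 15, 18, 16, 22, 19, 14, 22],
})

# One common denominator: standard deviation of the measure at intake
sd_intake = df["intake"].std(ddof=1)

# Effect size for users ending treatment at each lesson
itt = (df.assign(change=df["intake"] - df["last_score"])
         .groupby("last_lesson")["change"].mean() / sd_intake)
print(itt.round(2))
```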

The results for the Learn to Live sample are broken out into four conditions based on whether users received any type of personal support in addition to using the digital platform: a) coaching, b) coaching and mindfulness texts, c) mindfulness texts, and d) no personal support. Within the coaching conditions, support from a non-licensed coach was provided via phone, text, or email, depending on the user’s preference. In the analysis of the baseline cohort, the effects of coaching appeared to be independent of the contact method (Brown & Jones, 2020). Coaching results are therefore reported in aggregate rather than by contact method.

Results

The main comparison is between the baseline and quality improvement (QI) cohorts. In addition, results for both groups can be compared with the ACORN psychotherapy sample, which serves as the benchmark for this analysis.

Table 1 presents the intent-to-treat results for the baseline and QI cohorts. There is a dramatic increase in effect size for the QI cohort (0.83 versus 0.68). The effect size for the QI cohort significantly exceeds the effect size for the psychotherapy comparison sample (p < .01) and represents a 22% increase in self-reported improvement. The average number of completed lessons increased from 3.9 to 4.25, a significant difference (p < .01). Analysis of variance confirms that the increase in effect size is significant even after controlling for lesson count (p < .05). Part of the increase in effect size therefore appears to be the result of achieving more improvement per session.

Table 1
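The paper does not specify the model behind the covariate analysis reported above; a plausible sketch, run on simulated data, regresses per-user change on cohort membership plus lesson count, so that the cohort term tests the difference after adjusting for lessons completed:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400

# Simulated users: cohort label and number of lessons completed (1-7)
df = pd.DataFrame({
    "cohort": np.repeat(["baseline", "QI"], n // 2),
    "lessons": rng.integers(1, 8, n),
})
# Simulated change scores: more lessons and QI membership both add improvement
df["change"] = (1.2 * df["lessons"]
                + np.where(df["cohort"] == "QI", 1.5, 0)
                + rng.normal(0, 4, n))

fit = smf.ols("change ~ C(cohort) + lessons", data=df).fit()
print(fit.summary())  # the C(cohort) coefficient tests the adjusted difference
```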

Table 2 summarizes the types of services received by the digital therapeutics users. It shows an increase in the percentage of users receiving some form of personal support. The percentage of users receiving no personal support dropped from 51% in the baseline cohort to 35% in the QI cohort. Mindfulness text messages, received alone or in combination with coaching, increased as a form of support in the QI cohort.

Table 2

Graphs 1 and 2 display the effect sizes for each type of support received for the baseline and QI cohorts. The increase in effect size for mindfulness texts is significant (p < .01), as is the increase for coaching and mindfulness texts (p < .05), even after controlling for the increase in lesson count.

Graph 1

Graph 2

Discussion

The results confirm the earlier finding that personal support tends to increase effect size. During the follow-up period for the QI cohort, the percentage of users choosing personal support options increased significantly, from 49% to 65%. Platform users receiving personal support experienced more improvement per session. The combination of personal coaching and mindfulness texts appears to add to the effect of coaching alone.

These are real-world findings, and the lack of random assignment to a control group makes any interpretation a matter of speculation. The clinicians and technical staff at Learn to Live are constantly enhancing the platform to make it more engaging. They also craft mindfulness text messages with the intent of making people feel they are receiving personal therapeutic input, and these messages are continually modified over time. While the results are encouraging, it cannot be concluded that these quality improvement efforts caused them.

Results can change not only because of quality improvement efforts but also because of the changing composition of service users. Analyses of platform membership were conducted to see whether new business customers (e.g., health plans, large employers) were driving the changes in outcome. Changes in group membership did not appear to account for the changes in outcomes. Data from the baseline cohort were also reanalyzed to determine whether an upward trend in results had been present all along; this was found not to be the case.

The current findings support the value of the digital therapeutic platform even when used without any personal support. Yet the evidence also suggests that results rise to the highly effective range for those who accept support. It is noteworthy that people receiving minimal support through mindfulness text messages improved to the highly effective range. This raises the question of how much personal support is needed to achieve better outcomes. Text messages are significantly less costly than coaching, so the answer has important implications for providing the most cost-effective services.

Digital therapeutics platforms have unique value because they are available on demand. However, they are not recommended for certain groups: companies providing these resources do not recommend them for symptom relief for people who are acutely psychotic or suicidal. On the other hand, the evidence suggests that many individuals who are severely distressed (with an undetermined diagnosis) gain considerable benefit from digital therapeutic exercises. The authors are currently planning a study that integrates use of digital resources with outpatient psychotherapy, which may help determine both indications and contraindications for clients with known psychiatric diagnoses.

Cite This Article

Brown, J., & Jones, E. (2022, May). Improving results for digital therapeutics. [Web article]. Retrieved from http://www.societyforpsychotherapy.org/improving-results-for-digital-therapeutics

References

Brown, G. S. J., Simon, A., & Minami, T. (2015a). Are you any good…as a clinician? [Web article]. Retrieved from http://www.societyforpsychotherapy.org/are-you-anygood-as-a-clinician

Brown, G. S. (J.), Simon, A., Cameron, J., & Minami, T. (2015b). A collaborative outcome resource network (ACORN): Tools for increasing the value of psychotherapy. Psychotherapy, 52(4), 412–421. https://doi.org/10.1037/pst0000033

Brown, J. S., Jones, E., & Cazauvieilh, C. (2020, May). Effectiveness for online cognitive behavioral therapy versus outpatient treatment: A session by session analysis. [Web article]. Retrieved from http://www.societyforpsychotherapy.org/effectiveness-for-online-cognitive-behavioral-therapy-versus-outpatient-treatment

Brown, J. S., & Jones, E. (2020, December). Impact of coaching on rates of utilization and clinical change for digital self-care modules based on cognitive behavioral therapy. [Web article]. Retrieved from http://www.societyforpsychotherapy.org/impact-of-coaching-on-rates-of-utilization-and-clinical-change-for-digital-self-care-modules-based-on-cognitive-behavioral-therapy

Brown, G. S., & Jones, E. (2021, March). Improving clinical outcomes for digital self-care. [Web article]. Retrieved from http://www.societyforpsychotherapy.org/improving-clinical-outcomes-for-digital-self-care

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Minami, T., Wampold, B. E., Serlin, R. C., Kircher, J. C., & Brown, G. S. (2007). Benchmarks for psychotherapy efficacy in adult major depression. Journal of Consulting and Clinical Psychology, 75, 232–243. https://doi.org/10.1037/0022-006X.75.2.232

Minami, T., Serlin, R. C., Wampold, B. E., Kircher, J. C., & Brown, G. S. (2008a). Using clinical trials to benchmark effects produced in clinical practice. Quality and Quantity, 42, 513–525. https://doi.org/10.1007/s11135-006-9057-z

Minami, T., Wampold, B. E., Serlin, R. C., Hamilton, E. G., Brown, G. S., & Kircher, J. C. (2008b). Benchmarking the effectiveness of psychotherapy treatment for adult depression in a managed care environment: A preliminary study. Journal of Consulting and Clinical Psychology, 76, 116–124. https://doi.org/10.1037/0022-006X.76.1.116

Minami, T., Brown, G. S., McCulloch, J., & Bolstrom, B. J. (2012). Benchmarking clinicians: Furthering the benchmarking method in its application to clinical practice. Quality and Quantity, 46, 1699–1708. https://doi.org/10.1007/s11135-011-9548-4

Wampold, B. E., & Imel, Z. E. (2015). The great psychotherapy debate: The evidence for what makes psychotherapy work (2nd ed.). New York, NY: Routledge.
