THE randomised controlled trial (RCT) model is a vital tool in clinical practice, helping to establish, with reasonable precision, the usefulness of novel drugs and therapeutic techniques. As the name implies, RCT models are characterised by the use of randomisation and the presence of a control group. Successful randomisation to experimental and control groups means that, on average, participant characteristics are distributed evenly across the experimental conditions. As a result, confounding factors, both known and unknown, should not unduly influence the outcomes in one group more than in the other, and the experimenter can be reasonably confident that any changes in the outcome variable can be attributed to the variable of interest (ie, the treatment method or drug administered). The control group is equally important – only with a control is it possible to establish the size of a treatment effect relative to alternatives such as treatment as usual or another active treatment.
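To make the logic of randomisation concrete, the following is a minimal simulation sketch in Python. All numbers here are invented purely for illustration; it simply shows that, across repeated trials, simple random assignment balances a hypothetical confounder (here, age) between arms on average.

```python
import random
import statistics

random.seed(42)

def simulate_trial(n_participants=1000):
    """Randomly assign participants to treatment or control and
    compare the mean of a hypothetical confounder (age) in each arm."""
    # Hypothetical confounder: ages drawn uniformly between 18 and 80
    ages = [random.uniform(18, 80) for _ in range(n_participants)]
    # Simple randomisation: each participant has a 50/50 chance of each arm
    arms = [random.choice(["treatment", "control"]) for _ in range(n_participants)]
    treat_ages = [a for a, arm in zip(ages, arms) if arm == "treatment"]
    control_ages = [a for a, arm in zip(ages, arms) if arm == "control"]
    return statistics.mean(treat_ages) - statistics.mean(control_ages)

# Across many simulated trials, the between-arm difference in the
# confounder hovers around zero: randomisation balances it on average.
diffs = [simulate_trial() for _ in range(200)]
print(f"Mean between-arm age difference: {statistics.mean(diffs):.2f} years")
```

In any single trial, chance imbalances remain possible, which is one reason larger samples give more reliable balance between arms.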

While RCTs should remain the gold standard for deciding whether a new treatment or intervention works, a positive RCT result alone does not guarantee that a new program will have a meaningful real-world impact. The limitations of traditional RCTs are particularly stark when evaluating new technologies.

Over the past decade, there has been a rapid increase in the number of digital mental health programs (ie, mental health websites and mobile applications) being developed and tested. Many of these new online or app-based interventions have been shown in RCTs to be effective, leading to the suggestion that digital interventions may be an important part of the solution to persistently high rates of mental health symptoms in most countries. However, very little of this evidence has flowed through into the applications and programs being used in the real world. A recent investigation published in Nature estimated that only 2% of popular depression smartphone apps had a reasonable evidence base.

Given these concerns, a key question is what additional insights, beyond traditional RCT analyses, are needed to understand the potential role of evidence-based digital mental health solutions. A first step in answering this is to examine more closely what occurs within RCTs.

Within trials, engagement with a digital mental health tool is frequently defined as adherence to a predefined protocol. As a result, likely real-world uptake and usage often remain unknown and untested before the tool is rolled out as a public health resource. Differences between these contexts may arise because, for example, individuals who sign up to a digital mental health research project tend to be more motivated to engage with such a program and often receive regular prompts from the research team to take up and continue using it. These factors can in turn lead to an overestimation of program use – and thus effectiveness – in naturalistic settings.

In digital mental health evaluations, what constitutes adherence in an RCT is often chosen arbitrarily, or at least not made explicit, and generally follows the premise of “the more use the better”. As a result, engagement patterns that fall outside the protocol go unexamined. For example, when personal mental health goals are met within a short period, users may decide to discontinue a digital mental health program regardless of usage recommendations. There is evidence that this situation may be common within RCTs, with data suggesting that up to one-third of RCT participants drop out of studies prematurely because of early treatment success. Provided follow-up data can still be obtained and an intention-to-treat analysis is used, these early responders (sometimes termed “e-attainers”) should not undermine the validity of the overall RCT findings.

However, a problem arises when recommendations need to be made about what constitutes effective use of these mental health tools. Judging the effectiveness of a mental health tool on compliance measures alone may underestimate its usefulness for individuals who require less than the “prescribed dose” to achieve their personal mental health goals. Greater knowledge about effective use could have pragmatic implications for what should be recommended (eg, a reduced or personalised protocol) and could also affect the estimated costs and time required to engage with a proven intervention.

In a recently published article, researchers at the Black Dog Institute conducted an initial examination of the general public’s use of myCompass, a digital mental health program funded by the Department of Health and designed by the Black Dog Institute to reduce mild to moderate symptoms of distress (a second version of myCompass is now available). The aim of the study was to classify community users based on their naturalistic usage patterns and to determine whether mental health outcomes differed with usage.

The two main components of myCompass are symptom tracking and learning activity modules based on cognitive behavioural therapy principles. Two-step cluster analyses identified three distinct groups among myCompass users: “Moderates” (lower overall usage), “Trackers” (high use of the symptom tracking function), and “Super users” (high overall usage). The groups differed in usage frequency (ie, Moderates used all components less than Super users) and preferred components (ie, Trackers mainly used the symptom tracking function, whereas Moderates and Super users used all components). Interestingly, depression and anxiety symptoms reduced significantly over time in all usage groups, and the degree of symptom reduction did not vary by group. All users experienced equivalent mental health benefits irrespective of their pattern of engagement, suggesting that users were able to stay engaged with the program until they were satisfied with the outcome, but not necessarily any longer.
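The study itself used a two-step cluster procedure; as a loose illustration of the general approach, the sketch below uses k-means clustering from scikit-learn on hypothetical usage features. The feature names and all values are assumptions invented for illustration and do not come from the myCompass data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical usage features per user: [logins, tracking entries, modules completed].
# Values are invented for illustration only; this is not the myCompass dataset.
usage = np.vstack([
    rng.normal([10, 15, 2], 3, size=(50, 3)),   # moderate-style users
    rng.normal([30, 80, 3], 5, size=(50, 3)),   # tracker-style users
    rng.normal([40, 60, 12], 5, size=(50, 3)),  # super-user-style users
]).clip(min=0)

# Standardise features so no single metric dominates the distance measure
scaled = StandardScaler().fit_transform(usage)

# Partition users into three clusters (the study identified three groups)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

# Inspect the mean raw usage profile of each cluster
for k in range(3):
    print(f"Cluster {k}: mean usage = {usage[labels == k].mean(axis=0).round(1)}")
```

In practice, the number of clusters would be selected from the data (eg, via fit indices) rather than fixed in advance as it is here.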

It should be noted that most digital mental health programs, myCompass included, are designed for individuals with relatively mild symptoms, so it is unclear whether this type of self-regulation and dose titration occurs among those with more severe or prolonged symptoms.

In conclusion, if digital mental health tools are going to be part of how we respond to the ongoing burden of mental health difficulties, we increasingly need to see the completion of RCTs as the start of the evidence journey, not the end point. Too many promising online or app-based mental health interventions have seen no further research activity after an RCT, or have never been made available at scale. Studying naturalistic usage patterns, and how this usage is linked to mental health trajectories, following a traditional RCT assessment may help make evidence-based mental health tools more successful and widespread in the real world. By further considering more fine-grained, person-based facilitators of and obstacles to digital mental health service use, researchers may ultimately be able to give clinicians practical guidance on which patients should be encouraged to use digital mental health tools in clinical settings.

Dr Samineh Sanatkar is a post-doctoral research fellow within the Black Dog Institute’s Workplace Mental Health Research Program. Samineh also conducts research evaluating digital mental health tools, including examining user engagement.

Associate Professor Samuel Harvey is Chief Psychiatrist at the Black Dog Institute, where he runs the depression clinic and leads the Workplace Mental Health Research Program.


The statements or opinions expressed in this article reflect the views of the authors and do not represent the official policy of the AMA, the MJA or InSight+ unless so stated.
