1st Oct 2015 Christiaan Vis

What’s in a word: evaluation

The MasterMind project aims to learn from large-scale implementation of eMental health in 15 different European regions. Through a sophisticated evaluation study, it attempts to chart the factors that facilitate and hamper further roll-out and upscaling of computerised CBT and videoconferencing technologies for the benefit of European citizens suffering from depressive disorder.

What do we do?

We evaluate the outcomes of the various implementation studies in MasterMind. Implementation outcomes are defined as the effects of deliberate and purposive actions to implement new treatments, practices, and services (Proctor et al. 2011). The primary outcome measure in this study is implementation effectiveness: implementation success is defined as a function of reach, clinical effectiveness, acceptability and appropriateness, implementation costs, and sustainability (Proctor et al. 2011).
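To make the composite nature of this outcome concrete, here is a minimal sketch of our own (not the MasterMind protocol's): the dimension names follow Proctor et al. (2011), but the 0–1 scales and the unweighted mean are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class ImplementationOutcomes:
    """Outcome dimensions after Proctor et al. (2011); 0-1 scales are hypothetical."""
    reach: float                   # proportion of the eligible population actually served
    clinical_effectiveness: float  # e.g. normalised symptom improvement
    acceptability: float           # stakeholder-rated acceptability of the service
    appropriateness: float         # perceived fit with the setting and the health problem
    implementation_costs: float    # normalised so that higher means lower cost burden
    sustainability: float          # likelihood the service persists after the project

def implementation_effectiveness(o: ImplementationOutcomes) -> float:
    """Illustrative composite: an unweighted mean of the six dimensions.

    The actual MasterMind analysis does not reduce success to a single
    number; this only shows that success is a *function* of all dimensions.
    """
    dims = (o.reach, o.clinical_effectiveness, o.acceptability,
            o.appropriateness, o.implementation_costs, o.sustainability)
    return sum(dims) / len(dims)
```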

How do we do this?

In the MasterMind study, the MAST framework (Model for ASsessment of Telemedicine) is used both to guide the implementation projects and to structure the evaluation. MAST is founded on a broad analysis of the factors and areas to consider and account for when introducing and implementing telemedicine in an existing healthcare setting. The MAST assessment tool is a result of the MethoTelemed study (Kidholm et al. 2010) and takes the EUnetHTA Core Health Technology Assessment Model (EUnetHTA n.d.) as its starting point.

In MasterMind, the evaluation focusses on seven multidisciplinary research domains: 1) health problem, 2) patient safety, 3) clinical effect, 4) patient and healthcare professional perspectives, 5) economic aspects, 6) organisational change, and 7) social, legal and ethical issues related to the implementation of eMental health in routine practice.

Three distinct stakeholder groups will be assessed: 1) patients, 2) mental healthcare professionals, and 3) mental healthcare organisations. The primary focal points of interest are reach, clinical effect, acceptability, appropriateness, and sustainability of the interventions in practice. Mixed methods are used to understand both what the implementation projects have achieved (quantitative) and what those achievements mean to the various stakeholders (qualitative).
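Purely as an illustration of how this matrix of domains, stakeholder groups and methods could be organised, here is a sketch under our own assumptions (not the project's actual data model; the example indicator name is hypothetical):

```python
from enum import Enum

class Domain(Enum):
    # The seven research domains listed above
    HEALTH_PROBLEM = 1
    PATIENT_SAFETY = 2
    CLINICAL_EFFECT = 3
    PERSPECTIVES = 4            # patient and healthcare professional perspectives
    ECONOMIC_ASPECTS = 5
    ORGANISATIONAL_CHANGE = 6
    SOCIO_LEGAL_ETHICAL = 7

class Stakeholder(Enum):
    PATIENT = "patient"
    PROFESSIONAL = "mental healthcare professional"
    ORGANISATION = "mental healthcare organisation"

class Method(Enum):
    QUANTITATIVE = "quantitative"   # e.g. questionnaires, routine databases
    QUALITATIVE = "qualitative"     # e.g. focus groups, interviews

# A hypothetical measurement record combining the three classifications:
measurement = {
    "indicator": "patient-rated acceptability of cCBT",  # hypothetical name
    "domain": Domain.PERSPECTIVES,
    "stakeholder": Stakeholder.PATIENT,
    "method": Method.QUANTITATIVE,
}
```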

How did we get there?

The development of the study protocol was led by VU University Amsterdam, the Netherlands, in collaboration with GGZ InGeest, the Netherlands, and Health Information Management S.A., Belgium. All MasterMind partners co-developed the protocol, provided input, and critically reviewed the relevance and feasibility of the study objectives, study design, operationalised indicators, measurements and instruments. Data managers and trial site coordinators discussed and confirmed feasibility in terms of either the availability of data in existing databases or the collection and administration of data via questionnaires and interviews.

For the operationalisation, the project's mission, objectives and expected results were first translated into verifiable study objectives. With the study design in mind, and based on relevant literature, these objectives were then operationalised into relevant concepts, dimensions and, finally, measurable indicators, including their definitions. The operationalisation is informed by relevant evaluation frameworks such as RE-AIM (Glasgow et al., 1999), the Normalisation Process Theory (NPT) (May & Finch, 2009), the Consolidated Framework for Implementation Research (CFIR) (Damschroder et al., 2009) and the Measurement Instrument for Determinants of Innovations (MIDI) (Fleuren et al., 2014). In addition, an external Scientific Committee was asked to provide focused feedback on the study design and operationalisation. In total, 188 indicators were defined and included in quantitative questionnaires, qualitative focus group discussions with healthcare professionals, and semi-structured interviews with healthcare organisations.
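The resulting hierarchy, from verifiable objectives down to measurable indicators tied to an instrument, could be represented along these lines. The structure mirrors the chain described above; the example names in the comments are hypothetical, and only the total of 188 indicators comes from the study itself.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str        # e.g. "uptake rate among referred patients" (hypothetical)
    definition: str  # verifiable definition agreed with the trial sites
    instrument: str  # "questionnaire", "focus group", or "semi-structured interview"

@dataclass
class Dimension:
    name: str
    indicators: list[Indicator] = field(default_factory=list)

@dataclass
class Concept:
    name: str  # e.g. "acceptability", informed by RE-AIM / NPT / CFIR / MIDI
    dimensions: list[Dimension] = field(default_factory=list)

@dataclass
class StudyObjective:
    statement: str  # verifiable objective derived from the project's mission
    concepts: list[Concept] = field(default_factory=list)

def count_indicators(objectives: list[StudyObjective]) -> int:
    """Across the full MasterMind operationalisation this totals 188 indicators."""
    return sum(len(d.indicators)
               for obj in objectives
               for c in obj.concepts
               for d in c.dimensions)
```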

Any lessons learned?

Yes, we did. A lot. For example: no one size fits all. Every implementation site is different, and although all sites implement a similar evidence-based intervention, each intervention is adapted to local needs. Patient populations differ, the actual service providers differ, reimbursement systems and legal frameworks vary, local data collection infrastructures differ, and so do the ethical regulations. On top of that, political engagement and priorities vary and are subject to change.

All this heterogeneity amongst the trial sites challenged us to come up with a study design and analysis plan that is both feasible and yields high-quality data. We think we can manage.

2 responses to “What’s in a word: evaluation”

  1. Maria Navarro says:

    In a word: Congratulations!

  2. Mayke says:

    Yes, I think we will manage as well!
