The evidence base for mHealth is still being built. Often, even in the published literature, interventions are incompletely described, change attributable to the mHealth intervention is unclear, and the quality of reporting varies widely. Given that many governments in low- and middle-income countries (LMICs) are starting to invest in large-scale mHealth projects, understanding how to improve the quality of mHealth evidence is increasingly important. mHealth interventions have the potential to effect positive health system change by improving access to information and knowledge. However, until high-quality monitoring, evaluation, and reporting of evidence are common practice, the most effective mHealth interventions may not be scaled.

At the inaugural Global mHealth Forum, hosted by the mHealth Summit, evidence was the focus of several breakout sessions in which panelists discussed the current evidence base for mHealth interventions and ways to improve mHealth research methodologies in LMICs. Below are five tips on mHealth monitoring and evaluation (M&E) from the Global mHealth Forum, drawn from insights gathered during two concurrent sessions on evidence: “mMonitoring: Metrics, Modeling, & Monitoring” and “Does this work? Tools and Results from Evidence Grading.”

Top Five Tips on mHealth M&E from the Global mHealth Forum

1. Consider the evidence: One important takeaway from Smisha Agarwal was that the existing evidence base for an mHealth intervention should be considered when choosing evaluation measures. A helpful hint: if an mHealth program has known efficacy, focus evaluation on process measures; if evidence for the program is lacking, focus on outcome measures.

2. Be patient: For those working in mHealth, the pace of technology can often seem at odds with conducting quality research, but Marc Mitchell suggested to the audience, “Let’s be patient,” warning, “There are way too many things that don’t work that have been scaled.”

3. Dynamically apply data: With enhanced capabilities to collect data in real time, deciding what to do with those data can be daunting. Hajo van Beijma, co-founder of Text to Change, encouraged the audience to “be dynamic with data analysis” and to “use instant feedback to improve user experience” of mHealth programs.

4. Model impact: Use the Lives Saved Tool (LiST), discussed at the forum by Alain Labrique, to model the impact of mHealth programs by looking at changes in program coverage. It can help show where to prioritize investments and time in the earlier stages of program planning and evaluation.

5. Publish, publish, and publish: While mHealth has shortened the path from program conception to program results, moving from results to publication remains difficult, in part because researchers have ever more data at their disposal to analyze. High-quality reporting is critical to building a useful mHealth evidence base and ensuring that only the most effective technologies reach scale.