To measure innovation, we should have suited up differently

Like most projects, we designed our monitoring and evaluation (M&E) system at the beginning of the project.

That works… normally.

Standard projects go into design with the end-intervention predetermined. When teams know the intervention from the start, they can suit up appropriately to monitor and manage it from day 1.

But as a project built from a human-centered design (HCD) approach, A360 entered design to innovate on adolescent contraceptive programming. Without an intervention already in place, we allowed ourselves to commit to a new way of programming – guided by where young people, diverse experts and local health system partners would take us.

But we failed to extend that commitment to innovating how we do business. We applied a standard approach to setting up our M&E, despite embarking on a very different program journey.

As learnings from these new case studies reinforce: you can’t approach a new way of programming with an old way of managing.

While valuable for user-centered programming, iterating and tailoring interventions based on feedback loops (that is, a system for collecting and reacting to users’ comments) and experiential learning pose challenges to standard evaluation approaches.

MeasureD’s research states it best:

“The value of integrating measurement into design comes from its role in making the outcomes we seek in the context of program intervention explicit, defining hypothetical pathways and drivers that lead to these outcomes (e.g., through a theory of change), and providing subjective and objective metrics to test the pathways over time, to help determine whether the intervention is creating relevant conditions critical to its overall success. The challenge for design-led implementers is to determine what kind of measurement is relevant at what stage of the design-implementation process.”

But as MeasureD notes, decisions on quantitative monitoring approaches are best timed post-design, when teams can see the full intended pathway of their newly designed intervention and its technical value.

At the same time, what fits as ‘appropriate’ monitoring during the design phase itself may be very different:

“In the design process, the hypothesis changes as the project evolves. This requires a fluid approach to measurement, which cannot only use fixed metrics, but must be flexible and adaptive. It is not always just looking back and counting what happened, but rather learning how to use data to help people think and problem solve along the way.

This adaptive approach requires measurement that is quick, grounded in studying or uncovering hypothesized pathways of change while assuring rigor and utility.

Frustration can result when there is an expectation for types and specificity of data that cannot be known during the design phase being evaluated.

For example, in early immersion and research with users, it is not always possible to derive meaningful quantitative data.

What can be learned, however, is often critical feedback on whether a “problem,” as defined by public health experts and designers is relevant to users. While this is a “low data” phase in terms of measurement, it is a high value phase in terms of potential cost savings and relevance of the work to come.”

The importance of the timing and type of measurement applies to the A360 evaluation as well. This paper reinforces the case from the A360 perspective:

“… a major challenge in designing the outcome evaluation for A360 was that when the outcome evaluation study protocols and data collection tools were being developed, the A360 project was in the mid-stages of intervention development.

If the final intervention package is significantly different from earlier prototypes, then the study protocol and data collection tools for the baseline study may not be as well tailored to the intervention as if the final package of interventions was known in advance.

If needed, changes to the end-line study protocol and data collection tools will be made to better capture the impact of the final A360 package of interventions.”

A360 moved fast throughout design.

When it comes to monitoring, we could, and should, have employed a different approach to measuring the design phase than the one we used for the implementation phase. We needed process indicators rather than service delivery indicators.

This might have included key process benchmarks, like the number of:

  • Young people recruited as designers and decision-makers
  • Young people trained in design
  • Reports produced on design research findings
  • Reports produced describing prototypes and their resulting learning

The benefit

Doing so would have demonstrated accountability for our work and shown that the design process was moving with integrity. It would also have appropriately avoided suggesting that effectiveness at this phase could be accurately measured through service delivery or cost figures, since those could only have painted a distorted picture of the interventions’ true potential once at their intended scale.

The (big) takeaway 

We needed design of our monitoring systems—and potentially even our outcome evaluation—to happen once the interventions were fully known.

This would have required building in a real pause before heading into implementation: time to get the metrics right and to make sure we were kitted up to manage with a clear picture of what matters most about the intervention.

Yes, service delivery numbers, adoption rates, method mix, quality, and cost.

But also: girls trained in income-generating skills, girls counseled through the aspirational program components, government collaboration, and client experience.

As it happens, we have stayed nimble while running, and we’ve found ways to generate this kind of data after the fact (thanks in part to a very productive relationship with our external evaluator, Itad).

In short

All HCD projects can course correct, especially when building from the experiences of those who’ve come before.

A big thank you to MeasureD for aggregating this learning in such a cohesive way.

A360 Impact, To Date

Data from Jan. 2018-Jan. 2020

  • 328,560+ girls have voluntarily adopted modern contraception.
  • 7 in 10 girls choose a method after engaging with A360.
  • 2 in 5 girls will opt for a long-acting method.

Keep exploring

Are you an evaluator of an HCD and/or adaptive program? Or a donor and/or implementer working on an adolescent sexual and reproductive health project?

A360’s Midterm Evaluation offers helpful recommendations for the taking. Dig in today.