Taking a step back to move forward: Why AHEAd is starting with process before impact

19 February 2026
[Banner image: 'Advancing Efforts Against Undernutrition in Crises (AHEAd): A decision tool and resource', with an illustrated icon of a doctor weighing a baby on a scale.]

This blog is co-authored by Gillian McKay (Elrha) and Hilary Bower (London School of Hygiene and Tropical Medicine).

Preventing undernutrition in humanitarian crises is both urgent and complex. Decisions about what to prioritise, when, and how are made under severe constraints, with real consequences for children, families, and communities. The AHEAd (Advancing Holistic Efforts Against Undernutrition in Crises) programme was developed to support those decisions – by helping humanitarian actors identify context-appropriate, multisectoral packages of prevention interventions grounded in evidence and operational reality.

From the outset, AHEAd was conceived as more than a technical tool. It is a decision-support approach intended to strengthen how choices are made across sectors, rather than prescribing a single solution. That ambition has important implications for how the programme should be evaluated.

The need for careful evidence and why caution matters

At Elrha, an organisation laser-focused on research-based innovation, our role is to ensure that investments generate evidence that is credible, usable, and relevant to practice. This means recognising when it is necessary to slow down – not because of a lack of confidence in an intervention, but because getting prevention wrong can be costly, and because testing impact before understanding real-world use risks answering the wrong questions.

The work described below reflects this principle in action. Rather than moving directly into an impact evaluation, the AHEAd partners took a deliberate step back to focus first on understanding whether the tool itself is acceptable, feasible, and usable for those it is designed to support. This process-focused phase does not replace the need for impact evaluation. Instead, it lays the groundwork for it – ensuring that future evaluations test something that is genuinely ready to be tested.

Prioritising understanding over impact evaluation

What follows is the London School of Hygiene and Tropical Medicine's (LSHTM) reflection on that journey:

The AHEAd Decision Tool and Resource Guide was developed in partnership with Elrha, NutritionWorks, a Steering Committee, and a network of nutrition in emergencies stakeholders, to support decision makers in humanitarian contexts in identifying multisectoral packages of interventions for the prevention of undernutrition. It was designed to bring structure, evidence, and clarity to complex decision-making processes in resource-limited settings.

So when our LSHTM team was selected to design an impact evaluation of the packages of interventions selected using this tool, we were ready to set sail. With momentum on our side, we plunged straight in, only to realise, rather quickly, that the waters were murkier than they first appeared. Despite the detailed diagrams and careful mapping of Programme Impact Pathways and Theories of Change for the variety of potential intervention packages, we soon arrived at a realisation that shifted everything:

“Why measure impact if we don’t even know if the tool is acceptable and feasible for those using it?”

It was a moment of clarity. Before we can evaluate the impact of the package of interventions, we need to understand the AHEAd tool itself. Will it be implemented as intended? Is it acceptable, feasible, and usable? Does it genuinely guide decision-making in the way its developers envisioned? Put simply, we were trying to evaluate downstream effects without first understanding the upstream mechanism.

So we took a step back, and as we reframed our work, our next steps became clearer: a process evaluation of the AHEAd tool should come first.

A framework-informed approach to process evaluation

Following this decision, and on the advice of an implementation expert in humanitarian settings, we decided to draw on a combination of three implementation frameworks: PRISM, RE-AIM, and the Theoretical Framework of Acceptability. From these, we developed a set of core constructs to guide the process evaluation. We then designed a mixed-methods study involving decision-makers, implementers, and community representatives to explore the acceptability and fidelity of the tool across at least two pilot humanitarian contexts. These contexts will be selected from a country prioritisation exercise based on a range of individual and household indicators.

With a clearer pathway, strengthened by collaboration and shared learning, Elrha and partners are now better positioned to move into the next phase of work, confident that they will be evaluating the right thing, in the right order, and in partnership with those who will ultimately use it.

Key lessons

1. Strong foundations cannot be assumed: Impact evaluation is only meaningful when the underlying processes are functioning as intended.

2. Decision-support tools are interventions in the broad sense: They shape choices, behaviours, and resource allocations, so their acceptability, feasibility, fidelity, and usability must be understood before we can know whether they are influencing decision processes as intended.

3. Slowing down can accelerate effective progress: Clarifying assumptions and examining implementation early reduces the need to redesign later.

4. Course correction following inter-sectoral dialogue is not failure but good practice: Pivoting mid-project signals rigour, transparency, and responsiveness.

Looking ahead

This reframing marks an important step in the evolution of the AHEAd programme. By prioritising learning about implementation, decision-making, and real-world use, AHEAd partners will work to ensure that future impact evaluations – including experimental designs – are grounded in reality and aligned with practice.

There is clear scope to pair this process evaluation with a smaller-scale effectiveness evaluation, focused on a limited set of medium-term service-improvement and nutrition outcomes, to build confidence in the approach ahead of a full-scale impact evaluation. In humanitarian prevention, rigour is not only about methodological strength, but about sequencing, relevance, and humility in the face of complexity.

Slowing down at the right moment is not a loss of momentum; it is an investment in getting prevention right. It helps ensure we generate evidence that is grounded in reality and improves the health and wellbeing of children and women in complex settings.
