Five things we learned from evaluating the impact of research

The Research for Health in Humanitarian Crises (R2HC) programme has an explicit impact mission: the research funded through the programme should improve health outcomes for people affected by humanitarian crises.

R2HC uses case studies not only to evaluate the outcomes and impacts of funded research, but also to understand the processes, activities and experiences that shape research impact. Our focus is on whether research is used by, and influences, humanitarian health policymakers and practitioners (more background in this blog).

We are two evaluators who worked on the development of these case studies between 2020 and 2022.

Here are five things we’ve learned about research impact:

1. We need to be clear what we mean by research impact.

We had R2HC’s theory of change and results framework to help identify what impact we were looking for. We also had an idea of what forms impact might take, using a slightly amended version of the four types defined in the Research Excellence Framework (REF) Impact Toolkit:

  • Conceptual (changes to stakeholder knowledge, attitudes and understanding)
  • Capacity (changes in the ability of researchers to conduct similar work, or of stakeholders to use and apply research)
  • Instrumental (changes to policy or practice)
  • Enduring connectivity (changes to the existence or strength of networks, and to relationships with stakeholders who can use or apply research)

But even when definitions and frameworks are as clear as we can get them, interviews sometimes generated information that felt hard to label. Is it impact, for example, if a junior data collector is inspired to start a PhD?

When this kind of information came up, we would discuss not only what happened, but where it happened and who was affected. Through this we could figure out together whether the outcome was linked to the R2HC theory of change or sat outside it, and whether it was therefore ‘meaningful’ impact for the case study. Some pieces of information ended up in a box marked ‘other’. These are contributing to our thinking as we overhaul our approach so that we can better understand the role of ‘unintended outcomes’ in overall programme impact.
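To make this triage concrete, here is a minimal, hypothetical sketch (in Python) of how a coded interview finding might be routed either into the case study or into the ‘other’ box. The class names, fields and category labels are our own illustration of the logic described above, not R2HC’s actual tooling; the real process is qualitative and discussion-based.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ImpactType(Enum):
    CONCEPTUAL = auto()    # changes to stakeholder knowledge, attitudes, understanding
    CAPACITY = auto()      # changes in ability to conduct, use or apply research
    INSTRUMENTAL = auto()  # changes to policy or practice
    CONNECTIVITY = auto()  # changes to networks and stakeholder relationships

@dataclass
class Finding:
    excerpt: str               # what the interviewee described
    who_was_affected: str      # e.g. 'a junior data collector'
    impact_type: ImpactType
    in_theory_of_change: bool  # does it map onto the R2HC theory of change?

def triage(finding: Finding) -> str:
    """Route a coded finding either into the case study or into the 'other' box."""
    if finding.in_theory_of_change:
        return f"case study: {finding.impact_type.name.lower()} impact"
    return "box marked 'other': programme-level learning on unintended outcomes"

# The PhD example from the post: a real change for a real person, but one we
# judged (illustratively, here) to sit outside the programme's theory of change.
phd = Finding(
    excerpt="junior data collector inspired to start a PhD",
    who_was_affected="a junior data collector",
    impact_type=ImpactType.CAPACITY,
    in_theory_of_change=False,
)
print(triage(phd))  # box marked 'other': programme-level learning on unintended outcomes
```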

2. Interviews can’t be rigid.

Both researchers and their humanitarian collaborators may have a lot of information about the project that was not in any final report, especially if the project finished some time ago. Evaluators must be alert to new pathways in the conversation where impact can hide, and avoid going in with too many preconceptions.

Asking open questions such as ‘tell me more about how you organised this meeting with policymakers’ or ‘how was that publication received by x?’ can uncover impact that was not previously documented, and help us better understand its nature.

It’s sometimes much easier to understand the ‘overall spirit’ of a project, and build rapport, by asking someone to explain it in their own words, instead of assuming you know how the research panned out. This is particularly important when documenting research uptake strategies (which are not always recorded in detail in R2HC’s files) or understanding research outside our own technical areas of expertise.

3. Research funders must find a way to fund and support impact data collection after research grants have closed.

Data collection (interviews with partners and stakeholders, and documentary research) relied heavily on goodwill and the quality of the relationships that researchers had with their humanitarian partners. But it is a burden on most stakeholders: they must give up their time for free to contribute to research impact evaluation.

This is an issue, not least because if we only do impact case studies on the ‘cooperative’ grantees and partners, we are likely to end up with a skewed picture of programme impact. Even with the best will in the world, it is a big ask for someone working in a crisis setting, or carrying a heavy academic workload, to take time out of their busy days to talk about a research project that has closed. We were grateful so many of them shared their insights with us, sometimes via email if they couldn’t make time for a call.

4. People have different levels of comfort with talking about failure.

When Cordelia did her case studies, we didn’t really have much space for talking about failure. Increasingly, R2HC felt we needed somewhere to put this, as some researchers were openly sharing these reflections with us, so we adapted the basic template to include it.

Researchers have different ways, and are at different stages, of defining or coming to terms with “failure”. For example, some differentiated between failure they attributed to their own efforts and failure due to factors outside their control.

Even when researchers were circumspect about failure, we felt there were obvious learning opportunities for R2HC and the wider community in asking these questions. Failed research happens, and failed uptake strategies happen too. Both are opportunities for learning, and we should normalise this. But the first step is finding a better way of capturing and understanding it.

Gloria’s evaluations, which included questions about failure, generated some useful insights, often shared by researchers themselves. For example, one co-lead researcher reflected that he wished he had engaged central government policymakers much earlier in his research process, so that they would have been more sensitised to the end results.

Another research team ended up with an underpowered RCT (due to various barriers common in humanitarian settings) but drew out learning that has built expertise and capacity they are now applying to new research. Such impacts would have gone unrecognised had we not asked about failure.
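For readers less familiar with the term: an ‘underpowered’ trial is one whose achieved sample is too small to reliably detect the effect it was designed to find. Below is a minimal sketch of the standard normal-approximation arithmetic; the effect size and recruitment numbers are our assumptions for illustration, since the post gives no figures for this study.

```python
from statistics import NormalDist  # Python 3.8+ standard library

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Approximate per-arm sample size for a two-arm trial (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

def achieved_power(effect_size: float, n: float, alpha: float = 0.05) -> float:
    """Power actually achieved with n participants per arm."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    return z.cdf(effect_size * (n / 2) ** 0.5 - z_alpha)

# Assumed numbers, purely for illustration: a modest effect (Cohen's d = 0.3)
# needs ~175 participants per arm for 80% power. If barriers in the field cap
# recruitment at 80 per arm, power drops to roughly 48%: an underpowered trial.
print(f"needed per arm: {n_per_arm(0.3):.0f}")
print(f"power with 80 per arm: {achieved_power(0.3, 80):.0%}")
```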

5. Researchers whose work has impact tend to know this already.

Sometimes, a researcher would know very little about what happened after they published their work. We noticed some correlation: teams that had delivered significant research impact also had a good sense of the pathways their research had taken into policy and practice. They had maintained relationships and engaged policymakers and practitioners regularly, and could give a helpful steer on who had used their research and how (and could connect us with them so we could check). From this experience, our conclusion is: if you ‘have no idea what’s happened’ since you published, the answer might well be ‘not much’. Remaining engaged with stakeholders in policy and practice, and building relationships (not just popping up on their radar when you have a new article out), is a key ‘research impact’ competency.

This post first appeared as ‘5 Things we Learned from Evaluating the Impact of Research’ on the From Poverty to Power blog.

Image credit: Freshspectrum (CC BY-NC 4.0)

Find Out More

Explore our work on research uptake and impact through our collection of Research Impact Case Studies, and our Research Impact Framework.

Subscribe to our newsletter to stay up to date on our latest learning on research impact and evaluation.
