Does humanitarian innovation really work? New ways to think about evidence

19 August 2019 · Elrha insights · Scaling innovation
Thomas at the Humanitarian Innovation Exchange 2019. Credit: Mitchell de Jong (CC-BY-ND) & Centre for Innovation

By Thomas Baar, Centre for Innovation, Leiden University


This blog post is part of a series sharing insights from sessions at the Humanitarian Innovation Exchange, which took place on 26 June 2019. The event was jointly organised by Elrha, Leiden University’s Centre for Innovation and the Dutch Coalition for Humanitarian Innovation (DCHI).

Despite growing investments in humanitarian innovation, it is not always clear what value has been created, as evidence of impact is often scarce. What kind of evidence is required to support humanitarian innovation processes and outcomes? And how do you generate evidence on humanitarian innovation?

At the Humanitarian Innovation Exchange, we discussed the role of evidence in humanitarian innovation with a wide range of actors. During the discussions it became clear that different actors have different needs when it comes to evidence. Here are some of our reflections.



What evidence do we need?


Evidence is about more than whether something works or not, and it plays different roles throughout the innovation process. Translators without Borders, for example, highlighted the importance of evidence about problems in building a case for innovation and informing the subsequent process. War Child outlined the importance of evidence in scaling their Can’t Wait to Learn programme, in order to understand whether it could be transferred to other contexts.


Evidence should serve a purpose: it helps actors answer different questions. Whereas donors indicated, amongst other things, their interest in evidence on whether an innovation offers a cost-benefit advantage over current approaches, practitioners said they wanted to know whether an innovation could be successfully adopted within their organisation or transferred to different contexts. The type of evidence required therefore depends on the actors involved and their needs.


However, the humanitarian innovation community lacks a common language around evidence. We have few shared definitions and standards around evidence, leading to confusion and a lack of transparency about the success of humanitarian innovation projects. This makes it harder to define common processes for generating evidence, and is particularly apparent when trying to agree on what we should accept as good (enough) evidence.



What counts as good (enough) evidence?


Different actors apply different criteria when it comes to assessing the quality of evidence. Innovators face many challenges in attempting to generate evidence throughout the innovation process, and participants in the discussion widely acknowledged that:

“perfection should not become the enemy of good.”

Seeking to address this gap, the Response Innovation Lab presented their Innovation Evidence Toolkit to help innovators bring a more evidence-based approach into practice.



How to get the required evidence?


Understanding what evidence is required is only the first step. When it comes to generating that evidence, innovators face many further challenges. This holds particularly true for testing whether innovations work, when moving from controlled design and testing environments into complex humanitarian settings. For example, which methods are most appropriate for generating evidence in the pilot stage?


There is a bias towards certain scientific methods for testing innovations, but these might not always be appropriate. Randomised Controlled Trials (RCTs) and other experimental designs are often put forward as the ideal methods for assessing the success of newly developed solutions. However, these methods do not always capture the full effects of an innovation: while an experimental design might show that something works, it does not explain how it works, and therefore does little to aid transferability. Instead of defaulting to a particular method, we have to determine on a case-by-case basis which method is most appropriate.



How should we assess which method is most appropriate?


While there are many factors to take into consideration when determining which method best fits a given situation, three key dimensions stand out. First, we should consider whether a method is able to answer the questions being asked. Second, we should assess the degree to which a given method is feasible given the constraints of a particular operating environment. Third, we should assess the degree to which a method is suitable for evaluating innovative solutions as opposed to conventional ones (i.e. whether it can account for changes and adjustments being made to the solution while testing is in progress).



How can we bring this into practice?


There are very few resources available to help innovators take stock of the different tensions and trade-offs that come into play when choosing between various methods for generating evidence in humanitarian contexts.


To address this gap, Joseph Guay (The Do No Digital Harm Initiative) and I are supporting Elrha in developing a framework for assessing which methods are most appropriate, based on a list of guiding questions compiled to address the dimensions above. On the basis of two case studies presented at the session (War Child’s Can’t Wait to Learn programme and Isôoko), we tried to assess which of these questions are most relevant.


Our conclusion was that while all the questions were relevant, not all were needed: unsurprisingly, a list of more than 100 questions was considered too long. However, it was agreed that the questions helped to shed light on why certain methods are problematic or preferable, and that they could help create more understanding between innovators, donors, practitioners and researchers as to why certain methods are more appropriate than others.



What is next?


The Humanitarian Innovation Guide aims to support an evidence-based approach to humanitarian innovation. Amongst other things, it offers guidance to help innovators collect relevant evidence as part of problem recognition activities, assess the performance of innovation projects, and generate evidence on developed solutions.


We are currently working with Elrha to further the development of a common evidence framework for humanitarian innovation and develop new tools to support innovators in conducting research and learning activities. These will be published in forthcoming iterations of the Humanitarian Innovation Guide.


This post was written on the basis of the outcomes of two working sessions at the Humanitarian Innovation Exchange event. The first session, on ‘Evidence’, featured contributions from Alice Castillejo (Translators without Borders), Laura Miller (War Child), and Maxime Vieille (Response Innovation Lab).


The second session on ‘Planning Research for a Pilot Contribution’ featured case studies from Jasmine Turner (War Child) and Philipp Grunewald (Isôoko, King’s College London).


Special contributions were made by Alice Obrecht (ALNAP) and, in the development of both sessions, by Joseph Guay (The Do No Digital Harm Initiative). Both sessions were hosted by Jesse van der Mijl and Thomas Baar (Centre for Innovation, Leiden University).
