Help, hazard or hype: reflections from our humanitarian AI learning journey

28 May 2025
Type: Elrha insights
Area of funding: Humanitarian Innovation
Image: Group of students working on a writing task. Credit: pressmaster

When used ethically and responsibly, AI has the potential to significantly enhance how we respond to humanitarian crises. The technology can help to overcome language barriers, automate the analysis of large datasets, provide real-time insights into community needs, support disease surveillance and medical diagnosis, and strengthen anticipatory action and early warning systems for natural disasters.  

However, as with all technology, it also presents substantial risks – including challenges around data security, bias and discrimination, environmental impacts and the potential for over-reliance. These risks are heightened in humanitarian contexts, where aid workers may lack the skills, knowledge or capacity to responsibly and ethically apply AI.  

To address this, we launched our AI for Humanitarians: Shaping Future Innovation funding call to build capacity among humanitarian innovators and agencies. Last year, ten grantees took part in a six-month learning journey to build their knowledge of the technology, take part in group discussions and complete hands-on exercises trialling AI tools. 

With the learning journey now complete, teams have shared their thoughts on the possibilities of AI for humanitarian response, their experience of the course, and the considerations for NGOs and humanitarian agencies looking to adopt the technology. From increasing transparency to avoiding tokenism, you can explore key takeaways and read their full reflections below:  

Grantee reflections

Key takeaways

AI is only as accurate as the data it’s trained on

In humanitarian contexts, poor-quality or biased data can reinforce harmful inequalities, especially for communities affected by crisis. Even unintentional bias in data collection or processing can skew outcomes. To build fair and effective tools, we must prioritise transparent data practices, carefully vet datasets, and continually monitor systems for bias. This ensures that AI decisions reflect, rather than distort, the realities on the ground.

“One key challenge is the quality and representativeness of data. AI systems are only as effective as the data they are trained on, and biased or incomplete datasets can perpetuate existing inequalities or misunderstandings.” – Search for Common Ground

Co-design and community buy-in are vital to avoid tokenism and improve context retention

To ensure meaningful engagement and impact, AI tools must be co-designed with local communities, humanitarian workers, and policymakers. Involving those directly affected ensures the tools reflect local priorities and cultural context. Building these relationships fosters trust, promotes inclusivity, and helps overcome barriers such as language, digital access, and literacy. It can also increase community engagement with, and understanding of, AI and emerging technology, turning it into a two-way learning process.

“The workshop underscored the importance of inclusivity in AI. Training AI models in local languages and indigenous knowledge ensures these systems are accessible and culturally relevant. This creates a two-way learning process: communities engage with AI, enhancing their scientific understanding, while AI learns from local and indigenous expertise.” – Start Network

Before AI use becomes widespread, better training in data privacy and security is essential

In humanitarian work, collecting personal or sensitive data carries serious risks, particularly in fragile or crisis-affected settings. Yet many practitioners lack the training needed to manage data responsibly. Strengthening data protection practices across teams can prevent harm, uphold ethical standards, and maintain trust. Before deploying AI at scale, we must equip staff with the skills to handle data ethically and securely.

“Training staff at all levels to use and understand how AI tools work, within an ethical framework, would help on many different levels, from transparency to improving data privacy and security.” – CartONG  

Transparent and explainable AI systems are key to building trust

In humanitarian settings, decisions influenced by AI must be clear and justifiable. Many commercial AI systems remain difficult to understand, making it hard to explain why a certain decision was made. Strong AI assurance processes – including independent audits, risk assessments, and documentation of how systems are trained, tested, and deployed – are essential to ensuring AI is reliable, explainable, and aligned with humanitarian principles. This builds trust and accountability, and helps people understand, challenge, and improve how these tools are used.

“Transparency is key to fostering trust. This involves clear communication with stakeholders about how data is sourced, how models function, and how outputs are generated.” – Oxfam

AI is a tool to enhance – not replace – human decision-making

AI solutions are not standalone fixes, but tools to amplify human expertise. They can help humanitarian workers by automating repetitive tasks, thereby freeing up time for more complex decision-making. But AI cannot replace human insight, contextual understanding, or ethical judgement. Over-reliance can erode these vital strengths. We must approach AI as a tool to support, not supplant, existing skills and knowledge.

“There is a danger that AI could be seen purely as a cost-cutting measure rather than a tool to empower skilled practitioners. This could lead to AI being used without sufficient human review or contextual understanding, potentially resulting in poor decisions that could have real humanitarian consequences.” – International Medical Corps

Find out more

Read more about AI for Humanitarians and sign up to our innovation newsletter for upcoming news, insights and funding opportunities.

To follow the latest developments in AI and emerging technology, subscribe to the UK Humanitarian Innovation Hub's dedicated newsletter.


