Artificial Intelligence for Humanitarians: Synthesis Report

Artificial intelligence can help humanitarian teams move faster and act smarter, translating across languages, surfacing insights from messy data, and strengthening early warning. But it also introduces real risks: bias, privacy harms, environmental costs, and the temptation to over-rely on opaque systems, especially in crisis settings where capacity is stretched. The AI for Humanitarians Synthesis Report brings together the most important lessons from a six-month Elrha learning programme with ten grantees, distilling how their understanding of AI evolved, what worked in practice, and what to watch out for as the sector scales responsible use.
What this report covers
- How grantees moved from “AI as quick fix” to human-centred augmentation, using AI to support, not replace, expert judgment in emergencies.
- The foundational building blocks: high-quality, diverse data; governance and consent; and political awareness of how data is collected and used.
- Applicable toolsets for humanitarian work today, particularly NLP/LLMs for multilingual, unstructured data (illustrated in the sketch after this list), alongside predictive analytics, geospatial AI, image recognition, and chatbots.
- Ethical guardrails (privacy, transparency, accountability, cultural relevance, and “do no harm”) and how to embed human-in-the-loop decision-making across the AI lifecycle.
- Practical design habits including structured ideation, prototyping, and cross-functional workshops to turn ideas into actionable projects.
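The report treats these toolsets as aids to expert judgment rather than substitutes. As a purely illustrative sketch of that pattern, the snippet below triages unstructured field reports with a simple automated classifier and escalates low-confidence results to a human reviewer. The labels, keyword scorer, and confidence threshold are assumptions standing in for whichever multilingual NLP/LLM service a team actually uses; nothing here comes from a specific grantee project.

```python
# Minimal sketch: automated triage of unstructured field reports with a
# human-in-the-loop gate. The keyword scorer stands in for an LLM/NLP call;
# labels and threshold are illustrative assumptions, not from the report.
from dataclasses import dataclass

# Hypothetical categories a response team might triage reports into.
LABELS = {
    "water_sanitation": ["cholera", "diarrhoea", "water", "latrine"],
    "food_security": ["harvest", "crop", "hunger", "ration"],
    "protection": ["violence", "displacement", "harassment"],
}

CONFIDENCE_THRESHOLD = 0.6  # below this, a human reviews the report

@dataclass
class Triage:
    label: str
    confidence: float
    needs_human_review: bool

def classify_report(text: str) -> Triage:
    """Score a report against each label; in practice an LLM or trained
    multilingual model would replace this keyword heuristic."""
    text_lower = text.lower()
    scores = {
        label: sum(word in text_lower for word in keywords) / len(keywords)
        for label, keywords in LABELS.items()
    }
    best_label, best_score = max(scores.items(), key=lambda kv: kv[1])
    return Triage(
        label=best_label,
        confidence=best_score,
        needs_human_review=best_score < CONFIDENCE_THRESHOLD,
    )

if __name__ == "__main__":
    report = "Families near the river report diarrhoea and unsafe water."
    result = classify_report(report)
    # needs_human_review signals that this report should be queued for an expert.
    print(result)
```

The point the sketch carries is the explicit confidence gate: anything the model is unsure about is routed to a person rather than straight into a decision.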
How teams applied AI in practice
Participants piloted AI to: amplify community voices in peacebuilding; tackle gender bias; support farmers with native-language tools; reduce context loss in qualitative analysis; improve waterborne disease detection; streamline outbreak data pipelines; optimise supply chains; and coordinate mental health and psychosocial support. These use cases show AI’s breadth while reinforcing that context, culture, and participation are non-negotiable.
Opportunities and the risks to manage
AI can help teams process large datasets quickly, spot patterns, support forecasting, and close language gaps, improving the timeliness and relevance of response. Yet the risks are substantive: data quality and bias, privacy and security, over-reliance that sidelines local knowledge, limited transparency in commercial tools, digital exclusion, and the cost of operating at scale. The report sets out practical mitigations, from open approaches and capacity-building to clear accountability and community-centred design.
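To make "spot patterns, support forecasting" concrete, here is a minimal, illustrative sketch of one common surveillance technique: flagging a weekly case count that deviates sharply from its recent baseline. The window size, threshold, and figures are invented for illustration and are not drawn from the report or any grantee project.

```python
# Minimal sketch of surveillance-style anomaly flagging: compare each week's
# case count with a rolling baseline and flag large deviations for review.
# Window size, threshold, and the case counts below are illustrative only.
from statistics import mean, stdev

def flag_anomalies(counts, window=4, threshold=2.0):
    """Return indices of weeks whose count exceeds the rolling mean of the
    previous `window` weeks by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline: skip rather than divide by zero
        if (counts[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

if __name__ == "__main__":
    weekly_cases = [12, 9, 11, 10, 13, 12, 11, 34, 15, 14]  # made-up figures
    print(flag_anomalies(weekly_cases))  # index 7 (the 34-case week) is flagged
```

In practice such a flag would only prompt verification by local teams, in line with the human-in-the-loop guidance elsewhere in the report.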
Who should read this
- Operational leads and cluster coordinators weighing whether, and where, AI can responsibly add value.
- Programme designers and M&E teams seeking guardrails for data collection, consent, and bias mitigation.
- Technical and research partners building NLP/LLM-enabled workflows with humanitarian users.
- Funders and policymakers shaping incentives for safe, equitable AI adoption.
Five takeaways for responsible adoption
- Start with a real, user-defined problem and a clear theory of change.
- Invest early in data governance and protection; obtain meaningful consent.
- Keep humans in the loop, with transparent decision pathways and red-teaming for harm prevention.
- Prototype in the open; document assumptions, limitations, and failure modes.
- Build partnerships that bridge local actors and technical expertise, then budget for ongoing support and iteration.
What’s next: Humanitarian AI Lighthouse
Elrha will continue supporting applied research, convenings, and open resources through the Humanitarian AI Lighthouse, a community-led initiative to strengthen ethical, practical, and inclusive AI across the sector. Expect case studies, newsletters, and tooling that help organisations embed responsible AI over time.