Help, hazard or hype: reflections from our humanitarian AI learning journey

When used ethically and responsibly, AI has the potential to significantly enhance how we respond to humanitarian crises. The technology can help to overcome language barriers, automate the analysis of large datasets, provide real-time insights into community needs, support disease surveillance and medical diagnosis, and strengthen anticipatory action and early warning systems for natural disasters.
However, as with all technology, it also presents substantial risks – including challenges around data security, bias and discrimination, environmental impacts and the potential for over-reliance. These risks are heightened in humanitarian contexts, where aid workers may lack the skills, knowledge or capacity to responsibly and ethically apply AI.
To address this, we launched our AI for Humanitarians: Shaping Future Innovation funding call to build capacity among humanitarian innovators and agencies. Last year, ten grantees took part in a six-month learning journey to build their knowledge of the technology, join group discussions and complete hands-on exercises trialling AI tools.
With the learning journey now complete, teams have shared their thoughts on the possibilities of AI for humanitarian response, their experience of the course, and considerations for NGOs and humanitarian agencies looking to adopt the technology. You can explore their key takeaways – from increasing transparency to avoiding tokenism – and read their full reflections below:
Grantee reflections
- CartONG
- International Medical Corps UK
- Oxfam
- Search for Common Ground
- Start Network
- PUREFLOW
- SOCODEVI
- EpiAI
- IFRAD
- Syria Bright Future
Key takeaways
AI is only as accurate as the data it’s trained on
In humanitarian contexts, poor-quality or biased data can reinforce harmful inequalities, especially for communities affected by crisis. Even unintentional bias in data collection or processing can skew outcomes. To build fair and effective tools, we must prioritise transparent data practices, carefully vet datasets, and continually monitor systems for bias. This ensures that AI decisions reflect, rather than distort, the realities on the ground.
“One key challenge is the quality and representativeness of data. AI systems are only as effective as the data they are trained on, and biased or incomplete datasets can perpetuate existing inequalities or misunderstandings.” – Search for Common Ground
Co-design and community buy-in are vital to avoid tokenism and improve contextual relevance
To ensure meaningful engagement and impact, AI tools must be co-designed with local communities, humanitarian workers, and policymakers. Involving those directly affected ensures the tools reflect local priorities and cultural context. Building these relationships fosters trust, promotes inclusivity, and helps overcome barriers such as language, digital access, and literacy. It can also increase community engagement with, and understanding of, AI and emerging technology, turning it into a two-way learning process.
“The workshop underscored the importance of inclusivity in AI. Training AI models in local languages and indigenous knowledge ensures these systems are accessible and culturally relevant. This creates a two-way learning process: communities engage with AI, enhancing their scientific understanding, while AI learns from local and indigenous expertise.” – Start Network
Before AI use becomes widespread, better training in data privacy and security is essential
In humanitarian work, collecting personal or sensitive data carries serious risks, particularly in fragile or crisis-affected settings. Yet many practitioners lack the training needed to manage data responsibly. Strengthening data protection practices across teams can prevent harm, uphold ethical standards, and maintain trust. Before deploying AI at scale, we must equip staff with the skills to handle data ethically and securely.
“Training staff at all levels to use and understand how AI tools work, within an ethical framework, would help on many different levels, from transparency to improving data privacy and security.” – CartONG
Transparent and explainable AI systems are key to building trust
In humanitarian settings, decisions influenced by AI must be clear and justifiable. Many commercial AI systems remain difficult to understand, making it hard to explain why a certain decision was made. Strong AI assurance processes – including independent audits, risk assessments, and documentation of how systems are trained, tested, and deployed – are essential to ensuring AI is reliable, explainable, and aligned with humanitarian principles. This builds trust and accountability, and helps people understand, challenge, and improve how these tools are used.
“Transparency is key to fostering trust. This involves clear communication with stakeholders about how data is sourced, how models function, and how outputs are generated.” – Oxfam
Collaboration and resource sharing are essential for sustainable AI
AI projects are resource-intensive and often require expertise across multiple disciplines. Attempting to build solutions in isolation can quickly lead to unsustainable models or tools that lack real-world impact. Collaboration between NGOs, governments, local communities and academic partners creates opportunities to pool resources, share knowledge, and strengthen collective capacity, rather than siloing it.
“Deploying AI effectively requires substantial investment in financial resources, IT infrastructure and specialised programming skills. This highlighted the importance of collaborating with other teams to pool resources and expertise. Mutualising efforts emerged as a practical approach to overcome resource constraints and achieve greater impact.” – EpiAI
AI is an evolving technology, and our understanding must evolve with it
As a new technology, there is still much we don't know, or have yet to discover, about AI, its applications and its impacts. Responsible use requires ongoing curiosity and collaboration. Organisations must remain open to changing the way they use the technology and adapting their approaches as new ethical and best-practice guidance becomes available. By engaging with experts, learning continuously, and testing carefully, we can keep pace with AI's evolution.
“The field of AI is vast and ever evolving, and it’s essential to embrace a mindset of continuous learning. My advice would be to approach it with curiosity, openness, and a willingness to iterate.” – PUREFLOW
AI can strengthen coordination and service delivery, especially in fragile and resource-constrained settings
AI’s ability to bring structure to unorganised data and highlight service gaps makes it especially valuable in fragile settings. By improving coordination, scaling support tools, and bolstering internal systems, AI can improve service quality and outcomes, even in under-resourced sectors.
“AI can streamline data organisation, improve coordination among service providers, and enhance the overall quality of mental health interventions.” – Syria Bright Future
The use of AI must be planned into the whole project cycle
AI cannot be an afterthought but must be integrated into project design from the outset. Careful planning around data, ethics, and resources ensures systems are relevant, sustainable, and safe. Embedding AI into the full cycle supports responsible development and maximises its potential for meaningful impact.
“This learning journey has led us to better understand the need to start with deep thinking and planning, before embarking on the development of an AI solution.” – SOCODEVI
Both ethical and technical grounding are essential when using AI
Approaching AI with humility and ethical awareness, as well as technical understanding, allows us to thoughtfully integrate the technology into humanitarian action. Awareness of, and commitment to, inclusivity, transparency, and participatory design ensures AI supports, rather than substitutes for, people on the ground.
“My initial assumptions were centered on AI as a purely technical tool… I now appreciate the importance of incorporating local knowledge into data models to ensure their cultural relevance and acceptance.” – International Foundation for Recovery and Development (IFRAD)
AI is a tool to enhance – not replace – human decision-making
AI solutions are not standalone fixes, but tools to amplify human expertise. They can help humanitarian workers by automating repetitive tasks, freeing up time for more complex decision-making. But AI cannot replace human insight, contextual understanding, or ethical judgement, and over-reliance can erode these vital strengths. We must approach AI as a tool to support, not supplant, existing skills and knowledge.
“There is a danger that AI could be seen purely as a cost-cutting measure rather than a tool to empower skilled practitioners. This could lead to AI being used without sufficient human review or contextual understanding, potentially resulting in poor decisions that could have real humanitarian consequences.” – International Medical Corps
Find out more
Read more about AI for Humanitarians and sign up to our innovation newsletter for upcoming news, insights and funding opportunities.
To follow the latest developments in AI and emerging technology, subscribe to the UK Humanitarian Innovation Hub's dedicated newsletter.