Humanitarian assistance is still in its early days when it comes to the practical application of AI, including using it to help measure the impact of our work. Here we share some of our learnings from exploring the use of LLMs to expedite data analysis for an assessment of food security.
Increasing interest in Artificial Intelligence (AI) has prompted a wave of thought pieces, research, and reflection on the role of this technology across sectors, from art to medicine to academia. AI has enormous potential to support global development, but can it also help us refine and implement our programming?
AI in Humanitarian Assistance
Humanitarian assistance projects have begun integrating AI in a range of ways:
- Identifying high-risk populations for natural disasters and developing early warning systems;
- Analyzing social media for situational awareness;
- Forecasting famine and food insecurity; and
- Deploying chatbots for service navigation in disaster response, among others.
Beyond these more direct approaches that aid in implementation, Dexis is using AI, specifically large language models (LLMs), to assess development efforts. With the right approach, preparing and employing LLMs can help identify an intervention’s successes, the areas needing improvement, and the changes that may be needed.
Humanitarian assistance projects have begun integrating AI in a range of ways, including forecasting famine and food insecurity. Photo by Yasuyoshi CHIBA / AFP.
Applying LLMs to Assessments
Large language models are advanced artificial intelligence systems designed to understand, generate, and sometimes translate human language. By analyzing vast datasets of text, these models identify complex patterns and learn the nuances of language, including grammar, colloquialisms, and context. LLMs are trained using machine learning techniques on diverse bodies of text.
This training enables them to perform a wide range of tasks, such as answering questions, writing essays, summarizing documents, and engaging in conversation. The “large” in their name reflects the substantial amount of data they’re trained on and the extensive neural network architectures that allow them to process and generate human-like language.
In our work, an assessment team provides specific codes, or queries, that the machine learning software uses to analyze hundreds of technical documents related to the activity in question. Once a query is complete, the software produces a short excerpt summarizing how that code appears across the set of texts.
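The workflow above can be sketched in a few lines. This is a minimal, illustrative outline only: a simple keyword match stands in for the LLM call that a real pipeline would make, and all names and sample documents are hypothetical.

```python
# Sketch of the code-and-query workflow: for each assessment code,
# collect matching passages from the document set, then summarize.
# `llm_summarize` is a stand-in for a real LLM call.

def llm_summarize(code: str, passages: list[str]) -> str:
    """Stand-in for an LLM call that would summarize how a code
    appears across the matching passages."""
    if not passages:
        return f"No passages found for code '{code}'."
    return f"Code '{code}' appears in {len(passages)} passage(s), e.g.: {passages[0]}"

def run_query(code: str, documents: dict[str, str]) -> str:
    """Collect passages mentioning the code, then produce a short excerpt."""
    matches = []
    for doc_name, text in documents.items():
        for paragraph in text.split("\n"):
            if code.lower() in paragraph.lower():
                matches.append(f"[{doc_name}] {paragraph.strip()}")
    return llm_summarize(code, matches)

# Hypothetical sample documents standing in for hundreds of real ones.
docs = {
    "quarterly_report.txt": "Food security improved in two districts.\nRation distribution was delayed.",
    "field_notes.txt": "Households reported better food security after the voucher program.",
}
print(run_query("food security", docs))
```

In a production pipeline, the keyword match would be replaced by semantic retrieval and the summary generated by the model itself; the loop structure, however, stays the same.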
Although we are still early in the process, initial outputs have required significant further contextualization. Some excerpts produced by the AI do not effectively capture the nuances of the highly technical information required for the assessment, and in some cases the output has not matched the corresponding input.
Given the highly technical nature of the documentation and the strategic objectives of humanitarian assistance, machine learning outputs have required considerable filtering and review by experts familiar with the intricacies of the data and the operational context. Machine learning is not meant to be used in a vacuum, and these challenges are among the many that must be navigated for AI to effectively support assessments in humanitarian contexts.
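One lightweight way to triage outputs for expert review, where an excerpt may not match its input, is a grounding check: flag any excerpt whose text cannot be found in the source document. The sketch below assumes excerpts quote their sources verbatim (real pipelines often relax this with fuzzy matching); the function and sample texts are illustrative, not part of any specific tool.

```python
# Hedged sketch: route ungrounded excerpts to expert reviewers first.
# An excerpt is "grounded" if its normalized text appears in the source.

def needs_expert_review(excerpt: str, source_text: str) -> bool:
    """Return True if the excerpt is not found verbatim in the source."""
    normalized_source = " ".join(source_text.lower().split())
    normalized_excerpt = " ".join(excerpt.lower().split())
    return normalized_excerpt not in normalized_source

# Hypothetical source document and two model outputs.
source = "The cash transfer program reached 1,200 households in the northern region."
grounded = "reached 1,200 households in the northern region"
ungrounded = "reached 12,000 households across all regions"

for excerpt in (grounded, ungrounded):
    flag = "REVIEW" if needs_expert_review(excerpt, source) else "ok"
    print(f"{flag}: {excerpt}")
```

A check like this does not replace expert judgment; it only orders the review queue so the least trustworthy outputs are examined first.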
Addressing Challenges with AI
If developed and used correctly, AI can be a force multiplier for humanitarian assistance. However, it requires teams to have the proper know-how, diligence, and commitment to do no harm.
You still need the human element. AI cannot read between the lines to derive meaning, and there can be a disconnect between simply analyzing documents and applying expertise. As Dexis has learned in developing its social listening (or “SOLIS”) tool, AI tools require extensive refinement.
Local knowledge is required. Relatedly, making LLM coding more effective requires people from the country in question to examine the language, especially slang and colloquialisms.
Unintentional bias must be addressed. As USAID stresses in its Digital Strategy, “Because AI-enabled tools often rely on machine-learning algorithms that use historical data to detect patterns and make predictions, they can reproduce or amplify biases that might be present in those data.”
Data privacy and security remain important concerns. Protecting the privacy and security of data used in AI and machine learning applications is essential, especially in sensitive areas like healthcare and humanitarian aid.
As AI continues to evolve, its integration into humanitarian assistance assessments and projects holds tremendous promise. The ongoing efforts by Dexis, USAID, and others showcase both the potential benefits and the challenges that come with this technology. With thoughtful, informed strategies, we can harness the power of AI to better assess how to create more effective, efficient, and equitable development programs and outcomes.