Abstract
Fast disaster impact reporting is crucial for planning humanitarian assistance. Large Language Models (LLMs) are well known for their ability to write coherent text and perform a variety of tasks relevant to impact reporting, such as question answering or text summarization. However, LLMs are constrained by the knowledge within their training data and are prone to generating inaccurate, or "hallucinated", information. To address this, we introduce a sophisticated pipeline embodied in our tool FloodBrain (floodbrain.com), specialized in generating flood disaster impact reports by extracting and curating information from the web. Our pipeline assimilates information from web search results to produce detailed and accurate reports on flood events. We test different LLMs as backbones in our tool and compare their generated reports to human-written reports on different metrics. Similar to other studies, we find a notable correlation between the scores assigned by GPT-4 and the scores given by human evaluators when comparing our generated reports to human-authored ones. Additionally, we conduct an ablation study to test individual pipeline components and their relevance to the final reports. With our tool, we aim to advance the use of LLMs for disaster impact reporting and reduce the time needed to coordinate humanitarian efforts in the wake of flood disasters.
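The retrieval-then-generate pattern the abstract describes (ground the LLM in curated web-search results rather than its training data) can be sketched as follows. This is a minimal illustration, not the actual FloodBrain implementation; the function names and prompt wording are assumptions, and `llm` stands in for whichever backbone model is plugged in:

```python
def build_report_prompt(event: str, snippets: list[str]) -> str:
    """Assemble a prompt that grounds the report in retrieved snippets."""
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        f"Using only the sources below, write a flood impact report on {event}.\n"
        f"Sources:\n{context}\n"
        "Omit any fact not supported by a source."
    )


def generate_report(event: str, snippets: list[str], llm) -> str:
    # `llm` is any callable mapping a prompt string to generated text
    # (e.g. an API client wrapper). Injecting it keeps the pipeline
    # backbone-agnostic, mirroring the paper's comparison of different LLMs.
    return llm(build_report_prompt(event, snippets))
```

Instructing the model to use only the supplied sources is one common mitigation for the hallucination problem the abstract raises, since the report is then auditable against the retrieved evidence.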
| Original language | English |
|---|---|
| Publication status | Published - 15 Dec 2023 |
| Event | Artificial Intelligence for Humanitarian Assistance and Disaster Response Workshop - New Orleans, United States. Duration: 15 Dec 2023 → 15 Dec 2023. https://www.hadr.ai/ |
Conference
| Conference | Artificial Intelligence for Humanitarian Assistance and Disaster Response Workshop |
|---|---|
| Country/Territory | United States |
| City | New Orleans |
| Period | 15/12/23 → 15/12/23 |
| Internet address | https://www.hadr.ai/ |
Keywords
- large language models
- LLM
- disaster impact reporting
- flood
- FloodBrain