Posted on February 4th, 2014

Back to School – Humanitarian Assistance

Researchers at George Mason University set out to see how Web 2.0 technologies can benefit the victims of natural catastrophes such as hurricanes and tsunamis. The authors used agent-based modeling to simulate humanitarian response, which allowed them to contextualize the needs and behaviors of the afflicted. Their model drew on open and crowdsourced data, including population density, the level of devastation, existing transportation networks, the location of aid centers and descriptions of the surrounding environment.

The Republic of Haiti and its 2009 population distribution.

They employed this model in an analysis of the 2010 Haitian earthquake, simulating the decision-making of individuals on the ground in Port-au-Prince. To estimate the population distribution throughout the city, they used 2009 LandScan data, which breaks the world’s population down into estimates for 1 km by 1 km cells. To assess the level of devastation in the area, they relied on high-resolution satellite imagery collected just four days after the earthquake. Using vector road lines sourced from OpenStreetMap, the authors were able to construct the likely routes along which aid would travel, and the locations of aid centers were also included in the model.
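
The paper itself does not include code, but the way these input layers fit together can be illustrated with a small sketch. The grid dimensions, cell values and variable names below are assumptions invented for illustration, not data from the study:

```python
# A minimal, illustrative sketch (not the authors' actual model) of fusing the
# layers described above into one per-cell grid: a LandScan-style 1 km
# population raster, a damage layer derived from post-event imagery, and a
# road-access mask derived from OpenStreetMap. All values are synthetic.
import numpy as np

rng = np.random.default_rng(42)
GRID = (40, 40)                            # 40 x 40 km study area, 1 km cells

population = rng.integers(0, 5000, GRID)   # people per cell (stand-in for LandScan)
damage     = rng.random(GRID)              # 0 = intact, 1 = destroyed (stand-in for imagery)
on_road    = rng.random(GRID) < 0.3        # cells touched by the OSM road network

# A simple "need" surface: displaced people assumed proportional to
# population times damage; cells off the road network are harder to serve.
need = population * damage
print("Highest-need cell:", np.unravel_index(np.argmax(need), GRID))
print("Share of need reachable by road: %.0f%%" % (100 * need[on_road].sum() / need.sum()))
```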

Next, the authors simulated three separate scenarios for possible aid center locations, each run 50 times: centers placed at random locations; centers placed near the highest-need areas (good); and centers placed in low-population areas off the road network (bad). The ‘bad’ locations represent placements chosen for the convenience of providers rather than victims. The scenarios were then compared on how well they served the population in terms of aid and food distribution, as well as the perceived psychological benefit to those receiving aid. The model indicated that all three placements delivered aid, but that the random and good placements resulted in superior food distribution, with the random locations actually delivering the most food of all.

Different aid center positions, indicated by the stars: A is random, B is good and C is bad placement.
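
The comparison described above can be made concrete by continuing the toy grid from the earlier sketch. This stand-in swaps the authors’ full agent-based model for a much cruder proxy, the share of need within a fixed distance of the nearest center, but it mirrors the three placement strategies and the 50 replicates per scenario. The center count and reach threshold are illustrative assumptions, not values from the paper:

```python
# Continues the toy grid above (np, rng, GRID, population, need, on_road).
# Each cell's aid is approximated by proximity to its nearest center; in the
# paper the stochasticity lives in the agents, so here replication only
# matters for the random strategy.
N_CENTERS, N_RUNS, REACH_KM = 5, 50, 5

def centers_random():
    idx = rng.choice(GRID[0] * GRID[1], N_CENTERS, replace=False)
    return np.column_stack(np.unravel_index(idx, GRID))

def centers_good():                       # near the highest-need cells
    idx = np.argsort(need, axis=None)[-N_CENTERS:]
    return np.column_stack(np.unravel_index(idx, GRID))

def centers_bad():                        # low-population cells off the road network
    score = np.where(on_road, np.inf, population)
    idx = np.argsort(score, axis=None)[:N_CENTERS]
    return np.column_stack(np.unravel_index(idx, GRID))

rows, cols = np.indices(GRID)

def share_served(centers):
    # distance (in cells, roughly km) from every cell to its nearest aid center
    d = np.min([np.hypot(rows - r, cols - c) for r, c in centers], axis=0)
    return need[d <= REACH_KM].sum() / need.sum()

for name, fn in [("random", centers_random), ("good", centers_good), ("bad", centers_bad)]:
    runs = [share_served(fn()) for _ in range(N_RUNS)]
    print(f"{name:>6}: {np.mean(runs):.1%} of need within {REACH_KM} km of a center")
```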

The authors propose that constructing models of past natural catastrophes could help providers learn how best to serve affected populations in the future. By building familiarity with a region’s geography, population distribution and access networks in advance, providers can create real-time models that tailor services to a catastrophe as it unfolds. By teaming with academic, private and public entities, governments can create contingency plans and frameworks to apply to a given scenario as the need warrants. Running test models based on real disasters could also expose shortfalls in resource allocation and spatial data that would be valuable in future responses.

Justin Harmon
Staff Writer
