Power of the Crowd: Collective Voices in HADR
By Christopher Chen

Information provided by affected citizens and community volunteers informs decision-makers of the realities on the ground. This crowdsourced data now plays an important part in disaster responses. However, these efforts are still plagued by issues of quality, reliability and accuracy. How can aid organisations adapt to this changing landscape?
IN THE Asia-Pacific region, disaster relief organisations have begun to use crowdsourced data to augment their relief efforts. During the 2013 Typhoon Haiyan disaster, the United Nations Office for the Coordination of Humanitarian Affairs (UN OCHA) deployed officials to work with volunteer groups to coordinate crowdsourced mapping of damaged and flooded areas. The Humanitarian OpenStreetMap Team (HOT) drew on satellite and aerial imagery to create detailed maps of the affected areas. These maps were constantly updated by a community of volunteers, who flagged real-time developments on the ground such as damaged infrastructure and blocked roads.

Lukewarm Interest?

While this is a promising development in the Humanitarian Assistance and Disaster Relief (HADR) field, many humanitarian stakeholders remain reluctant to use crowdsourced data in their work. Hurricane Harvey struck Texas on 25 August 2017, causing catastrophic flooding and damage in Houston. During the disaster, however, the US Coast Guard urged people not to tweet for help and instead to seek assistance through official channels. Its rationale was that social media posts were simply too difficult to verify and could easily be missed. Evidently, uptake of crowdsourcing is slow, as aid providers have yet to fully embrace these new sources of information. The time-sensitive nature of humanitarian relief means that organisations are often reluctant to adopt novel approaches, preferring instead to utilise tried and tested practices in the field. They also harbour doubts about the reliability and quality of crowdsourced data.

Valuable Information
The sheer volume of crowdsourced information available during disasters can overwhelm aid organisations, to the extent that verifiability becomes an issue. During the 2010 Haiti earthquake, 90% of aid requests sent via text message were either inaccurate or repetitive. More recently, a photo of submerged planes at Houston's airport circulated on Twitter; it was later revealed to be a digitally composited image meant to illustrate the effects of rising sea levels. Without a proper system of curation, relief agencies face the daunting prospect of sorting through the white noise and false reports. This can divert their efforts away from their main mandate: providing relief to affected communities.

Relief providers also face the challenge of meeting the expectations of disaster victims. At-risk populations often misunderstand how information-gathering initiatives such as Ushahidi work; they liken them to a 911 call and expect a swift response. The Red Cross found that 75% of Americans expect help within three hours of posting an aid request on social media. People expect their pleas and voices to generate tangible action. Coupled with the tendency of citizens to exaggerate their predicaments, this places considerable pressure on aid providers to define what services they can and cannot provide.

Harnessing the Power of the Crowd

Many organisations still lack the technical capacity to convert crowdsourced data into actionable knowledge. Without this capability to manage and filter information, information overload becomes a serious problem. There is therefore a need to develop methods to verify and validate information generated by affected citizens and volunteer communities, and to integrate it effectively into disaster responses. This can involve enlisting intermediaries to curate crowdsourced information. Some volunteer groups are already taking up this mantle. The Standby Task Force (SBTF), launched at the 2010 International Conference on Crisis Mapping, is a volunteer-based network that trains and prepares its members to analyse tweets, text messages and other social media content during disasters. This enables them to compile accurate and verifiable information on damaged areas and user needs, which is then disseminated to aid providers. By shifting some of the burden of establishing data reliability and utility to external volunteer organisations, relief organisations can focus on their main objective of delivering a better humanitarian response.

Integrating Crowdsourced Data with HADR

A lack of open and common standards for data exchange is also preventing the wider use of crowdsourced data by relief organisations. Inconsistency in hash-tagging might seem trivial, but in a disaster context it can hinder the tracking of relevant information. Raw crowdsourced data is often unstructured and difficult to interpret, slowing response times. This exacerbates the already tenuous relationship between aid agencies and the crowdsourcing community. Common standards for information exchange should therefore be encouraged. This needs to be an ongoing process, in which citizens and humanitarian stakeholders alike are instructed on protocols and best practices. In the case of Twitter, relief responders can decide beforehand on a standardised set of hashtags for users to adopt at the onset of a disaster.
In 2014, UN OCHA produced a report, Hashtag Standards for Emergencies, suggesting three standardised hashtags for use during emergencies. This enables similar data to be 'clustered' together and facilitates quick and efficient retrieval by agencies.
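To illustrate how such clustering might work in practice, the short Python sketch below groups a handful of mock posts under a pre-agreed set of hashtags. The tag names, sample posts and cluster_by_hashtag function are illustrative placeholders rather than the hashtags proposed in the OCHA report; posts carrying no agreed tag fall into a manual-triage bucket, which is precisely the overhead that common standards aim to reduce.

from collections import defaultdict

# Hypothetical standardised hashtags agreed on before a disaster; these are
# placeholders, not the tags proposed in the OCHA report.
STANDARD_TAGS = {"#typhoonhaiyan", "#publicrep", "#emergencyneed"}

def cluster_by_hashtag(posts):
    """Group social media posts under each agreed hashtag they carry.

    Posts without any agreed tag fall into an 'untagged' bucket that
    volunteers would have to triage manually.
    """
    clusters = defaultdict(list)
    for post in posts:
        tags = {word.lower() for word in post.split() if word.startswith("#")}
        matched = tags & STANDARD_TAGS
        if matched:
            for tag in matched:
                clusters[tag].append(post)
        else:
            clusters["untagged"].append(post)
    return clusters

# Mock posts showing how consistent tagging makes retrieval straightforward.
posts = [
    "Road to the airport blocked by debris #TyphoonHaiyan #PublicRep",
    "Family of five stranded on a rooftop, needs evacuation #EmergencyNeed",
    "water rising fast near the pier, please help",  # no agreed tag
]

for tag, grouped in cluster_by_hashtag(posts).items():
    print(tag, "->", len(grouped), "post(s)")

In an actual deployment, grouping of this kind would sit within a larger pipeline that also deduplicates and verifies posts before they reach responders; the point of the sketch is simply that agreed tags turn an unstructured stream into retrievable clusters.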
Relief organisations often operate with a centralised command structure, exercising complete control over the internal flow of information. Coupled with the perceived lack of credibility of crowdsourced information, it is easy to see why acceptance of third-party involvement in data management efforts has been sluggish.
This trend needs to be broken to foster better integration of crowdsourced data with humanitarian relief efforts. Technology is only as effective as the system in which it operates. The humanitarian system needs to constantly adapt and leverage the new tools available to it. This means extending acceptance to the new players in the field as well. (This article was contributed by the author and originally published as an RSIS Commentary.)