Towards an interoperable online volunteered geographic information system for disaster response
Chuanrong Zhang (a*), Tian Zhao (b) and Weidong Li (a)

(a) Department of Geography & Center of Environmental Sciences and Engineering, University of Connecticut, Storrs, CT, USA; (b) Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI, USA
Although volunteered geographic information (VGI) (voluntarily provided crowd-sourcing spatial data) applications have proven to be useful for gathering information about a crisis, these applications have provided only limited utilities for response coordination. One major problem of existing implemented VGI systems is heterogeneous data semantics. To provide more utilities to disaster response, a VGI system that can encode data semantics is needed. The objective of this research is to propose such a VGI system based on the state-of-the-art Geospatial Semantic Web technologies. A prototype for disaster response in Connecticut was implemented. Although there are still issues waiting for further studies, the implemented prototype shows great promise.
Keywords: volunteered geographic information (VGI); geospatial semantic web; disaster response
1. Introduction
Emergency response and disaster management
have greatly benefited from recent advances in
information technologies, especially in GIS
and remote sensing (Cova 1999). However,
findings have shown that the use of GIS for
disaster management can readily fail due to the
inability to rapidly acquire spatial data and the
lack of interoperable GIS (Zerger & Smith
2003). According to a critical report released
by the U.S. House Select Bipartisan Commit-
tee (U.S. House Select Bipartisan Committee
2006), the federal government’s ineffective
response to Hurricane Katrina was partly a
failure of sharing and accessing information
in a timely manner (White House 2006).
The experiences suggest that the real barriers
to emergency response and disaster manage-
ment are in most cases difficulties in sharing
and integrating the heterogeneous data rather
than the lack of data (Donkervoort et al. 2008).
Data sharing facilitated by the advances of
network technologies is hampered by the
incompatibility of a variety of data models and
semantics used at different sites.
To facilitate exchange and sharing of spatial
data built on initial expenditures, Spatial Data
Infrastructures (SDIs), which are data infra-
structures including interactively connected
spatial data, metadata, and tools, have been
developed in many countries in the past two
decades (Rajabifard et al. 2006). Although the
fast development of SDIs and OGC (Open
Geospatial Consortium) web service technol-
ogies has undoubtedly improved sharing and
synchronisation of geospatial information
across the diverse sources, literature in SDIs
shows that there are limitations in the current
SDI implementation and it might still be
difficult to find data sources from the currently
implemented SDIs. The implemented SDIs only
emphasise technical data interoperability via
web services and standard interfaces, and
cannot resolve semantic heterogeneity problems
in spatial data sharing. However, differences in the semantics used across diverse data sources are among the major problems in spatial data sharing and data interoperability (Bishr 1998).
In addition, it is difficult to obtain up-to-
date spatial information from the existing data
of SDIs. The informational databases that were
built up over many years through GISs require
trained GIS professionals to operate and
maintain. However, disasters can significantly
affect the local geography, making the existing
spatial data and maps outdated instantly. The
dynamic nature of a disaster situation requires
timely updating of a variety of data/infor-
mation from various sources at federal, state,
and local levels. However, it is challenging to
enable disaster responders to obtain updated
data/information rapidly based on the cur-
rently implemented SDIs. Although existing
data in the currently implemented SDIs can
provide disaster responders useful information
such as political boundaries, population, social
and economic situations, availability of relief
and services, locations of shelters and
hospitals, and biophysical variables, they lack
up-to-date information such as accessibility to
roads and damaged areas, and number and
locations of injured people. For example,
shelters and hospitals may be closed because
of the outage of power, thus they will no longer
be useful for helping the people affected.
Therefore, timely up-to-date information is
critical for disaster response. The effectiveness
of disaster response not only depends on fast
access to a multitude of distributed existing
data but also relies on obtaining up-to-date
information to save lives.
Remotely sensed images can provide the
timely information that is particularly useful in
disaster response. For example, after the 2010
Haiti earthquake, groups of volunteers
remapped Haiti using satellite imagery. How-
ever, despite recent advances in both in situ and
remote sensing systems, there are still some
phenomena that cannot be sufficiently
measured, such as storms and the closing
down of hospitals. In addition, measurements
from sensor systems may not be available due
to communication interruptions or destruction
of sensors; for example, water gauges may be
destroyed by very severe floods. Moreover,
sensors may not be able to take measurements
at critical moments, as often happens in severe
weather conditions when images from optical
remote sensing systems are obstructed by
clouds. Also for remote sensing, a time delay
occurs due to data acquisition and processing.
Currently, satellite scheduling can take up to
25 hours. This means that images cannot be
acquired on the same day when a disaster
occurs. Some of these gaps may be filled by
Volunteered Geographic Information (VGI),
which is the crowd-sourcing geographic data
provided voluntarily by individuals and a
special case of the larger Web phenomenon
known as user-generated content (Goodchild
2007a, b; Jones et al. 2012).
Advances in geospatial positioning (GPS),
Web mapping, cellular communications (smart
phones), and wiki-based collaboration tech-
nologies have now outpaced the original
visions of the architects of SDIs (e.g. Craglia
et al. 2008). Collaborative web-based efforts
such as Open Street Map, the Ushahidi project,
Sahana, Google Earth/Maps, Wikimapia, and
Flickr have been used to assist in disaster
response situations (e.g. Laituri & Kodrich
2008; Goodchild & Glennon 2010; Zook et al.
2010; Kawasaki et al. 2013; Stefanidis et al.
2013). Several rapid and successful VGI
deployments employing the aforementioned
collaborative web-based software helped to
coordinate disaster responses recently, such as
the earthquake that struck Haiti in January
2010, the Gulf of Mexico Oil Spill in April
2010, and the series of damaging wildfires that
affected Santa Barbara between 2007 and 2009
(Goodchild & Glennon 2010). The launch of
the Ushahidi platform in Haiti demonstrated
the potential of using mobile technology for
information gathering and communication
during disaster response and management.
Several case studies have shown the added
value of using VGI in various types of crisis
events (De Longueville et al. 2010a) such as
earthquakes (De Rubeis et al. 2009), forest
fires (De Longueville et al. 2009), political
crises (Bahree 2008), hurricanes (Hughes &
Palen 2009), floods (De Longueville et al.
2010b), and terrorist attacks (Palen et al. 2009).
Although these VGI (or crowd-sourcing)
applications have proven useful for gathering
information about a crisis, these applications
only provided limited utilities for response
coordination (Gao et al. 2011). This is mainly
caused by lack of compatibility within
different software packages and heterogeneity
among different data sources (Poser & Dransch
2010; Zook et al. 2010; Morrow et al. 2011).
One major problem of VGI is semantic heterogeneity. The
challenges of semantic interoperability have
been recognised in the literature (e.g. Zhang
et al. 2010c). Problems arise when VGI is to be
used as decision-support information because
the user-generated annotations are often
unstructured and their contents are not
machine-readable. It is difficult to use the
user-generated data to support the spatial
analysis required in decision-making processes
because computers do not understand the
contents of VGI. For example, with the
currently deployed VGI systems, it is not
easy to get answers to the questions:
Which roads have been blocked (thus
cannot be accessed) by the Hurricane Irene
disaster? And which is the best evacuation
route?
The information required to find the
answers cannot be readily retrieved because
the information needs to be extracted from
multiple data sources and the extraction
process is time-consuming. It may take several
hours or days for a professional GIS person to
collect the needed information, update the
existing spatial database, and perform the
spatial analysis to find out the best evacuation
route.
Therefore, a semantic-empowered analysis
tool, which is able to automatically integrate
and analyse a broad variety of spatial data
contents, is needed to provide more utilities for
response coordination. The objective of this
research is to propose such a system for the
public and emergency responders, who usually
do not have many GIS skills, to update the
existing databases and automatically search
for the needed information. We propose such
an interoperable online VGI system based on
state-of-the-art Geospatial Semantic Web
technologies to overcome the limitations of
the currently implemented SDI and VGI
systems.
2. A framework of an interoperable online VGI system
The framework of an interoperable online VGI
system based on Geospatial Semantic Web
technologies for disaster response is proposed
as shown in Figure 1. The overall goal of this
framework is to overcome the semantic
heterogeneity limitation of the currently
implemented SDI and VGI systems and make
use of unstructured citizen-generated contents
for decision-making in disaster response.
The tools in the proposed VGI system should
be available and useful to many stakeholders,
especially local authorities, disaster respon-
ders, and local residents.
The framework is based on a distributed
local-responsibility web service architecture.
Heterogeneous local geospatial databases
such as shapefiles and geodatabases in
ArcGIS, remotely sensed satellite images in
GeoTIFF format, and Geomedia databases are
published using OGC web services such as
Web Feature Services (WFSs), Web Map
Services (WMSs), and Web Coverage Services
(WCSs). Real-time observation and spatial
data mined from social media networks
(e.g., Twitter, Facebook, Chat, Skype, blogs,
and emails) can be used for adding, deleting,
and updating spatial feature data such as
accessibility to roads and damaged areas,
availability of relief and services, and numbers
and locations of injured people. Ontology,
which formally represents knowledge as a set
of concepts and the relationships among those
concepts within a domain and supports
reasoning about concepts, is used to add
computer processable meaning (semantics) to
the online VGI system over the World Wide
Web. The OGC web services are connected to
local ontologies through local data source
adapters. Heterogeneous local ontologies are
integrated into an ontology server for web
service discovery and integration. Users such
as disaster responders, authorities, and local
residents, who usually do not have much GIS
knowledge or skills, can update the existing
heterogeneous GIS databases from a variety of
disparate sources through user-friendly
graphic interfaces based on their real-time
observations or other knowledge, such as
mining information from social media networks
(e.g., Twitter, Facebook, Chat, Skype, blogs,
and emails).
The major advantage of the proposed
framework is to allow emergency responders
or local residents to remotely update (add,
delete, change) semantically heterogeneous
GIS databases, and automatically query and
integrate data from a variety of different
sources based on data content. In such an
infrastructure each local ontology server offers
a lookup for local geospatial concepts within
its spatial scope. The local ontology server
should be maintained by the local community
running the service. Thus, the stored ontology
can be kept accurate and up to date.
Instead of introducing all of the technologies applied in the framework, in the following sections we introduce only the main advanced technologies: remotely updating spatial data, matching geospatial features to the predefined geospatial ontologies, and heterogeneous geospatial ontology integration.

Figure 1. A framework of the proposed interoperable online VGI system.
Remotely updating spatial data
To allow users such as disaster responders,
authorities, and local residents to remotely
update the outdated existing spatial data
distributed in different sources in real time,
the proposed framework makes full use of OGC
Web Feature Services (WFSs). The WFSs
manipulate spatial data at feature level based on
OGC’s simple features (e.g. points, lines, and
polygons) (Zhang et al. 2003; Peng & Zhang
2004; Zhang & Li 2005). They are written in
XML (Extensible Markup Language) and use
an open-source standard GML (Geography
Markup Language) to represent features. Data
in GML are stored in text format, which is a
vendor-neutral universal format. Because GML
is not locked into a proprietary binary format, it
is easy to integrate GML data into other data
across a variety of platforms and devices. The
proprietary systems' support of GML in WFS is
through DataStores. The DataStores can
transform a proprietary data format such as
ESRI’s Shapefiles into the GML feature
representation.
To support remotely updating spatial data
in the DataStores, five operations are defined
in the OGC WFS: GetCapabilities, Describe-
FeatureType, GetFeature, Transaction, and
LockFeature. GetCapabilities describes the
capabilities of a WFS server, such as which
feature types it can serve and what operations
are supported on each feature type; Describe-
FeatureType returns the structure of a feature type upon request; GetFeature
retrieves feature instances; LockFeature pro-
cesses a lock request on one or more instances
of a feature type for the duration of a
transaction; Transaction provides transaction
requests for such operations on features as
create, update, and delete. The capability of
creating, deleting, and updating features over
the Web in real time allows users to manipulate
geospatial data remotely at feature level, which
may provide the most updated data for
conducting further spatial analysis, modelling,
and other operations. For example, local residents can instantly set a road feature to flooded status in its remote database by using the update capability of a WFS over the Web and
the first disaster responders can add a new
flood affected location to their remote data-
bases by using the create capability of a WFS
over the Web.
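To make this concrete, the sketch below issues a WFS Transaction (WFS-T) Update that flags one road feature as blocked. It is a minimal illustration, not the paper's actual deployment: the endpoint URL, the nh namespace URI, the feature id, and the attribute name are assumptions; the envelope follows the standard WFS 1.1.0 Transaction schema.

# Minimal WFS-T sketch: mark road feature 'road.42' as blocked.
# Endpoint, namespace, feature id, and attribute name are hypothetical.
import requests

WFS_URL = "http://example.org/geoserver/wfs"  # assumed endpoint

transaction = """<?xml version="1.0"?>
<wfs:Transaction service="WFS" version="1.1.0"
    xmlns:wfs="http://www.opengis.net/wfs"
    xmlns:ogc="http://www.opengis.net/ogc"
    xmlns:nh="http://example.org/newhaven">
  <wfs:Update typeName="nh:road">
    <wfs:Property>
      <wfs:Name>Blocked</wfs:Name>
      <wfs:Value>true</wfs:Value>
    </wfs:Property>
    <ogc:Filter>
      <ogc:FeatureId fid="road.42"/>
    </ogc:Filter>
  </wfs:Update>
</wfs:Transaction>"""

resp = requests.post(WFS_URL, data=transaction,
                     headers={"Content-Type": "text/xml"})
print(resp.status_code)  # the server replies with a TransactionResponse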
Although OGC WFSs provide capabilities
of remotely updating geospatial data, the
currently implemented OGC WFSs have
limitations: They only emphasise technical
data interoperability via standard interfaces
and cannot resolve semantic heterogeneity
problems in spatial data sharing (Zhang et al.
2010c). Thus to achieve the goal of remotely
updating spatial data at semantic level over the
Internet, in the proposed framework we extend
the existing OGC WFSs with geospatial
semantic web technologies. We map OGC
WFS descriptions to OWL ontologies to
provide a semantically based view of the web
services, which spans from abstract descrip-
tions of capabilities of the services to the actual
feature data contents that are exchanged with other
services. We focus on specifying semantic
descriptions for three operations: GetCapabil-
ities, DescribeFeatureType, and GetFeature.
We specify semantics, the meaning of
languages (e.g., words, phrases), for each
FeatureType and FeatureProperty in defined
GetCapabilities, DescribeFeatureType, and
GetFeature operations. The semantics of
FeatureTypes and FeatureProperties in WFSs
are mapped to disjunctions or conjunctions of
(possibly negated) concepts in OWL ontolo-
gies. Because the definitions of the semantic
concepts in the enhanced semantic WFSs are
available at referenced uniform resource
identifier (URI) ontology databases on the
Web, the WFS providers and the clients have a
means of sharing terms. The result is that, by
taking an OWL description of a WFS, a WFS
client can distinguish and properly interpret all
FeatureTypes and FeatureProperties in Get-
Capabilities, DescribeFeatureType, and Get-
Feature operations.
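As a minimal sketch of the simplest case of this mapping, a direct one-to-one correspondence between WFS terms and ontology concepts can be recorded as OWL equivalence axioms. The namespace URIs below are assumptions, and the paper's mapping also admits conjunctions and disjunctions of concepts, which this sketch omits:

# Sketch: record that a WFS feature type and attribute correspond to
# shared ontology terms, so clients can interpret GetFeature results.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

WFS = Namespace("http://example.org/wfs-schema#")  # local schema terms
ONT = Namespace("http://example.org/ontology#")    # shared ontology terms

g = Graph()
g.add((WFS.road, OWL.equivalentClass, ONT.Roads))            # feature type
g.add((WFS.Blocked, OWL.equivalentProperty, ONT.isBlocked))  # attribute
print(g.serialize(format="turtle"))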
Matching geospatial features to the predefined geospatial ontologies
To realise updating and discovery of geospa-
tial feature data at the semantic level, one
important challenge is to match geospatial
features to the predefined geospatial ontologies
in order to connect local ontology servers to
the corresponding geospatial web servers.
In our previous studies we manually matched
geospatial features to the predefined geospatial
ontologies (Zhao et al. 2008; Zhang et al.
2010a, b, c). In this research, we apply supervised machine learning techniques to
automate matching of the geospatial features
to the predefined geospatial ontologies. The
idea is to develop a matching tool with
machine learning components to identify
potential matching candidates. Using the
developed tool users should be able to make
final adjustments through an interface to
eliminate erroneous or unnecessary matching
candidates and output the correct ones. Note that this matching step is done offline to set up the local ontology servers, and manual improvement can always be made if the automatic matching fails to generate satisfactory results.
The basic steps of the supervised machine
learning method are provided as follows:
1. Define domain ontology concepts using
knowledge in literature.
2. Create a training data set by labelling
the selected geospatial schemas with
the correct ontology concepts.
3. Determine the feature representation of
the learned functions which are used to
map ontology concepts to geospatial
schemas. The features may include
geometry type, property name, values,
and data-types of the geospatial sche-
mas. Other features such as bounding
box, density, and relations of geometries
may also be considered.
4. Determine the structure of the learned
functions and the corresponding learn-
ing algorithms. We apply decision tree
learning (Berikov & Litvinenko 2003)
for this step since it is able to handle
categorical data (e.g., geometries of
points, lines, polygons).
5. Apply learning algorithms on the
training set to obtain parameters,
which will be adjusted to maximise
performance on the test data sets.
For example, let’s consider a scenario
where a hurricane has struck the town of New
Haven in Connecticut. To take immediate
rescue action, the emergency responders need
to find evacuation routes. Suppose we have a
domain ontology for a transit system and we
also have a collection of web features of transit
data such as streets, bus routes, bus stops,
patterns, time-points, trips, schedules, and
facilities. These web features are derived from
several regions with some differences and
similarities. As a set of training data, we need to
select a subset of web features and manually
map them to ontological concepts. After that,
we need to run a machine learning algorithm to
derive a matching function f using the training
data. Then we will apply the matching function
to the rest of the web features. To generate f, we
need to decide how to match ontology classes,
how to match ontology properties, and how
to generate ontology individuals. For example,
to decide how to match web feature types to
ontology classes, we can use feature geometry
(type and density), feature type name, feature
property list, and property values as indepen-
dent variables. The ontology class to which a
web feature should be matched is considered as
a dependent variable.
We use a decision tree learning algorithm
to classify web features into each ontology
class. Figure 2 shows an example of a decision
tree for classifying web features into the
ontology classes of Street, Route, Stop, and
Facility. In this example, we use two indepen-
dent variables – geometry type and density.
We start with four web features for each class at
the top node of the decision tree. Through the
learning algorithm, we find out that the sub-tree
of point geometry includes all of the web
features for Stop and Facility classes while the
sub-tree of line geometry includes all of the
features of Street and Route classes. There is no
feature for the sub-tree of polygon as expected.
Since there are many more bus stops than
facilities, the predictive attributes of geometry
density are able to separate features of Stop
from those of Facility. Similarly, we can
separate features of Street from those of
Route because Street class contains features
with a higher density than Route class does.
We can also use a related strategy to determine
how to match web feature properties with
ontology properties. For web feature proper-
ties, we consider web feature type, data-type,
property value, and property name.
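A minimal sketch of this classification step, using a decision tree from scikit-learn; the training rows and density values are invented for illustration, and geometry type is one-hot encoded:

# Sketch: classify web feature types into ontology classes from
# geometry type and density (columns: is_point, is_line, density).
from sklearn.tree import DecisionTreeClassifier

X = [
    [1, 0, 0.90],  # dense point features  -> Stop
    [1, 0, 0.05],  # sparse point features -> Facility
    [0, 1, 0.80],  # dense line features   -> Street
    [0, 1, 0.10],  # sparse line features  -> Route
]
y = ["Stop", "Facility", "Street", "Route"]

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[0, 1, 0.75]]))  # dense lines -> ['Street']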
The advantage of the machine learning
approach is that it can identify the hidden
semantic links between the existing ontology
concepts and the geospatial schemas of the
legacy data. Though it may not resolve all of the
semantic ambiguities, supervised machine
learning can potentially extract the maximum
amount of information from the training data,
which has been annotated with the help of
domain experts. We can also use relevance
feedback, which grades the relevance of the
matching results as determined by experts, to
improve the precision of the learning algorithm.
Heterogeneous geospatial ontology integration
In our prior work, we have used a single-domain
ontology to ensure semantic interoperability
(Zhang et al. 2007; Zhao et al. 2008).

Figure 2. An example of a decision tree for classifying web features into several ontology classes. The dependent variable is the number of web features in each ontology class, including Route, Stop, Street, and Facility. The independent variables are geometry type and density.

However, it is impractical to develop a global ontology for
all disaster management applications that
support the tasks envisaged by a distributed
environment like the Geospatial Semantic Web.
To overcome this problem, in this study we adopt a distributed local-responsibility service infrastructure, which is an environment with multiple
independent systems where each system has its
own local ontology. The advantage of this
approach is that the local ontology can be
designed to suit the geospatial data in each
system. However, this approach brings the
possibility of conflicts and mismatches among
different local ontologies. Thus, it is necessary
for this study to develop algorithms to integrate
the heterogeneous local ontologies.
Although in the IT literature algorithms and
tools have been proposed to resolve the problem
of heterogeneous ontology integration (e.g.,
Castano et al. 2006), they are not developed for
dealing with spatial data. Hess et al. (2006)
recently proposed the G-Match algorithm for
geographic ontology integration. To match and
integrate two different geographic ontologies,
the G-algorithm measures the overall similarity
Sim(C1, C2) of their concepts by combining
similarity measures of concept names, attri-
butes, taxonomies, and conventional as well as
topological relationships in a weighted sum.
The G-algorithm rejects a matching of two
ontology classes if the overall similarity
measure is below a predetermined threshold.
One problem with this approach is that classes
or properties with very similar names could be
considered equivalent even though they are not
in reality or they are not compatible in structure
so that translation is impossible. Also, this
approach does not consider the range types of
relations in computing their similarity.
In this study, we adopt a partition
refinement algorithm (Zhang et al. 2010a) for
integrating heterogeneous ontologies. The fol-
lowing is a basic version of the partition
refinement algorithm:
Input: A set of classes, datatype properties, and object properties.

Output: Partitions of classes PC, object properties PR, and datatype properties PA, where each partition in PC contains equivalent classes and each partition in PA (PR) contains equivalent datatype (object) properties.

Step 1: initialisation:

a. Divide the set of datatype properties into PA, where each partition in PA contains datatype properties with similar names and the same range type.

b. Put the set of object properties into PR, with one partition for spatial properties and one partition for the non-spatial properties. Note that at initialisation it is not necessary to separate spatial properties from the rest, but this can help increase the precision of the algorithm.

c. Divide the set of classes into PC such that C1, C2 are in a partition in PC if and only if for each partition p in PA, |p ∩ attr(C1)| = |p ∩ attr(C2)|, where |p ∩ attr(Ci)| is the size of the set p ∩ attr(Ci).

Step 2: partition refinement:

a. Refine PR such that r1, r2 are in a partition in PR if and only if for each partition p in PC, |p ∩ range(r1)| = |p ∩ range(r2)|.

b. Refine PC such that C1, C2 are in a partition in PC if and only if for each partition p in PR, |p ∩ rel(C1)| = |p ∩ rel(C2)|.

Step 3: repeat Step 2 until PR and PC stabilise.

Step 4: refine PC and PR even further based on the name similarity of the classes and object properties. If this step results in any changes, then go back to Step 2.
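The sketch below implements only step 1(c) of the algorithm above: classes fall into the same partition when they have the same number of attributes in every attribute partition. The class and property names are illustrative; the refinement of object-property partitions in step 2 iterates the same signature idea.

# Sketch of step 1(c): group classes by attribute-count signatures.
def refine_classes(class_attrs, attr_partitions):
    groups = {}
    for cls, attrs in class_attrs.items():
        # one count per attribute partition, e.g. (1, 1)
        signature = tuple(len(attrs & p) for p in attr_partitions)
        groups.setdefault(signature, set()).add(cls)
    return list(groups.values())

attr_partitions = [{"name", "FULLNAME"}, {"Blocked", "Obstructed"}]
class_attrs = {
    "Roads":   {"FULLNAME", "Obstructed"},
    "Streets": {"name", "Blocked"},
    "Places":  {"name"},
}
print(refine_classes(class_attrs, attr_partitions))
# [{'Roads', 'Streets'}, {'Places'}] - Roads and Streets look equivalent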
Figure 3 illustrates the process of finding
equivalent ontology classes and properties
according to the algorithm shown above. First,
we pool all sets of classes, object properties,
and datatype properties from both local and
server ontologies. Then we initialise partitions
of classes, object properties, and data type
properties to form equivalence partitions.
After that we first refine the partitions of
classes and object properties based on their
structure. If too many equivalent classes and
object properties are found, we refine the
partitions based on names of classes and object
properties. The refinement process will be
continued until the partitions become stable.
Finally we obtain the final stable partitions,
which will not change and are considered to be
equivalent ontology classes and properties.
The main advantage of the partition
refinement algorithm is that it finds matching
ontology classes and properties based on their
structures. Unlike the G-algorithm, which
heavily relies on a string-based similarity
measure, the partition refinement algorithm
makes full use of the structures of the ontology
being mapped. Thus, it allows translation of
instances among different ontologies. Further,
the partition refinement algorithm can deal
with a recursive structure efficiently while the
G-algorithm cannot.
3. A case study
Connecticut has faced many significant storms
in the recent past. For example, Tropical Storm Irene in August 2011 and the Nor'easter in October 2011 caused widespread damage and left a record number of residents (almost one million) without electricity, heat, or reliable supplies of water for up to 9–12 days. Irene downed
approximately 1–2 percent of the State’s
trees. Damage from both storms was estimated
to be $750 million to $1 billion (McGee et al.
2012).
The significant impact of these storms has
served as a wake-up call to Connecticut. After
the storms, the State Governor created a Two
Storm Panel to review the preparedness,
response and recovery efforts during the
Irene and Nor’easter storms. Based on the
final report of the Two Storm Panel (http://
www.governor.ct.gov/malloy/lib/malloy/two_
storm_panel_final_report.pdf), two of the
important factors preventing a fast response
to the two disasters were (1) the inefficient
collaboration among municipalities, state resources, and electric utility service providers, and (2) the inadequate communication between labour and management in all utilities.

Figure 3. The process of finding equivalent ontology classes and properties through partition refinement.
inadequate collaboration and ineffective com-
munication is the failure to share updated data/
information. The outage maps provided during
the two storms did not provide local details.
Questions as to which streets were blocked,
what poles and wires were down, where the
power was on and where it was off were
consistent complaints during the two storms.
Although the utility companies provided
information on line breaks based on their
existing grid system, they relied on consumers
to identify exactly where the resulting
problems were. However, there was no system to allow consumers to share that information with them. In addition, based on the final
report, some text messaging services offered
by utilities directed individuals to shelters that
had been closed or moved. The text infor-
mation was not updated in a timely fashion.
In this case study, we intend to develop a
prototype based on the aforementioned frame-
work to overcome some of the problems met
during the two storms in Connecticut.
We anticipate that the prototype will provide
useful tools to allow users to create and update
data/information over the Internet, thus they
can provide timely, accurate, and up-to-date
information for better disaster response. The
system should facilitate real-time sharing of
the existing legacy data and the new updated
information among different organisations,
such as town responders, utility companies,
medical departments, and police departments.
Thus they can work together and coordinate
seamlessly in the event of a disaster. The
contents of the implemented prototype should
also be machine-readable and computers
should be able to understand the content of
the databases; thus the required information
for disaster response can be readily retrieved
from multiple data sources.
The implemented prototype is accessible from the website: http://boyang.cs.uwm.edu:8080/newHaven/. The prototype uses
several layers of map data for New Haven,
CT, and its surrounding areas. The map layers
include streets, roads, places, and city
boundaries. Both street and road features are
line geometries though street geometries are
mostly shorter than road geometries and street
features have more detailed attributes than
road features. The places are point geometries
for locations, such as schools, bridges, and
railroad stations.
For our experiments, we created only a few
ontology classes to demonstrate the feasibility
of the prototype. Figure 4 illustrates the
ontology classes created for the prototype.
The ontology class Line is the parent ontology
class of Streets and Roads while Point is the
parent ontology class of Places. The ontology
classes are used to overcome the semantic
heterogeneity problem in the original data sets.
For example, the ontology Streets is used to
overcome the semantic heterogeneity problem
in the original two Street data files – one file
called ‘path’ and another file called ‘lane’ for a
narrow road. In the future, additional classes
can be added to the hierarchy without any
change to the implementation.
The Streets ontology class is mapped to the
street feature type in the back-end web
services, while the Roads and Places ontology
classes are mapped to the road and place
feature types, respectively. The properties of
the ontology classes are mapped to the
attributes of the feature types. For example,
the following is the mapping between the
properties of the Road ontology class and the
nh:road feature type in OGC web services:
Attributes of the road feature type : Properties of the Roads ontology class

'OBJECTID' : 'id'
'FULLNAME' : 'name'
'RTTYP' : 'type'
'Shape_Leng' : 'length'
'Blocked' : 'isBlocked'

Figure 4. Ontology classes created for the prototype.
Similar mappings are defined for values of
some attributes to provide more readable query
results. For example, below is the mapping
between the values of the attribute ‘RTTYP’ in
the original data source and the values of the
property ‘type’ in the Roads ontology class:
Values of the attribute 'RTTYP' : Values of the property 'type' in the ontology class

'C' : 'County'
'I' : 'Interstate'
'M' : 'Common name'
'O' : 'Other'
'S' : 'State recognised'
'U' : 'US'
By using ontology classes, the
implemented prototype can allow different
stakeholder groups, such as local authorities,
relief specialists, nongovernmental agencies,
disaster responders, and the local residents, to
understand the vocabulary of the original data.
Thus it can deliver more meaningful spatial
information, which would be more useful in
the disaster response.
In addition to the datatype properties
mapped from web feature attributes, each
ontology class includes object properties such
as DWITHIN to allow users to specify spatial
queries in the form of ontology constraints. For
example, the following ontology query
retrieves streets near any high school:
select ?s ?p where
  ?s rdf:type streets .
  ?p rdf:type places .
  ?p nh:category ?c .
  ?c == High School .
  ?s DWITHIN ?p
where the statement select ?s ?p specifies
the variables ?s and ?p as the queried ontology
instances. The constraints ?s rdf:type streets
and ?p rdf:type places are triples of the form
subject predicate object, which specify that
the variable ?s must be assigned instances of
the streets type and ?p must be assigned the
places type. The constraint ?p nh:category ?c
creates a new variable ?c, which is set to be
equal to High School in the constraint ?c == High School. The predicate nh:category is
based on the ontology property category of the
Places class and nh: is a namespace prefix to
distinguish the property from other types of
predicates. Finally, the constraint ?s DWI-
THIN ?p says that the retrieved streets ?s must
be near the retrieved places ?p. The predicate
DWITHIN corresponds to an ontology prop-
erty that is not mapped to attributes since it
relates two spatial objects by the distance
between their geometries. In fact, there is no
explicit representation of the relationship since
it would be too inefficient to pre-compute and
store the relationships while only a small
fraction of them will be needed.
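Evaluating DWITHIN lazily is straightforward with a geometry library. A minimal sketch with Shapely, using invented coordinates and an assumed distance threshold in map units:

# Sketch: decide 'near' on demand instead of pre-computing all pairs.
from shapely.geometry import LineString, Point

THRESHOLD = 100.0  # vicinity threshold in map units (an assumption)

street = LineString([(0, 0), (250, 0)])
place = Point(120, 60)

# distance() is the minimum distance between the two geometries
print(street.distance(place) <= THRESHOLD)  # True: street is near place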
An additional benefit of the ontology-
based query is that users do not need to always
specify the types of the queried objects. For
example, the following query returns any
spatial objects by the name of Lawrence:
select ?s where
  ?s nh:name ?name .
  ?name ~ Lawrence*
where ~ is a predicate for the 'LIKE'
relationship to match names such as Lawrence
St. The three ontology classes we defined all
have the property ‘name’. The above query is
in fact translated into three separate WFS
requests to retrieve features of the Web feature
types: streets, roads, and places. Each of the
WFS requests includes a filter with a different
name attribute: FENAME for streets, FULL-
NAME for roads, and NAME for places.
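A minimal sketch of this fan-out, assuming a hypothetical endpoint and the per-type name attributes just listed; each request carries a standard OGC PropertyIsLike filter:

# Sketch: translate one ontology name query into three WFS requests.
import requests

WFS_URL = "http://example.org/geoserver/wfs"  # assumed endpoint
NAME_ATTR = {"streets": "FENAME", "roads": "FULLNAME", "places": "NAME"}

def name_filter(attr, pattern):
    return ('<ogc:Filter xmlns:ogc="http://www.opengis.net/ogc">'
            '<ogc:PropertyIsLike wildCard="*" singleChar="." escapeChar="!">'
            f'<ogc:PropertyName>{attr}</ogc:PropertyName>'
            f'<ogc:Literal>{pattern}</ogc:Literal>'
            '</ogc:PropertyIsLike></ogc:Filter>')

for ftype, attr in NAME_ATTR.items():
    params = {"service": "WFS", "version": "1.1.0",
              "request": "GetFeature", "typeName": f"nh:{ftype}",
              "filter": name_filter(attr, "Lawrence*")}
    print(ftype, requests.get(WFS_URL, params=params).status_code)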
The prototype interface is a dynamic
HTML page embedded with about a thousand
lines of JavaScript code that use a geospatial
JavaScript library, OpenLayers, to query/
update/render geospatial features through
WFS/WMS services. The Web service back-
end is an instance of a GeoServer program
running on a Linux workstation that provides
WFS/WMS services for several types of
geospatial features used in our experiments.
As shown in Figure 5, the client interface
includes an interactive map with several
control buttons and a form for ontology-
based queries.
The buttons ‘Sel streets’ and ‘Sel roads’
allow users to select streets or road features on
the map using the mouse. After having
selected some features, the users can click on
the buttons ‘Block streets’ or ‘Block roads’ to
save the selected streets or roads as being
blocked by the hurricane event. If the users
find out that they have made a mistake, they
can change the ‘blocked’ attributes to
‘unblocked’ status by selecting the ‘blocked’
streets or roads and clicking on the ‘Unblock
streets’ or ‘Unblock roads’ buttons.
Other than directly selecting streets or
roads, users can also click on the ‘Sel places’
button to enable a control to select point
features such as schools and bridges from the
map. After some places have been selected, the
interface client will display the attribute values
of the selected places below the map along
with two hyperlinks that allow users to select
the streets and roads near the selected places.
Figure 6 shows two selected places in blue
circles with their attributes being shown below
the map. The results indicate that one of the
places is a railroad station and the other place
is a high school. Users can click on the
dynamically generated hyperlinks ‘nearby
streets’ or ‘nearby roads’ to obtain line features
within a certain vicinity of the selected place.
For example, if a user clicks the hyperlink
‘nearby streets’ for the ‘New Haven Station’,
the resulting map will include the selected
streets, which are stored in a layer of vector
features that are displayed in a different colour
than the underlying street map layer (Figure 7).
Note that the displayed feature attributes
are transformed through ontology classes from
the original attributes stored in the data source.
Figure 5. The client interface of the implemented prototype.
Figure 6. Attributes and hyperlinks of the two selected places.
Figure 7. Results of clicking the hyperlink ‘nearby streets’ for the ‘New Haven Station’.
For example, the attribute for representing a
blocked street is called ‘Blocked’ while the
similar attribute for a blocked road feature is
called ‘Obstructed’. We create a mapping from
these attribute names to an ontology property
so that they are all accessed as the ‘isBlocked’
property.
The client interface also includes a textual
query form to allow users to select features
using the ontology-based query statements.
The textual query can be used to select features
using more fine-grained criteria that are more
difficult to express using the graphic interface.
For example, we may want to find out the
elementary schools and streets near the road
‘Connecticut Tpke’. We can write this query
using a SPARQL-like query statement to select
features of the ‘streets’, ‘roads’, and ‘places’
types so that the ‘category’ of the places is
‘Elementary school’, the road name is ‘Con-
necticut Tpke’ and the streets, the roads, and
the places are near each other (i.e., their
distances are within a certain threshold).
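Written out in the SPARQL-like syntax used earlier, one possible (purely illustrative) rendering of this query is shown below; the prototype's actual grammar may differ in detail:

select ?s ?r ?p where
  ?s rdf:type streets .
  ?r rdf:type roads .
  ?p rdf:type places .
  ?p nh:category ?c .
  ?c == Elementary school .
  ?r nh:name ?n .
  ?n == Connecticut Tpke .
  ?s DWITHIN ?r .
  ?p DWITHIN ?r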
The implemented prototype not only is
able to dynamically update new sources of
information but also can automatically
integrate heterogeneous information at the
semantic level. It can, therefore, be used by
users including disaster responders and local
residents to report information about events
and damage in their neighbourhood and update
the related spatial information over the Web.
Users of the prototype need only to understand
the definitions for general concepts of streets,
roads, and places to be able to query and
update the spatial information, and they don’t
need to understand the underlying definitions
of the spatial data, which may be different
across regions or across data sources. As long
as the definitions of the original spatial data are
properly mapped to the ontology-based defi-
nitions, ontology-based queries can be under-
stood by the prototype.
The implemented prototype is capable of
turning people with smartphones or Internet access into a large human sensor network for fast updating of the existing
databases for disaster response. Using a mobile
client, a citizen can report disaster events such
as the failure of a bridge (for example,
Q Bridge) by simply touching a smartphone
screen. As shown in Figure 8, the system can automatically highlight the affected streets and roads that cross a failed bridge.

Figure 8. The affected roads across a failed bridge.
4. Discussion
The implemented case study in this paper
provides only a client interface for Internet
users. However, the VGI framework proposed
in this study can be implemented as a mobile
VGI system for smartphone users by modify-
ing the input control so it is suitable for touch
screens. The results show that the developed
prototype can allow users to add new spatial
information and update existing spatial infor-
mation over the Internet. Because the proto-
type was developed using ontology semantic
web technologies, it generates structured
information. The generated structured infor-
mation is machine-readable and can be
automatically processed by computers. The
information derived from the system can be
easily shared and used by different appli-
cations. Although the results of the prototype
show some promise towards an interoperable
online VGI system by using Geospatial
Semantic Web technologies for disaster
response, there are many issues such as
multilingual interoperability, fault/error hand-
ling and security issues still needing further
research to implement a fully workable system
for real-world applications. Here we address
four important issues in the following paragraphs.
The first issue is performance. In fact, the
performance issue of OGC web services has
been recognised in the geospatial community
for a while and different approaches have been
suggested (e.g. Zhao et al. 2008). Although the
prototype only involves data from one town in
Connecticut, New Haven, some layers
contain several thousand geospatial features.
For example, the street layer includes
almost four thousand geospatial features.
More geospatial features will be involved if
we implement the prototype for the whole
Connecticut state. Even for the New Haven
data sets, the system is slow to update the street
layer data. This is because the WFSs need to
transport the text-based GML data through the
network. When GML-coded geospatial data
are transported, all of the markup elements that
describe spatial and non-spatial features,
geometries, and spatial reference systems of
the data are also transported to the recipient.
This is important for data interoperability,
because the GML-coded data could be saved
and used by any other client-side applications
that can read GML data. However, this also
greatly slows down the performance of the
system. Compared with some binary GIS data
formats, the size of GML data files is large.
Large file sizes may hinder the use of GML
files as a means of data transport over the
Internet. Although solutions such as using
compression or sending the GML data to the
client in stages or progressively have been
proposed in the literature (Peng & Zhang
2004), issues still exist for improving the
performance.
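Of the mitigations mentioned, compression is the easiest to sketch: standard HTTP content negotiation lets a client ask the server to gzip the GML stream. The endpoint below is an assumption, and whether the server honours the request depends on its configuration:

# Sketch: request a gzip-compressed GML response over HTTP.
import requests

resp = requests.get(
    "http://example.org/geoserver/wfs",  # assumed endpoint
    params={"service": "WFS", "version": "1.1.0",
            "request": "GetFeature", "typeName": "nh:street"},
    headers={"Accept-Encoding": "gzip"},  # server may compress the body
)
# requests decompresses transparently; only the wire size shrinks.
print(len(resp.content))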
In addition, currently the DL (Description
Logic)-based ontology knowledge base is
unlikely to efficiently handle large geographi-
cal knowledge bases. While DL is well suited
for representation of structured or semi-
structured attribute information, things
become complicated if a ‘non-abstract’ space
is considered. For example, for knowledge
bases it would be necessary to be able to
compute spatial relationships from the geome-
tries of objects. However, the DL-based
system is unlikely to efficiently compute
such spatial relationships. A huge ABox
containing all of the topological relationships
of the spatial data will be quite prohibitive for
querying or updating the spatial data without
using appropriate index structures or optimis-
ation techniques. There are many index
structures or optimisation techniques in the
computer literature. Which one is the best
index structure or optimisation technique for
handling spatial data? This is an important
issue that needs further study in the future.
The second issue is data quality.
Compared with the traditional authoritative
geographic information, VGI lacks map
specifications, mechanisms or procedures to
assure quality. The main mechanism to assure
the quality of VGI is called Linus’s Law,
which states that ‘given enough eyes, all bugs
are shallow’ (Raymond 1999). If one volunteer
gives wrong information, others will be
expected to edit and correct the wrong
information. The success of this mechanism
depends on others who check the contribution.
Although the idea that ‘given enough eyeballs,
all bugs are shallow’ (Raymond 1999) was
proposed a decade ago, the reliability and credibility of VGI data are still debated.
Because a VGI system does not enforce the strict data quality standards that traditional authoritative sources apply before publication, data quality is one of the major issues faced by VGI. Several studies
have recognised the concerns in the literature
(e.g. Elwood et al. 2012; Ho & Rajabifard
2010; Flanagin & Metzger 2008) and several
approaches have been adopted to overcome the
limitations of the VGI data quality (e.g. Bishr
& Mantelas 2008; Zielstra et al. 2013;
Senaratne et al. 2013). In the implemented
prototype we developed an algorithm to first
evaluate the quality of the user’s input
information in the context of the existing
data before the system accepts it. In future research, we may use
more elaborate methods for handling error
data, such as the quality filter method adopted
for Wikipedia. We may also use some
mechanisms to allow the related data publish-
ers under various departments to check the
volunteers’ input data and make sure that the
quality of the data conforms to the high
standards set forth by the original data
publishers. In addition, we also may develop
algorithms to automatically monitor and
correct errors. For example, address spelling
errors may be automatically recognised and
fixed by using approaches similar to those
employed by the geocoding functions in GIS
systems.
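For instance, here is a minimal sketch of such a correction step, matching a reported street name against a gazetteer with simple string similarity; the names and cutoff are invented:

# Sketch: flag and fix a likely street-name spelling error.
import difflib

KNOWN_STREETS = ["Lawrence St", "Whitney Ave", "Chapel St", "College St"]

def correct_street(name, cutoff=0.8):
    match = difflib.get_close_matches(name, KNOWN_STREETS, n=1, cutoff=cutoff)
    return match[0] if match else name

print(correct_street("Lawrnce St"))  # -> 'Lawrence St'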
The third issue is how to create high-
quality ontologies. Currently, ontologies are
typically built by a small number of people, in
most cases researchers, using the existing
ontology tools and editors such as Protégé.
These ontology tools and editors supporting
ontological modelling have improved over the
last few years and many functions are
available now, such as ontology consistency
checking, import of existing ontologies, and
visualisation of ontologies. Even so, building ontologies manually has proven to be a very difficult and error-prone task and is becoming the bottleneck of knowledge-acquisition processes. For instance, it is unrealistic for non-
domain-experts to use these tools to build
high-quality ontologies. Although transform-
ation algorithms have been proposed by Zhang
et al. (2008) to automatically transform the
existing UML to OWL so as to avoid errors
and provide a cost-efficient method for the
development of high-quality ontologies, there
are many issues yet to be resolved due to the
differences between UML and OWL.
The fourth issue is the credibility of VGI.
VGI has brought many benefits to society.
However, it has also brought some challenges.
One of them is credibility. Traditional data
have been created by a small trusted group,
including mainly professionals and academics.
The creation of VGI is completely open.
Anyone can create VGI as long as he/she
can access the Internet or a mobile phone.
The contents published without control and
verification imply that users may post wrong
data or information. For example, a terrorist
may post false information about accessibility
of some important roads to create panic. How
to trust the authenticity and quality of the data
or information that has been published by
users? How to check the validity of contents
before confidently using the data/information?
How to provide a trusted VGI? Can traditional identity-based security give a VGI user confidence about the contents without personally knowing who created them? How to warn users
against fraudulent, malicious, or incorrect
information? These are challenging questions
faced by a VGI system. Although techniques
have been proposed in the literature to create
trusted VGI (Havlik et al. 2013), they are still
in the initial research stage and further studies
are needed to answer the aforementioned
questions. In general, multiple sources of VGI
that ensure vast information availability also
make the verification of the credibility of
information very complex.
5. Conclusion
Considering that Internet users are quick to report disaster information, in this paper we
propose a framework of an interoperable
online VGI system based on Geospatial
Semantic Web technologies for timely updat-
ing, aggregating, disseminating, and sharing
information for disaster response. The import-
ant advantage of the proposed system is that it
can deliver more meaningful spatial infor-
mation to different stakeholder groups such as
local authorities, disaster responders, and local
residents by using ontology. People equipped
with smartphones or Internet access can use
the system to rapidly update the existing heterogeneous databases. The updated information can direct first
disaster responders to find the affected
locations (e.g., blocked roads, injured or
missing residents, damaged buildings,
or closed hospitals) and available relief or
services. In general, the proposed system
allows data to be updated not only by
professionals and official agencies, but also
by volunteers, who may not have much GIS
skill or knowledge. Although some geographic
information provided by such a system may
not have the same high quality as that from
traditional authoritative sources, it still will be
helpful to assist recovery workers because it
allows new information, which may reflect the
real disaster situation, to be incorporated and
distributed in nearly real time.
Because the system ensures the spatial
information is interoperable at the semantic
level, it can be reused across a wide variety of
stakeholders involved in relief efforts.
Although there are still many issues waiting
for further studies to fully implement such a
workable system, the implemented prototype
shows great promise for updating and integrat-
ing the existing legacy spatial databases over
the Internet.
References
Bahree, M. (2008) Citizen voices. Forbes Magazine. WWW document. Available from: http://www.forbes.com/forbes/2008/1208/083.html (Accessed 1 October 2013).
Berikov, V., & Litvinenko, A. (2003) Methods for statistical data analysis with decision trees. WWW document. Available from: http://math.nsc.ru/AP/datamine/eng/context.pdf (Accessed 1 October 2013).
Bishr, M., & Mantelas, L. (2008) A trust and reputation model for filtering and classifying knowledge about urban growth. GeoJournal, vol. 72, no. 3–4, pp. 229–237.
Bishr, Y. (1998) Overcoming the semantic and other barriers to GIS interoperability. International Journal of Geographical Information Science, vol. 12, no. 4, pp. 299–314.
Castano, S., Ferrara, A., & Montanelli, S. (2006) Matching ontologies in open networked systems: techniques and applications. Data Semantics V, Lecture Notes in Computer Science, Springer, Berlin, pp. 25–63.
Cova, T.J. (1999) GIS in emergency management. In: Longley, P.A., Goodchild, M.F., Maguire, D.J., & Rhind, D.W., eds. Geographical Information Systems: Principles, Techniques, Applications, and Management, John Wiley & Sons, New York, NY, pp. 845–858.
Craglia, M., Goodchild, M.F., Annoni, A., Camera, G., Gould, M., Kuhn, W., Mark, D., Masser, I., Maguire, D., Liang, S., & Parsons, E. (2008) Next-generation digital earth: a position paper from the Vespucci Initiative for the Advancement of Geographic Information Science. International Journal of Spatial Data Infrastructures Research, vol. 3, pp. 146–167.
De Longueville, B., Smith, R.S., & Luraschi, G. (2009) 'OMG, from here, I can see the flames!': a use case of mining location based social networks to acquire spatio-temporal data on forest fires. Proceedings of the 2009 International Workshop on Location Based Social Networks, Seattle, WA, November 2009, ACM, pp. 73–80.
De Longueville, B., Luraschi, G., Smits, P., Peedell,S., & De Groeve, T. (2010a) Citizens as sensorsfor natural hazards: a VGI integration work-flow. Geomatica, vol. 64, no. 1, pp. 41–59.
De Longueville, B., Annoni, A., Schade, S.,Ostlaender, N., &Whitmore, C. (2010b) Digitalearth’s nervous system for crisis events: real-time sensor web enablement of volunteeredgeographic information. International Journalof Digital Earth, vol. 3, no. 3, pp. 242–259.
De Rubeis, V., Sbarra, P., & Tosi, P. (2009) Webbased macroseismic survey: fast informationexchange and elaboration of seismic intensityeffects in Italy, Proceeding of the 6thInternational ISCRAM Conference Gothenburg,Sweden,May 2009.WWWdocument.Availablefrom: http://www.iscram.org/ISCRAM2009/papers/ (Accessed 1 October 2013).
Donkervoort, S., Dolan, S.M., Beckwith, M.,Northrup, T.P., & Sozer, A. (2008) Enhancingaccurate data collection in mass fatality kinshipidentifications: lessons learned from HurricaneKatrina. Forensic Science International: Gen-etics, vol. 2, no. 4, pp. 354–362.
Elwood, S., Goodchild, M., & Sui, D. (2012)Researching volunteered geographic infor-mation: spatial data, geographic research, andnew social practice. Annals of the Association ofAmerican Geographers, vol. 102, no. 3,pp. 571–590.
Flanagin, A.J., & Metzger, M.J. (2008) Thecredibility of volunteered geographic infor-mation. GeoJournal, vol. 72, no. 3–4,pp. 137–148.
Gao, H., Wang, X., Barbier, G., & Liu, H. (2011)Promoting coordination for disaster relief – fromcrowdsourcing to coordination, ProceedingSBP’11 Proceedings of the 4th InternationalConference on Social Computing, Behavioral-CulturalModeling and Prediction, Lecture Notesin Computer Science (LNCS) 6589 Springer,Berlin, pp. 197–204.
Goodchild, M.F. (2007a) Citizens as sensors: the world of volunteered geography. GeoJournal, vol. 69, no. 4, pp. 211–221.
Goodchild, M.F. (2007b) Citizens as voluntary sensors: spatial data infrastructure in the world of Web 2.0. International Journal of Spatial Data Infrastructures Research, vol. 2, pp. 24–32.
Goodchild, M.F., & Glennon, J.A. (2010) Crowdsourcing geographic information for disaster response: a research frontier. International Journal of Digital Earth, vol. 3, no. 3, pp. 231–241.
Havlik, D., Egly, M., Huber, H., Kutschera, P., Falgenhauer, M., & Cizek, M. (2013) Robust and trusted crowd-sourcing and crowd-tasking in the future internet. Environmental Software Systems. Fostering Information Sharing, IFIP Advances in Information and Communication Technology, vol. 413, pp. 164–176.
Hess, G.N., Iochpe, C., & Castano, S. (2006) An algorithm and implementation for geoontologies integration, WWW document. Available from: http://www.geoinfo.info/geoinfo2006/papers/p46.pdf (Accessed 1 October 2013).
Ho, S., & Rajabifard, A. (2010) Learning from the crowd: the role of volunteered geographic information in realising a spatially enabled society, WWW document. Available from: http://www.gsdi.org/gsdiconf/gsdi12/papers/45.pdf (Accessed 1 October 2013).
Hughes, A.L., & Palen, L. (2009) Twitter adoption and use in mass convergence and emergency events, Proceedings of the 6th International ISCRAM Conference, Gothenburg, Sweden, May 2009. WWW document. Available from: http://www.iscram.org/ISCRAM2009/papers/ (Accessed 1 October 2013).
Jones, C.E., Mount, N.J., & Weber, P. (2012) The rise of the GIS volunteer. Transactions in GIS, vol. 16, no. 4, pp. 431–434.
Kawasaki, A., Berman, M.L., & Guan, W. (2013) The growing role of web-based geospatial technology in disaster response and support. Disasters, vol. 37, no. 2, pp. 201–221.
Laituri, M., & Kodrich, K. (2008) On line disaster response community: people as sensors of high magnitude disasters using internet GIS. Sensors, vol. 8, no. 5, pp. 3037–3055.
McGee, J., Carozza, P., Edelstein, T., Hoffman, L., Jackson, S., McGrath, R., & Osten, C. (2012) Report of the two storm panel, WWW document. Available from: http://www.governor.ct.gov/malloy/lib/malloy/two_storm_panel_final_report.pdf (Accessed 8 July 2014).
Morrow, N., Mock, N., Papendieck, A., & Kocmich, N. (2011) Independent evaluation of the Ushahidi Haiti project, Tech. Rep., The UHP Independent Evaluation Team. WWW document. Available from: http://ggs684.pbworks.com/w/file/fetch/60819963/1282.pdf (Accessed 1 October 2013).
Palen, L., Vieweg, S., Liu, S.B., & Hughes, A.L. (2009) Crisis in a networked world: features of computer-mediated communication in the April 16, 2007, Virginia Tech event. Social Science Computer Review, vol. 27, no. 4, pp. 467–480.
Peng, Z.-R., & Zhang, C. (2004) The roles of geography markup language (GML), scalable vector graphics (SVG), and Web feature service (WFS) specifications in the development of internet geographic information systems (GIS). Journal of Geographical Systems, vol. 6, no. 2, pp. 95–116.
Poser, K., & Dransch, D. (2010) Volunteered geographic information for disaster management with application to rapid flood damage estimation. Geomatica, vol. 64, no. 1, pp. 89–98.
Rajabifard, A., Binns, A., & Williamson, I. (2006) Virtual Australia: developing an enabling platform to improve opportunities in the spatial information industry. Journal of Spatial Science, vol. 51, no. 1, pp. 63–78.
Raymond, E. (1999) The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, O’Reilly Media, Sebastopol, CA.
Senaratne, H., Bröring, A., & Schreck, T. (2013) Using reverse viewshed analysis to assess the location correctness of visually generated VGI. Transactions in GIS, vol. 17, no. 3, pp. 369–386.
Stefanidis, A., Crooks, A., & Radzikowski, J. (2013) Harvesting ambient geospatial information from social media feeds. GeoJournal, vol. 78, no. 2, pp. 319–338.
U.S. House Select Bipartisan Committee to Investigate the Preparation for and Response to Hurricanes Katrina and Rita (2006) A Failure of Initiative: The Final Report of the Select Bipartisan Committee to Investigate the Preparation for and Response to Hurricanes Katrina and Rita, Washington, DC, February 15.
White House (2006) The Federal Response to Hurricane Katrina: Lessons Learned, Executive Office of the President, Washington, DC, WWW document. Available from: http://www.whitehouse.gov/reports/Katrina-lessons-learned.pdf (Accessed 1 October 2013).
Zerger, A., & Smith, D.I. (2003) Impediments to using GIS for real-time disaster decision support. Computers, Environment and Urban Systems, vol. 27, no. 2, pp. 123–141.
Zhang, C., & Li, W. (2005) The roles of web feature and web map services in real-time geospatial data sharing for time-critical applications. Cartography and Geographic Information Science, vol. 32, no. 4, pp. 269–283.
Zhang, C., Li, W., Day, M., & Peng, Z.-R. (2003) GML-based interoperable geographical databases. Cartography, vol. 32, no. 2, pp. 1–16.
Zhang, C., Li, W., & Zhao, T. (2007) Geospatial data sharing based on geospatial semantic web technologies. Journal of Spatial Science, vol. 52, no. 2, pp. 35–49.
Zhang, C., Peng, Z.-R., Zhao, T., & Li, W. (2008) Transformation of transportation data models from unified modeling language to web ontology language. Transportation Research Record: Journal of the Transportation Research Board, vol. 2064, no. 1, pp. 81–89.
Zhang, C., Zhao, T., & Li, W. (2010a) The framework of a geospatial semantic web-based spatial decision support system for digital earth. International Journal of Digital Earth, vol. 3, no. 2, pp. 111–134.
Zhang, C., Zhao, T., & Li, W. (2010b) Automatic search of geospatial features for disaster and emergency management. International Journal of Applied Earth Observation and Geoinformation, vol. 12, no. 6, pp. 409–418.
Zhang, C., Zhao, T., Li, W., & Osleeb, J. (2010c) Towards logic-based geospatial feature discovery and integration using web feature service and geospatial semantic web. International Journal of Geographical Information Science, vol. 24, no. 6, pp. 903–923.
Zhao, T., Zhang, C., Wei, M., & Peng, Z.-R. (2008) Ontology-based geospatial data query and integration, Lecture Notes in Computer Science (LNCS): Geographic Information Science, 5266, Springer, Berlin, pp. 370–392.
Zielstra, D., Hochmair, H.H., & Neis, P. (2013) Assessing the effect of data imports on the completeness of OpenStreetMap – a United States case study. Transactions in GIS, vol. 17, no. 3, pp. 315–334.
Zook, M., Graham, M., Shelton, T., & Gorman, S. (2010) Volunteered geographic information and crowdsourcing disaster relief: a case study of the Haitian earthquake. World Medical & Health Policy, vol. 2, no. 2, pp. 7–33. doi:10.2202/1948-4682.1069.