
GIS Footnotes on Political Theater

16 Apr, 2007 By: Kenneth Wong

Reexamining the Role of Metadata in the Wake of Google's Katrina Maps Controversy


Shortly after Hurricane Katrina crashed into the Gulf Coast in August 2005, the do-no-evil search engine Google was doing a lot of good. Forbes wrote that Google was "coming to the rescue of victims of Hurricane Katrina," adding, "Ad hoc communities of Internet users are using mapping technologies from Google to track storm damage, analyze aerial photos and try to make sense of what little information is available" ("Google is Everywhere," September 2, 2005). BBC News remarked, "Using maps and images provided through Google Maps and Google Earth, a number of hackers are building detailed models of the flood-damaged areas" ("Net Offers Map Help after the Flood," September 2, 2005). Subsequently, the National Geospatial-Intelligence Agency, an amalgam of defense and intelligence units, honored Google with the Hurricane Katrina Recognition Award "for their direct support during the Katrina disaster."

But a year and a half later, the same poster child of corporate altruism took a direct hit in a political firestorm. Representative Brad Miller (D-NC), chairman of a subcommittee of the U.S. House Committee on Science and Technology, accused the previously lauded Google of "doing the victims of Hurricane Katrina a great injustice by airbrushing history." He was upset that Google had replaced the post-Katrina aerial maps with those predating the storm. The company quickly did damage control, restoring the most up-to-date dataset, which reflected the changes in the devastated region. John Hanke, director of Google Maps and Google Earth, explained that the reversion to pre-Katrina maps was not an attempt "to rewrite history" but a decision motivated by the desire to provide "aerial photography of much higher resolution."

Google might have avoided the latest debacle if its free mapping applications clearly showed when the aerial images were acquired, identifying the views of an undamaged Gulf Coast as imagery taken before the storm. But the absence of this critical metadata gave some people the impression that Google was presenting the images without storm damage as the current state of the region. Public debate on this topic, now spreading from the mainstream media to the more specialized blogs, once again shines the spotlight on the importance of metadata.

Not Much Has Changed
In September 2005, a month after Katrina's landfall, Directions Magazine published an editorial titled, "Geospatial Technology Offers Katrina Response Much, Delivers Some." Commenting on it, Doug Nebert from the FGDC (Federal Geographic Data Committee) wrote, "What we see evident in the Katrina preparation and response is the lack of full engagement by agencies and the private sector in the act of publishing proper descriptions (metadata) in their own clearinghouses and applications that can be harvested into geodata.gov. … The value of metadata, however, is amplified in times of emergency … in the appraisal of their currentness and fitness-for-use in real-time response."

This month, Directions Magazine's editors revived the issue on All Points Blog. "Will the Google/Katrina Affair Finally Push Metadata on [to Google Maps/Google Earth]?" they asked. James Fee, a GIS developer, carried the discussion over to his own blog Spatially Adjusted. Echoing the previous comment from FGDC's Nebert, Fee observed, "If we are to perform [any] sort of analysis using these free online tools, we'll need to know more about the metadata and sources (and accuracy)."

Kat Malinowska from Google offered what seems like a partial solution. "Currently this feature [for identifying when an image is captured] doesn't exist," she said, "but the DigitalGlobe layer in Google Earth could be used to help get acquisition date information about some of the imagery."

Data Provenance
David G. Smith, a licensed land surveyor and engineer and VP and director of Geospatial Information Technology for Synergist Technology Group, remarked, "We now have these powerful tools like Google Earth, Google Maps, Yahoo Maps and Microsoft Live Local/Virtual Earth. Now, with Google letting people easily create their own mashups [for example, through the new My Maps feature in Google Maps], even relatively inexperienced users can build a personal map, host it and post it somewhere on the Web. They let us access geospatial data in so many ways, but also open doors to new problems. Basically we don't know what we're looking at: How old is the aerial photo? Is it accurate? Has it been modified in any way? Is it being refreshed in a timely fashion?"

Geoff Zeiss, director of technology for Autodesk Geospatial, pointed out, "With CAD data, there's a standard template of metadata. When professional architects or engineers create a drawing, typically, they have to identify who did the drawing, specify the precision and sign it off. All that stuff is pretty well standardized. But if you look at most maps, they'll have a scale on it, but won't typically have something that tells you how accurate or precise that is."

Zeiss explained that the ISO (International Organization for Standardization) metadata standard, ISO 19115, requires two dates: the reference date, which is when the data was captured, and the date the data was published. This metadata might have provided an important clue to the timeliness of the photos.
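To make the point concrete, here is a minimal sketch of how an imagery record carrying both dates could be used to flag pre-storm photos. The class and field names are simplified stand-ins for illustration, not the actual ISO 19115 element names, and the dates shown are invented.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ImageryMetadata:
    """Illustrative metadata record for one aerial image tile."""
    title: str
    reference_date: date     # when the imagery was captured
    publication_date: date   # when the dataset was released

    def predates(self, event_date: date) -> bool:
        """True if the imagery was captured before a given event."""
        return self.reference_date < event_date

# Flag a (hypothetical) tile captured before Katrina's landfall on August 29, 2005.
tile = ImageryMetadata("Gulf Coast tile", date(2005, 3, 14), date(2005, 6, 1))
print(tile.predates(date(2005, 8, 29)))  # True -> pre-storm imagery
```

With even this much metadata exposed in the viewer, a user could tell at a glance whether an undamaged scene was simply an older photo.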

The Transition from FGDC to ISO
Zeiss said that standardizing metadata is a relatively new effort. The public sector in the United States currently uses the FGDC Metadata Standard, which came into existence in the 1990s. The FGDC originally adopted the CSDGM (Content Standard for Digital Geospatial Metadata) in 1994 and revised it in 1998. The committee's online literature states, "According to Executive Order 12906 all federal agencies are ordered to use this standard to document geospatial data created as of January 1995. The standard is often referred to as the FGDC Metadata Standard and has been implemented beyond the federal level with state and local governments adopting the metadata standard as well."

"It's only within the last two or three years that this standard has been widely adopted. Most U.S. geospatial vendors support that," said Zeiss. "But that's now in the process of being supplanted by a another standard, the U.S. National Profile of the ISO 19115 from ISO." At the ISO, the United States is represented by the American National Standards Institute; therefore, sooner or later, the FGDC standard (conceived between 1992 and 1994) will have to comply with the ISO 19115 (conceived in 2003).

Zeiss pointed out that, if we compare the FGDC standard with ISO 19115, the former contains more input fields. "In the ISO standard," he said, "there are mandatory fields and conditional fields. The mandatory fields are required information like title, reference date, language and point of contact. The conditional fields include quality, spatial data organization and spatial reference system. The second thing -- which is probably the one that's more relevant to this discussion [about Google's Katrina images] -- is data quality … If you're dealing with property lines, for instance, you really need to know whether those lines are accurate to within plus or minus 20 meters, one meter or one centimeter. Because there could be legal implications … data quality is a conditional field, which means it has to be there if applicable."
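The distinction Zeiss draws can be sketched as a simple validation check. The field names and the rule that parcel data triggers the data-quality requirement are hypothetical simplifications for illustration; the real ISO 19115 schema is far richer.

```python
# Simplified, hypothetical field names -- not the actual ISO 19115 elements.
MANDATORY_FIELDS = {"title", "reference_date", "language", "point_of_contact"}

def validate_metadata(record: dict, has_property_lines: bool = False) -> list:
    """Return a list of problems found in a metadata record."""
    problems = [f"missing mandatory field: {f}"
                for f in sorted(MANDATORY_FIELDS) if f not in record]
    # Conditional fields must be present when they apply; here we assume
    # property-line (parcel) data makes the data_quality field applicable.
    if has_property_lines and "data_quality" not in record:
        problems.append("data_quality is required for property-line data")
    return problems

record = {"title": "Parcel map", "reference_date": "2005-03-14",
          "language": "en", "point_of_contact": "County GIS office"}
print(validate_metadata(record, has_property_lines=True))
# -> ['data_quality is required for property-line data']
```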

On another front, the Open Geospatial Consortium, an international industry consortium of 339 companies, is working to develop and promulgate a set of interface specifications designed to make GIS tools interoperable. In 2001, the consortium adopted ISO 19115 as part of its Abstract Specification.

Composite Headaches
Synergist Technology Group's Smith raised a question that has no clear answer, at least for the present. "The issue usually comes up with composite datasets," he said. "For example, the Environmental Protection Agency has certain facilities data. They get this from a wide variety of facilities, regulated under different programs -- for permit issuing, water quality, toxic release inventory and others. So if they aggregate several sources into a single file, it might have different lat/long readings and multiple points, all associated with the same facility but collected at different times, for different purposes and by a variety of different data collection methods -- GPS, photo interpolation and so on. You might not even be able to make sense of those by looking at the consolidated metadata. You might also need to separately see the records of each component."

He suggests that, in some instances, metadata elements may need to be embedded at record level because file-level metadata can be, at best, only an aggregated rollup. "If [the data providers] know certain points and polygons are collected by GPS and others by photo interpolation, they can assign codes to each, so even though there are all those points of varying accuracy, the user can filter them accordingly. So if you click on one point, you know exactly how accurate that is, compared to the others. That's record-level metadata treatment. At file level, the provider might state the accuracy of all the points and polygons within a certain range -- for instance, by mining the file-level metadata, they can say 80% of the points were collected by GPS, and their accuracy ranges from five to 100 meters."
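A rough sketch of the two levels Smith describes might look like the following. The facility names, accuracy values and the 30-meter filter threshold are invented for illustration; only the idea of per-record codes versus an aggregate rollup comes from his description.

```python
# Hypothetical record-level metadata: each point carries its own
# collection method and horizontal accuracy in meters.
points = [
    {"facility": "Plant A", "method": "GPS", "accuracy_m": 5},
    {"facility": "Plant B", "method": "GPS", "accuracy_m": 30},
    {"facility": "Plant C", "method": "photo interpolation", "accuracy_m": 100},
    {"facility": "Plant D", "method": "GPS", "accuracy_m": 12},
    {"facility": "Plant E", "method": "photo interpolation", "accuracy_m": 80},
]

# Record-level use: keep only the points precise enough for the task at hand.
precise = [p for p in points if p["accuracy_m"] <= 30]

# File-level rollup: the kind of aggregate statement Smith describes.
gps = [p for p in points if p["method"] == "GPS"]
gps_accuracies = [p["accuracy_m"] for p in gps]
print(f"{100 * len(gps) / len(points):.0f}% of points collected by GPS; "
      f"accuracy ranges from {min(gps_accuracies)} to {max(gps_accuracies)} meters")
```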

As a contractor involved in GIS work for the Katrina cleanup, Smith praises Google for putting up the infrastructure necessary to deliver free maps over the Web, a service of significant value to first responders and relief agencies. Though he can't fully fathom Google's reason for switching its general-use site back to pre-Katrina datasets, he thinks an official congressional investigation is somewhat heavy-handed, especially since Google has always maintained a dedicated site with Katrina photos.

Smith is glad the whole affair serves to underscore the need for metadata to be supplied with datasets. However, he summed up the political hullabaloo as "much ado about nothing," a Shakespearean farce about mistrust and perceived betrayal that led to public blunders.


About the Author: Kenneth Wong

