In a presentation at a recent conference (GIS Evolutions, Sydney, 27-28 March), Rasmussen said his organisation’s mission is to organise the world’s information. His own brief is a sub-mission of this: to organise the world’s information geographically.
To that end, Google has arrangements with data suppliers all over the world. The quality of the available data, of course, varies enormously. Google Earth contains satellite imagery of the whole world at roughly 10-metre resolution, with most urban areas imaged at one-metre resolution by high-resolution commercial satellite providers (usually DigitalGlobe).
Google Maps is more problematic. So far, street maps of North America, most of Europe, Australia and New Zealand (MDS), Hong Kong and Singapore (TeleAtlas) and South Africa (AND) are available, but most of Asia is still a white blob. Parts of India have been mapped.
Geo-coding is also available in some geographies but not others. This is important because it is what makes it possible to link a street address to a point on the map.
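For readers unfamiliar with the term, geocoding turns a free-text address into coordinates. A minimal sketch in Python, using Google’s present-day public Geocoding web service for illustration (the endpoint and response shape shown here are today’s public API, not necessarily the service Rasmussen described):

```python
import json
import urllib.parse
import urllib.request

# Google's public Geocoding web service endpoint (requires an API key).
GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"

def geocode(address: str, api_key: str) -> tuple[float, float]:
    """Return (latitude, longitude) for a street address."""
    query = urllib.parse.urlencode({"address": address, "key": api_key})
    with urllib.request.urlopen(f"{GEOCODE_URL}?{query}") as response:
        data = json.load(response)
    if data["status"] != "OK":
        raise ValueError(f"Geocoding failed: {data['status']}")
    location = data["results"][0]["geometry"]["location"]
    return location["lat"], location["lng"]

# Illustrative usage (the address is an example, the key a placeholder):
# lat, lng = geocode("48 Pirrama Road, Pyrmont NSW", "YOUR_API_KEY")
```

Where no geocoder covers a region, that lookup simply has no answer, which is why availability matters as much as map imagery.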
However, as people try to build these maps into business processes, inaccuracy becomes a significant irritant.
Map updates follow the data suppliers’ normal release cycles, which means that changes can take months or years to be reflected in the map on the website.
Rasmussen says he favours a ‘Wiki’-type approach, where users correct errors themselves. He says that Google has already established a map error reporting function, but when this information is sent back to suppliers, it often ‘disappears down a black hole’.
The problem is not that data suppliers are lazy or incompetent. Rather, Rasmussen says, they prefer to check error reports themselves before including them in their data products.
This level of editorial control makes fast response to errors or changes impossible.
A ‘Wiki’ approach (after the Wikipedia online encyclopaedia) gives editorial control to users. The Wikipedia experience is that this approach makes errors and fraudulent behaviour possible.
On the other hand, it is also self-correcting. Malicious or erroneous entries in Wikipedia are usually spotted quickly.
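In engineering terms, the two models differ mainly in when verification happens: the supplier chain verifies before publishing, while the wiki model publishes first and relies on later review to revert bad edits. A minimal Python sketch of that contrast (all class and method names here are illustrative, not any actual Google or supplier system):

```python
from dataclasses import dataclass

@dataclass
class ErrorReport:
    """A user-submitted map correction (illustrative only)."""
    address: str
    description: str

class SupplierPipeline:
    """Verify first, publish later: corrections wait on supplier review."""
    def __init__(self) -> None:
        self.queue: list[ErrorReport] = []
        self.live: list[ErrorReport] = []

    def submit(self, report: ErrorReport) -> None:
        self.queue.append(report)   # may sit here for months, or forever

    def supplier_review(self) -> None:
        self.live.extend(self.queue)  # published only after verification
        self.queue.clear()

class WikiPipeline:
    """Publish first, verify later: corrections go live immediately."""
    def __init__(self) -> None:
        self.live: list[ErrorReport] = []

    def submit(self, report: ErrorReport) -> None:
        self.live.append(report)     # visible to everyone at once

    def rollback(self, report: ErrorReport) -> None:
        self.live.remove(report)     # community spots and reverts bad edits
```

The wiki model trades a guarantee of correctness at publication for speed, and depends on the rollback path being exercised quickly, which is exactly the Wikipedia experience described above.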
‘You may not get it right all the time by relying on users’, says Rasmussen. ‘The question is, is this way better than relying on the existing data supply chain? My guess is that users will, on average, give you a better result.’
But, Rasmussen agrees, this leaves open some pretty formidable questions of liability.