Thursday, December 18, 2008

We don't need a data czar



One of the proposals that pops up repeatedly in discussions about cyberinfrastructure, geoinformatics, etc., is the purported need for someone or some body to oversee quality control of any and all data being made available online. [Photo, right: posters at the AGU Fall Meeting]

Deborah McGuinness from RPI offered ideas at yesterday's AGU informatics session about measuring or assigning levels of trust to data that are increasingly being aggregated through interoperability mechanisms like the ones we are developing at AZGS for the USGS and state geological surveys.

Deborah suggests developing metrics that track how often, where, and by whom data are downloaded or used, somewhat like a citation index. A reference that is cited often, and in influential publications, is viewed as reliable and important. Similarly, data that are repeatedly used may be viewed as more trustworthy. Her model would allow users to establish their own trust metrics and apply them from the desktop.
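To make the idea concrete, here is a minimal sketch of how usage signals might be folded into a single, user-weighted trust score. This is not Deborah's actual model or anything we have built; every field name and weight below is a hypothetical placeholder.

```python
from dataclasses import dataclass


@dataclass
class UsageRecord:
    """Hypothetical usage statistics for one dataset (all fields are assumptions)."""
    downloads: int            # how many times the dataset was retrieved
    distinct_users: int       # how many different people or agencies used it
    citing_publications: int  # publications known to reference the dataset


def trust_score(record: UsageRecord, weights: dict) -> float:
    """Combine usage signals into one score using caller-supplied weights.

    The weights let each user express their own notion of trust,
    e.g. counting citations more heavily than raw downloads.
    """
    return (
        weights.get("downloads", 0.0) * record.downloads
        + weights.get("distinct_users", 0.0) * record.distinct_users
        + weights.get("citations", 0.0) * record.citing_publications
    )


# Example: a user who trusts citations far more than download counts.
my_weights = {"downloads": 0.1, "distinct_users": 0.5, "citations": 2.0}
record = UsageRecord(downloads=420, distinct_users=37, citing_publications=6)
print(trust_score(record, my_weights))  # 42 + 18.5 + 12 = 72.5
```

The point of the sketch is the last argument: the weights come from the user, not from any central authority, so two researchers looking at the same usage record can legitimately arrive at different trust scores.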

This is much more of a market-driven approach, in keeping with the established scientific process in which researchers decide what data to accept or reject based on a variety of factors. There is no data czar today who approves what's published or put into databases, and I don't see the need for one simply because of Web aggregation and access. But the increasing ease of pulling data together from anywhere in the world tempts us to accept everything that shows up on the screen in front of us. Deborah is starting to put some substance into a solution that some of us have promoted for most of this decade. It's one more piece in building a geoinformatics system.
