Title: | Reprogramming The Museum |
Authors: | Luke Dearnley |
Type: | Paper |
Publication: | MW2011: Museums and the Web 2011 |
Year: | 2011 |
Abstract: | Many museums have been busy building APIs since the Brooklyn Museum set the example, and in 2009 the Powerhouse made the decision to offer much of its collection data as a downloadable data dump. The decision was primarily a pragmatic one, as the Museum wanted to test the waters and examine how the data might best be used before dedicating resources to developing a full API. The release of data was timed to coincide with a number of 'government 2.0' activities occurring in Australia, and it was hoped that by providing downloadable data, it would be used in government 2.0 mashup contests. In the end, several individuals in the contests did make use of the data - the most notable being Flip Explorer, a dynamic catalogue created for in-gallery use by two developers who were not known to the Powerhouse. However, the best usage was by the Digital NZ project, which incorporated it into their search - thus providing access across many New Zealand websites.
Drawing on the in-sector work of Bernstein (2008), Ellis (2008, 2009), and Morgan (2009), as well as that of Digital NZ and similar APIs provided by Flickr, Open Library, OCLC WorldCat and others, the Powerhouse has reconsidered the appropriate functionality of a museum API.
As the Powerhouse had already deployed several techniques to add value and meaning to the collection data - 'folksonomic' tags drawn from both the Powerhouse site and the Flickr Commons; Flickr comments and geolocation data (and more) about objects; semantic information mined using Thomson Reuters' Calais tagging API; and an extensive log of the relationships between search terms and objects - it was decided that this structured data should also be made available.
This is in stark contrast to the original data dump that simply contained a subset of descriptive fields about collection objects. Thus the enhanced API allows access into our collection from the starting point of themes, collections, categories, subjects and object popularity rather than just objects.
Drawing on the work of Chan (2010), API use and its resulting output can be tracked in some detail, allowing the Museum to monitor the return on investment of providing the API service.
This paper looks at how the API launched in 2010 improves, both quantitatively and qualitatively, upon the access provided by the data dump, and at how tracking methods were built into the API to ensure that the project can best adapt to the needs of API developers. It provides details on the lessons learned and suggests best practices for API development in the cultural sector.
|
Link: | https://www.museumsandtheweb.com/mw2011/papers/reprogramming_the_museum |