In writing for Ariadne, I have had occasion to report on a number of personal ‘firsts’, including my first trip to the southern hemisphere and my first taste of Finnish tar-flavoured ice cream. The meeting reported here proved no exception, with my first flight in an aeroplane sans jet engine, and my first time snowed in at an airport. Writing for Ariadne is, as you can see, a never-ending round of thrills and spills!
The event for which I - and others - travelled from the UK to snowy Luxembourg was organised by the Telematics for Libraries Programme of the European Commission’s Directorate General XIII, and was intended to explore some of the ways in which evolving metadata initiatives such as the Dublin Core might benefit projects funded through the Telematics Programme and, by extension, other areas of the Commission’s work. More than 50 people attended the two-day workshop, travelling from eleven member states and drawn from a wide range of projects funded through the Telematics Programme and other initiatives.
The first day was dominated by a detailed metadata tutorial given by Lorcan Dempsey and Andy Powell of UKOLN. This tutorial, which is available on the UKOLN Web site along with the other presentations, served to introduce all members of the audience to a number of the issues underpinning current work on metadata, and offered an excellent synopsis of the many initiatives currently underway in this field.
Following a night of pizza, mainland European beers, and digesting of the metadata glossary provided by UKOLN the day before, participants were ready to begin the second day with a series of short project presentations. Building upon the broad foundations laid the day before, these examined detailed issues affecting the use and application of metadata to specific case studies.
Firstly, Juha Hakala from Helsinki University Library discussed the use of Dublin Core within the Nordic Metadata Project. This important project, funded by NORDINFO, has been running since 1996 and is exploring the use of Dublin Core, both as a native metadata format and as a translation from the various Nordic flavours of MARC, in enhancing the accessibility of online information throughout Scandinavia. The provision of various tools is an important part of this effort, and these tools are being used to create metadata for insertion into Web pages across Scandinavia.
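Embedding Dublin Core in a Web page is normally done with HTML META tags. The short Python sketch below is my own illustration of the idea, not one of the project’s actual tools, and the record it renders is invented for the example:

```python
# Illustrative only: a minimal generator for Dublin Core META tags of
# the kind that metadata-creation tools insert into Web pages.

def dc_meta_tags(record):
    """Render a dict of unqualified Dublin Core elements as HTML META
    tags. Values may be a single string, or a list of strings for
    repeated elements such as 'creator'."""
    tags = []
    for element, values in record.items():
        if isinstance(values, str):
            values = [values]
        for value in values:
            tags.append(f'<meta name="DC.{element}" content="{value}">')
    return "\n".join(tags)

# A hypothetical record describing a Web page.
record = {
    "title": "Nordic Metadata Project report",
    "creator": ["Juha Hakala"],
    "language": "fi",
}
print(dc_meta_tags(record))
```

The resulting tags sit in the page’s HEAD, where harvesters and indexing robots can find them without disturbing what the reader sees.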
The second presentation, from Mike Stapleton of the UK’s System Simulation Limited, broadened the discussion beyond Dublin Core with a review of experiences working with the Z39.50 protocol in heritage projects such as Aquarelle, CIMI’s Project CHIO, and SCRAN. He demonstrated the way in which a protocol such as Z39.50 was essential in enabling the creation of an almost seamless virtual database from a collection of spatially disparate and technologically varied resources.
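The ‘virtual database’ idea can be caricatured in a few lines of Python: a single query is broadcast to several independent targets and the result sets merged, so the searcher sees one logical collection. Everything below - the targets, the records, and the matching - is invented for illustration; a real client would, of course, speak the Z39.50 protocol to remote servers rather than call local functions:

```python
# A toy illustration of the 'virtual database' behind Z39.50-style
# broadcast searching: fan one query out, merge the answers.

def search_all(query, targets):
    """Broadcast a query to each named target and merge the results,
    labelling each hit with the collection it came from."""
    merged = []
    for name, search in targets.items():
        for record in search(query):
            merged.append({"source": name, "record": record})
    return merged

# Two stand-in 'targets', each just a simple in-memory search.
museum = ["Amphora, Attic, c. 530 BC", "Bronze helmet"]
library = ["Catalogue of Attic vases"]
targets = {
    "museum": lambda q: [r for r in museum if q.lower() in r.lower()],
    "library": lambda q: [r for r in library if q.lower() in r.lower()],
}

results = search_all("attic", targets)
```

The searcher sees matches from both collections in a single result list, without needing to know that they live in different places or different systems.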
The final paper, by the author, outlined the work of the Arts & Humanities Data Service in exploring the application of Dublin Core to cross-domain resource discovery problems.
With the papers out of the way, participants were divided into three groups in order to discuss issues related to the creation and maintenance of metadata, mechanisms for effective retrieval of metadata, and the possibilities for harvesting metadata, once created.

The group addressing metadata creation explored a wide range of issues, including perceived inhibitors to widespread adoption of something like the Dublin Core, and a number of suggestions as to how current work might be taken further. Perceived inhibitors, perhaps unsurprisingly, included the apparent lack of stability of Dublin Core at present, a feeling that - despite (or, perhaps, because of!) the explosion of Dublin Core-related articles on the Web - clearly definitive and authoritative information was lacking, and a worry about the lack of widespread deployment beyond experimental or pilot projects. The group called for a clearly managed development path for the Dublin Core, and highlighted the importance to potential users of being able to identify individuals or organisations considered in some way to be ‘responsible’ for the process.

Finally, this group asked the European Commission to call for the formulation of a clear organisational structure for the Dublin Core effort, including unambiguous and easily identifiable procedures for participating in the process; a clear European entry point or focus for discussion; and a statement of the current status and projected development path of the Dublin Core effort as a whole. It was clearly felt that, whilst a wealth of (occasionally contradictory) literature was available online and the discussion process was, in fact, open to all, the development of Dublin Core appeared chaotic and inaccessible to those not currently involved. These perceptions harm the process as a whole, and undoubtedly make it difficult for potential implementers beyond the experimental community to justify the costs of involvement.
The group discussing retrieval addressed the potential for allying Dublin Core-like metadata descriptions of resources with the Z39.50 protocol in order to facilitate cross-domain resource discovery amongst a number of physically remote and structurally different meta-databases. They identified the potential for mapping Dublin Core to existing Z39.50 applications using, for example, bib-1, and also discussed the proposal for a new Dublin Core profile. There was a strong feeling here, as elsewhere, that further controlled testing of the use of Dublin Core was required in order to gauge its true potential.
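By way of illustration, such a mapping would pair each Dublin Core element with a bib-1 Use attribute number. The sketch below is a deliberately partial example of the idea only, not the profile the group proposed, and the particular attribute choices are mine:

```python
# An illustrative (and deliberately partial) mapping from unqualified
# Dublin Core elements to Z39.50 bib-1 Use attribute numbers. These
# pairings are plausible examples, not an agreed profile.
DC_TO_BIB1_USE = {
    "title": 4,         # bib-1 Use attribute: Title
    "creator": 1003,    # Author
    "subject": 21,      # Subject heading
    "publisher": 1018,  # Publisher
    "date": 31,         # Date of publication
}

def bib1_use_attribute(dc_element, default=1016):
    """Return a bib-1 Use attribute for a Dublin Core element,
    falling back to 1016 ('Any') where no mapping is defined."""
    return DC_TO_BIB1_USE.get(dc_element, default)
```

A searcher asking for a Dublin Core ‘creator’ would thus be translated into a bib-1 Author search against an existing Z39.50 target, without that target needing to know anything about Dublin Core.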
The harvesting group was able to draw upon experiences from pioneering results such as those from the Nordic Metadata Project’s Web index, and identified a number of issues associated with the manner in which different disciplinary groups recorded information and controlled their vocabulary. As with other discussions throughout the workshop, there was a feeling that further experimental work was required to test the usefulness of Dublin Core under controlled conditions. Use of Dublin Core’s optional SCHEME sub-element was discussed as one of the ways in which cross-disciplinary metadata might be made more useful for searching, enabling creators to use terminology from identifiable controlled vocabularies such as the Library of Congress’ Subject Headings rather than simply entering an uncontrolled string of words. For such a system to work, it was suggested that a registry of acceptable SCHEMEs would be required, as well as mechanisms for encouraging their use. A separate issue identified by the harvesting group was that of levels of metadata such as, for example, a single record for the Louvre and all its collections, related in some fashion to individual records for each work of art within the museum.
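How such a registry of SCHEMEs might operate when creating subject metadata can be sketched as follows; the registry contents and the helper itself are hypothetical, intended only to make the group’s suggestion concrete:

```python
# A hypothetical registry of recognised SCHEMEs; 'LCSH' stands for the
# Library of Congress Subject Headings, 'DDC' and 'UDC' for the Dewey
# and Universal Decimal Classifications.
RECOGNISED_SCHEMES = {"LCSH", "DDC", "UDC"}

def dc_subject_tag(term, scheme=None):
    """Render a DC.subject META tag, recording the controlled
    vocabulary used via the SCHEME attribute where one is given.
    Unregistered SCHEMEs are rejected rather than passed through."""
    if scheme is not None:
        if scheme not in RECOGNISED_SCHEMES:
            raise ValueError(f"Unregistered SCHEME: {scheme}")
        return f'<meta name="DC.subject" scheme="{scheme}" content="{term}">'
    return f'<meta name="DC.subject" content="{term}">'
```

A searcher encountering scheme="LCSH" can then treat the term as a genuine Library of Congress heading, rather than as an uncontrolled string of words.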
In the closing discussion, a strong case appeared to emerge both for testing the value of creating and harvesting Dublin Core metadata under controlled conditions, and for a clear European focus to further work on resource discovery using the Dublin Core and other methods. The former closely parallels evolving plans within the Consortium for the Computer Interchange of Museum Information (CIMI) for a test bed project to explore many of the implicit assumptions behind use of resource discovery techniques, and both have been taken on board by staff at the Libraries Unit for possible exploration in the future.
The summary of meeting resolutions, to be made available on the Web site in the near future, will outline possible future directions for these and the meeting’s other outcomes.
Thanks to all at DGXIII who were involved in the smooth running of this workshop; and especially to Makx Dekkers and Pat Manson, both for inviting me to speak in the first place and for their hospitality whilst I was there.
Author details
Paul Miller
Archaeology Data Service
King’s Manor, YORK YO1 2EP
tel: 01904 433954
fax: 01904 433939