NYC’s Museum of Modern Art owns sixteen Piet Mondrian oil paintings, the most comprehensive collection in North America. Building from this collection, conservator Cynthia Albertson and research scientist Ana Martins embarked on a project impressive in both breadth and consequence: an in-depth technical examination of all sixteen Mondrians. Every examined painting is now fully documented, and the primary preservation goal is returning each work to the state the artist intended. Insights from the technical examination will also inform treatment of the paint instability found in the artist’s later paintings.
The initial scope of the project focused on nondestructive analysis of MoMA’s sixteen oil paintings. As more questions arose, other collections and museum conservators were called upon to provide information on their Mondrians; over 200 other paintings were examined over the course of the project. Of special importance to the conservators were untreated Mondrians, which could help answer questions about the artist’s original varnish choices and artist-modified frames. Mondrian’s habit of reworking areas of his own paintings was also under scrutiny, as it called into question whether newer paint on a canvas was his or a restorer’s overpaint. Fortunately, the MoMA research team had a variety of technologies at their disposal: X-radiography, Reflectance Transformation Imaging, X-ray fluorescence (XRF) spectroscopy, and XRF mapping were all referenced in the presentation.
The lecture discussed three paintings to illustrate how preservation issues were addressed and how the research process revealed information on unstable paint layers in later Mondrian paintings: Tableau no. 2 / Composition no. V (1914), Composition with Color Planes 5 (1917), and Composition C (1920). For brevity, only the analysis of the earliest painting is summarized here.
Tableau no. 2 / Composition no. V (1914) was mounted on an overly thick stretcher, had been wax-lined, was covered in a thick, glossy varnish, and carried corrosion products along its tacking edges. Research identified the corrosion as accretions from a gold frame the artist added for an exhibition. The painting has some obviously reworked areas, distinguished by dramatic variations in texture, and a painted-over signature; these changes are visible in the technical analysis. The same research that identified the source of the corrosion also established that Mondrian reworked and re-signed the painting for that exhibition. XRF mapping of the pigments, fillers, and additives provided an early baseline of materials against which later works could be compared, since the paint here does not exhibit the cracking seen in later examples. Ultimately, the restorer’s varnish was removed to return the paint surface to its intended matte appearance, and the wax lining was mechanically separated from the canvas with a specially produced Teflon spatula. Composition no. V (1914) was then strip-lined and restretched onto a more appropriately sized stretcher.
Information gleaned from the technical examination of all three paintings makes it possible to construct a timeline of Mondrian’s working methods. His technique evolved from an overall matte surface to variations in varnish glossiness between painted areas. XRF analysis demonstrated a shift in his palette, with the addition of vermilion, cobalt, and cadmium red in his later works. XRF also revealed that the artist drew registration lines in zinc white and lead white, both mixed together and used on their own. Knowing the chemical composition of Mondrian’s paint is vital to understanding the nature of the cracking medium and to identifying techniques to preserve it.
The underpinning of all this research is documentation. This means both accounting for undocumented or poorly documented past restorations and elaborating upon existing references. Many of the MoMA paintings had minimal photographic documentation, which hinders conservators’ ability to identify changes to a work over time. The wealth of information gathered by the conservation and research team remains within the museum’s internal database, but there are plans to expand access to the project’s data. Having already collaborated with many Dutch museums for access to their Mondrian collections, the MoMA team understands how a compiled database of all this research and documentation would be groundbreaking for the conservation and art history fields.
42nd Annual Meeting – Digital Resources & Conservation Interest Session, May 31, "Charting the Digital Landscape of the Conservation Profession" by FAIC
What digital tools and resources do conservators use and create?
Who are the audiences for conservation content?
How can this content be delivered to these groups by digital means?
What kinds of digital tools, resources, and platforms will be needed as the profession continues to grow?
It is with the above questions that “Charting the Digital Landscape of the Conservation Profession,” a project of the Foundation of the American Institute for Conservation (FAIC), interrogates our profession’s origins, its role in this particular technological moment, and its propagation into the future with the aid of technology. As all AIC members have been made aware by the recent mailing, funding from the Mellon, Kress, and Getty Foundations is supporting FAIC in its investigation of the so-called “digital landscape” of the profession. The resulting baseline report on the discipline’s use of digital resources should help us better understand its breadth and complexity and identify areas critical to the community both now and in the future.
This session was the first in a series of planned forums designed both to map the digital landscape of the profession and to contextualize the data gleaned from the recent survey by discussing the tools currently in use and their possible future development. An expert panel was brought together for brief presentations, after which there was a lengthy, free-form discussion amongst all attendees.
Please note: This post will err on the side of length. Although a report on the survey results will be published by FAIC, this interest session, which put so many experienced professionals and stakeholders in dialogue, is unlikely to be published as delivered. Additionally, many attendees voiced concern that the session was scheduled against many other specialty events, preventing stakeholders from attending to hear more about the project or to voice their concerns about the digital future of the discipline.
To those who are interested in the intimate details: Read on!
To those who would prefer to skim: Know that the FAIC’s report is expected in December 2014, and stay tuned for future forums in the “Digital Landscape” series.
And it goes without saying: If you have not yet participated in the survey, now would be a good time. Our research habits are changing. Help Plan the Digital Future of Conservation and Preservation!
1. Introduction
2. Speaker: Ken Hamma (Consultant and Representative of the Mellon Foundation)
3. Speaker: Nancie Ravenel (Conservator at the Shelburne Museum)
4. Speaker: David Bloom (Coordinator of VertNet)
5. Discussion
1. INTRODUCTION
Introducing the session, Eric Pourchot, the FAIC Institutional Advancement Director, began by discussing the project and the initial survey findings. FAIC’s investigation, he said, seeks to identify the critical issues surrounding digital tools and resources, shaping both the questions and the answers concerning urgent needs, target audiences, and content delivery methods.
He outlined the components of the project:
- A review of existing resources
- A survey of creators of digital resources as well as of the end users
- Meetings (and phone interviews) with key stakeholders
- Formulation of recommendations, priorities, and conclusions
Although I balked a bit at all of this business-speak about timelines and budgets and reports and endgames, I was curious about the initial results of the survey, which I did take. Additionally, the survey’s goal of identifying the major ways in which digital resources are created, used, and shared, both now and in the future, gets at interesting problems and questions we should all ask ourselves.
By the date of the presentation, 560 responses to the professionally designed survey had been completed, so, Eric emphasized, the data is still very preliminary. More international participation will be sought before the survey closes and the data is analyzed for accuracy and for various statistical “cross-tabs” by the contracted company.
Of the population queried, two-thirds go online regularly, and one-third logs on daily. When asked to list the sites most consulted, 30% listed CoOL/DistList as their primary resource, 30% listed Google, and 13% named AIC/JAIC. AATA/Getty, CAMEO, CCI, JSTOR, BCIN, NPS, Wikipedia, and AIC Specialty Groups were present in three-fourths of the fill-in responses.
When asked about the success rate of finding information on a given topic, respondents searching for preventive conservation information, environmental guidelines, material suppliers, and disaster planning resources were successful more than half the time. Unsurprisingly, when treatment information was sought, more than half of the users were unsuccessful. Qualifying a search’s lack of “success,” 70% of users cited a lack of information specific to their exact needs, 49% were concerned that the information was not up to date, 43% cited concerns about reliability, and 32% were dismayed by the time it took to find the information.
Eric expressed surprise that an archive of treatments topped the list of enhancements desired by respondents. I do not remember whether this was a fill-in question or what I personally responded, but this result did not strike me as surprising; rather, I see it as in line with the lack of information on treatment procedures, both historic and current, noted in the section of the survey discussed above.
From the list of digital tools used most often, Eric noted the absence of collaborative spaces, such as Basecamp and Dropbox, from among the image and document management tools, but suggested that some respondents may simply have forgotten to list these oft-used programs, as they are not conservation-specific.
Finally, respondents identified the policy issues of most concern to them as obstacles to creating, sharing, and accessing content: Copyright/IP (Getty), institutional/repository policies, time (?), and standards/terminology ranked high. It was unclear at first what was meant by the last of these, but David Bloom’s talk (below) did a good deal to illuminate its importance.
Eric concluded by noting that although a web-survey platform self-selects for respondents with certain habits, sympathies, and concerns (i.e., those who access the internet regularly and seek to use it as a professional tool), the data represents a good range of age and experience. These groups can be correlated with certain responses; for example, 45-65 year-olds are more likely to search for collections information and are more interested in faster internet access and better online communication, while younger stakeholders search more for professional information and jobs.
Again, be reminded that this data is very preliminary. A final report can be expected by December 2014.
2. SPEAKER: Ken Hamma
Ken Hamma then discussed the Mellon Foundation’s efforts in the areas of conservation and digitization, the goals and directions of these efforts, and their relationship to larger movements in the Digital Humanities.
An immensely appropriate choice to speak at this session, Ken Hamma is at once a consultant to the Yale Center for British Art, the Office of Digital Assets and Infrastructure (ODAI) at Yale, ResearchSpace, and the Museums and Art Conservation Program at the Andrew W. Mellon Foundation. He is a former executive director for Digital Policy and Initiatives at the J. Paul Getty Trust and has also served as a member of the Steering Committee of the Coalition for Networked Information (CNI), of the Research Libraries Group (RLG) Programs Council of OCLC, and of the At-Large Advisory Committee of the Internet Corporation for Assigned Names and Numbers (ICANN).
Hamma’s advocacy for the use of digital tools in conservation documentation began in 2003, when a meeting was convened between a select number of institutional heads and conservators to feel out expectations of the Mellon in these matters: how best it should invest in the digitization of treatment records, and how, whether, and to what audiences these should be accessible. This initial meeting was followed by the Issues in Conservation Documentation series, with meetings in New York City in 2006 and in London in 2007. As the respective directors and heads of conservation of each host institution were present, this represented a recognition of the importance of institutional policy to what are fundamentally institutional records. Outcomes of these meetings were mixed, with European institutions being more comfortable with an open-access approach, perhaps owing to the national status of their museums and the corresponding legal requirements for access. This was exemplified in the response of the National Gallery, London: the Raphael Project includes full scans of all conservation dossiers. Even the Gallery’s own staff were surprised this became public! (More pilot projects resulting from this Mellon initiative are listed here.)
In America, the Mellon began considering support for digitization efforts and for moving conservation documentation online: in 2009 it funded the design phase of ConservationSpace.org to begin imagining online, inclusive, and sustainable routes for sharing. Merv Richard of the National Gallery led 100 conservators in developing its structure, priorities, and breadth, presenting a discussion session at AIC’s 41st Annual Meeting in Indianapolis.
Important observations are emerging from the study of potential models, notably the similarities in the ways the National Park Service, libraries, natural science collections, and others handle networked information. Although there were necessarily different emphases on workflow and information, there were also large intersections.
In the meantime, CoOL shows its age. Its long history has necessitated several migrations across hosts and models: from Stanford Libraries to AIC, and from Gopher to WAIS to the web. It is still, however, based on a library-catalogue model, in which everything is represented to the user as a hypertext (hypermedia) object. In such a system, only two options are available: follow a link, or send a query to a server. As important as this resource has been for our professional communication and for the development of our discipline, it lacks tools for collaboration over networked content. Having become a legacy resource, it is discontinuous from other infrastructures, such as Wikipedia, HathiTrust, Shared Digital Future, and Google Books, all of which point to a more expansive set of technological opportunities, such as indexing, semantic resource discovery, and linking to related fields.
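To make that contrast concrete, here is a minimal sketch in Python of the difference between the two models. Everything in it (the records, field names, and links) is invented for illustration and does not reflect CoOL’s actual implementation: a catalogue can only match text and return links, while a linked-data graph stores typed relations that can be traversed to discover related resources in other fields.

```python
# Hypothetical records: a flat catalogue can only match keywords in text...
catalogue = {
    "doc1": "Consolidation of flaking zinc white paint",
    "doc2": "Zinc soap formation in aged oil films",
}

def keyword_query(term):
    """The catalogue model: match text, return links. Nothing more."""
    return [doc for doc, text in catalogue.items() if term.lower() in text.lower()]

# ...while a linked-data model stores typed (subject, predicate, object)
# relations that can be traversed to discover related resources.
triples = [
    ("doc1", "treats", "zinc_white"),
    ("doc2", "analyzes", "zinc_white"),
    ("zinc_white", "same_as", "pigment:CI_77947"),        # bridge to another vocabulary
    ("pigment:CI_77947", "studied_in", "chem_paper_42"),  # a resource in another field
]

def related(resource):
    """Semantic discovery: collect every typed link touching a resource."""
    return [(s, p, o) for s, p, o in triples if resource in (s, o)]

print(keyword_query("zinc"))   # ['doc1', 'doc2']
print(related("zinc_white"))   # typed links, including the bridge into chemistry
```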
Our discipline does not exist in a vacuum, and the structure of our online resources should not suggest otherwise. Additionally, we need to be able to identify trustworthy information, and this is not a unique problem: we have to open ourselves up to the solutions that other disciplines have already implemented.
Ken encourages us to think of accessible data as infrastructure, which forces the creator to think about applications of the data. A web platform should be more than just switches and networks! It should support collaborative research, annotation, sharing, and publication, and it should increase our ability to contribute to, extract from, and recombine a harmonized infrastructure that we feel represents us.
Planning for the full extent of our needs and building it is not beyond a shared professional effort, and we will find it to have been worth it.
3. SPEAKER: Nancie Ravenel
Nancie Ravenel, Conservator at the Shelburne Museum, former Chair of Publications and Board Director of Communications, works very hard to create and disseminate information about digital tools and their use to conservators. She is continuously defining the digital cutting-edge, at once “demystifying” conservation through outreach, embodying the essential competencies, and articulating the value of this profession. Her segment of the session provided an overview of key resources she uses as a conservator, noting how the inaccessibility of certain resources (e.g. ARTstor, ILL, and other resources requiring an institutional subscription) changes how she locates and navigates information.
“What does Nancie do in the digital landscape?” Ravenel asked. She makes stuff. She finds stuff. She uses and organizes what she makes and finds. And she shares what she’s learned.
Nancie divided her presentation of each function into four sections:
◦ Key resources she uses as a conservator
◦ Expectations of these resources
◦ What is missing
◦ What remains problematic
In our capacity as makers of stuff, many of us, like Nancie, are experimenting with or already proficient at using Photoshop for image processing and analysis, working with 3D imaging and printing, gleaning information from CT scans, producing video, and generating reports.
Where making stuff is concerned, further development is needed in best practices and standards for creating, processing, and preserving digital assets! We need to pay attention to how assets are created so that they can be easily shared, compared, and preserved. Of great concern to Ravenel is the fact that Adobe’s new licensing model increases the expense of doing this work.
On the frontier of finding stuff, certain resources get more use from researchers like Nancie, perhaps for their ease of use. Ravenel identifies CoOL/CoOL DistList, jurn.org, AATA, JSTOR, Google Scholar/Books/Images/Art Project/Patent, CAMEO, Digital Public Library of America (dp.la), WorldCat, Internet Archive, SIRIS, any number of other art museum collections and databases (such as the Yale University Art Gallery or the Rhode Island Furniture Archive), and other conservation-related websites, such as MuseumPests.net.
The pseudo-faceted search offered by Google Scholar, which collates different versions, pulls from CoOL, and provides links to all, is noted as being a big plus!
There is, however, a good deal of what Nancie terms “grey literature” in our field: material not published in a formal peer-reviewed manner, such as listserv and postprint content, newsletters, blogs, and video. The profusion of places where content lives, the inconsistent terminology, and the inconsistent metadata or keywords (whether read by reference-management software or used to facilitate search) applied to some resources are the most problematic aspects of finding stuff.
Richard McCoy has always insisted to us that “if you can’t ‘google’ it, it doesn’t exist,” and Nancie reiterated a similar concern: if you can’t find it and access it after a reasonable search period, it might as well not exist. As a list of what is harder to find and access, she provided the following areas in need:
• AIC Specialty Group Postprints that are not digitized, that are inconsistently abstracted within AATA, or whose manner of distribution makes access challenging.
• Posts in AIC Specialty Group electronic mailing list archives are difficult to access due to the lack of keyword search (see the sketch following this list).
• Conservation papers within archives often have skeletal finding aids; and information is needed about which archives will take conservation records.
• ARTstor does not provide images of comparative objects that aren’t fine art.
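As an aside, the keyword search that the mailing-list archives lack rests on a well-understood data structure: an inverted index mapping each term to the posts that contain it. The sketch below is hypothetical (the post IDs and texts are invented, and this is not an actual AIC system), but it shows how little machinery basic keyword search requires:

```python
from collections import defaultdict

# Hypothetical archive of list posts, keyed by an invented post ID.
posts = {
    "osg-2013-041": "Filling losses in ebonized wood with tinted wax",
    "osg-2014-007": "Wax fills discolored after exhibition lighting",
}

# Build an inverted index: token -> set of post IDs containing it.
index = defaultdict(set)
for post_id, text in posts.items():
    for token in text.lower().split():
        index[token].add(post_id)

def search(*terms):
    """Return IDs of posts containing all given keywords (AND search)."""
    hits = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*hits) if hits else set()

print(search("wax"))            # both posts
print(search("wax", "fills"))   # {'osg-2014-007'}
```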
Any effort to wrangle these new ways of assembling and mining information needs to consider linking resources, combining resources, employing more faceted search engines, and deploying better options for finding related objects. Research on the changing search habits of everyone from chemists to art historians should help us along the way.
In her capacity as a user and organizer of what she makes and finds, Nancie knows that not every tool works for everyone. However, she highlights Bamboo DiRT, a compendium of digital-humanities research tools; the best of these work and sync across platforms, browsers, and devices, allow for exporting and sharing, and can let you look at your research practices in new and different ways. Practices to be analyzed include note taking, note management, reference management, image and document annotation, image analysis, and time tracking. Tools such as these offer structure for documenting and analyzing workflow; used systematically, they can greatly increase the scientific validity of a project over a merely anecdotal approach. For a large cleaning project, such as that undertaken with the Shelburne carousel horses, this is indispensable.
What is missing or problematic? A digital lab notebook is not ideal around liquids, though it is very well suited to logging details and organizing image captures. Current tools cannot measure the results of treatments computationally. Also missing are good tools for comparing, annotating, and adding metadata to images on mobile devices, as well as for better interoperation between tools.
And after all of this analysis of one’s use of digital tools, how is it best to share what one has learned? The AIC Code of Ethics reminds us that:
“the conservation professional shall contribute to the evolution and growth of the profession…This contribution may be made by such means as continuing development of personal skills and knowledge, sharing of information and experience with colleagues, adding to the profession’s written body of knowledge, and providing and promoting educational opportunities in the field.”
The self-reflexive exercise that Nancie Ravenel modeled in her talk, analyzing one’s personal use of digital tools and how personal needs and goals may reflect and inform those of others, will not only be indispensable to the future development of digital tools that meet this call to share; it contains in itself a call to share. Nancie asks: what do you use to share and collaborate with your colleagues? How might these systems serve as a model for further infrastructure?
Email, listservs, and forums; the AIC Wiki; research blogs, and project wikis enabling collaboration and peer review; document repositories like ResearchGate.net and Academia.edu; shared bibliographies on reference management systems like Zotero.org and Mendeley.com; collaboration and document-sharing software like Basecamp, Google Drive, and Dropbox; and social-media platforms allowing for real-time interaction like Google Hangouts are all good examples of tools finding use now.
Missing or problematic factors in our attempts to share with colleagues include the lack of streamlined ways of finding and sharing treatment histories and images of specific artworks and artifacts; the lack of archives that will accept conservation records from private practices; and the persistent problem of antiquated, often confusing IP legislation.
In addition to sharing information with other conservators, we must also consider our obligation to share with the public. Here, better, more interactive tools for the display of complex information are needed. As media platforms are ever-changing, these tools must be adaptable and must provide some way to evaluate the suitability of the effort to the application.
4. SPEAKER: David Bloom
Described by Eric Pourchot as a “professional museophile,” David Bloom seemed at first a non sequitur in the flow of the event. However, as coordinator of VertNet, an NSF-funded collaborative project making biodiversity data freely available online, he spoke very eloquently about the importance of, and the opportunities offered by, data-sharing and online collaboration. He addressed issues of community engagement in digital projects, interdisciplinary collaboration, and sustaining effort and applicability throughout such projects. As the other short talks also argued, conservation is yet another “data-sharing community” that can learn from the challenges met by other disciplines.
As described by Bloom, VertNet is a scalable, searchable, cloud-hosted, taxa-based network containing millions of records pertaining to vertebrate biodiversity. It has evolved (pun intended) from the first networked-information system, built in 1999, and has grown through various revisions as well as by simple economies of scale, as the addition of new data fields became necessary. It is used by researchers, educators, students, and policy-makers, to name a few. As the network is a compilation of data from multiple institutions, it is maintained for the benefit of the community, and decisions are made with multiple stakeholders under consideration.
Among the considerable technical challenges across all of its iterations, VertNet has struggled to establish cloud-based aggregation, to cache and index, to build search and download infrastructure, and to rein in the associated costs.
Additionally, intellectual property considerations must be mentioned: even though the data is factual (and facts cannot be copyrighted), the data “belongs” to the host institutions, as they are its historical keepers. As a trust, VertNet does not come to own the data directly. This made a distributed network with star-shaped sub-networks necessary, even though it was expensive to maintain, especially for a small institution, requiring many servers with many possible points of failure; once a point failed, it was difficult to locate. At about $200K per year, this was a costly system, and although it was still the best and most secure way to structure the network, it was not as inclusive as it could have been for its expense.
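To see why such a topology is hard to operate, consider a minimal sketch of a federated query fanned out to independent hosts; the node names and failure rate are invented, and this is not VertNet’s actual code. Any single unreachable node silently degrades every search, and pinpointing the failure means checking each node in turn:

```python
import random

# Hypothetical member nodes in a distributed (star-shaped) network.
NODES = ["museum_a", "museum_b", "museum_c", "museum_d"]

def query_node(node, term):
    """Stand-in for a network call; any node may be down at any time."""
    if random.random() < 0.1:          # simulate a 10% outage
        raise ConnectionError(f"{node} unreachable")
    return [f"{node}:{term}:record1"]  # fake hit list

def federated_search(term):
    """Fan out to every node; a failure anywhere leaves a gap in results."""
    results, failures = [], []
    for node in NODES:
        try:
            results.extend(query_node(node, term))
        except ConnectionError as err:
            failures.append(str(err))  # the hard part: noticing and locating this
    return results, failures

hits, problems = federated_search("Sorex")
print(len(hits), "hits;", problems or "all nodes responded")
```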
There are always social challenges to building such “socio-technical networks,” and this is something FAIC is discovering simply by attempting to poll its membership: it doesn’t work if people don’t want to play. What ensue are knowledge gaps, variable reliability, and a lack of resources. More broadly, any entity entrusted with indexing information needs people to get over their fear of sharing in order to learn the benefits and acquire the skills of being connected (think of the social-media privacy controversies). All the knowledge and time needed to meet everyone where they are technologically, and to bring them along in a respectful manner, does not exist in one place, so priorities must be defined for the best investment of time and funds to bring the discipline forward.
Bloom found that disparate data hosts could not communicate with each other: either they had different names for similar data fields, which needed to be reconciled, or they did not maintain consistent terminology, whether globally or internally.
This problem had already been solved in a number of ways. For example, the Darwin Core standard, maintained by Biodiversity Information Standards (TDWG) and modeled on Dublin Core, defines 186 fields, with a standardized vocabulary in as many of them as possible; ABCD is the European standard. These standards are community-ratified and community-maintained so that they cannot be easily or unnecessarily changed. This allows for easy importation by mapping incoming datasets to the Darwin Core standard; all the data is optimized for searchability and discoverability; and publication and citation tools are thereby streamlined.
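As an illustration of how such a mapping might look in practice, here is a minimal sketch of a field-name crosswalk onto Darwin Core. The Darwin Core term names on the right-hand side are real terms from the standard; the local field names, the record, and the mapping logic are invented for illustration:

```python
# Crosswalk from a hypothetical local schema to Darwin Core terms.
# The DwC term names (right side) are real; the local names are invented.
CROSSWALK = {
    "species":     "scientificName",
    "date_coll":   "eventDate",
    "lat":         "decimalLatitude",
    "lon":         "decimalLongitude",
    "museum_code": "institutionCode",
    "cat_no":      "catalogNumber",
}

def to_darwin_core(local_record):
    """Rename known fields to Darwin Core; report anything unmapped."""
    mapped, unmapped = {}, {}
    for field, value in local_record.items():
        if field in CROSSWALK:
            mapped[CROSSWALK[field]] = value
        else:
            unmapped[field] = value  # flagged for curatorial review
    return mapped, unmapped

record = {"species": "Sorex palustris", "date_coll": "1987-06-02",
          "lat": 44.47, "lon": -73.21, "museum_code": "XYZ",
          "cat_no": "12345", "prep_notes": "skull cleaned"}
dwc, leftovers = to_darwin_core(record)
print(dwc)        # keys are now shared, searchable Darwin Core terms
print(leftovers)  # {'prep_notes': ...} still needs a mapping decision
```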
This type of study of the state of the art, necessary when designing new database infrastructure, can serve as a model for the field of conservation. At the foundation of a successful system will be a serious study of what has been done in other fields and of what is most useful to prioritize for this one.
As VertNet is based entirely on voluntary participation, it is critical that participants understand the benefits of submitting their data to the trust. The staff at VertNet make themselves available to help host institutions through any technical difficulties encountered in the data export and import process, and backups are scrupulously maintained throughout the migration. A major benefit to the contributing institution is VertNet’s data-quality checks, which complete, clean up, and streamline fields and then send back a report so that the institution can update its own databases. This brings local data-maintenance standards in line with those of the global database.
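A minimal sketch of what such a quality pass might look like follows; the specific checks, field names, and thresholds are illustrative assumptions, not VertNet’s actual pipeline:

```python
def quality_report(record):
    """Run simple completeness and validity checks; return issues found."""
    issues = []
    for required in ("scientificName", "eventDate", "institutionCode"):
        if not record.get(required):
            issues.append(f"missing required field: {required}")
    lat = record.get("decimalLatitude")
    if lat is not None and not -90 <= lat <= 90:
        issues.append(f"latitude out of range: {lat}")
    date = record.get("eventDate", "")
    if date and len(date.split("-")) != 3:
        issues.append(f"eventDate not in YYYY-MM-DD form: {date}")
    return issues

# The report goes back to the contributing institution so local
# databases can be corrected, keeping them in line with the network.
print(quality_report({"scientificName": "Sorex palustris",
                      "decimalLatitude": 144.47,
                      "eventDate": "1987-06"}))
```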
Additionally, the NSF grant has made possible training workshops, the development of analytical tools, and certain instances of impromptu instruction for clients. This has led to VertNet’s exponential growth without advertising: the repository now represents 176 institutions with 488 collections, and many more are on the waiting list. All these institutions voluntarily submit their data despite historical concerns about “ownership,” because they recognize the benefit of membership for themselves, for researchers, and for the state of the discipline.
Unfortunately, however, this “traditional” (eek) model of procuring NSF (or NEH, IMLS, etc.) funding to cover costs is becoming unsustainable. Support for these services is desperately needed now that their utility is established; the value-added model is difficult, even if VertNet does believe in “free data.”
The associated costs do not change; the database was, however, built as a community tool. So even though the common perception is of an unchanging status quo, the community will have to support the project insofar as it finds the resource valuable and important. A common misconception propagated by recalcitrant host institutions is that “we can do it ourselves.” The fact is, however, that most stewards of data can’t, and even more won’t, turn around and make these records available to the community for revision, maintenance, reference, or analysis.
5. DISCUSSION
The audience then exploded with responses:
Pamela Hatchfield (Head of Objects Conservation at the Museum of Fine Arts, Boston, and AIC Board President) began by reminding those who had been romanced by visions of star-shaped networks that concerns about maintaining privacy are still driven by private funding. Although there is now a conservation module in TMS, and terminological standardization is a frequently cited concern, this data is clearly not intended for the public. Historically, private institutions have maintained the attitude that data should be tightly held. There is a huge revenue stream from images at the MFA, and as such it is difficult even for staff to obtain publication rights.
Terry Drayman-Weisser (Director of Conservation and Technical Research at the Walters Art Museum) pointed out that the Walters walks a middle path by providing a judiciously selected summary of the conservation record associated with an object; not all of the information is published.
Certain institutions, such as the British Museum, have an obligation to make these records public unless the object falls into certain categories. The 2007 Mellon “Issues in Conservation Documentation” meeting at the National Gallery, London, provides a summary of the participants’ approaches to public access at the time of publication.
I did have time to ask a question about the privacy concerns attendant on a biodiversity database: why does there seem to be less hesitancy at the prospect of sharing? In reality, these institutions do clear certain hurdles when deciding what to make publicly available; it turns out that certain data about endangered species should not be shared. Although Bloom did not have time to elaborate, I was curious how this “species privacy” might compare to “object privacy.”
VertNet, it turns out, cannot even find protection under the “sweat of the brow” doctrine, as its factual information cannot be copyrighted. What about those portions of conservation documentation that are markedly drawn from speculation, interpretation, and original research? That information can be copyrighted, as per each institution’s policies, but our culture is changing. “We don’t train students to cite resources properly,” Bloom noted, “and then we wonder why we don’t get cited.”
The time allotted for the session drew to a close, and everyone expressed regret that the conversation could not go on longer and that more people had not been able to attend.
I would personally like to thank FAIC, the speakers, the Mellon, Kress, and Getty Foundations, and all of the participants for their part in a very thought-provoking discussion. I hope and trust that it will continue in future fora.