To promote awareness and a clearer understanding of different pathways into specializations that require particular training, the Emerging Conservation Professionals Network (ECPN) is conducting a series of interviews with conservation professionals in these specialties. We kicked off the series with Chinese and Japanese painting conservation, and now we are focusing on practitioners in AIC’s Electronic Media Group (EMG). These conservators work with time-based media, which can include moving components, performance, light or sound elements, film and video, analog or born-digital materials. We’ve asked our interviewees to share some thoughts about their career paths, which we hope will inspire new conservation professionals and provide valuable insight into these areas of our professional field.
This is the third post from ECPN’s EMG blog series, for which we first interviewed Nick Kaplan and more recently Alex Nichols. For our third interview from the EMG series, we spoke with Yasmin Dessem, currently Head of the Audiovisual Preservation Studio at the UCLA Library, where she serves as technical lead as the library continues to develop its program for the preservation, digitization, and access of its moving image and sound holdings. Previously she managed archive deliverables for new feature releases at Paramount Pictures. She has experience working with a wide variety of moving image and sound formats, as well as pre-film animation devices, silent-era cameras, costumes, and paper collections. Yasmin holds Master’s degrees in Art History and Moving Image Archive Studies from UCLA.
ECPN: Please tell us a little bit about yourself and your current position.
Yasmin Dessem (YD): I oversee the preservation of moving image and recorded sound materials at the UCLA Library’s Preservation Department. For nearly 90 years, the UCLA Library has collected audiovisual materials with content such as home movies, oral histories, and radio broadcasts. Examples are home movies of Susan Sontag’s parents sailing to China in the 1920s and field interviews with Watts residents after the 1965 riots. The audiovisual (AV) preservation unit at the library is relatively young—a dedicated AV preservationist first came on board in 2011. We offer a number of in-house digitization and preservation services and are currently focusing on increasing our capacity and launching a survey.
ECPN: How were you first introduced to conservation, and why did you decide to pursue conservation?
YD: The 1996 re-release of the restored version of Vertigo first made me aware of film restoration and preservation as an actual practice. Later, as I was finishing my Masters in Art History at UCLA, I took a wonderful class on restoration, preservation, and conservation with Professor David A. Scott. The course covered the material care issues and decision-making ethics for a wide breadth of cultural heritage materials. The class struck a deep chord with me, but I was eager to graduate and start working. After graduation, I ended up working in the film industry for about six years. I was tracking down historic stock footage at one job when my mind circled back to the preservation field as I considered how the films were stored and made available. I had entertained the idea of potentially returning to graduate school to study art conservation some day, but around that time the idea of film preservation as a possible career path began to fully materialize for me. As a result, I began exploring potential graduate programs.
ECPN: Of all specializations, what contributed to your decision to pursue electronic media conservation?
YD: My longtime love for film and music intersected with my curiosity for all things historical and technology-related. These were topics that in one form or another always interested me, but I don’t think I had a full grasp on how to combine them meaningfully into a profession. Preservation was the missing key. My exposure to preservation and conservation while studying art history and my later experience working at film studios both helped direct me towards the specialization.
ECPN: What has been your training pathway? Please list any universities, apprenticeships, technical experience, and any related jobs or hobbies.
YD: I pursued my studies in the Moving Image Archive Studies (MIAS) Program at UCLA—which persists today as a Master of Library and Information Science (M.L.I.S.) with a Media Archival Studies specialization. While in the program, I completed internships with Universal Pictures and the Academy of Motion Picture Arts and Sciences, and volunteered at the Hugh Hefner Moving Image Archive at the University of Southern California. Throughout the two-year MIAS program, I also worked as a fellow at the Center for Primary Research and Training program at UCLA Library Special Collections, where I learned archival processing. My experiences weren’t limited to preserving moving image and sound media, but included paper-based collections, costumes, and film technology. After graduating I attended the International Federation of Film Archives (FIAF) Film Restoration Summer School hosted by the Cineteca di Bologna and L’Immagine Ritrovata.
ECPN: Are there any particular skills that you feel are important or unique to your discipline?
YD: Digital preservation will continue to be a key area of expertise that’s needed in museums and archives. Preserving the original source material and digitizing content is not enough. There are more resources than ever for strategies and tools for digital preservation, and it’s important to seek them out. Another valuable skill is developing a level of comfort with handling and understanding the unique characteristics of a wide variety of physical analog formats such as film, videotape, audiotape, and grooved media (LPs, 78s, lacquer discs, wax cylinders, etc.). Similarly, it’s helpful to have a familiarity with playback devices for these obsolete media formats (equipment like open-reel decks or video decks). Lastly, metadata can be an unsung hero in media preservation. Often, we’re the first to see or hear a recording in decades, so capturing metadata around the point of transfer is critical. Metadata standards can be a rabbit hole of complexities, especially when it comes to describing audiovisual media, but understanding their application is an essential skill.
ECPN: What are some of your current projects, research, or interests?
YD: We’re just wrapping up digitization of materials from the Golden State Mutual Life Insurance Company (GSM), an African American-owned and operated insurance firm established in Los Angeles in 1925 in response to discriminatory practices that restricted the ability of African American residents to purchase insurance. GSM operated for 85 years, and their collection is a vibrant resource documenting Los Angeles and the empowerment of a community. We received grants from the National Film Preservation Foundation and the John Randolph Haynes and Dora Haynes Foundation to support this work. The digitized collection is now available on Calisphere. We’ve just started a crowdsourcing project working with former GSM staffers to describe any unidentified content. It’s been one of the most rewarding experiences of my career, hearing everyone’s stories and seeing how much it means to everyone involved to have this collection preserved and made available.
We’ve also been preparing to launch a large-scale survey that will help us gather data on the Library’s audiovisual collections that can be used for long-term planning. Outside of UCLA, we’ve been involved with ongoing work with cultural heritage institutions in Cuba. Last February, I set up equipment and held a workshop on the digitization of radio transcription discs held at the Instituto de Historia de Cuba (IHC) in Havana. I’m heading back there next week to begin a project to transfer IHC’s open reel audio collections.
ECPN: In your opinion, what is an important research area or need in your specialization?
YD: It’s crucial to preserve the expertise related to the operation and repair of playback equipment. Playback equipment will become more and more difficult to source in the future. Engineers, whose entire careers are dedicated to the use and care of this equipment, are some of the best resources for this knowledge. Their knowledge is shared through conversation, YouTube videos, social media, and professional workshops. Documenting the skills required to handle, maintain, calibrate, and service this equipment in a more formalized way and sharing that knowledge widely will ensure that the preservationists can keep their equipment viable for longer.
ECPN: Do you have any advice for prospective emerging conservators who would like to pursue this specialization?
YD: Try everything. Media preservation requires a wide variety of skills from computer coding to soldering decades-old circuit boards. Depending on where your career takes you, it’s good to have at least a passing familiarity with the full range of skills you may need to call upon. Apply for internships or fellowships with organizations, like the National Digital Stewardship Residency. Volunteer at community-based archives that need help getting their collections in order. Join professional organizations, like the Association of Recorded Sound Collections (ARSC) or the Association of Moving Image Archivists. Attend conferences like code4lib, the Preservation and Archiving Special Interest Group (PASIG), or the Digital Asset Symposium (DAS). Network with engineers or preservation professionals to continue to grow your own expertise, but also share your own skills when you can. Collaboration and knowledge-sharing are a fundamental part of the profession.
ECPN: Please share any last thoughts or reflections.
YD: One thing to be aware of, if you’re a woman in the field of audiovisual preservation, is that you may occasionally run into people who are surprised to see a woman working with technology (much less wielding a screwdriver!). This response persists to some degree despite the presence of many successful female professionals in the field. What’s encouraging, however, is seeing the growth of groups like the Women in Recorded Sound collective at ARSC providing support.
Audiovisual preservation is such a gratifying profession. Having the opportunity to make historic content available is incredibly meaningful work that I feel lucky to be a part of every day. On an even more basic level, figuring out a new workflow or getting a piece of equipment to finally work is just so viscerally satisfying. I’m part of an amazing team whose passion, humor and willingness to try out new things inspires me every day and makes me feel so lucky to be doing this work.
Back in June we posted a series of tips to the ECPN Facebook page. Now that school is back in full swing, we thought we’d post a reminder. We hope you enjoy this collection of digital resources! Feel free to contribute your own tips in the comments below.
1: Zotero, a bibliography management tool (https://www.zotero.org/)
Zotero allows you to make bibliographies easily and keep track of abstracts (it pulls them directly from some sources) or your own notes. It also helps you to keep track of artworks from museum collections, and you can keep all the relevant information (catalog information, dimensions, conservation history notes) in one place. Zotero is free and if you install it as a plug-in to your preferred internet browser you just click and –ta da!– it magically saves all the bibliographic information for you. You can share collected references and notes with other Zotero users through groups as well. Image 1: Desktop Zotero application.
Image 2: Saving an artwork from a museum’s online catalogue using Zotero on an internet browser (Firefox or Chrome).
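If you want to pull references back out of Zotero programmatically (for example, to drop a shared group bibliography into a report), Zotero also has a web API. Below is a minimal sketch using the third-party pyzotero client; the library ID and API key are placeholders you would copy from your own Zotero account settings, so treat this as an illustration rather than a ready-made recipe.

```python
from pyzotero import zotero  # pip install pyzotero

# Placeholders: substitute your own numeric library ID and private API key
GROUP_ID = "123456"
API_KEY = "your-api-key"

zot = zotero.Zotero(GROUP_ID, "group", API_KEY)  # use "user" for a personal library
for item in zot.top(limit=10):                   # ten most recent top-level items
    data = item["data"]
    print(data.get("title", "(untitled)"), "-", data.get("date", "n.d."))
```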
2: Compound Interest has lots of infographics (http://www.compoundchem.com/infographics/) which are great references for chemistry topics. The site has lots of good information on analytical techniques as well as fun chemistry facts and a weekly roundup of chemistry news. Print materials out for your lab!
Some examples of particular interest to conservators:
3: With Inkpad Pro or other vector drawing apps, you can make diagrams for condition mapping, mounts, and packing. These apps are generally far less expensive than the PC-based programs they emulate, like Illustrator or Photoshop, and range from free to a few dollars. You can use a stylus on your iPad to trace from photographs and annotate. There are lots of color, line weight, and arrow options, and it’s easy to do overlays. Since the iPad is also smaller and more portable, you can do your condition mapping in the gallery or during installations as well. You can export your final drawings as PDFs and share them through Dropbox or email. Images 3-6: Creating a vector drawing and condition map from a photograph using the iPad app InkPad Pro.
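If you would rather script your condition maps, or want overlays that stay machine-readable, the same idea can be done in a few lines of code. Here is a rough sketch using the Python package svgwrite; the file names and annotation shapes below are invented purely for illustration.

```python
import svgwrite  # pip install svgwrite

# Hypothetical file names; the vector overlay sits on top of the reference photograph
dwg = svgwrite.Drawing("condition_map.svg", size=("800px", "600px"))
dwg.add(dwg.image("object_photo.jpg", insert=(0, 0), size=("800px", "600px")))

# Mark a loss as a red outline and a crack as a blue polyline
dwg.add(dwg.circle(center=(245, 310), r=18, fill="none", stroke="red", stroke_width=2))
dwg.add(dwg.polyline(points=[(400, 120), (430, 180), (445, 260)],
                     fill="none", stroke="blue", stroke_width=2))
dwg.save()
```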
4: Podcasts
We’d like to highlight one of our favorite podcasts, “Chemistry in its Element” by the Royal Society of Chemistry. There are short episodes about all sorts of interesting chemical compounds. Of particular interest to conservators are podcasts on mauveine, carminic acid, citric acid, calcium hydroxide, goethite, vermillion, and PVC, for example. Episodes are about 5 minutes long each.
(link: https://www.chemistryworld.com/podcasts)
5: RSS feeds for Cultural Heritage Blogs
Using an RSS feed can help you keep tabs on conservation news reported on blogs. We recommend Old Reader, a free replacement for Google Reader (https://theoldreader.com/), to keep track of the many conservation blogs. AIC has a blogroll list that can help you find conservation blogs: look to the right sidebar here on Conservators Converse.
There are too many great blogs to name, but one favorite is the Penn Museum’s “In the Artifact Lab” (http://www.penn.museum/sites/artifactlab/), which is frequently updated with great photos and stories about conservation treatments underway. Another one you might like is Things Organized Neatly (http://thingsorganizedneatly.tumblr.com/). It isn’t strictly speaking a conservation blog, but it definitely has some appeal for conservators!
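And if you would rather build your own digest than rely on a hosted reader, the same feeds can be polled with a short script. Here is a minimal sketch using the Python library feedparser; the feed URLs are just examples, and any blogs from the AIC blogroll would work.

```python
import feedparser  # pip install feedparser

# Example feeds; substitute any conservation blogs you follow
feeds = [
    "http://www.penn.museum/sites/artifactlab/feed/",
    "https://www.conservators-converse.org/feed/",
]

for url in feeds:
    parsed = feedparser.parse(url)
    print(parsed.feed.get("title", url))
    for entry in parsed.entries[:3]:  # three most recent posts per blog
        print("  -", entry.get("title", "(untitled)"), entry.get("link", ""))
```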
Feel free to add your favorite tips and tools below in the comments!
All images courtesy of Jessica Walthew, Professional Education & Training Officer, Emerging Conservation Professionals Network (ECPN).
The presenters began by explaining that they had changed the title to reflect the emphasis of the presentation. The new title became "An exploration of significance and dependency in the conservation of software-based artwork."
Based upon their research, the presenters decided to focus on dependencies rather than obsolescence per se. The project was related to PERICLES, a pan-European risk assessment project for preserving digital content. PERICLES was a four-year collaboration that included systems engineers and other specialists, modeling systems to predict change.
The presenters used two case studies from the Tate to examine key concepts of dependencies and significant properties. Significant properties were described as values defined by the artist. Dependency is the connection between different elements in a system, defined by the function of those elements, such as the speed of a processor. The research focused on works of art where software is the essential part of the art. The presenters explained that there were four categories of software-based artwork: contained, networked, user-dependent, and generative. The featured case studies were examples of contained and networked artworks. These categories were defined not only in terms of behavior, but also in terms of dependencies.
Michael Craig-Martin's Becoming was a contained artwork: a changing composition of images comprising animations of the artist’s drawings on an LCD screen, driven by proprietary software. Playback speed is an example of an essential property that could change if, for example, the hardware were replaced in the future.
Jose Carlos Martinat Mendoza's Brutalism: Stereo Reality Environment 3 was the second case study discussed by the presenters. This work of art is organized around a visual pun, evoking the Brutalist architecture of the Peruvian “Pentagonito,” a government Ministry of Defense office associated with the human rights abuses of a brutal regime. Both the overall physical form of the installation, when viewed merely as sculpture, and the photographic image of the original structure reinforce the architectural message. A printer integrated into the exhibit conveys textual messages gleaned from internet searches of brutality. While the networked connection permitted a degree of randomness and spontaneity in the information flowing from the printer, there was a backup MySQL database to provide content, in the event of an interruption in the internet connection.
The presenters emphasized that the dependencies for software-based art were built around aesthetic considerations of function. A diagram was used to illustrate the connection between artwork-level dependencies. With "artwork" in the center, three spokes radiated outward toward knowledge, interface, and computation. An example of knowledge might be the use of a password to have administrative rights to access or modify the work. A joystick or a game controller would be examples of interfaces. In Brutalism, the printer is an interface. Computation refers to the capacity and processor speed of the computer itself.
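To make the diagram concrete, the same artwork-level dependencies could be recorded as structured documentation. The sketch below is my own illustration rather than a tool the presenters described, and the individual entries are only examples drawn loosely from the talk.

```python
from dataclasses import dataclass, field

@dataclass
class Dependency:
    category: str      # "knowledge", "interface", or "computation"
    description: str
    significant: bool  # does it affect a significant property defined by the artist?

@dataclass
class SoftwareArtwork:
    title: str
    category: str                      # contained, networked, user-dependent, generative
    dependencies: list = field(default_factory=list)

# Illustrative entries only; the flags are not the presenters' assessments
brutalism = SoftwareArtwork("Brutalism: Stereo Reality Environment 3", "networked")
brutalism.dependencies += [
    Dependency("interface", "printer producing text gleaned from internet searches", True),
    Dependency("knowledge", "password granting administrative rights to the work", False),
    Dependency("computation", "live internet connection, with MySQL database as fallback", True),
]
```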
Virtualization has been offered as an approach to preserving these essential relationships. It separates hardware from software, creating a single file out of many. It can act as a diagnostic tool and a preservation strategy that guards against hardware failure. The drawbacks were that it could mean copying unnecessary or undesirable files or that the virtual machine (and the x86 virtualization architecture) could become obsolete. Another concern is that virtualization may not capture all of the significant properties that give the artwork its unique character. A major advantage of virtualization is that it permits the testing of dependencies such as processor speed. It also facilitates version control and comparison of different versions. The authors did not really explain the difference between emulation and virtualization, perhaps assuming that the audience already knew the difference. Emulation uses software to replicate the original hardware environment to run different operating systems, whereas virtualization uses the existing underlying hardware to run different operating systems. The hardware emulation step decreases performance.
The presenters then explained the process that is used at the Tate. They create a copy of the hardware and software. A copy is kept on the Tate servers. Collections are maintained in a High Value Digital Asset Repository. The presenters also described the relationship of the artist's installation requirements to the dependencies and significant properties. For example, Becoming requires a monitor with a clean black frame of specific dimensions and aspect ratio. The software controls the timing and speed of image rotation and the randomness of image changes, as well as traditional artistic elements of color and scale. With Brutalism, the language (Spanish to English) is another essential factor, along with "liveness" of search.
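The presenters did not go into the repository tooling itself, but one routine safeguard for such copies is a fixity check, confirming that a stored disk image remains bit-identical to what was ingested. The sketch below is a generic illustration, not the Tate's actual process, and the file names are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_of(path, chunk_size=1 << 20):
    """Stream a large file (e.g., a virtual-machine disk image) through SHA-256."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_copy(original, repository_copy):
    """Return True if the repository copy matches the original bit for bit."""
    return sha256_of(original) == sha256_of(repository_copy)

# Example with hypothetical file names:
# verify_copy(Path("artwork_vm.vmdk"), Path("/repository/artwork_vm.vmdk"))
```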
During the question and answer period, the presenters explained that they were using VMware, because it was practical and readily available. An audience member asked an interesting question about the limitations of virtualization for the GPU (graphics processing unit). The current methodology at the Tate works for the CPU (central processing unit) only, not the graphics unit. The presenters indicated that they anticipated future support for the GPU.
This presentation emphasized the importance of curatorship of significant properties and documentation of dependencies in conserving software-based art. It was important to understand the artist's intent and to capture the essence of the artwork as it was meant to be presented, while recognizing that the artist’s hardware, operating system, applications, and hardware drivers could all become obsolete. It was clear from the presentation that a few unanswered questions remain, but virtualization appears to be a viable preservation strategy.
Documenting textile impressions or pseudomorphs on archaeological objects is very challenging. In my own experience, photographing textile pseudomorphs, especially when they are poorly preserved, is very difficult: it involves taking multiple shots with varying light angles, and still often results in poor-quality images. This is why Emily Frank‘s paper was of particular interest to me, because it presented a feasible and more effective alternative to digital photography for documenting textile impressions: Reflectance Transformation Imaging (RTI). RTI is a computational documentation method in which multiple images of an object are merged into one and viewed interactively, so that the direction of light can be changed and surface features enhanced. The process involves changing the direction of the light each time a photo is taken. Using open-source software, a single image is rendered using various algorithms that allow the viewer to move a dial and change the direction and angle of light at which the image is viewed. Additional components in the software allow the images to be viewed using different filters or light effects that make visualization of surface features easier. RTI is gaining popularity as a documentation tool in conservation due to its low cost and feasibility, and several papers presented at this year’s conference touched on the use of this technique (including this paper I also blogged about).
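Under the hood, the most common RTI flavor, the polynomial texture map, models each pixel’s brightness as a low-order polynomial in the light direction and fits the coefficients across all of the captures by least squares. The sketch below is a bare-bones illustration of that idea, assuming the light directions are already known; dedicated RTI software also handles light-position calibration (typically from reflective spheres) and more robust fitting.

```python
import numpy as np

def fit_ptm(images, light_dirs):
    """Fit 6 polynomial-texture-map coefficients per pixel.

    images:     (N, H, W) stack of grayscale captures, one per light position
    light_dirs: (N, 2) projected light-direction components (lu, lv) per capture
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Biquadratic basis evaluated for each capture: one row per image
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)  # (N, 6)
    n, h, w = images.shape
    coeffs, *_ = np.linalg.lstsq(A, images.reshape(n, -1), rcond=None)       # (6, H*W)
    return coeffs.reshape(6, h, w)

def relight(coeffs, lu, lv):
    """Synthesize the surface under a new (interactive) light direction."""
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return np.tensordot(basis, coeffs, axes=1)  # (H, W) image
```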
There are two general light sources used for RTI. One uses a dome outfitted with many LED lights that turn on and off as photographs are taken; an RTI light dome used at the Worcester Art Museum is pictured on Cultural Heritage Imaging’s website (CHI is a non-profit organization that provides training and tools for this technique). However, most conservators use a lower-tech method in which a light source (a camera flash or lamp, for example) is held at a fixed distance from the artifact and manually moved to a different angle for each photo. You can see an example of this method used in the field in this blog post from UCLA/Getty Conservation Program student Heather White.
In her paper, Emily focused on using RTI to document textile or basketry impressions on ceramics, as well as more ephemeral impressions, such as those left in the soil by deteriorated textiles or baskets. By using the various tools offered by the RTI software (changing the light angle, using diffuse light, or changing the rendering so that concave surfaces of impressions look convex), she was able to see fine features not clearly visible with standard digital photography, such as the angle of fibers, striations on the surface of plant material, or the weave structure. For impressions of textiles left in soil (these were mock-ups she made in potting soil), she noted that digital photography was not very effective because there was no contrast, and the impressions were so fragile that they could not be lifted or moved for better examination or imaging. Using RTI, however, she was able to clearly see that the textiles were crocheted.
In describing her setup and workflow, Emily noted that she took photos of the impressions indoors as well as outdoors (for the soil impressions). She was able to take good images outdoors, but it was better to do RTI at dusk in lower light. She took a minimum of 12 shots per impression at 3 different angles, using a flash as her light source. In all, she said it took her about 10 minutes to shoot each impression.
When compared to digital photography, RTI is a useful and feasible technique for the documentation of impressions, and it worked well for most of the impressions Emily tried to record. RTI worked well as a stand-alone documentation method for impressions in about 40% of the images she took, but it is more effective as an examination and documentation tool in combination with standard digital photography. RTI is on its way to becoming a more standardized documentation method in conservation. It appears to be effective for recording low-contrast, low-relief surfaces, such as textile impressions, and may be the best method for recording ephemeral or extremely fragile surfaces that cannot otherwise be preserved. I’m excited about the potential of RTI for impressions and look forward to trying it out the next time I have to record textile impressions or organic pseudomorphs on an archaeological object.
NYC’s Museum of Modern Art owns sixteen Piet Mondrian oil paintings, the most comprehensive collection in North America. From this starting point, conservator Cynthia Albertson and research scientist Ana Martins embarked on an impressive project, both in breadth and in consequence—an in-depth technical examination across all sixteen Mondrians. All examined paintings are fully documented, and the primary preservation goal is returning the artwork to the artist’s intended state. Paint instability in the artist’s later paintings will also be treated with insight from the technical examination.
The initial scope of the project focused on nondestructive analysis of MoMA’s sixteen oil paintings. As more questions arose, other collections and museum conservators were called upon to provide information on their Mondrians. Over 200 other paintings were consulted over the course of the project. Of special importance to the conservators were untreated Mondrians, as they could help answer questions about the artist’s original varnish choices and artist-modified frames. Mondrian’s technique of reworking areas of his own paintings was also under scrutiny, as it called into question whether newer paint on a canvas was his, or a restorer’s overpaint. Fortunately, the MoMA research team had a variety of technology at their disposal: X-Radiography, Reflectance Transformation Imaging, and X-ray Fluorescence (XRF) spectroscopy and XRF mapping were all tools referenced in the presentation.
The lecture discussed three paintings to provide an example of how preservation issues were addressed and how the research process revealed information on unstable paint layers in later Mondrian paintings. The paintings were Tableau no. 2 / Composition no. V (1914), Composition with Color Planes 5 (1917), and Composition C (1920), but for demonstration’s sake only the analysis of the earliest painting will be used as an example here. Tableau no. 2 / Composition no. V (1914) was on a stretcher that was too thick, wax-lined, covered in a thick, glossy varnish, and had corrosion products along the tacking edges. Research identified the corrosion as accretions from a gold frame that the artist added for an exhibition. The painting has some obviously reworked areas, distinguished by dramatic variations in texture, and a painted-over signature; these changes are visible in the technical analysis. The same research that identified the source of the corrosion also explained that Mondrian reworked and resigned the painting for the exhibition. XRF mapping of the pigments, fillers, and additives provided an early baseline of materials to compare later works to, as the paint here did not exhibit the cracking of later examples. Ultimately, the restorer’s varnish was removed to return the paint surface to its intended matte appearance, and the wax lining was mechanically separated from the canvas with a specially produced Teflon spatula. Composition no. V (1914) was then strip-lined, and re-stretched to a more appropriate-width stretcher.
It is possible to create a timeline of Mondrian’s working methods with information gleaned from the technical examination of all three paintings. His technique evolved from an overall matte surface to variations in varnish glossiness between painted areas. XRF analysis demonstrated a shift in his palette, with the addition of vermillion, cobalt, and cadmium red in his later works. XRF also revealed that the artist drew registration lines in zinc and lead whites, both mixed together and on their own. Knowing the chemical composition of Mondrian’s paint is vital to understanding the nature of the cracking media and identifying techniques to preserve it.
The underpinning of all this research is documentation. This means both accounting for undocumented or poorly documented past restorations and elaborating upon existing references. Many of the MoMA paintings had minimal photographic documentation, which hinders the ability of conservators to identify changes to the work over time. The wealth of information gathered by the conservation and research team remains within the museum’s internal database, but there are plans to expand access to the project’s data. Having already worked in collaboration with many Dutch museums for access to their Mondrian collections, it’s clear to the MoMA team how a compiled database of all their research and documentation would be groundbreaking for the conservation and art history fields.
What digital tools and resources do conservators use and create?
Who are the audiences for conservation content?
How can this content be delivered to these groups by digital means?
What kinds of digital tools, resources, and platforms will be needed as the profession continues to grow?
It is with the above questions that “Charting the Digital Landscape of the Conservation Profession,” a project of the Foundation of the American Institute for Conservation (FAIC), interrogates our profession’s origin, its role in this particular technological moment, and its propagation into the future with the aid of technology. As all AIC members have been made aware with the recent mailing, funding from the Mellon, Kress, and Getty Foundations is supporting FAIC in its investigation into the so-called “digital landscape” of the profession. This will help develop a baseline report on the discipline’s use of digital resources in order to better understand its breadth and complexity, and to identify areas critical to the community both now and into the future.
This session was the first in a series of planned forums designed to both map the digital landscape of the profession and to contextualize the data gleaned from the recent survey by discussing the tools currently used and their possible development in the future. An expert panel was brought together for brief presentations, after which there was a lengthy, free-form discussion amongst all attendees.
Please note: this post will err on the side of being longer. Although a report on the survey results will be published by FAIC, this interest session, which put so many experienced professionals and stakeholders in dialogue, is unlikely to be published as delivered. Additionally, many attendees voiced concern that the session was scheduled opposite many other specialty events, preventing stakeholders from attending to hear more about the project or to voice their concerns about the digital future of the discipline.
To those who are interested in the intimate details: Read on!
To those who would prefer to skim: Know that the FAIC’s report is expected in December 2014, and stay tuned for future forums in the “Digital Landscape” series.
Introducing the session, Eric Pourchot, the FAIC Institutional Advancement Director, began by discussing the project and the initial survey findings. FAIC’s investigation, he said, seeks to identify the critical issues surrounding the digital tools and resources used to shape both the questions and answers concerning urgent need, target audience, and content delivery methods.
He outlined the components of the project:
A review of existing resources
A survey of creators of digital resources as well as of the end users
Meetings (and phone interviews) with key stakeholders
Formulation of recommendations, priorities, and conclusions
Although I balked a bit at all of this business-speak about timelines, budgets, reports, and endgames, I was curious about the initial results of the survey, which I did take. Additionally, the survey’s goal of identifying the major ways in which digital resources are created, used, and shared, both now and in the future, gets at interesting problems and questions we should all ask ourselves.
560 responses to the professionally-designed survey had been completed by the date of the presentation, so, Eric emphasized, the data is still very preliminary. More international participation will be sought before the survey closes and the data is analyzed for accuracy and for various statistical “cross-tabs” by the contracted company.
Of the population queried, two-thirds go online regularly, and one-third logs on daily. When asked to list the sites most consulted, 30% listed CoOL/DisList as their primary resource, 30% listed Google, and 13% named AIC/JAIC. AATA/Getty, CAMEO, CCI, JSTOR, BCIN, NPS, Wikipedia, and AIC Specialty Groups were present in three-fourths of the fill-in responses.
When asked about the success rate of finding information on a given topic, those searching for information on Preventive Conservation, environmental guidelines, material suppliers, and disaster planning were successful more than half the time. Unsurprisingly, when treatment information was sought, more than half of the users were unsuccessful. To qualify the lack of “success” of a search, 70% of users cited the lack of information specific to their exact needs, 49% were concerned that the information was not up-to-date, 43% cited concerns about reliability, and 32% were dismayed by the time it took to find the information.
Eric expressed surprise that an archive of treatments topped the list of enhancements desired by the respondents. I do not remember if this was a fill-in question or what I personally responded, but this result did not necessarily strike me as surprising. Rather, I see it being in line with the lack of information on treatment procedures—both historic and current—that was noted in the above section of the survey.
Among the digital tools used most often, Eric noted the absence of collaborative spaces, such as Basecamp and Dropbox, from the image and document management category, but suggested that some respondents may have forgotten to list these oft-used programs because they are not conservation-specific.
Finally, respondents identified the policy issues of most concern to them as obstacles to creating, sharing, and accessing content: Copyright/IP (Getty), institutional/repository policies, time (?), and standards/terminology ranked high. It was unclear at first what was meant by the latter, but David Bloom’s talk (below) did a good deal to illuminate its importance.
Eric concluded by noting that although a web-survey platform does self-select for respondents with certain habits, sympathies, and concerns (i.e., those who access the internet regularly and seek to use it as a professional tool), the data represents a good range of age and experience. These groups can be correlated to certain responses; for example, 45-65-year-olds are more likely to search for collections information and are more interested in faster internet access and better online communication. Younger stakeholders are searching more for professional information and jobs.
Again, be reminded that this data is very preliminary. A final report can be expected by December 2014.
2. SPEAKER: Ken Hamma
Ken Hamma then discussed the Mellon Foundation’s efforts in the areas of conservation and digitization, the goals and directions of these efforts, and their relationship to larger movements in the Digital Humanities.
An immensely appropriate choice to speak at this session, Ken Hamma is at once a consultant to the Yale Center for British Art, the Office of Digital Assets and Infrastructure (ODAI) at Yale, ResearchSpace, and the Museums and Art Conservation Program at the Andrew W. Mellon Foundation. He is a former executive director for Digital Policy and Initiatives at the J. Paul Getty Trust and has also served as a member of the Steering Committee of the Coalition for Networked Information (CNI), a member of the Research Libraries Group (RLG) Programs Council of OCLC, and a member of the At-Large Advisory Committee of the Internet Corporation for Assigned Names and Numbers (ICANN).
In 2003, Hamma began his advocacy for the use of digital tools in conservation documentation, when a meeting was convened between a select number of institutional heads and conservators to feel out expectations of the Mellon in these matters—how best it should invest in the digitization of treatment records, how and whether these should be accessible, and to what audiences. This initial meeting was followed by the Issues in Conservation Documentation series, with meetings in New York City in 2006 and London in 2007. As the respective directors and heads of conservation of each host institution were present, this represented a recognition of the importance of institutional policy to what are fundamentally institutional records. Outcomes of these meetings were mixed, with European institutions being more comfortable with an open-access approach, perhaps due to the national status of their museums and the corresponding legal requirements for access. This was exemplified in the response of the National Gallery: the Raphael Project includes full scans of all conservation dossiers. Even NGL staff were surprised this became public! (More pilot projects resulting from this Mellon initiative are listed here).
In America, the Mellon began considering supporting digitization efforts and moving conservation documentation online: in 2009 it funded the design phase of ConservationSpace.org to begin imagining online, inclusive, and sustainable routes for sharing. Merv Richard of the National Gallery led 100 conservators in the development of its structure, priorities, and breadth, presenting a discussion session at AIC’s 41st Annual Meeting in Indianapolis.
Important observations are being made when studying potential models, notably the similar ways in which the National Park Service, libraries, natural science collections, and others handle networked information. Although there were necessarily different emphases on workflow and information, there were also large intersections.
In the meantime, CoOL shows its age. Its long history has necessitated a few migrations across hosts and models—from Stanford Libraries to AIC, and from Gopher to WAIS to W3. It is still, however, based on a library-catalogue model, in which everything is represented to the user as a hypertext (hypermedia) object. In such a system, there are only two options available: to follow a link or to send a query to a server. As important as this resource has been for our professional communication and for the development of our discipline, it lacks the tools for collaboration over networked content. Having become a legacy resource, it is discontinuous from other infrastructures, such as Wikipedia (pdf), HathiTrust, Shared Digital Future, and Google Books, all of which point to a more expansive set of technological opportunities, such as indexing, semantic resource discovery, and linking to related fields.
Our discipline does not exist in a vacuum, and the structuring of our online resources should not show otherwise. Additionally, we need to be able to identify trustworthy information, and this is not a unique problem: We have to open ourselves up to the solutions that other disciplines have come to implement.
Ken encourages us to think of accessible data as infrastructure, which forces the creator to think about applications of the data. A web platform should be more than just switches and networks! It should support collaborative research, annotation, sharing, and publication. This platform should increase our ability to contribute to, extract from, and recombine a harmonized infrastructure that we feel represents us.
Planning for the extent of our needs and building to meet them is not beyond a shared professional effort, and we will find it to have been worth it.
3. SPEAKER: Nancie Ravenel
Nancie Ravenel, Conservator at the Shelburne Museum, former Chair of Publications and Board Director of Communications, works very hard to create and disseminate information about digital tools and their use to conservators. She is continuously defining the digital cutting-edge, at once “demystifying” conservation through outreach, embodying the essential competencies, and articulating the value of this profession. Her segment of the session provided an overview of key resources she uses as a conservator, noting how the inaccessibility of certain resources (e.g. ARTstor, ILL, and other resources requiring an institutional subscription) changes how she locates and navigates information.
“What does Nancie do in the digital landscape?” Ravenel asked. She makes stuff. She finds stuff. She uses and organizes what she makes and finds. And she shares what she’s learned.
Nancie divided her presentation of each function into four sections:
◦ Key resources she uses as a conservator
◦ Expectations of these resources
◦ What is missing
◦ and What remains problematic
In our capacity as makers of stuff, many of us, like Nancie, have begun to experiment with, or are already proficient at, using Photoshop for image processing and analysis, experimenting with 3D imaging and printing, gleaning information from CT scans, producing video, and generating reports.
Where making stuff is concerned, further development is needed in the area of best practices and standards for creating, processing, and preserving digital assets! We need to pay attention to how assets are created so that they can be easily shared, compared, and preserved. Of great concern to Ravenel is the fact that Adobe’s new licensing model increases the expense of doing this work.
On the frontier of finding stuff, certain resources get more use from researchers like Nancie, perhaps for their ease-of-use. Ravenel identifies CoOL/CoOL DistList, jurn.org, AATA, JSTOR, Google Scholar/Books/Images/Art Project/Patent, CAMEO, Digital Public Library of America (dp.la), WorldCat, Internet Archive, SIRIS, any number of other art museum collections and databases (such as Yale University Art Museum or Rhode Island Furniture Archive) and other conservation-related websites, such as MuseumPests.net.
The pseudo-faceted search offered by Google Scholar, which collates different versions, pulls from CoOL, and provides links to all, is noted as being a big plus!
There is, however, a lot of what Nancie terms “grey literature” in our field—material not published in a formal peer-reviewed manner, such as listserv or post-print content, as well as newsletters, blogs, and video content. The profusion of places where content is available, the inconsistent terminology, and the inconsistent metadata or keywords (that which is read by reference management software or that which facilitates search) applied to some resources are the most problematic aspects of finding stuff.
As Richard McCoy has always insisted to us, “if you can’t ‘google’ it, it doesn’t exist.” Nancie reiterates a similar concern: if you can’t find it and access it after a reasonable search period, it might as well not exist. By way of a list of what is harder to find and access, she provides the following areas of need:
• AIC Specialty Group Postprints that are not digitized, that are inconsistently abstracted within AATA, or whose manner of distribution makes access challenging.
• Posts in AIC Specialty Group electronic mailing list archives are difficult to access due to lack of keyword search
• Conservation papers within archives often have skeletal finding aids; and information is needed about which archives will take conservation records.
• ARTstor does not provide images of comparative objects that aren’t fine art.
Any effort to wrangle these new ways of assembling and mining information using technology needs to consider using linked resources, combining resources, employing a more faceted search engine, and deploying better search options for finding related objects. Research on the changing search habits of everyone from chemists to art historians should help us along the way.
In her capacity as a user and organizer of what she makes and finds, Nancie knows that not every tool works for everyone. However, she highlights digital tools such as Bamboo DiRT which, as a compendium of digital-humanities research tools, works and syncs across platforms, browsers, and devices, allows for exporting and sharing, and can allow you to look at your research practices in new and different ways. Practices to be analyzed include note taking, note management, reference management, image and document annotation, image analysis, and time tracking. Databases such as these offer structure for documenting and analyzing workflow; and if used systematically, they can greatly increase the scientific validity of any project over a merely anecdotal approach. For a large cleaning project, such as that undertaken with the Shelburne carousel horses, this is indispensable.
What is missing or problematic? A digital lab notebook is not ideal around liquids, though it is very well suited to logging details and organizing image captures. Current tools cannot measure the results of treatments computationally. Also missing are good tools for comparing, annotating, and adding metadata to images on mobile devices, as well as for improved interoperability between tools.
And after all of this analysis of one’s use of digital tools, how is it best to share what one has learned? The AIC Code of Ethics reminds us that:
“the conservation professional shall contribute to the evolution and growth of the profession…This contribution may be made by such means as continuing development of personal skills and knowledge, sharing of information and experience with colleagues, adding to the profession’s written body of knowledge, and providing and promoting educational opportunities in the field.”
The self-reflexive exercise that Nancie Ravenel modeled in her talk—of analyzing personal use of digital tools and how personal needs and goals may reflect and inform those of others—will not only be indispensable to the future development of digital tools that meet this call to share, but it contains in itself a call to share. Nancie asks: What do you use to share and collaborate with your colleagues? How might these systems serve as a model for further infrastructure?
Email, listservs, and forums; the AIC Wiki; research blogs, and project wikis enabling collaboration and peer review; document repositories like ResearchGate.net and Academia.edu; shared bibliographies on reference management systems like Zotero.org and Mendeley.com; collaboration and document-sharing software like Basecamp, Google Drive, and Dropbox; and social-media platforms allowing for real-time interaction like Google Hangouts are all good examples of tools finding use now.
Missing or problematic factors in our attempts to share with colleagues include the lack of streamlined ways of finding and sharing treatment histories and images of specific artworks and artifacts; the lack of archives that will accept conservation records from private practices; and the persistent problem of antiquated IP legislation, which is often confusing.
In addition to sharing information with other conservators, we must also consider our obligation to share with the public. Here, better, more interactive tools for the display of complex information are needed. As media platforms are ever-changing, these tools must be adaptable and provide for some evaluation of the suitability of the effort to the application.
4. SPEAKER: David Bloom
Described by Eric Pourchot as a “professional museophile,” David Bloom was a seeming non sequitur in the flow of the event. However, as coordinator of VertNet, an NSF-funded collaborative project making biodiversity data freely available online, he spoke very eloquently about the importance of, and the opportunities offered by, data-sharing and online collaboration. He addressed issues of community engagement in digital projects, interdisciplinary collaborations, and sustaining efforts and applicability throughout these projects. As argued in the other short talks, conservation is yet another “data-sharing community” that can learn from the challenges met by other disciplines.
As described by Bloom, VertNet is a scalable, searchable, cloud-hosted, taxa-based network containing millions of records pertaining to vertebrate biodiversity. It has evolved (pun intended) from the first networked-information system built in 1999 and has grown over various revisions, as well as by simple economies of scale, as the addition of new data fields became necessary. It is used by researchers, educators, students, and policy-makers, to name a few. As the network is a compilation of data from multiple institutions, it is maintained for the benefit of the community, and decisions are made with multiple stakeholders under consideration.
Amongst the considerable technical challenges through all of its iterations, VertNet has struggled to establish cloud-based aggregation, to cache and index, to establish search and download infrastructure, and to rein in all associated costs.
Additionally, intellectual property considerations must be mentioned: even though the data is factual (and so the information cannot be copyrighted), the data “belongs” to the host institutions, as they are its historical keepers. As a trust, VertNet does not come to own the data directly. This made a distributed network with star-shaped sub-networks necessary, even though it was expensive to maintain, especially for a small institution, requiring many servers with many possible points of failure; once one point failed, it was difficult to locate. At about $200k a year, this was an expensive system, and although it was still the best and most secure way to structure the network, it was not as inclusive as it could have been for its expense.
There are always social challenges to building such “socio-technical networks,” and this is something that FAIC is discovering simply by attempting to poll its membership. It doesn’t work if people don’t want to play. What ensue are knowledge gaps, variable reliability, and a lack of resources. To speak more broadly, any entity entrusted with indexing information needs people to get over their fear of sharing in order to learn the benefits and acquire the skills associated with being connected (e.g., social-media privacy controversies). All the knowledge and time needed to meet everyone where they are technologically and bring them along in a respectful manner does not exist in one place, so priorities must be defined for the best investment of time and funds to bring the discipline forward.
Bloom found that disparate data hosts could not communicate with each other—they either had different names for similar data fields which needed to be streamlined or they did not maintain consistent terminology, either globally or internally.
This problem had already been solved in a number of ways: for example, the Darwin Core standard (modeled on Dublin Core) and ABCD, the European standard, are both maintained by Biodiversity Information Standards (TDWG). There are 186 fields defined by Darwin Core, with a standardized vocabulary in as many fields as possible. These standards are community-ratified and community-maintained so that they cannot be easily or unnecessarily changed. This allows for easy importation by mapping incoming datasets to the Darwin Core standard; all the data is optimized for searchability and discoverability; and publication and citation tools are hence streamlined.
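In practice, much of that importation is a field-by-field crosswalk. As a toy sketch of the idea (the local field names on the left are invented; the Darwin Core terms on the right, such as scientificName and eventDate, are real ones):

```python
# Hypothetical local field names mapped onto ratified Darwin Core terms
CROSSWALK = {
    "species": "scientificName",
    "collector": "recordedBy",
    "date_collected": "eventDate",
    "specimen_no": "catalogNumber",
    "location": "locality",
}

def to_darwin_core(local_record):
    """Rename the fields of one incoming record; unmapped fields are kept for review."""
    mapped, unmapped = {}, {}
    for key, value in local_record.items():
        if key in CROSSWALK:
            mapped[CROSSWALK[key]] = value
        else:
            unmapped[key] = value
    return mapped, unmapped

record = {"species": "Puma concolor", "collector": "J. Smith", "date_collected": "1987-06-02"}
print(to_darwin_core(record))
```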
This type of study of the state of the art, necessary when designing new database infrastructure, can serve as a model for the field of conservation. At the foundation of a successful system will be a serious study of what has been done in other fields and of what is most useful to prioritize for this one.
As VertNet is based entirely on voluntary participation, it is critical that participants understand the benefits of submitting their data to the trust. The staff at VertNet makes themselves available to help the host institution through any technical difficulties encountered in the data exportation and importation process. Backups of this data are scrupulously maintained throughout the migration process. A major benefit to the exporting institution is VertNet’s data-quality checks which will complete, clean up, and streamline fields and then will send back a report so that the client can update their own databases. This brings local data-maintenance standards in-line with those maintained by the global database.
Additionally, the NSF grant has made training workshops, the development of analytical tools, and certain instances of impromptu instruction possible for clients. This has led to VertNet’s exponential growth without advertising. The repository now represents 176 institutions with 488 collections, and many, many more want in from the waiting list. All these institutions are voluntarily submitting their data despite historical concerns about “ownership.” All these institutions realize the benefit of membership for themselves, for researchers, and for the state of the discipline.
Unfortunately, however, this “traditional” (eek) model of procuring NSF (or NEH, IMLS, etc.) funding to cover costs is becoming unsustainable. Support for these services is desperately needed now that their utility is established. The value-add model is difficult, even if VertNet does believe in “free data.”
The associated cost does not change; however, the database was built as a community tool. So even though the common perception is of an unchanging status quo, the community will have to support the project insofar as it finds the resource valuable and important. A common misconception propagated by recalcitrant host institutions is that “we can do it ourselves.” The fact is, however, that most stewards of data can’t—and even more won’t—turn around and make these records available to the community for revision, maintenance, reference, or analysis.
5. DISCUSSION
The audience then exploded with responses:
Pamela Hatchfield (Head of Objects Conservation at the Museum of Fine Arts, Boston, and AIC Board President) began by reminding those who had been romanced by visions of star-shaped networks that concerns about maintaining privacy are still driven by private funding. Although there is now a conservation module in TMS, and terminological standardization is a frequently cited concern, this data is clearly not intended for the public. Historically, private institutions maintain the attitude that data should be tightly held. There is a huge revenue stream from images at the MFA, and as such it is difficult even for staff to obtain publication rights. Terry Drayman-Weisser (Director of Conservation and Technical Research at the Walters Art Museum) pointed out that the Walters walks a middle path by providing a judiciously selected summary of the conservation record associated with an object. Not all of the information is published.
Certain institutions, such as the British Museum, have an obligation to make these records public, unless an object falls into certain categories. The 2007 Mellon “Issues in Conservation Documentation” meeting at the National Gallery, London, provides a summary of the participants’ approaches to public access at the time of publication.
I did have time to ask a question about the privacy concerns attendant on a biodiversity database. Why does it seem that there is less hesitancy at the prospect of sharing? In reality, these institutions do overcome certain hurdles when deciding what to make publicly available: It turns out that certain data about endangered species should not be shared. Although he did not have time to elaborate, I was curious how this “species privacy” might compare to “object privacy.”
VertNet, it turns out, cannot even find protection under the “Sweat-of-the-Brow” doctrine, as this factual information cannot be copyrighted. What about those portions of conservation documentation which are markedly drawn from speculation, interpretation, and original research? This information can be copyrighted, as per each institution’s policies, but our culture is changing. “We don’t train students to cite resources properly,” he noted, “and then we wonder why we don’t get cited.”
The time allotted for the session was drawing to a close, and everyone expressed regret that the conversation could not go on longer and that more people had not been able to attend.
I would personally like to thank FAIC, the speakers, the Mellon, Kress, and Getty Foundations, and all of the participants for their part in a very thought-provoking discussion. I hope and trust that it will continue in future fora.
The inaugural meeting for this group took place on May 31, 2013 at the AIC Annual Meeting in Indianapolis, IN. Organized by Nancy Ash, Scott Homolka, Stephanie Lussier, and Eliza Spaulding, the session presented the Draft Guidelines for Descriptive Terminology for Works of Art on Paper, a project under way at the Philadelphia Museum of Art and supported by an IMLS 21st Century Museum Professionals Grant.
Glenn Wharton began with an overview of the conservation of electronic media at the Museum of Modern Art (MoMA). When he set up the Media Conservation program at MoMA in 2005, there were over 2,000 media objects, mostly analog video, and only 20 software objects. The main focus of the program was digitizing analog video and audio tapes. Wharton was a strong advocate for the involvement of IT experts from the very beginning of the process. Over time, they developed a working group representing all 7 curatorial departments, collaborating with IT and artists to assess, document, and manage electronic media collections.
Wharton described the risk assessment approach that MoMA has developed for stewardship of its collections, which includes evaluation of software dependency and operating system dependency for digital objects. They have increased the involvement of technical experts, and they have collaborated with Howard Besser and moving image archivists.
The presenters chose to focus on project design and objectives; they plan to publish their findings in the near future. Glenn Wharton described the three case study artworks: Thinking Machine 4, Shadow Monsters, and 33 Questions per Minute. He explained how he collaborated with NYU computer science professor Deena Engel to harness a group of undergraduate students to conduct basic research into source code documentation. Thinking Machine 4 and Shadow Monsters were both written in Processing, an open-source programming language based on Java. 33 Questions per Minute, on the other hand, was written in Delphi, a language derived from Pascal; Delphi is not very popular in the US, so the students were challenged to learn an unfamiliar language.
Engel explained that source code can be understood by anyone who knows the language, just as one might read and comprehend a foreign language. She discussed the need for software maintenance that is common across various types of industries, not unique to software-based art projects. Software maintenance is needed when the hardware is altered, the operating system is changed, or the programming language is updated. She also explained four types of code documentation: annotation (comments) in the source code, narratives, visuals, and Unified Modeling Language (UML) diagrams.
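As a concrete illustration of the first of these types, annotation in the source code, here is a short, purely hypothetical Java fragment (not taken from any of the case-study artworks) in which comments record what a routine does and what it depends on; every class, method, and value in it is invented for illustration.

```java
/*
 * ANNOTATION EXAMPLE (hypothetical, not from the case-study artworks).
 *
 * These comments are the "annotation" form of documentation described
 * above: notes written directly into the source to record what a routine
 * does and what it depends on.
 *
 * Dependencies noted at the time of documentation:
 *   - Standard Java library only; no operating-system-specific calls.
 */
public class DriftingSquare {

    // Horizontal position of the square, updated once per frame.
    private double x = 0.0;

    // Pixels moved per frame; changing this value changes the pacing of
    // the piece, so it is recorded here rather than left implicit.
    private static final double SPEED = 1.5;

    /** Advances the square and wraps it around a 640-pixel-wide canvas. */
    public void step() {
        x = (x + SPEED) % 640.0;
    }

    public double position() {
        return x;
    }

    public static void main(String[] args) {
        DriftingSquare square = new DriftingSquare();
        for (int frame = 0; frame < 5; frame++) {
            square.step();
            System.out.printf("frame %d: x = %.1f%n", frame, square.position());
        }
    }
}
```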
Engel discussed the ways that source code affects the output or the user experience, and the need, unique to artistic software, to capture the essential elements of presentation in an artwork. In 33 Questions per Minute, the system configuration includes a language setting with options for English, German, or Spanish. Some functions were operating-system-specific, such as the Mac-Unix scripts that allow the interactive artwork Shadow Monsters to reboot if overloaded by a rambunctious school group flooding the gallery with moving shadows. Source code specified aesthetic components such as color, speed, and randomization for all of the case study artworks.
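A rough Java sketch, again hypothetical rather than drawn from the artists’ actual code, of how a language setting and aesthetic parameters such as color, speed, and randomization can end up fixed in source code, the kind of behavior-determining detail the students were asked to document:

```java
import java.util.Locale;
import java.util.Random;

// Hypothetical sketch: configuration and aesthetic choices expressed in code.
public class InstallationConfig {

    // Display language is a configuration choice a documenter would record;
    // here English, German, and Spanish are the supported options.
    static final Locale[] SUPPORTED_LANGUAGES = {
        Locale.ENGLISH, Locale.GERMAN, Locale.forLanguageTag("es")
    };
    static Locale displayLanguage = Locale.ENGLISH;

    // Aesthetic parameters of the kind described above: color, speed,
    // and randomization that shapes what visitors see on each run.
    static final int HUE_DEGREES = 210;           // blue-ish palette
    static final double FRAMES_PER_SECOND = 30.0; // pacing of the animation
    static final Random RANDOM = new Random();    // unseeded: every run differs

    public static void main(String[] args) {
        System.out.println("Display language: " + displayLanguage.getDisplayLanguage());
        System.out.println("Hue: " + HUE_DEGREES + " degrees, speed: " + FRAMES_PER_SECOND + " fps");
        // A randomized offset, e.g. where the next on-screen element appears.
        System.out.println("Random x-offset: " + RANDOM.nextInt(640));
    }
}
```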
One interesting discovery was the amount of code that was “commented out.” Similar to studies, underdrawings, or early states of a print, there were areas of code that had been deactivated without being deleted, and these could be examined as evidence of the artist’s working methods.
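A small, invented Java fragment suggests what such “commented out” passages can look like: the deactivated lines preserve an earlier color choice without affecting the running piece.

```java
import java.awt.Color;

// Hypothetical illustration of code that was "commented out" rather than deleted.
public class ShadowColor {

    public static Color currentColor() {
        // Earlier version, deactivated but left in place, preserved as evidence
        // of a discarded aesthetic decision, much like an underdrawing:
        //
        // return new Color(200, 30, 30);              // saturated red, rejected
        // return Color.getHSBColor(0.6f, 0.4f, 0.9f); // pale blue experiment

        // Active version:
        return new Color(40, 40, 60);                  // muted blue-grey
    }

    public static void main(String[] args) {
        System.out.println("Active color: " + currentColor());
    }
}
```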
Engel concluded by mentioning that the reproducibility movement in scientific research is also concerned with documenting and preserving source code, in order to replicate data-heavy scientific experiments. Of course, those researchers are more concerned with handling very large data sets, while museums are more concerned with replicating the look and feel of the user experience. Source code documentation will be one more tool to inform conservation decisions, complementing the artist interview and other documentation of software-based art.
Audience members asked several questions regarding intellectual property issues, especially if the artists were using proprietary software rather than open-source software. There were also questions raised about artists who were reluctant to share code. Glenn Wharton explained that MoMA is trying to acquire code at the same time that the artwork is acquired. They can offer the option of a sort of embargo or source code “escrow” where the source code would be preserved but not accessed until some time in the future.
Emily MacDonald presented on the usefulness of a new condition mapping program called Metigo MAP 3.0. She began her presentation with a description of a collaborative project between the University of Delaware and Tsinghua University (Beijing), led by Dr. Susan Buck (Winterthur/University of Delaware Program in Art Conservation) and Dr. Liu Chang (Tsinghua University), to examine and document Buddhist murals and polychromy in the Fengguo Temple (Fengguosi), located in Yixian County, Liaoning Province, China. The four interior walls of the temple are lined with murals, which were in very poor condition, their images obscured by loss and other damage.
The Metigo MAP software allowed the conservation team to map the murals’ condition issues in a short period of time. The software incorporates mapping, digital imaging, and area measurement tools; it streamlines the mapping process and is easy to use. Emily compared the software to established documentation techniques and illustrated the limitations of each.
Metigo MAP was created by the German company fokus GmbH Leipzig, which specializes in architectural surveying and the documentation of large-scale conservation projects.
Maps are produced by uploading images into the software, which can then be drawn on and annotated. The program makes the image true to scale and can rectify skewed images to the proper orientation, so photographs taken from an angle can still be used when the subject is not accessible from the front. By inputting the dimensions of the painting, the software can give exact locations of areas of interest and calculate the surface area of damage; this feature can also be useful in making time estimates for proposals on big projects. Image processing settings allow for photo editing to aid mapping. Mapped images can then be exported as .tif files and opened in other programs.
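Metigo MAP’s internals are proprietary, but the arithmetic underlying area measurement is straightforward: once an image has been rectified and tied to the painting’s real-world dimensions, mapped pixels can be converted to physical area. The Java sketch below illustrates only that generic calculation, not the Metigo implementation, and all of its numbers are invented for illustration.

```java
// Hypothetical sketch of the scale arithmetic behind area measurement:
// once a rectified image is tied to real-world dimensions, the area per
// pixel is known and any mapped region can be converted to physical units.
public class AreaFromMap {
    public static void main(String[] args) {
        // Assumed values: a mural photographed and rectified to a
        // 4000 x 2000 pixel image representing a 10 m x 5 m wall.
        double wallWidthMeters = 10.0, wallHeightMeters = 5.0;
        int imageWidthPx = 4000, imageHeightPx = 2000;

        double metersPerPixelX = wallWidthMeters / imageWidthPx;   // 0.0025 m
        double metersPerPixelY = wallHeightMeters / imageHeightPx; // 0.0025 m
        double squareMetersPerPixel = metersPerPixelX * metersPerPixelY;

        // Suppose 150,000 pixels were mapped as an area of flaking paint.
        long mappedPixels = 150_000;
        double damageAreaSqMeters = mappedPixels * squareMetersPerPixel;

        System.out.printf("Mapped damage: %.3f square meters (%.1f%% of the wall)%n",
                damageAreaSqMeters,
                100.0 * damageAreaSqMeters / (wallWidthMeters * wallHeightMeters));
    }
}
```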
For the presentation, Emily chose three murals representative of the condition issues noted overall. The conservators worked as a team, using Metigo MAP to document the condition of the murals. Once the murals are mapped, the maps can be easily compared for condition issues. The software can also be used to map the locations of samples, and annotations can be added to the maps for future reference.
For large-scale projects, or projects that are particularly difficult to photograph, the software’s tiling function can be used to piece together the rectified image, allowing the project to be viewed unobstructed.
Emily also illustrated how Metigo MAP can be used to document experiments. She used the software on a graffiti removal research project at the Getty to document surface changes and treated areas.
Emily summed up the talk with an excellent slide comparing the pros and cons of the software. The pros included easy mapping, image processing, rectification, measurement functions, compatibility with other software, and an easy interface. The cons included the need for initial training, the lack of white balance adjustment (which can be done in Photoshop beforehand), and cost (more expensive than Adobe Creative Suite but less expensive than AutoCAD).