Jennifer McGlinchey started her impressive talk with an explanation of the history of lists of photographic paper sizes. She stated that there were no available references that list ‘standard’ sizes. In fact, the lists of sizes she was able to find were very short and corresponded to very specific time periods. Further research suggested that these sizes were not considered standard and were certainly not inclusive of all sizes used. Rather than identify ‘standard’ sizes, she identified ‘common’ sizes by the criterion that they appear in five or more of the references; these she concluded were ‘de facto standard sizes.’ For the study, McGlinchey used Paul Messier’s extraordinary paper collection, which consists of over 5,000 samples of silver gelatin photographic paper, as well as nine manufacturers’ sample books and price lists and six encyclopedias: http://www.paulmessier.com/
The use of English-language publications from a few geographic locations (the US and Europe) may have been limiting, but in fact there are very few references from other geographical areas. Concluding that there were common rather than standard sizes is not to say that there were no attempts to standardize paper sizes, only that those attempts were never very successful. The study identified over 200 distinct sizes, just over half of which occurred only once, along with 32 de facto standard sizes. Many of the sizes considered common now in the USA, such as 4×6 and 5×7 inches, are on that list, but so are many sizes no longer manufactured, including smaller sizes like 2.5×2.5 inches, which were much more common in the early days of gelatin silver printing. She mentioned that papers grouped together as the same size were allowed a difference of +/- 5 mm along each dimension, to account for natural expansion/contraction, ferrotyping, and other causes of small dimensional changes.

As part of the research they also evaluated common thicknesses of silver gelatin paper and found three de facto standards. The most common was ‘single weight,’ followed by ‘double,’ and finally ‘medium.’ Double weight papers measure above 0.25 mm, medium weight between 0.2 and 0.25 mm, and single weight under 0.2 mm. Double weight paper grew thicker from the 1930s until the 1950s and then thinner again, so manufacturers changed the thicknesses over time but not the terminology.

It was also found that five common aspect ratios occurred in 88% of the de facto standard sizes. This implies that scaling relationships were a factor in determining silver gelatin DOP paper sizes. Characterization by aspect ratio not only simplifies the dimensional diversity of silver gelatin paper by emphasizing scaling relationships, but also highlights their relationship with other media.
For example, 6:5 is common mainly among plate sizes, 5:4 is the aspect ratio of many large-format films, 4:3 is the first motion picture aspect ratio, and 8:5 approximates the golden ratio.
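As an illustration (not part of the talk), the grouping criteria above, the +/- 5 mm tolerance per dimension and the three thickness cutoffs, could be sketched in code. The nominal sizes listed here are a small illustrative subset of the de facto standards mentioned in the summary:

```python
# Sketch of the study's matching criteria as described above.
# The nominal size list is an illustrative subset, not the full set of 32.
TOLERANCE_MM = 5.0  # +/- 5 mm allowed along each dimension

NOMINAL_SIZES_MM = [
    (101.6, 152.4),  # 4 x 6 in
    (127.0, 177.8),  # 5 x 7 in
    (63.5, 63.5),    # 2.5 x 2.5 in
    (180.0, 240.0),  # 18 x 24 cm
    (130.0, 180.0),  # 13 x 18 cm
]

def matches_nominal(width_mm, height_mm):
    """Return the first nominal size within +/- 5 mm on each dimension, else None."""
    w, h = sorted((width_mm, height_mm))
    for nom in NOMINAL_SIZES_MM:
        nw, nh = sorted(nom)
        if abs(w - nw) <= TOLERANCE_MM and abs(h - nh) <= TOLERANCE_MM:
            return nom
    return None

def weight_class(thickness_mm):
    """Thickness cutoffs as given in the summary."""
    if thickness_mm > 0.25:
        return "double"
    if thickness_mm > 0.2:
        return "medium"
    return "single"
```

A sheet measuring 100 × 150 mm would thus be grouped with the 4 × 6 inch de facto standard, while a 0.23 mm caliper would place it in the ‘medium weight’ class.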
The measurements and other data were recorded in spreadsheets (the full results were published in JAIC, Volume 53, Issue 4 (November 2014), pp. 219-235). The study led to the conclusion that there is no easy answer to the question of standard photographic paper sizes: no system of standard sizes was ever successfully put in place, and the available sizes varied widely over time and across geographic boundaries.
This research can be utilized in the identification of artists’ methods and paper preferences. One useful application is the study of photograms, a technique used to great effect by Man Ray and Laszlo Moholy-Nagy, contemporaries working in France and Germany respectively. Each photogram is unique: with no negative involved, the exact conditions of exposure would be difficult to replicate. Of a selection of 163 prints by Man Ray dating from 1920 to 1940, there were 88 photograms and 75 traditional prints from negatives. Compared with the de facto standard sizes identified by this research, the dimensions of 39 Man Ray photograms (roughly 44%) correspond to de facto paper sizes, while only 25 of the 75 prints from negatives (33%) do. This survey suggests that Man Ray most likely trimmed his photographs from negatives but did not trim his photograms. Averaging about 0.35 mm in thickness, the photograms fall into the category of double weight papers.
Man Ray’s process contrasted with the working practices of Laszlo Moholy-Nagy. Print dimensions and estimated thicknesses for 216 prints made by Moholy-Nagy in Germany between 1922 and 1928 were collected from the catalogue raisonné of his photograms. About 89% of these photograms correspond to the de facto standard sizes identified by this research. The majority of the prints made from negatives were printed on two sizes of paper, 18 x 24 cm and 13 x 18 cm, both de facto standard sizes. This shows that Moholy-Nagy used full sheets of paper for his photograms and didn’t trim them down. The catalogue raisonné describes the thickness of many Moholy-Nagy photograms as single weight or double weight. According to these descriptions, Moholy-Nagy used single weight and double weight papers with equal frequency and sometimes used both in the same series.
Understanding the de facto standard sizes provides a useful point of comparison between these two artists. Differences in their methods may be due to a variety of factors. Moholy-Nagy was known for his scientific approach to photography, treating the photogram as a record of the interaction between light and physical object composed within the borders of the paper; his photograms were complete upon processing. In contrast, Man Ray was more engaged in producing highly refined expressions; attention to detail and subtle manipulation apply to all aspects of Man Ray’s photography, as evidenced by his skilled retouching and his use of carefully proportioned mounts.
In summary, there really were a lot of photographic paper sizes available, particularly in the middle of the 20th century, when these papers were extremely popular. While there are some de facto standard sizes and thicknesses, silver gelatin papers were made in numerous sizes, and the majority of the paper sizes listed in the references occurred only once.
Category: Photographic Materials Conservation
PMG Winter Meeting – "New Photo Histories in West Africa" by Erin Haney, Feb. 21
This was the final session of the 2015 PMG Winter Meeting. Speaker Erin Haney is an art historian and co-founder/co-director of Resolution, which hosted the 2014 “3PA” workshop in Benin. During the Q&A afterward, one conservator remarked that her talk “reminds us why we do what we do.” That couldn’t be more true. She provided an exciting glimpse of family and private photograph collections in West Africa that have not been widely seen or studied. The stewards of important West African photography collections have recently started to come together to explore strategies for their preservation and for raising their visibility worldwide.
She began by saying that West Africa has valuable historic photographs that won’t come up on Google searches. The reason is simply that these photographs tend to be dispersed widely in private and family collections. There are very few cultural institutions, archives and museums that have enjoyed stability from the colonial era to the present day. Some institutions have lost all or part of their photographic collections in times of political upheaval. Instead, it is primarily families and private owners who have safeguarded that region’s photographic heritage.
Haney showed just a few examples that reflect the diversity of images that can be found in these collections. These include photographs made during the colonial period, images made by the great early studios (often now in family collections of their descendants), domestic portraits, group portraits, and events of social and political importance. There are images of the social elite and the wealthy, showing a materially rich and cosmopolitan West Africa that is seldom seen, and a history that is seldom taught. She showed a daguerreotype by Augustus Washington, who went to Liberia from the US and made daguerreotypes in cities all along the West African coast. There were photographs made by the Lutterodt family, which established a far-reaching network of family photography studios that operated from the 1870s to the 1940s. There were British colonial scenes, portraits by early French-run studios, portraits of West African women and their Bordeaux trader husbands, and debut portraits: young women dressed in the finest cloth, showing their readiness for marriage. More recent images included Gold Coast soldiers, independence movements, city skylines and infrastructure, and prominent political figures. These are but a few of the many treasures in these collections, spanning the 19th and 20th centuries. There is an extraordinary variety of subjects and photographic traditions.
She showed how photographs were made and remade in order to improve them and preserve them. Some photographs took on new meaning as memorial objects when the sitter passed away. These could be marked with crosses, mounted, and/or captioned by loved ones. Other photographs that had condition issues over time might be heavily overpainted to refresh them. In one case, a painting of a Dutch ancestor was remade by photographing it, in order to present it alongside a group of other family portrait photographs. The original image was not sacred. To study these collections, one has to understand how the images functioned when they were made and how they continue to function. Theirs is an iterative practice of artistry, which must inform preservation and conservation decision-making.
Of grave concern today is that these collections are at risk when the custodians feel they must sell or dispose of them to reclaim the valuable space they occupy in a private home, or generate much-needed income. Resolution communicates the importance of photographic cultural heritage to people in West Africa and around the world. The Benin workshop provided participants with the skills to document and manage their collections, while networking with others in the region working toward the same goals. The workshop involved nine countries in Francophone West Africa and is actively building partnerships and capacity to make a case for the ongoing support of photographic collections. There is a growing recognition of their critical importance for national identity, education and research. It was an inspiring end to this PMG Winter Meeting.
PMG Winter Meeting – "Cataloging Is Preservation: An Emerging Consideration in Photograph Conservation Programs" by Robert Burton, Feb. 20
“Cataloging Is Preservation: An Emerging Consideration in Photograph Conservation Programs” was the first talk of the biennial PMG Winter Meeting in Cambridge, MA, February 20-21, 2015. Speaker Robert Burton began with a quote from his mentor Sally Buchanan, who stated, “Cataloging is preservation.” Burton went on to show that this is no overstatement. In a sense, the goal of all conservation is to preserve materials to enable continued access to them, and there is a direct relationship between cataloging and access. Descriptive records in prescribed formats, organized under controlled headings, make photographs discoverable. This in turn sparks research interest, helps institutions identify preservation priorities, and even helps them organize storage more efficiently. Burton showed that cataloging is the foundation of a comprehensive view of collections management and preventive conservation.
A good record should answer the questions: who, what, when, where, why, and how? It gives an institution administrative and intellectual control over its photographic materials. Whereas books and other text-rich objects are more self-identifying, photographs require additional data to be contextualized, and collecting this data requires a cataloger with the appropriate training. A cataloger might be the first person to go through a photograph collection, and that person should possess visual literacy, an understanding of photographic processes, an ability to carry out basic preventive measures such as rehousing, and be able to bring objects in need of special care to the attention of conservators. Because different institutions have diverse approaches (different databases, digital asset management systems, missions, and constituents), catalogers must understand and apply data value standards to bring some consistency to searches for terms such as artists’ names, geographic place names, and so on. (Burton mentioned the Getty Art and Architecture Thesaurus and the Name Authority File from the Library of Congress as examples.)
Recent advances in digital recordkeeping and digital imaging have reduced the administrative burden of cataloging and have also reduced the need to over-handle photographic materials, which can cause damage. There are new technologies on the horizon that will help with cataloging, such as automatic captioning of newly created images, or giving photographers a way to record voice annotations as additional metadata. Nevertheless, catalogers will need to find a way to enter this information so it can be searched.
Without knowing their holdings, institutions will not be able to adequately value or safeguard their materials, nor will they be able to care for them. Uncataloged items are essentially invisible: vulnerable to loss, their condition and value unknown.
Burton acknowledged that few library school programs provide students with the opportunity to study photographic materials specifically. He urged this audience to view cataloging as a preventive conservation method on par with environmental monitoring, housings, and the like. He traced the development of this thinking to the 2002 Mellon survey at Harvard, which became the model for the Weissman Preservation Center’s Photograph Conservation Program and then FAIC’s Hermitage Photograph Conservation Initiative. These surveys show that, by coordinating conservation, cataloging, and digital imaging, photograph collections become more accessible and better maintained. This positive trend should continue as more institutions adopt Sally Buchanan’s mindset: “Cataloging is preservation.”
Getty Conservation Institute Announces Photographic Materials Conservation & Preservation Workshop (July 13-24, 2015)
The Getty Conservation Institute (GCI) is pleased to announce the third in a series of annual two-week workshops that focus on specific topics relating to the conservation and preservation of photographic materials. The workshop, Photographs and Their Environment: Decision-making for Sustainability, will be offered from 13-24 July 2015 in partnership with the Institute of Art History, Academy of Sciences of the Czech Republic in Prague.
The workshop is open to fifteen mid-career conservators with at least three to five years of experience working in the area of paper or photograph conservation. Paper conservators should have prior knowledge of photographic processes and photograph preservation. Priority will be given to applicants currently working with these materials.
Additional information and an application are available online. The deadline for applications is March 16, 2015.
http://www.getty.edu/conservation/our_projects/education/cons_photo/advanced.html
If you have questions, you may contact euphotos@getty.edu
42nd Annual Meeting – Research & Technical Studies, May 31, “Development and Testing of a Reference Standard for Documenting Ultraviolet Induced Visible Fluorescence” by Jennifer McGlinchey Sexton, Jiuan Jiuan Chen, and Paul Messier
Jennifer McGlinchey Sexton, Conservator of Photographs at Paul Messier, LLC, presented on the testing of reference cards and the development of new imaging protocols that are so desperately needed in our field for increased standardization and comparability of photographs documenting UV-induced visible fluorescence phenomena. The project, started in 2006 by private photograph conservator Paul Messier under the service mark UV Innovations (SM), was later taken over by Jiuan-Jiuan Chen, Assistant Professor of Conservation Imaging, Technical Examination, and Documentation at Buffalo State College. Sexton has directed development of the Target-UV™ and UV-Grey™ products since 2012.
Many a visual examination is followed by technical imaging, including both ultraviolet-induced visible fluorescence (UV-FL) and visible-induced luminescence (VIL), and Sexton’s talk first reiterated why observing cultural material under carefully selected wavelengths of light is important: it is non-invasive, relatively inexpensive, accessible, and (largely) commercially available. As a surface technique, UV-induced fluorescence probes outer layers: coatings, optical brighteners, mold, tidelines, and organic glaze pigments above the bulk pictorial layers. Although it is a technique we rely on for the large majority of condition assessments and technical studies, our documentation remains unstandardized and, essentially, unscientific. With so much to gain by standardizing our capture and color-balancing process, as well as by taking careful notes on the equipment used, the prospect of the Target-UV™ and UV-Grey™ UV-Vis fluorescence standards is certainly an exciting one.
UV-FL images are unique in that they contain diagnostic color information, hence the need for standardization, which would enable cross-comparison between colleagues and between before- and after-treatment documentation. The beta testing of the UV target, carried out over two years, attempted to account for the most significant variables in the production of UV-FL images. The talk demonstrated the enormous amount of collaboration and communication needed to streamline the significant aspects of equipment choice, the optimization of acquisition, and the documentation of post-processing methods, all with the goal of increasing reproducibility and comparability. Sexton’s presentation showed that the beta testing of the product achieved demonstrable results in terms of uniformity of output.
Development of the UV target began in collaboration with Golden Artist Colors to produce stable fluorescent pigments of known color values and known neutral-gray values (the grays were evidently produced by mixing the red, green, and blue fluorescent pigments). Neutral gray was defined as a gray that was interpreted as neutral by many viewers and that performed similarly under many different conditions. Including such color swatches within a photograph, for the purposes of color balancing and correcting any variation in the red-green-blue channels of each pixel, is a very familiar principle in visible-light photography.
A second consideration made for the round-robin testing was that of intensity, a variable somewhat unique to UV-FL photography. The nature of the emissive source must be noted for purposes of calibration and exposure, especially as all light sources currently used in fluorescence photography lack stability over long periods. The output of a lamp will fluctuate over time, and this makes the relative intensities of materials illuminated with some lamp types very difficult to determine. Even when this particular factor is taken into account, other variables, such as the distance of the lamp to the subject and the wattage of the lamp, will affect intensity. It is also possible that multiple emitting sources could be present. These factors should be included in the metadata for the exposure.
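A minimal sketch of what such an exposure-metadata record might contain; the field names here are illustrative assumptions, not a published schema:

```python
# Hypothetical exposure-metadata record capturing the intensity factors
# named above (source type, peak emission, wattage, distance, extra sources).
# Field names are illustrative, not part of any published standard.
exposure_metadata = {
    "uv_source": "low-pressure mercury lamp",  # emitting source type
    "peak_emission_nm": 365,                   # peak wavelength of the source
    "lamp_wattage_w": 40,                      # wattage affects intensity
    "lamp_to_subject_cm": 80,                  # distance affects intensity
    "additional_sources": [],                  # note any other emitters present
}

def describe(meta):
    """One-line summary suitable for an image caption or file header."""
    return "{uv_source} ({peak_emission_nm} nm) at {lamp_to_subject_cm} cm".format(**meta)
```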
To control for this intensity factor, beta testers were to divide their source, distance-to-subject, and wattage parameters into three different intensity levels best matched to certain analyses: “Ultra” was beta-tested for analysis of optical brighteners and other products produced specifically to fluoresce; “High” was best for the analysis of natural and thicker fluorescence, perhaps of a paint film such as zinc white, of some feathers (see Ellen Pearlstein’s talk from this year, “Ultraviolet Induced Visible Fluorescence and Chemical Analysis as Tools for Examining Featherwork”), and of uranium glass colorants; and “Low” was used to image thin applications of resins, varnishes, and sizing films.
A third variable was camera sensitivity, which varies with manufacturer (proprietary internal filtration and software), camera type (DSLR or digital back), and sensor type (CCD or CMOS, modified or unmodified). Different filters were tested: the Kodak Wratten 2e pale yellow filter, the PECA 918, and an internal blue-green IR-blocking (BG-38) filter. This type of internal filtration is typical on digital cameras; it blocks IR and some red light to bring the camera output closer to the typical photopic curve of the eye and more closely mimic human vision. The 2e filter blocks UV radiation and a small amount of the blue light commonly emitted by UV lamps, while the PECA 918 is used for IR blocking.
The fourth variable tested was the source type. Those tested included low-pressure mercury, high-pressure mercury, arc, and metal halide arc lamps. Although LEDs were used at some institutions, many of these have a peak emission at 398 nm, which is barely in the ultraviolet range. Greg Smith at the IMA analyzed the Inova X5 UV LED and found that it does emit in the UV range, though it is more expensive. Other products show large differences in emission peaks, which often cannot be accommodated by a simple white-balancing operation. Therefore, testing was limited to the most common source types, with peak emission between 360 and 370 nm.
The last variables analyzed were those of post-processing procedures and software, and of user perception and needs. A problematic tension identified over the testing period was between an image being readable or resolvable vis-à-vis a particular argument and the image being strictly accurate and well calibrated. A photograph may accurately render the intensity of the fluorescence yet be so underexposed as to be unreadable.
Testing showed that, despite these difficulties of calibration and subjective experience, the workflow incorporating the UV Innovations standard produced a marked increase in standardization. Round-robin testing was completed by eight institutions in the US and Europe in May 2013. Fluorescent object sets were shipped along with the UV standard and filters. Each test site collected two image sets: one named “a,” using the lab’s current UV documentation protocol with color balance and exposure set “by eye,” and the other named “b,” using the UV Innovations protocol. The increased control provided by the standard was evidenced by the average delta E of L*a*b* data points as well as the average standard deviation of RGB data points for both the a and b sets at each institution. By way of example, the “Low-a” set showed a delta E of 18.8, improving to 4.9 for the “Low-b” set, and the average standard deviation for these two sets improved from 32.8 to 6.2!
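For readers unfamiliar with the metric, delta E in its simplest form (CIE76) is just the Euclidean distance between two readings in L*a*b* space; a smaller value means closer agreement. A hypothetical sketch, with made-up patch values (the study's actual readings and delta E variant are not specified here):

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical readings of the same target patch (L*, a*, b*)
reference = (50.0, 12.0, -28.5)   # known value of the standard's patch
by_eye    = (52.0, 14.0, -30.0)   # color balanced "by eye" (set "a")
protocol  = (50.5, 12.5, -28.0)   # using the target protocol (set "b")

error_a = delta_e_76(by_eye, reference)
error_b = delta_e_76(protocol, reference)
```

With these invented numbers, the protocol-balanced reading lands much closer to the reference, which is exactly the kind of improvement the averaged delta E figures above quantify across whole image sets.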
The presentation went into depth about how this data was collected, how variables were controlled for, and how the data was analyzed, and it showed convincingly that despite the high variability of current work flows, the UV Innovations UV-Grey card and Target-UV standards in conjunction with standardization of UV source and filtration can markedly improve the image variability of UV-FL photography.
One variable in “extra-spectral” imaging that was not addressed in this talk was the spatial inhomogeneity of the light source, that is, the gradient that results from uneven illumination. This could be especially problematic when using UV-FL photography for condition imaging, and “flat-fielding” should be considered as a possible augmentation to the ideal image-acquisition protocol.
There is still further research to be done before this product hits the market. A fourth intensity level will be added to increase the flexibility of the product. The current prototype features two intensity levels on the front and two on the back. Notably, artificial aging must be done to determine when the product should be replaced. As this current standard only operates over UV-A and UV-B, UV Innovations looks forward to developing a UV-C standard, as well as a larger format target.
The prototypes of the Target-UV and UV-Grey cards were handmade, but the company hopes to overcome the challenges of large-scale production and distribution by Fall 2014.
42nd Annual Meeting – Photographic Materials, May 31, "Retouch Practices Revealed in the Thomas Walther Collection Project" by Lee Ann Daffner
Summary by Greta Glaser, Owner of Photographs Conservation of DC
42nd Annual Meeting – Photographic Materials, May 30, "Preservation of Deborah Luster's One Big Self" by Theresa Andrews
The San Francisco Museum of Modern Art (SFMoMA) was faced with the challenging task of displaying One Big Self in the way the artist intended, as an interactive work, after it was acquired in 2003. Security and the physical preservation of the photographs were the two biggest concerns. The piece ended up being displayed alone in a room with one museum guard on duty at all times, which seemed appropriate in the context of the artwork’s subject matter. It was decided that only 200 plates would be displayed at any time, and twenty portraits were randomly selected never to be displayed. The plates that have been on display have indeed seen changes: some have been caught in the drawers and become bent, the edges of the emulsion on some plates have been abraded, and some plates have yellowed. Although the artist was dismayed to learn about the yellowing, the cost and time of replacing each plate as it becomes too worn to be viewed make reprinting each portrait an inefficient solution. Of the total 287 plates, excepting the 20 that will never be displayed, only 200 are on display at any given time, so 67 plates can still be swapped in for any that become too damaged for exhibition.
Summary by Greta Glaser, Owner of Photographs Conservation of DC
42nd Annual Meeting – Photographic Materials, May 30, "Fototeca Pedro Guerra: Conservation of the Photographic Archives" by Cinthya Cruz
The archives of Pedro Guerra are part of the Universidad Autónoma de Yucatán in Mérida, where the climate is hot and humid. Photographic prints and negatives in this collection include many photographic processes and materials, from albumen and silver gelatin to glass plates and nitrate negatives. The goals of the photo archives are to stabilize the existing materials, catalog and organize the objects, and monitor and maintain a safe environment. Condition issues affecting the collection include broken and scratched glass, fingerprints, sticky emulsion, and fungus. Nitrate negatives are placed in frozen storage in Marvelseal bags immediately after they are treated and scanned. Object codes and registration numbers specific to the archive are written on the exteriors of the bags so negatives can be located when necessary. Enclosures for other photographic materials, such as sink mats for broken plates and acid-free paper envelopes for photographic prints, also carry object codes and registration numbers. The object codes refer to the subject matter of the photographic image and the type of object.
Summary by Greta Glaser, Owner of Photographs Conservation of DC
42nd Annual Meeting – Photographic Materials Group, May 31, “Comparative Study of Handheld Reflectance Spectrophotometers” by Katie Sanderson
Katie Sanderson, Assistant Conservator of Photographs at the Metropolitan Museum of Art (MMA), presented a most informative comparative study of handheld spectrophotometers undertaken at MMA. When the Department of Photograph Conservation decided to replace its existing handheld spectrophotometer—an X-Rite 968—Sanderson along with Scott Geffert, Senior Imaging Systems Manager, researched current units available to determine the best replacement and variation in measurements taken by each.
Sanderson began by outlining the factors to consider when replacing a spectrophotometer: data continuity (extant data spanning 20 years); instrument agreement; data translation; software compatibility with previous and future instruments; and longevity and support (the previous spectrophotometer is no longer supported by X-Rite but still takes good data readings). In total, seven spectrophotometers—four by X-Rite and three by Konica Minolta—were tested against the MMA’s X-Rite 968 and a bench-top spectrophotometer equipped with an external remote diffuse reflectance accessory probe in the Department of Scientific Research. The seven spectrophotometers examined were:
- X-Rite 964
- X-Rite eXact
- X-Rite Ci64
- X-Rite RM200
- Konica Minolta 2600D
- Konica Minolta 2500c
- Konica Minolta FD-7
Before delving into the specific findings for each unit tested, Sanderson provided a brief overview of how spectrophotometers work. She explained that an object is illuminated by a light source of a specific spectral range, a detector collects any reflected light, and a unique spectrum is produced. While some light sources extend into the ultraviolet region of the electromagnetic spectrum, most operate within the range of visible light (400-700 nm). The two most common geometries for spectrophotometers are 0/45, in which the first number represents the angle (in degrees) of the light source and the second the angle of the detector, and integrating sphere.
As Sanderson described, some of the units tested had an integrating sphere geometry that takes into account specular reflectance; these spectrophotometers can be operated in either specular component excluded (SCE) or specular component included (SCI) mode. The aperture of the X-Rite units was set to 4 mm, matching that of MMA’s current spectrophotometer, and the Konica Minolta units were set to 8 mm, as they exhibited a range of apertures that in some cases were not adjustable.
To determine an appropriate replacement, several reference standards and sample objects were tested with the seven spectrophotometers. The reference standards were obtained directly from X-Rite, which is about to release a new Digital SG ColorChecker; the new target will include the same colors as the existing one but will utilize new pigments for some of them. For this comparative study, MMA obtained samples of the new standards to assemble its own large-format color checker. Ceramic BCRA calibration color tiles were also tested, as well as objects with varied surface qualities: chromogenic photographic prints (glossy and matte), watercolor paper, textiles, and paintings. Five readings were taken and averaged for each spot tested; the units were lifted and repositioned before each measurement to account for the margin of error in positioning that arises when monitoring color shift in objects over time with a spectrophotometer. Mylar® templates were created to facilitate positioning of the meters. All testing was completed by a single operator and resulted in approximately 12,000 readings!
To evaluate the variation in measurement between spectrophotometers, MMA’s X-Rite 968 was used as a master, and delta E values were calculated for each of the 140 X-Rite color references. Sanderson summarized the results as follows. Meters with a 0/45 geometry produced readings closest to those of the unit currently in use, which was not surprising, as both are 0/45 instruments. When operated in SCE mode to exclude specular reflectance, the integrating sphere instruments fared worse than the 0/45 units. The easiest-to-use instruments were lightweight, with built-in crosshair targets to facilitate alignment with a template. Finally, Sanderson introduced the concept of acceptable tolerance, suggesting that operators simplify the use of spectrophotometric readings by committing to a single instrument with a single set of standards. During the Q&A session that followed, a member of the audience asked which spectrophotometer MMA ultimately selected. Sanderson responded that the X-Rite eXact was chosen for several reasons: it is lightweight; it produces data reasonably consistent with MMA’s existing spectrophotometer (understanding that data translation will be necessary regardless of which instrument is chosen); and it offers long-term manufacturer support as well as continuity in data and software.
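A hypothetical sketch of this comparison method, assuming the simple CIE76 delta E formula and made-up readings for three patches (the actual study compared 140 references and may have used a different delta E variant):

```python
import math

def delta_e_76(lab1, lab2):
    # CIE76 color difference: Euclidean distance in L*a*b* space
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def mean_delta_e(master_readings, candidate_readings):
    """Average per-patch delta E between the master unit and a candidate."""
    diffs = [delta_e_76(m, c) for m, c in zip(master_readings, candidate_readings)]
    return sum(diffs) / len(diffs)

# Hypothetical L*a*b* readings for three patches on two instruments
master    = [(95.1, 0.2, 1.1), (52.3, 40.1, 20.5), (30.2, -5.0, -25.3)]
candidate = [(94.8, 0.4, 1.0), (52.9, 39.5, 21.0), (30.0, -4.6, -25.8)]

print(round(mean_delta_e(master, candidate), 2))  # → 0.68
```

Averaging per-patch differences this way gives a single agreement score per candidate meter, which is how one instrument can be ranked against another for "instrument agreement" in a replacement decision.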
The presentation concluded with a discussion of areas of further research within this project, specifically continued analysis of data pertaining to the UV-radiation source found in some of the meters, as well as the use of SCE settings in integrating sphere systems for more highly textured surfaces like those found in textile objects. Finally, it is a goal of MMA to complete processing of all data collected during this study and make it available to a wider audience, so that it might contribute to more standardized color communication within the field of conservation and allied professions.
42nd Annual Meeting – Photographic Materials Group, May 31, “Characterization of a Surface Tarnish Found on Daguerreotypes under Shortwave Ultraviolet Radiation” by Krista Lough
Krista Lough, graduate intern in photograph conservation at the Metropolitan Museum of Art and third-year student in the Buffalo State College (BSC) program in art conservation, presented an interesting talk on the presence and potential sources of a particular fluorescent tarnish found on many daguerreotypes when viewed under shortwave ultraviolet radiation. In addition to examination and photodocumentation of a set of daguerreotypes that exhibit this type of fluorescence, Lough also used Raman spectroscopy, scanning electron microscopy (SEM), and x-ray diffraction (XRD) to determine that the fluorescent tarnish is copper- and cyanide-based.
The presentation began with a summary of prior research on this subject by Lee Ann Daffner, Dan Kushel, John Messinger, and Claire Buzit Tagni. These studies corroborated Lough’s findings in characterizing the fluorescent tarnish as copper- and cyanide-based. These studies also showed that the tarnish was either removed or its fluorescence quenched when the daguerreotypes were treated with ammonium hydroxide.
Following a brief review of the phenomenon of fluorescence and its causes, Lough presented the photodocumentation of nine daguerreotypes that were examined during this study. The plates came from two sources—a private collection and a study collection at Buffalo State College—and only those from previously opened packages were examined. Lough’s research focused on determining the source of the fluorescent tarnish and its long-term effects. While the plates varied widely in condition, three primary types of fluorescent tarnish were identified: edge tarnish; rings and circles; and continuous film. The characteristic fluorescence was only observed when the plates were viewed under shortwave UV-C and not under longer wavelengths of ultraviolet radiation. Lough also noted that it was not always possible to associate fluorescent areas with tarnish perceived under visible light. Further, the greenish fluorescence was observed on the verso of some of the plates, as well as on the verso and beveled edges of the brass mats that accompanied some of the daguerreotypes. No strong connections could be made, however, between the fluorescence observed on the plates and the corresponding components of their once-sealed packages.
As part of her research methodology, Lough created a number of pure copper and silver-coated copper mock-ups. The mock-ups were treated with both potassium cyanide and sodium cyanide in an attempt to produce the same fluorescent tarnish observed in the 19th-century daguerreotypes. Ultimately, the tarnish only formed in the mock-ups treated with sodium cyanide in areas of exposed, pure copper. The fluorescent tarnish did not form on the plates treated with potassium cyanide or where the copper mock-ups were protected by a coating of silver.
To characterize the composition of the tarnish, the mock-ups and select 19th-century daguerreotypes were analyzed using Raman spectroscopy, SEM, and XRD. The Raman spectra obtained indicate that the composition of the tarnish was identical in all spots analyzed. SEM was used to create elemental maps of some of the tarnished areas on one of the 19th-century daguerreotypes. A higher concentration of copper, carbon, and nitrogen and a lower concentration of silver were revealed in the areas of tarnish analyzed. Further, a higher concentration of sodium was observed in the areas surrounding the tarnish spots, perhaps an indication of previous treatment with sodium cyanide. Finally, XRD analysis of the fluorescent tarnish on the historic plate produced peaks for silver sulfide and pure silver. Unfortunately, while cyanide was identified on one of the mock-up plates, it was not found on the historic daguerreotype examined; it is thought that the quantities present may be below the detection limits of the XRD instrument.
Lough concluded the presentation with a list of daguerreotype procedures documented in historic literature that could account for the presence of cyanide: electroplating, cleaning, brightening, fixing, gilding, and engraving by galvanism. She also identified avenues for future research, including investigation into whether or not the tarnish should be removed, the presence of copper cyanide on brass mats, and potential problems or effects on the daguerreotype that may arise if the tarnish remains untreated. Lough suggested that documentation of the fluorescent tarnish could be used to develop a monitoring program for daguerreotype collections and potentially to map trends during the examination of larger collections—to determine, for instance, whether a specific cyanide procedure is common to daguerreotypes from a particular period or location. In closing, Lough summarized the findings of her study in three main points: UV-C examination is a useful tool for understanding the condition of daguerreotypes; the fluorescent tarnish was positively identified as copper cyanide; and objects exhibiting this characteristic fluorescent tarnish should be handled with caution, as the tarnish is toxic.