Conservation of a Software-Based Sound Installation: Insights from the Museum of Modern Art’s Installation of David Tudor and Composers Inside Electronics’ “Rainforest V (Variation 1)”

Caroline Gil-Rodriguez
Electronic Media Review, Volume Six: 2019-2020

ABSTRACT

David Tudor’s (1926-1996) Rainforest performances used creative circuitry and live electronics to create a spatialized, resonant sound environment. The Rainforest series and its assorted versions are constructed upon experimentation and change, rarely being assembled or performed exactly the same way twice. The Museum of Modern Art’s acquisition of Rainforest V (Variation 1) (1973–2015) conceived by Tudor and realized by Composers Inside Electronics (CIE)—John Driscoll, Phil Edelstein, and Matt Rogalsky—brought forth an institutional examination of the practice of collaborative modes of conservation that places the work at a ‘point in a continuum’.

For Rainforest V (Variation 1), CIE created a self-running sound environment using Max/MSP and a sound library of more than 1,000 audio files dedicated to individual objects in the installation. Twenty objects were affixed with audio transducers and suspended throughout the gallery space. Audio files for each of the 20 objects were played on a Mac Mini computer and sent through audio amplifiers to the transducers, transforming each sculptural object into a “loudspeaker object” (Gray 1997).

This paper will describe methods for documenting and assessing the condition of sound art, including analyzing audio files and evaluating the material value placed on artist-provided control systems as part of an electronic chain. As media art installation derives meaning through systems of interconnected components, this paper examines how works that use this technology compel conservators to work through a coordinated web of signification to document the work-defining properties that will enable re-staging of the work in the future.

Introduction

A conventional line of thought among media art and contemporary art conservators is to approach their work as interpreters, mediators, or even “co-producers” of what is designated during the acquisition process as the “artist’s intent,” or, more accurately, the artist’s sanction of an artwork (Wharton 2015a). To achieve this, a time-honored tool in the conservator’s workbox is the practice of capturing—by way of reports and documentation records—the complex, interdepartmental, and oftentimes multidisciplinary decision-making process behind a given artwork’s iteration. Enacting institutional acceptance of, and adherence to, these policies and workflows requires dedicated staff, institutional buy-in, and a collaborative understanding among all those involved—often taking years to gradually build up:

Acknowledging the responsibility for accommodating documentation [of complex artworks] will often require institutional staff to change their long-established curating and exhibition practices. For example, curators, who used to maintain bilateral and exclusive relationships with artists, would be required to share their artist communication and negotiation with documenting staff. Media technicians would have to feel comfortable providing insight into problems and compromises. And exhibition managers in charge of budgets would have to approve the extra costs that would arise for creating on-site documentation at other venues. Pulling together the knowledge and experience that is otherwise fragmented and embodied by individuals across an institution is essential to taking responsible care of allographic (and autographic) collection works. Advocacy for new, coordinated documentation practices requires respectful education of colleagues. Such advocacy can profit significantly from the formation of an interdepartmental working group that focuses on the diverse issues of time-based media works in the collection. (Phillips 2015)

Provoking interdepartmental collaboration necessitates time, resources, and shared goals. We talk about participatory and collaborative design, ethnography, and media—the roots of which lie in the public-access television movements of the late 1960s and 1970s, themselves the root of much of what we call early video art—where the so-called stakeholders of a body of work are involved in a purposeful process, one that places careful attention on equity and on balancing power asymmetries. It is commonplace for media artists to work in teams of people, comparable to working in commercial film production, and to acknowledge their co-authorship.

With this in mind, is it possible to think of and practice participatory or cooperative media conservation as a co-constructed process of re-staging a work of art that is both critical and egalitarian in its procedures and the way it comes together? One can imagine a congregation of approaches that is in full recognition of the power imbalances at play between artists, artist representatives, art institutions and the museum workers tasked to assist in re-staging, conserving, and preserving artworks. It is an approach that aspires to shift disparity and create a pluriverse space that can function as a free exchange of ideas, and that embodies ethical stewardship of art collections.

This text represents an attempt to answer those questions. It is an experiment with one big caveat: I was only able to do this work because of an Andrew W. Mellon Fellowship in Media Conservation, a contingent work position that provided me with enough time and economic resources to investigate the intricacies of Max/MSP and its use by artists, to participate in an installation of the work, and to allow for plenty of research and writing time. It is also worth noting that this kind of work is by no means perfect and oftentimes happens in organizations with a good amount of institutional buy-in, the necessary economic means to document and conserve media art installations, and an ethical onus to do so.

In his 2015 work “Artist Intention and the Conservation of Contemporary Art,” Glenn Wharton, the first media conservator at the Museum of Modern Art (MoMA), considers the protracted care that media artworks in museum collections require. Wharton argues for a gradual transfer of knowledge, a process in which interpretive and exegetic authority eventually flows into the collecting institution:

Artworks that are meant to be reinterpreted for each iteration are considered variable. For these works, museums and other owners develop the capacity to make their own interpretive decisions as they gain an understanding of the variability inscribed by the artist. Yet many artists maintain creative relationships with their prior works even after the moment of sale. Some savvy artists specify in contracts that they or their designees must be present and have decision-making authority at each installation. Given the labor and per diem costs involved in such an arrangement, museums and collectors may elect to negotiate these terms. In some cases, a “weaning” process evolves. Artists and their agents may be brought in during the first few installations, but over time the interpretive capacity of the owner grows as knowledge is transferred from the artist.

Labor and economic issues may arise during the process of knowledge transfer, including non-permanent museum staff holding this knowledge, the evolving recasting of meaning associated with technology-based artworks, and the relevance of contemporary art museums as we know and experience them. The notion of an eventual exchange of knowledge is further complicated by the fact that, in order to be effective, it requires extensive personal contact, regular interaction, well-established trust, and reciprocal investment, especially in the case of embodied knowledge transfer (Marçal 2022). Thus, this article and the ideas discussed in it are not readily conducive to widespread adoption within conservation practices; they are, rather, reflections on how the field can grow and be informed by embracing different mechanisms for participation within ongoing, dynamic conservation strategies.

Constituting and Entering the Rainforest

Rainforest V (Variation 1) is a sound installation constructed from everyday objects, including a metal barrel, a vintage computer hard disc, and a sculpture made out of polyvinyl chloride (PVC) tubing, among others. The work was conceived by the composer-artist David Tudor and realized by Composers Inside Electronics (CIE), a group of Tudor’s disciples (founded in 1973 by John Driscoll, Phil Edelstein, Paul DeMarinis, Linda Fisher, Ralph Jones, Martin Kalve, David Tudor and Bill Viola) dedicated to the composition and live collaborative performance of electronic and electroacoustic music using both software and circuitry. Rainforest V (Variation 1) germinated from a sound score that Tudor wrote for Merce Cunningham’s choreography piece entitled RainForest in 1968, which mutated into Rainforest IV (1973), an electroacoustic installation. In 2009 CIE members John Driscoll, Phil Edelstein and Matt Rogalsky developed a self-running installation version of the work called Rainforest V (Janevski 2019).

In this artwork, 20 object-sculptures are fitted with sonic transducers and suspended in an acoustically treated space, increasing their resonance—essentially making each object or instrument a loudspeaker without a cone. The way in which the object-sculptures are hung throughout the gallery creates a visual and acoustic environment. Audio signals are relayed to the transducers via conventional speaker wire to create a resonant effect, turning the objects into what Tudor called “instrumental loudspeakers” (Janevski 2019). This influential electroacoustic sound artwork was chosen to inaugurate the Marie-Josée and Henry Kravis Studio—a modular, multipurpose gallery space for live performance and experimental programming that was part of MoMA’s 2019 mega-expansion project.

As visitors enter the gallery, they are encouraged to move around the space: to place their heads within an oil barrel to listen in, to kneel to get low and close to the three suspended planters, and to freely experience the resonant frequencies coming from the objects. Resonance is a phenomenon in which an external force or a vibrating system drives another system around it to vibrate with greater amplitude at a specific frequency. In other words, one object vibrating at or near the natural frequency of another object forces that other object to vibrate with far greater amplitude than it would at other driving frequencies. In Rainforest V (Variation 1) there is no suggested or deliberate pathway to move through the installation space. The artist’s intention is to generate as many routes and vantage points as possible and to get the audience to circulate through the space.
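The resonance principle at work here can be illustrated with a minimal damped, driven harmonic oscillator model. This is a physics sketch only, not the artists' tuning method; the damping value and frequencies are arbitrary illustrative choices:

```python
import math

def steady_state_amplitude(f_drive, f_natural, damping=0.05, force=1.0):
    """Steady-state amplitude of a damped, driven harmonic oscillator:
    A = F / sqrt((wn^2 - w^2)^2 + (2*zeta*wn*w)^2), with w = 2*pi*f.
    The amplitude peaks when the driving frequency nears the natural one."""
    w, wn = 2 * math.pi * f_drive, 2 * math.pi * f_natural
    return force / math.sqrt((wn**2 - w**2) ** 2 + (2 * damping * wn * w) ** 2)

# Driving an object at its natural frequency (here 440 Hz, an arbitrary
# example) yields a much larger response than driving it an octave away.
near = steady_state_amplitude(440.0, 440.0)
far = steady_state_amplitude(220.0, 440.0)
```

With light damping, `near` exceeds `far` severalfold, which is why a transducer's broadband signal excites each sculpture most strongly at that object's own resonant frequencies.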

Rainforest V (Variation 1) is controlled by a custom self-running sound environment built in Max/MSP, a modular, patch-based, visual programming software by the company Cycling ’74. Max patches, as they are commonly referred to, are composed of many Max objects (or modules) that perform various functions. Within a patch, Max objects are connected by virtual patch cords, enabling data and signals to flow between them. Max patches can be triggered by an action taken by the user or by a scheduled event. Sets of 30 to 50 audio files were created for each installed object and are played back in a randomized sequence by the Max patch player. With 20 objects in the gallery, the sound patterns that are generated will rarely, if ever, noticeably repeat.

The artwork’s technology chain starts with two identical sound file directories residing on two Mac Mini computers that run a custom Max/MSP software patch called “rf5_player” (fig. 1). The player sequences the sound files and selects the resonant object to which they are directed. The custom patch was created by CIE specifically to support these Rainforest V variations. The basic operation of the patch is to randomly select and play files from a folder designated for each object. An audio interface connected to each computer routes the selected sound file to an audio amplifier, and the amplifier is directly connected to a transducer attached to an object using conventional black speaker cable. The Max/MSP patch is intended to run continuously for an indefinite amount of time.

Fig. 1. Top-level view of the rf5_player patch.
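The basic behavior described for the patch, endless randomized playback from a per-object folder of files, can be sketched in a few lines. The following Python is an illustrative stand-in, not CIE's actual rf5_player logic; the reshuffle-per-cycle strategy is an assumption about how such a player might avoid audible repetition:

```python
import random

def randomized_player(sound_files, rng=None):
    """Yield an endless randomized sequence of files for one resonant
    object. The folder's contents are reshuffled each time they are
    exhausted, so every file plays once per cycle but the overall
    ordering rarely repeats."""
    rng = rng or random.Random()
    while True:
        order = list(sound_files)
        rng.shuffle(order)
        yield from order
```

In the installation's terms, one such generator would run per object, twenty in parallel, each drawing from that object's dedicated set of 30 to 50 files.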

The artists’ decision to create and rework a self-running environment arose when they were presented with the possibility of permanently installing the work in the early 2000s, along with CIE’s concerns over the work’s overall sustainability. At its core, the 2019 iteration of Rainforest V (Variation 1) signals a radical shift from its previous iterations and its family of works because it integrates automation—transforming the work from one previously activated by performers to one that involves self-operating sources of sound.

During the 2019 installation, CIE members Driscoll and Edelstein, with the assistance of Ed Potokar, were responsible for arranging and tuning each object to the gallery space. The methodology used for documenting this first iteration at MoMA after acquisition encompassed participant observation during CIE’s tuning process. Creation of the documentation took one week of in-gallery labor and involved 15 staff members, who collaborated on a rich, holistic, detailed account of the decision-making process and assembly of the installation. We had the fortune of working with a highly skilled audiovisual department, who made Computer-Aided Design (CAD) drawings of the installation and its relationship to the architectural space. These drawings recorded each object’s hanging points, arrangement, and the spotlight treatments for the individual instruments.

Through the detailed observation process and post-installation interviews, it was revealed that the artists were using the custom Max/MSP patch player as an in-situ compositional tool, a feature that was not obvious to museum staff. The artists would control and remotely adjust volume unit (VU) levels (a standard meter indicating signal level in audio equipment) in the gallery, using an iPad application released by Cycling ’74 called Mira. The Mira app allows users to remotely connect, adjust, and mirror Max patches through a WiFi or wireless ad hoc connection. In other words, with the assistance of this app, CIE was able to experiment and try different settings in the gallery, tuning each object separately and in unison over the course of the week, then saving those adjustments within the software patch. During one of our post-installation interviews, Driscoll remarked that “to be able to be in space with an iPad setting levels radically changed how we could tune [the work]. If you think of it, [it is] almost a spiral process, where you just keep spiraling tighter and tighter after more and more passes through the material and the objects. So, the tuning process, we’ve come to find, is not an after decision. It’s literally a very critical part of the installation process” (John Driscoll, March 20, 2020). At its heart, Rainforest V is a work about continuous refinement, with a rich and dense tapestry of collaboration and cooperation. That spirit was mirrored in our approach to documenting the work in its idealized state.

David Tudor and CIE

David Tudor (1926–1996) was an avant-garde pianist and composer born in Philadelphia, Pennsylvania, who pioneered the creation of live electronic music from the mid-1960s onward. His prolific career is punctuated by his copious collaborations, most notably with John Cage, Merce Cunningham, Robert Rauschenberg, and La Monte Young. His explorations with handcrafted musical instruments utilizing circuitry and resonance are considered watershed moments in the history of electronic and avant-garde music. Along with Gordon Mumma, Tudor built his own electronic instruments, which could be patched together with analog modular synthesizers. He used three fairly simple analog electronic concepts: amplification, the act of projecting or transmitting sound or speech through electronic equipment; equalization, the process of changing the balance of different frequency components in an audio signal; and phase shifting, where two or more versions of a sound or musical motif are played simultaneously but slightly out of synchronization. Tudor constructed complex systems that produced live sounds not possible on the popular Moog synthesizers of the time. As a result, his electronics gave rise to chaotic interactions and unusual aural frequencies that yielded a new mode of artistic expression. Composing music by soldering circuits, as was Tudor’s practice, relies on a process of triggering action and responding to it in real time (Kuivila 2001), where flipping a switch and modifying the voltage in a circuit happen in parallel, synchronic fashion. This method of composing music is transient and by nature partially impermanent.

In keeping with his life’s work, Tudor’s Rainforest series utilizes acoustic principles that playfully expose the physical dimension of sound. These artworks are equally concerned with how sound occupies space and with how loudspeaker-transducers electromechanically reproduce sound differently from acoustic instruments. Similarly, Tudor’s body of work is based on combining modular electronic devices, connected in electronic chains, that form complex feedback networks, and on indeterminacy as an approach to musical composition—that is, leaving some aspects of a piece open to chance or to the performer/interpreter’s free choice. The complexity of Tudor’s circuits, which relied on parallel channels of feedback, made it impossible for a performer to fully predict or control the behavior of the instrument. Nakai (2014) explains this elaborateness:

Once activated, a signal would be distributed throughout the network passing through various gain stages, filters, and modulators before being fed back to repeat the process over and over again. The multiple channels of signals would be transduced and output from loudspeakers at different points of the network. These loudspeakers were often distributed across the space to particularize the perception of sound at a given location. The output sounds could then be fed back once more into the electronic circuitry either through microphones (acoustic feedback), or via Tudor the performer who would decide on his next maneuver based on what entered his ears.

CIE is a group of composers and performers dedicated to the composition and live collaborative performance of electronic and electroacoustic music using software and circuitry of their own design. They are an essential component of the longevity and continuity of Tudor’s work. The group was formed in 1973 after the “New Music in New Hampshire” workshop in Chocorua, in which Tudor led a “Rainforest” workshop on sound transformation devoid of modulation. That workshop gave birth to one of Tudor’s most well-known works, Rainforest IV (1973), an electroacoustic environment and performed installation, where performers “design and construct up to five sculptures, which function as instrumental loudspeakers under his or her control, and each independently produces sound material to display the sculptures’ resonant characteristics” (Perloff 2010).

CIE initially comprised a group of young composers that included John Driscoll, Phil Edelstein, Paul DeMarinis, Linda Fisher, Ralph Jones, Bill Viola, Martin Kalve, and David Tudor. Current members include founders Driscoll and Edelstein, who are the shepherds of Rainforest V (Variation 1). Both were actively involved in the 2019 MoMA installation, along with D’Arcy Gray, Ben Manley, Ron Kuivila, Matt Rogalsky, Stephen Vitiello, You Nakai, Matt Wellins, and Doug Van Nort, along with invited guest performers.

CIE’s dynamic group structure, its principles, and its spirit of collaboration combine into a core structure that manifests in their performances. For the 2019 MoMA installation, CIE invited a multigenerational group of artists and composers to perform Tudor’s Forest Speech (1978/79). Led by Driscoll and Edelstein, this group was formed by Marina Rosenfeld, Stefan Tcherepnin, Spencer Topel, Jeremy Toussaint-Baptiste, Lea Bertucci, Ed Potokar, Margaret Anne Schedel, Phillip White, Ginny Benson, Cecilia López, Daniel Neumann, and C. Spencer Yeh. Whereas works in the Rainforest family utilize sounds reminiscent of insects, birds, and other wildlife creatures (ever changing yet sonically consistent), Forest Speech solicited human and wildlife vocal-like sounds from its performers. Because Forest Speech was not acquired by MoMA, its modality falls outside the scope of this article, but it is worth acknowledging the multiple branches that have been generated from the Rainforest matrix.

Throughout the process of ‘learning to play’ Rainforest IV under the direction of Tudor, CIE members have come to individual and collective insights into various levels of his creative process, from mundane technical matters to ethereal spiritual concerns that seep into the way that they have continued to work and steward this artwork. After Tudor’s passing in 1996, CIE was re-formed and the group installed Rainforest IV at the Judson Memorial Church for his commemorative service. In addition, several of Tudor’s other scores, including Rainforest IV, have been routinely re-performed by the group and by previous collaborators. Tudor clearly expressed his wish to have his music kept alive within the platform for which it was composed—live performance combining aspects of sound sculpture, live concert, performance, and audience interaction. More recently, artists close to Tudor’s practice have begun to facilitate the accession of his performance works into collecting institutions, transferring the right of re-performance by old and new performers to the dominion of the museum.

Hybrid media and performance works often challenge collecting institutions, because the creative interpretation they call for, as intended by the artists, confronts curators and conservators with an interrogation of our understanding of “authenticity” (Laurenson and Van Saaze 2014). In this way, following contemporary art conservation literature, one could consider CIE as the entity responsible for “reactivating” the work, intergenerationally passing down the “authenticity” or “authentic instances” of the artwork—but once these types of works enter institutional collections, the responsibility is subsequently co-managed with museum staff, and by and large by conservators. This protracted, relationship-focused way of conserving media artworks opens up new, although oftentimes difficult to sustain, ways of stewarding and caring for artworks that are more human and dialectically oriented. To sum up, Edelstein reflects on the layered extensiveness of Rainforest IV, and by extension of Rainforest V: “Writing about the work [Rainforest] feels like a one-dimensional flattening of a space [whose] beauty was found in folds of multiple dimensions” (Rogalsky 2006).

Keeping Time: The Basics of Synchronization, Clocks, and Musical Instrument Digital Interface

A creative device commonly used in electronic music and musique concrète is to layer sound recordings on top of each other. For example, an early technique used during the 1960s was to create an echo effect by connecting two tape decks, situated a small distance from each other, and letting the individual loops drift out of phase with each other as a tape loop travels from the supply reel of one machine to the take-up reel of the second. This technique was named the Time Lag Accumulator by Terry Riley, a composer and musician associated with the minimalist school of composition. The name was perhaps wordplay on the Orgone Accumulator, a pseudoscientific device developed in the 1930s by Wilhelm Reich to collect and store “orgone” energy, and on the accumulator, an energy storage device and an out-of-date term for a battery.

In a 1968 exhibition-production titled Magic Theater curated by Ralph T. Coe, Riley constructed a labyrinthine space of 12 cubicles made of tall glass and aluminum. “The outer cells contained microphones which registered the words and sounds of those who (eyes glazed, arms outstretched like robots seeking egress) passed through—these noises, then, were recorded on tape and then replayed within two minutes in the inner chamber. Like [Howard] Jones’s sound chamber [Sonic Games Chamber], the Accumulator had the laboratory mystique—a little human reactor, driven by a human mechanism” (Livingston 1968). Similar methods were developed by sound artists at that time, such as the tape loop compositions of Pauline Oliveros, in which she would connect two tape machines, one of which would record while the other played back from a single shared tape. The delay technique, which allows for the gradual construction of sound and, later, image textures through video synthesis, is only possible through the act of synchronizing and then desynchronizing two or more devices to record or play back together. This approach of manipulating sequential audio sources later evolved, and was famously used by English guitarist Robert Fripp and Brian Eno in a system they called Frippertronics. Afterward, this method became the bedrock for electronic, digital loop delays and new musical forms.
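In digital form, the two-deck tape delay reduces to a feedback delay line: each new sample is mixed with an attenuated copy of the output from some fixed time ago. The sketch below is a minimal illustration of that principle, not a reconstruction of Riley's or Oliveros's actual setups; the delay length and feedback amount are arbitrary parameters:

```python
def time_lag_accumulator(signal, delay_samples, feedback=0.5):
    """Digital sketch of a two-tape-deck delay. Each input sample is
    summed with a scaled copy of the output from `delay_samples` ago,
    so material accumulates and then gradually decays."""
    out = []
    for i, x in enumerate(signal):
        delayed = out[i - delay_samples] if i >= delay_samples else 0.0
        out.append(x + feedback * delayed)
    return out

# A single impulse re-echoes every `delay_samples`, halving each time.
echoes = time_lag_accumulator([1.0, 0.0, 0.0, 0.0, 0.0, 0.0], 2, 0.5)
```

Because the output is fed back into itself, a single sound keeps reappearing at ever-lower amplitude, which is precisely the accumulating texture the tape-loop technique produced.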

In layman’s terms, time can be measured by a clock: an apparatus used to measure, keep, and indicate time’s continued and indefinite progression. Synchronization of two or more devices is a critical feature of electronic media works. Synchronization relies on two crucial but separate aspects: devices starting at the same time, and proceeding at the same speed as time continues. Start time can be referred to as “start sync,” and the proceeding time as “continuous sync.” In an analog machine system, start and continuous sync are handled by SMPTE (Society of Motion Picture and Television Engineers) timecode. In hybrid analog-to-digital systems, where audio is sent to hardware through an A/D (analog-to-digital) device, timecode is used for communicating the start point, while word clock (a handshake signal that can be described as a digital pulse represented by a continuous square wave) determines the sampling frequency. Just as analog tape moves across the tape heads at a constant speed (e.g., 30 inches per second), the samples of a digital audio recording flow by at the sample rate (e.g., 44.1 or 48 kHz). The rate of this flow is controlled by the word clock, which sets the precise sample rate of the digital system.
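To see why a shared word clock matters, consider two devices whose sample clocks differ only slightly. The arithmetic below is a simple illustration, with the 1 Hz deviation chosen arbitrarily:

```python
def drift_seconds(nominal_rate_hz, actual_rate_hz, duration_s):
    """Seconds of misalignment accumulated when a device's real sample
    clock deviates from the nominal rate, measured against a device
    running exactly at nominal."""
    samples_produced = duration_s * actual_rate_hz
    return samples_produced / nominal_rate_hz - duration_s

# A device running at 48,001 Hz instead of 48,000 Hz drifts by
# 0.075 seconds over one hour, enough to smear a stereo image or
# cause audible clicks at buffer boundaries.
one_hour_drift = drift_seconds(48_000, 48_001, 3_600)
```

Locking every converter to one word clock eliminates this continuous drift, which is exactly the "continuous sync" role described above.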

In 1980, a group of musicians and music merchants met to standardize an interface by which new instruments could communicate user control input with other instruments and with the then newly prevalent microcomputer. The standard was meant to be a universal language—a response to the lack of interoperability between manufacturers that restricted people’s use of audio synthesizers. Dave Smith and Chet Wood, engineers at Sequential Circuits, an audio synthesizer company, presented a paper to the Audio Engineering Society in 1981 proposing the concept of a Universal Synthesizer Interface running at 19.2 k baud using regular quarter-inch phone jacks (Smith and Wood 1981). By early 1983, the Prophet 600 synthesizer became the first musical device to be connected via Musical Instrument Digital Interface (MIDI) to a Roland JP-6 synthesizer. In August 1983, MIDI Specification 1.0 was finalized. It had been developed by the MIDI Manufacturers Association, an agglomeration of five synthesizer manufacturers: Sequential Circuits, Roland, Yamaha, Korg, and Kawai. Later that year, Smith and electronics pioneer Bob Moog demonstrated a MIDI connection between two synthesizers. The result was the near-ubiquitous presence of MIDI-compatible electronic music devices in universities and in professional and home studios, and the empowerment of a new generation of emerging digital musical artists (Diduck 2013).
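To give a concrete sense of what the standard specifies, a MIDI 1.0 channel message is only a few bytes long. The sketch below assembles a Note On message following the byte layout of the published MIDI 1.0 specification (status nibble 0x9 plus a 4-bit channel, then a 7-bit note number and 7-bit velocity):

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI 1.0 Note On message.

    channel: 0-15, note: 0-127 (60 = middle C), velocity: 0-127.
    The status byte combines the Note On nibble (0x90) with the channel."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])

# Middle C on channel 1 (channel index 0) at velocity 100.
msg = note_on(0, 60, 100)
```

The fixed, compact byte format is what let instruments from all five founding manufacturers interoperate over a simple serial connection.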

The establishment of the microcomputer and its integration with MIDI-enabled instruments during the 1980s gave rise to a plethora of musicians swapping soldering irons for software and synthesizers, an increasingly affordable coalescence of skill, technology, and industry. Software such as Max/MSP, SuperCollider, VCV Rack, and Reaktor, in its application to the composition of experimental music, deliberately emulates the modularity of 1970s-era electronic technology. Since then, and as a result of the proliferation of MIDI-enabled instruments, a multitude of software plug-ins and features have been designed to mimic the behavior, aesthetics, and sonic characteristics of obsolete yet beloved hardware synthesizers.

Cycling Through: Max/MSP History and Structure

Computer software has unparalleled potential for creating and manipulating sound and video. Visual programming languages such as Max/MSP and Jitter—a toolkit for working with video and graphics introduced to the Max ecosystem in 2003—endow artists with tools that automate, shift, and alter signal sources in real time, as one could with a hardware modular synthesizer. Max (or Max/MSP/Jitter), as a high-level visual programming environment, is commonly used in music and multimedia applications to this day. Max provides users with an interactive programming environment with extensive libraries of precompiled and pretested algorithms. Max programs (or patches) are made by arranging and connecting object blocks through virtual patch cords within a visual canvas. The object blocks behave as self-contained programs, each executing a determined function; they are the primary building unit of Max. Under the hood, the object blocks are dynamically linked software libraries that receive an input and generate a specified output. Inputs to objects are called messages and can be pre-recorded sound files, keyboard presses, or digital data. Outgoing messages can be sent to hardware output devices such as a loudspeaker or an audio interface, with connections made through virtual patch cords in the Patcher window, the most basic type of window interface in Max. Max allows for rapid prototyping, the creation of stand-alone programs, and unconfined use of the software libraries by non-programmers and non-specialized individuals.
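The dataflow idea behind object blocks and patch cords can be mimicked in a few lines of ordinary code. The class below is a toy illustration of that paradigm only, not Cycling '74's API; the names and the example "patch" are invented for demonstration:

```python
class MaxObjectSketch:
    """Toy model of a Max-style object block: it applies its function to
    an incoming message and forwards the result down its virtual patch
    cords to any connected downstream objects."""

    def __init__(self, func):
        self.func = func
        self.outlets = []  # downstream objects, i.e. the "patch cords"

    def connect(self, other):
        self.outlets.append(other)

    def send(self, message, sink):
        result = self.func(message)
        if self.outlets:
            for obj in self.outlets:
                obj.send(result, sink)  # message flows down the cords
        else:
            sink.append(result)  # terminal object: deliver the output

# A two-object "patch": double the incoming value, then add one.
doubler = MaxObjectSketch(lambda x: x * 2)
adder = MaxObjectSketch(lambda x: x + 1)
doubler.connect(adder)
```

Sending a message into `doubler` propagates through the chain just as a bang or number flows through connected boxes in a Patcher window: the program's control flow is the patch's wiring.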

Max is currently developed, maintained, and licensed by the San Francisco–based software company Cycling ’74, founded by David Zicarelli. The building blocks of what we now know as Max/MSP originated at the Institute for Research and Coordination in Acoustics/Music (IRCAM) in Paris. The software was initially developed as a program called Patcher, authored by Miller Puckette, a mathematician and musician, in 1988. Patcher was a graphical environment for making real-time computer music with MIDI-controllable synthesizers. The real-time function of the program was probably the first of its kind for a music application, influenced by the MUSIC-N programs of Max Mathews and by Puckette’s earlier work at the MIT Experimental Music Studio (the precursor to the MIT Media Lab). Mathews was a pioneering computer musician who worked at Bell Labs and wrote the MUSIC programming language, the first widely used computer program for generating digital audio waveforms through direct synthesis. Puckette, in naming the program as a tribute to Mathews, “was specifically acknowledging Max’s work on a pioneering real-time scheduling algorithm called RTSKED developed in the early 1980s, which Miller credits as a fundamental influence on the design of his software” (Zicarelli 2011). The first composition that used the Patcher editor was a solo piano and live electronics piece called Pluton by French composer Philippe Manoury in 1988. Manoury’s piece was composed and performed using a computer, piano, and a MIDI interface. The Pluton patch, now existing in various forms as a virtual score, is in essence the first Max patch (Puckette 2002).

Musicians enthusiastically adopted Max/MSP, despite the break from the “rules of computer science orthodoxy” (Puckette 2002) evident in its design, application, and ease of use. Max exposed composers to real-time MIDI control and algorithmic composition and allowed for the creation of stand-alone applications. Another feature of the program’s design is a real-time scheduler that could be used in a music setting and would react much like a musical instrument. These paradigms reflect the features of Max/MSP and the philosophies of its creator, who, when asked to reflect on the creation of Max, provided the following context (Puckette 2002):

This notion of task can be seen clearly in the boxes of the Max paradigm, which “trigger” each other via their connections. The idea predates the invention of MIDI (and Max was never conceived as a MIDI program), but the availability of MIDI I/O for Macintosh computers was convenient for early Max users because MIDI shares the sporadic quality of control events which have their roots much earlier in the “MUSIC N” programs (and even much earlier still, in the well tempered clavier keyboard that MIDI models). The parallelism so visually apparent in a Max patch is intended to allow the user to make computer programs that follow the user’s choices, not the program’s. This was necessary so that Max patches (the programs) could work as musical instruments.

By 1990, IRCAM had licensed the prototype developed by Puckette to Opcode Systems, founded by Dave Oppenheim, which was producing MIDI sequencing software, for further development as a commercial product. Opcode began selling a commercial version of Max, developed by Zicarelli, in 1990. Meanwhile, Puckette, realizing that personal computers had reached performance levels that made real-time audio processing on a CPU viable, developed a new open-source environment called Pure Data. When Opcode was bought out by Gibson Guitar Corporation, Max development moved to Cycling ’74 (founded by Zicarelli), which released MSP, a set of audio signal-processing extensions modeled on Pure Data; the combination became Max/MSP. In 2003, Jitter, an extension that allows for real-time manipulation of video and 3D graphics, was introduced, and, in 2009, Max for Live integrated Max with Ableton Live, a popular digital audio workstation (DAW).

Fig. 2. Unlocked view of the rf5player patch.

Conceptually, Max/MSP can be thought of as a collection of boxes interconnected by lines (fig. 2). The expressive quality of the program comes from the interconnection and intercommunication provision of the boxes. This quality is present in the program even though the contents—the underlying code of the boxes—are usually hidden from the user. In this way, a user may schedule real-time tasks and manage the intercommunication between different sets of operations without necessarily having a proficiency in computer programming. Structurally, Max boxes or objects communicate by sending each other messages through virtual patch cords, and the messages are sent at a specific moment in time, triggered either by a user action or by a scheduled event. An example of a scheduled event is the “metro” or metronome object, which outputs bangs (a bang message causes an object to trigger its output) at regular, user-specified intervals, along with an object like “setclock,” which creates and controls an alternative clock running alongside the standard Max/MSP millisecond clock. These time-based objects can be used to create generative music (a term coined by Brian Eno that denotes music that is ever changing and created by a system) or algorithmic compositions, which use a formal set of rules to create music. Today, musicians, live performers, and media artists extensively use Max/MSP for real-time signal processing, prototyping, and time-based media installations.
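The box-and-bang paradigm can be loosely illustrated outside of Max. The following Python sketch is an analogy only, not Max code and not the artists' work: a "metro"-like object sends bang messages down stand-in "patch cords" to a downstream receiver, which reacts by picking a random note, a toy version of clock-driven generative music.

```python
import random

class MetroLike:
    """Rough Python analogy (not Max code) of a 'metro' object:
    it sends a 'bang' message to every connected outlet."""

    def __init__(self, interval_ms):
        self.interval_ms = interval_ms  # nominal interval; Max's scheduler would enforce it
        self.outlets = []               # stand-ins for virtual patch cords

    def connect(self, receiver):
        self.outlets.append(receiver)

    def tick(self):
        # In Max the real-time scheduler fires this; here we call it by hand.
        for receiver in self.outlets:
            receiver("bang")

# A downstream "object" reacts to each bang by choosing a random note.
notes = []
metro = MetroLike(500)
metro.connect(lambda msg: notes.append(random.choice(["C", "E", "G"])))
for _ in range(4):
    metro.tick()
print(notes)  # four notes drawn at random from C, E, G
```

The point of the analogy is only the dataflow: the metro does not know what its receivers do, just as a Max object only passes messages through its outlets.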

Edelstein, who was primarily responsible for creating the rf5player patch for Rainforest V (Variation 1), reflected on this during one of our interviews: “I had been using these techniques for software generation and configuring. So instead of playing one file, you could play 20 (files). I didn’t know about the poly object . . . So I wrote my own.” A poly object allows a patcher to be encapsulated inside an object box and allows one or more instances of a patcher to be loaded. Combined with the “random” function, these objects comprise the topographical layer of the player patch used in this variation of Rainforest V.

The “random” object embedded within the spaghetti tangle of the rf5player patch responds to a load bang that is initiated upon startup of the computer. Following that, the “umenu” object is launched: this object contains the file names for a given object-sculpture, which are saved in a corresponding folder. Then the “sfplay” object starts playing the first file. When the “sfplay” object finishes playing a file, a bang, or execution trigger, is sent to the “random” object. The “random” object selects an audio file from the directory, and the “umenu” object displays the next file to be played and sends a message to the “sfplay” object. This aspect of the patch is not visible to a user unless the patcher’s edit view mode is activated (the patch automatically opens in presentation mode). This brings us to the next section, in which I will discuss useful techniques for assessing the condition and functionality of a Max/MSP patch using the rf5player program as an example.
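The playback loop described above can be condensed into a short sketch. The following Python code is an illustrative simulation of that logic, not the artists' patch, with hypothetical file names standing in for one object-sculpture's folder of audio files.

```python
import random

def simulate_player(audio_files, plays, seed=None):
    """Illustrative simulation (not the artists' code) of the rf5player loop:
    play a file, then let the finishing bang trigger 'random' to pick
    the next file from the same folder."""
    rng = random.Random(seed)
    played = []
    current = rng.choice(audio_files)      # load bang: a first file is selected
    for _ in range(plays):
        played.append(current)             # sfplay plays the file to the transducer
        current = rng.choice(audio_files)  # bang -> random -> umenu -> sfplay
    return played

# Hypothetical file names standing in for one object-sculpture's folder.
folder = [f"object07_{i:03d}.aif" for i in range(12)]
print(simulate_player(folder, plays=3, seed=42))
```

In the installation itself, twenty such loops run concurrently, one per suspended object, which is what gives the environment its unsynchronized, ever-shifting character.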

Condition Checking Max/MSP Patches and the Pseudocode Approach

In the field of media conservation, the practice of condition checking artist-provided deliverables is employed to initially establish and record the principal features and parameters of both the content and display of time-based media artworks. With single-channel and multichannel video works, a media conservator will view and play back files pertaining to the work in real time, ideally on display monitors of varying technologies, noting any features or characteristics observed within visual and aural spheres. Museums and media art institutions across the world have developed their own styles of in-house reporting, notably with the Guggenheim, MoMA, The Metropolitan Museum of Art, and Matters in Media Art publishing their report templates for condition assessment, display, and iterations in recent years.

[This] archive will be a resource for staff at the museum in planning and executing the next exhibition. This buildup of documentation is now common practice for conceptual, ephemeral, and variable works. Creating this documentation involves considerable time and labor to record what the work has been, what it is, and what it can be in the future. Artists fill out questionnaires, create installation manuals, and participate in interviews. Museum staff produce their own documentation through each life stage of the artwork. New documentation practices extend protocols established for more traditional collections, as the artwork portfolio builds from acquisition through each phase of an artwork’s institutional life, including storage, exhibition, loan, and conservation. (Wharton 2015b)

Following this premise, if an artwork consists of a bouquet of audio files, software, and hardware, a conservator would be responsible for mocking up the artwork to the best of their ability and registering and recording the technical characteristics, settings, and functional behaviors of the work.

In the case of Rainforest V, a Max patch is used for multichannel audio file playback. Just as there is an overarching logic in the artists’ placement and tuning of the installation objects in space, there is also an overarching logic to how the Max patch was put to use in the process of installing and refining settings in a gallery exhibition. This process was initially documented through participant observation of the artists at work in the gallery as they placed, assembled, and adjusted the objects. In collaboration with my colleagues from sculpture conservation, registration, and collection management, I was able to draft an assembly manual that included detailed, nuts-and-bolts descriptions of how the object-sculptures were pieced together. The process of engaged observation revealed that, in addition to containing and administering the playback of the work’s audio files, the player patch also saved and registered setting and tuning schemas that constituted a critical aspect of how this artwork was received in time and space.

In the early stages of acquisition, the museum had received a DMG disk image of the build environment used in a prior installation. This version of the patch did not have the present, in-gallery programmed settings and value parameters, particularly the JSON data pertaining to the current installation being considered. To prepare for condition checking the patch, I disk imaged the two Mac Pro 2013 computers running macOS Mojave 10.14.1 that were used in this iteration. In doing so, I was able to preserve the in-gallery levels and settings after they were calibrated by CIE, and to extract the patch for close inspection.

At the time, a typical disk-imaging workflow entailed using the command-line version of FTK Imager for Mac. That version has not been supported since 2012, and it fails on Mac computers using newer file systems—particularly Apple File System (APFS), the proprietary file system for macOS High Sierra (10.13) and later. For this reason, I imaged the computers using the dd command via Target Disk Mode while they were connected to a separate destination computer. This is a bit of a detour, but it is worth mentioning that my colleagues and I contributed to the development of a disk-imaging protocol at MoMA that includes a report, entered into the museum’s TMS database, noting the following circumstances, which we deemed important to save for posterity:

  • The rationale for creating a disk image.
  • The imaging process and software version used (including whether the drive was removed while disk imaging, etc.).
  • Write-blocking mechanism (connection type, model number), including version and settings.
  • Person responsible for creating the disk image, date of creation.
  • Checksum.
  • Partitions present on disk:
    • Byte offset
    • Name of logical volume
    • File system
  • If bad sectors were found while creating the disk image and where they are situated (this is normally an output of Guymager or FTK Imager).
  • Notes: If any issues occurred while disk imaging, note the problems encountered and how they were solved.
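As a sketch of how such a report might be assembled programmatically, the following Python code computes a chunked SHA-256 checksum of an image and collects the checklist fields into a JSON record. The field names and values are illustrative only, not MoMA's actual TMS schema, and a tiny stand-in file substitutes for a real disk image so the sketch is self-contained.

```python
import hashlib
import json
from datetime import date

def checksum(path, algo="sha256", chunk_size=1024 * 1024):
    """Hash a disk image in fixed-size chunks so large images
    do not have to be read into memory at once."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# A tiny stand-in "disk image" so the sketch is self-contained.
with open("demo.img", "wb") as f:
    f.write(b"\x00" * 4096)

# Field names are illustrative, not MoMA's actual TMS schema.
report = {
    "rationale": "Preserve in-gallery levels and settings after calibration",
    "imaging_process": "dd via Target Disk Mode",
    "write_blocker": "read-only mount (software)",
    "created_by": "media conservator",
    "date_created": date.today().isoformat(),
    "checksum_sha256": checksum("demo.img"),
    "partitions": [
        {"byte_offset": 0, "logical_volume": "Macintosh HD", "file_system": "APFS"}
    ],
    "bad_sectors": None,
    "notes": "",
}
print(json.dumps(report, indent=2))
```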

As is the case with all computer-based artworks, it is crucial to gather and deconstruct complex data, such as software packages, patch libraries, and executable files, and to document the authoring system used:

When captured from a computer system, we can consider the disk image a representation of a complete software environment, incorporating an operating system, installed programs, and user data. This image is useful not only because it encapsulates all the important bits of data, but because we can use it as a basis for emulation or virtualization as a means of accessing these environments in the long-term (Ensom and McConchie 2020).

When working with complex software-based installations, the ideal starting point or baseline documentation framework is to create a snapshot of the system at work, in its idealized state. From the computer’s disk image, I was able to extract and copy the Max patch to my workstation for evaluation. Max/MSP can be downloaded and used for free to view a patch, but to edit and save those edits, a paid-by-subscription license is needed. This presents a high risk for collecting institutions because, as I will describe hereinafter, the act of unlocking a patch to examine its different sections and layers is considered to be an edit (or paid) feature. Moreover, what happens to the artwork when the software is no longer supported? And, how can a conservator document the internal processes of the Max patch so that if/when the software is no longer available it can be replicated with something else that mimics the same functionality of the original patch?

To prepare for the examination, I enrolled in an advanced Max/MSP individualized study course with Hans Tammen at Harvestworks Digital Media Arts Center. Tammen is a renowned composer and professor at the School of Visual Arts, Hunter College, and Harvestworks, where CIE members also serve as board members. Under the guidance of Tammen, I was able to develop an understanding of the rf5player patch and examine its component parts under the counsel of someone who had intimate technical and contextual knowledge of the work and its creators. One of the crucial things I learned was that a Max patch can be saved or delivered in two ways:

  • A collective, which contains all of the necessary resources to use a patcher. These resources include abstractions, external objects, and image or audio files. Making a collective allows a project to be received with the confidence that it will not be missing anything to run properly. One caveat is that collectives, unlike patcher files, are generally platform specific and will require Max or Max Runtime to operate.
  • A stand-alone application, which combines a collective with the Max Runtime application, resulting in a file that looks like a standard executable application. A stand-alone is created with the ‘Build Application/Collective’ item in Max’s File menu, which automatically includes all of the objects in the build, including the runtime files. Neither Max nor Max Runtime needs to be installed on the computer to run a stand-alone application.

In the following, I will summarize a set of criteria to aid in guiding a condition assessment of a Max/MSP artwork:

  1. Checking that the information is complete: All of the expected files were received.
  2. Checking the usefulness of the information: The patch can be executed and functions as expected.
  3. Authenticity of the information: The patch is the same as the one used in its previously realized or sanctioned manifestation. This can be confirmed by performing a checksum on the patch and by individually verifying that the patch retains the levels, settings, and structure of the one used in exhibition.
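Criteria 1 and 3 lend themselves to simple automation. The Python sketch below is illustrative: the manifest format is hypothetical, and the demo file stands in for a real deliverable. It checks that every expected file was received and that each matches a checksum recorded at acquisition.

```python
import hashlib
from pathlib import Path

def verify_deliverables(folder, manifest):
    """Sketch of criteria 1 and 3: every expected file is present
    (completeness) and matches the checksum recorded at acquisition
    (authenticity). 'manifest' maps filename -> SHA-256 hex digest."""
    problems = []
    for name, expected in manifest.items():
        path = Path(folder) / name
        if not path.exists():
            problems.append(f"missing: {name}")
            continue
        actual = hashlib.sha256(path.read_bytes()).hexdigest()
        if actual != expected:
            problems.append(f"checksum mismatch: {name}")
    return problems

# Demo with a hypothetical deliverable checked against its own digest.
Path("deliverables").mkdir(exist_ok=True)
(Path("deliverables") / "rf5player.maxpat").write_bytes(b"{ }")
manifest = {"rf5player.maxpat": hashlib.sha256(b"{ }").hexdigest()}
print(verify_deliverables("deliverables", manifest))  # -> [] (no problems)
```

Criterion 2, that the patch executes and functions as expected, still requires running the patch in Max itself.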

An additional challenge posed by works utilizing Max/MSP in their construction is how to condition check the patch and what status to attribute to it. At MoMA, the “status” TMS attribute is used to reflect the artist-assigned status of a component. The status attribute is assigned by the conservator and may be an “artist master,” “artist proof,” “exhibition copy,” “research copy,” “artist ancillary material,” or “archival.” Suffixes are also added to a component number to establish the relationship of a file to an artwork. For example, artist master and derivative files, such as an exhibition file, would be assigned a “.x” suffix followed by a number in sequence: 200.x1, 200.x2, and 200.x3. In the case of Rainforest V (Variation 1), the Max patch was assigned a .x suffix and was categorized as an artist master because the patch contains the audio files that make up an essential part of the work. This selection was made in spite of the fact that members of CIE themselves have expressed an openness to, and acceptance of, using something other than Max/MSP to reactivate the work in the (somewhat) near future:

Each time we revisit [Rainforest V (Variation 1)], we think that this is a better way. But we talk about these Tsunami file players (1). I could have also done it with something like SuperCollider. It could have been done in [Pure Data], could do it in Python or Java … it’s pretty primitive and it’s turned out to be fairly robust in Max. The problem has been as the versions of Max increase, I feel obligated to make sure it works with upgrades to the operating system and upgrades to Max and the certain convenience of being able to use the iPad for adjusting the volumes [with Mira] (Edelstein 2019).

In Max, a “project” is, as in other programming environments, a single folder that neatly organizes multiple files such as patchers, media, images, and data. File extensions for patchers may be .max (up to Max 4) and .maxpat (Max 5 and later); Max 5 patches are not backward-compatible. At the time of writing, Max 8 saves patchers with the .maxpat extension; these files are plain-text JSON (JavaScript Object Notation), a data-interchange format that uses human-readable text to store and transmit data objects. There is no open specification for .maxpat files, but they are more or less readable and interpretable by JSON viewers or readers. In some cases, a patch may have a “pattr” object, which acts as something akin to an alias for data, or for data inside another object. In the case of the rf5player patch, the rf5_pattr.json file holds the volume settings.
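Because .maxpat files are plain-text JSON, a conservator can inspect them with generic tools. The following Python sketch parses a toy fragment shaped like a .maxpat file; since the format has no open specification, the key names used here ("patcher", "boxes", "maxclass", "text") should be treated as observed conventions rather than guarantees.

```python
import json

# A toy fragment shaped like a .maxpat file. Real files carry many more
# attributes; the keys below are observed conventions, not guarantees.
maxpat_text = """
{
  "patcher": {
    "boxes": [
      {"box": {"maxclass": "newobj", "text": "sfplay~ 2"}},
      {"box": {"maxclass": "newobj", "text": "random 1000"}},
      {"box": {"maxclass": "umenu"}}
    ]
  }
}
"""

patch = json.loads(maxpat_text)
# List each box by its object text, falling back to its class name.
objects = [b["box"].get("text", b["box"]["maxclass"])
           for b in patch["patcher"]["boxes"]]
print(objects)  # -> ['sfplay~ 2', 'random 1000', 'umenu']
```

An inventory of box classes extracted this way can be recorded in a condition report alongside the checksum of the patch file itself.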

A straightforward way to check Max file deliverables is to open the patch in Max and verify whether any external libraries are missing from the delivered patch. To do this, go to File > List Externals and Subpatcher Files in the top menu in Max. Presentation and patching modes can be toggled to scrutinize any underlying subpatches included in the build. One can also “encapsulate” a section of a patch, which places that section within its own subpatch. Encapsulation allows a user to hide parts of the patcher logic to make projects easier to read, condensing the lower-level logic without changing anything at the higher level of programming.

Other useful tools are the object inspector and the console, which displays information and error messages (errors in red, warnings in yellow, and internal errors in blue) that assist in debugging a patch. Data flow in a Max patcher can sometimes be extremely complex and mystifying. One way to resolve the confusion is to use the built-in debugging tools available in Max’s Debug menu. Additionally, the object inspector can reveal an object’s essential argument, or the variables that are called or invoked, and provide a complete list of all attributes associated with the selected object. For example, if the artwork’s Max patch contains a random object, which takes an argument for the range of numbers to produce, the inspector would display that argument: a random object set to 1000 outputs integers between 0 and 999 (1,000 possible values).

In summary, actions that a conservator can perform to support a condition check on a Max patch are as follows:

  1. Run the patch and verify that it functions as expected.
  2. Run the Max patch from its “source file.”
  3. Unlock the patch to reveal the elements that make up the patch.
  4. Toggle between patching mode and presentation mode and note any differences in the arrangement of the blocks. Provide a description of the objects in the patch and their structure.
  5. Consult the Help Files and inspector menu in an unlocked patcher.
  6. Select “List externals and subpatcher files” from the file menu, and document them in the condition report.

To accompany my deep dive into Max/MSP, museum staff and the artists held a series of post-installation interviews in order to collectively excavate the boundaries of the work. This gave us the opportunity to ask lingering post-installation questions, such as what levels of oxidation are acceptable on the “brass strip” object, and to gain a deeper understanding of the collaborative installation experience. During our last interview, which concerned the software components of the work, I used a pseudocode approach, a methodology suggested by my Max/MSP tutor. In computer science, pseudocode is a notation style resembling a simplified programming language that is used to design programs or provide a high-level description of an algorithm. To be clear, I did not write pseudocode for this patch but used its tenets to guide the interview. Pseudocode is an annotated, informative text written in plain language that represents an algorithm or software function. It has no established, standardized syntax and cannot be compiled or interpreted by a computer.

It can, however, serve as an informal, human-readable, high-level description of the operating principle of a computer program—a digest of the building blocks of the patch and its basic functionality. During a video-recorded discussion, I gently prompted Edelstein—the person mainly responsible for the patch, and an experienced programmer—to guide us through the different functional objects of the program, starting from its first load-bang sequence. In this way, a video recording of the artist navigating a software program can be thought of as a snapshot of the work, similar to a disk image: a piece of a naturally evolving material assemblage that constitutes the ‘essence’. For example, Edelstein’s bird’s-eye description of the patch can shine a light on possible future recombinations of the elemental components of the work:

So, the simple structure of the patch is that there’s a little construction of a file player. Essentially, there’s 20 copies . . . there’s 10 copies of that running on each machine and essentially each of one of these little file player objects takes a folder and then plays randomly, plays the clip through and then selects another one in the folder. So, mid-level, underneath the scenes are these 20 file players and then the top level really controls the volume of each of the players. We used Mira, essentially, as a way of getting remote access to control the volumes and then use the pattr object in Max for essentially saving the presets. This was never ideal. This is always kind of an act of convenience (Edelstein 2020).

The resulting, richly detailed description of the software—abstracted from the actual code—serves as a middle ground that honors the technology used to run an artwork at a given point in time while refusing to oversimplify it. As an archival resource, the pseudocode interview could enable a conservator in the future to specify, in pseudocode, an appropriate replacement technology. The approach facilitates the comparison and migration, or translation, of the “logic” of a program, with the understanding that in certain cases the Max patch that runs an artwork may not warrant material maintenance in perpetuity, and that in other cases its source code may not be available to a conservator.
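By way of illustration only (this is my own retrospective distillation of Edelstein's walkthrough, not an artist-supplied document), a pseudocode digest of the rf5player patch might read:

```
ON COMPUTER STARTUP (load bang):
    FOR each of the 10 file players on this machine (20 across both):
        point the player at its object-sculpture's folder of audio files
        choose a file at random and play it through to the end
        WHEN the file finishes:
            choose another file at random from the same folder; repeat
TOP LEVEL:
    expose a volume control for each file player
    allow remote adjustment of volumes (via Mira on an iPad)
    save and recall volume presets with the pattr object (rf5_pattr.json)
```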

Conclusions

As with other types of software-based artworks, production materials utilized for a given artwork should be acquired and preserved by a collecting institution only after careful consideration and determination that these materials are inherent and/or hold a characteristic, aesthetic or conceptual feature of the work. Long-term approaches to preserving software-based artworks are multi-pronged and include registering a snapshot of the software in its performative, idealized state. Becoming aware of the software dependencies on which an artwork relies, and attaining a deep understanding of the software’s functionality, are essential factors in documenting media artworks. This authentication process will enable a conservator to track software obsolescence risks and, if need be, migrate the software to another technology.

An important step in the characterization of an object or a functional piece of an object is to learn and make sense of the materials used by the artists. This assures the long-term preservation of components using these materials, as well as their role within the system constituting the artwork. Peeling back the layers of compiled or standalone Max/MSP applications (as seen in figures 2 and 3) can reveal the complex mechanisms and operations that are assembled by artists working with software. Maintaining and fostering relationships with artists or artist representatives is a way to ensure that proposed preservation actions are in line with the artist’s vision and conception of the artwork. This is not an issue of control or micromanaging, but rather an opportunity for conservators to take an active role in the co-creating process of an artwork.

[I]t’s not because we want the control, it’s because that’s part of the process for us too … we’ve never thought of them as historical fixed entities and as much as they’re collected in different museums, we still think of them as sort of living works that are not fixed. So therefore, for us it’s sort of an anathema to think that: “Oh well, here’s the rules and here’s the floor plan to do it.” Because that takes the fun out of it. I mean we really like that process of making them and it’s sort of strange to say you’ve acquired a work and theoretically that means it’s done and we may be a little anachronistic in that sense, but we still think of them as sort of living works in progress (Edelstein and Driscoll 2020).

Fig. 3. Snapshot of “random” and “sfplay” objects with virtual patch cords in the rf5player patch.

ACKNOWLEDGMENTS

I am grateful to the MoMA Conservation Department, especially to my sculpture and media conservation colleagues. Very special thanks to Kate Lewis, Ellen Moody, Peter Oleksik, Flaminia Fortunato, Lia Kramer, and Gene Albertelli. I am indebted to Amy Brost, who guided, supported, and heavily influenced this exploration. This work was supported by the Andrew W. Mellon Foundation. Many thanks to the EMG (Electronic Media Group) and its review editors, especially Meaghan Perry and sasha arden, for their patience and guidance with draft submissions during the tumultuous 2020 year. Last, but not least, very special thanks to Composers Inside Electronics, Phil Edelstein, John Driscoll, Hans Tammen, Carol Parkinson and the Harvestworks Digital Media Arts Center for trusting in me and their invaluable insight.

NOTES

  1. Tsunami file players are able to extend polyphony to 32 mono or 18 stereo simultaneous uncompressed 44.1 kHz, 16-bit tracks. Each track is able to start, pause, resume, loop, and stop independently, and can have its own volume setting. Newer models have eight audio channels. These file players were developed by Jamie Robertson.

REFERENCES

Diduck, Ryan Alexander. 2013. “The 30th Anniversary of MIDI: A Protocol Three Decades On.” The Quietus. https://thequietus.com/articles/11189-midi-30th-anniversary.

Edelstein, Phil, and John Driscoll. Conservation interview with Amy Brost and author. March 20, 2020.

Ensom, Tom, and Jack McConchie. 2020. “Preserving Virtual Reality Artworks: White Paper.” Unpublished manuscript, last updated March 2020. Google Docs file.

Gray, Darcy Philip. 1997. “The Art of the Impossible.” David Tudor website. https://davidtudor.org/Articles/dpg_impos.html

Janevski, Ana, and Martha Joseph. 2019. “The Evolution of David Tudor’s Rainforest.” MoMA Magazine. https://www.moma.org/magazine/articles/166

Kuivila, Ronald. 2001. “Open Sources: Words, Circuits and the Notation/Realization Relation in the Music of David Tudor.” Presented at the Getty Research Institute Symposium “The Art of David Tudor.” https://www.getty.edu/research/exhibitions_events/events/david_tudor_symposium/pdf/kuivila.pdf

Laurenson, Pip, and Vivian Van Saaze. 2014. “Collecting Performance-Based Art: New Challenges and Shifting Perspectives.” In Performativity in the Gallery: Staging Interactive Encounters, edited by Outi Remes. Bern: Peter Lang. 27–41.

Livingston, Jane. 1968. “Magic Theater.” Artforum 7 (2): 66-67.

Marçal, Hélia. 2022. “Ecologies of Memory in the Conservation of Ten Years Alive on the Infinite Plain.” In Reshaping the Collectible: Tony Conrad, Ten Years Alive on the Infinite Plain. Tate Research Publication. https://www.tate.org.uk/research/reshaping-the-collectible/conrad-ecologies-memory

Nakai, You. 2014. “Hear After: Matters of Life and Death in David Tudor’s Electronic Music.” Afterlives of Systems 3 (1): Article 10. https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1023&context=cpo.

Pask, Andrew. 2015. “Max 7 and Long-Term Installations.” Cycling ‘74 Forum (blog). December 29, 2015. https://cycling74.com/tutorials/max-7-and-long-term-installations/replies/1.

Perloff, Nancy. 2010. The Art of David Tudor. Getty Research Institute (research guide). 2001–2010. Accessed March 16, 2021. https://www.getty.edu/research/tools/guides_bibliographies/david_tudor/index.html

Phillips, Joanna. 2015. “Reporting Iterations: A Documentation Model for Time-Based Media Art.” In Performing Documentation, Revista de História da Arte, edited by Gunnar Heydenreich, Rita Macedo, and Lucia Matos. Lisbon: Instituto de Historia da Arte. 168–179. http://revistaharte.fcsh.unl.pt/rhaw4/rhaw4_print/JoannaPhillips.pdf

Puckette, Miller. 2002. “Max at Seventeen.” Computer Music Journal 26 (4): 31–43. http://msp.ucsd.edu/Publications/dartmouth-reprint.dir/

Rogalsky, Matthew. 2006. “Idea and Community: The Growth of David Tudor’s Rainforest, 1965-2006.” Doctoral Thesis. City University London. Music Department. Accessed March 16, 2021. https://core.ac.uk/download/pdf/42628585.pdf

Smith, Dave and Chet Wood. 1981. “The ‘USI’, or Universal Synthesizer Interface.” Sequential Circuits, Inc., San Jose, CA. Paper 1845. Accessed June 2024: https://aes2.org/publications/elibrary-page/?id11909.

Wharton, Glenn. 2015a. “Artist Intention and the Conservation of Contemporary Art.” Accessed March 16, 2021. http://resources.culturalheritage.org/osg-postprints/wp-content/uploads/sites/8/2015/05/osg022-01.pdf.

Wharton, Glenn. 2015b. “Public Access in the Age of Documented Art.” In Revista de História da Arte—Série W. Lisbon: Instituto de História da Arte. 180–191. https://revistaharte.fcsh.unl.pt/rhaw4/RHAw4.pdf

Zicarelli, David. 2011. “Max Mathews: An Appreciation.” Cycling ’74 Blog, April 22, 2011. https://cycling74.com/articles/max-mathews-an-appreciation.

FURTHER READING

Composers Inside Electronics Concert. 2016, April 18. Concert includes performances of Impulsions (2015) by Phil Edelstein, using Max/MSP, and Microphone (1970) by David Tudor, performed by Phil Edelstein and John Driscoll. https://artmuseum.pl/en/wydarzenia/rainforest-v-wernisaz-2.

Dechelle, Francois. “A Brief History of MAX.” Accessed March 16, 2021. http://jmax.sourceforge.net/history.html.

Matters in Media Art. “Acquiring Media Art Templates.” Accessed March 16, 2021. http://mattersinmediaart.org/acquiring-time-based-media-art.html#post-templates.

Museum of Modern Art. “Getting Started Resources.” Media Conservation Initiative website. Accessed March 16, 2021. https://www.mediaconservation.io/resources#getting-started

Solomon R. Guggenheim Museum. n.d. “Time-Based Media.” Solomon R. Guggenheim Museum. Accessed March 16, 2021. https://www.guggenheim.org/conservation/time-based-media.
Time-Based Media Working Group. “Sample Documentation and Templates.” Metropolitan Museum of Art. Accessed March 16, 2021. https://www.metmuseum.org/about-the-met/conservation-and-scientific-research/time-based-media-working-group/documentation.

AUTHOR

Caroline Gil Rodríguez
Andrew W. Mellon Fellow in Media Conservation
Museum of Modern Art, New York
gilcaroline10@gmail.com