42nd Annual Meeting – Electronic Media Group Luncheon, May 30, “Sustainably Designing the First Digital Repository for Museum Collections”

Panelists:
Jim Coddington, Chief Conservator, The Museum of Modern Art
Ben Fino-Radin, Digital Repository Manager, The Museum of Modern Art
Dan Gillean, AtoM Product Manager, Artefactual Systems
Kara Van Malssen, Adjunct Professor, NYU MIAP, Senior Consultant, AudioVisual Preservation Solutions (AVPreserve)
This informative and engaging panel session provided an overview of The Museum of Modern Art’s development of a digital repository for their museum collections (DRMC) and gave attendees a sneak peek at the beta version of the system. The project is nearing the end of the second phase of development, and the DRMC will be released later this summer. The panelists did an excellent job outlining the successes and challenges of their process and offered practical suggestions for institutions considering a similar approach. They emphasized the importance of collaboration, communication, and flexibility at every stage of the process, and as Kara Van Malssen stated towards the end of the session, “there is no ‘done’ in digital preservation” — it requires an inherently sustainable approach to be successful.
This presentation was chock-full of good information and insight, most of which I’ve just barely touched on in this post (especially the more technical bits), so I encourage the panelists and my fellow luncheon attendees to contribute to the conversation with additions and corrections in the comments section.
Jim Coddington began with a brief origin story of the digital repository, citing MoMA’s involvement with the Matters in Media Art project and Glenn Wharton’s brainstorming sessions with the museum’s media working group. Kara, who began working with Glenn in 2010 on early prototyping of the repository, offered a more detailed history of the process and walked through considerations of some of the pre-software development steps of the process.
Develop your business case: In order to make the case for creating a digital repository, they calculated the total gigabytes the museum was acquiring annually. With large and ever-growing quantities of data, it was necessary to design a system in which many of the processes – like ingest, fixity checks, migration, etc. – could be automated. They used the OAIS (Open Archival Information System) reference model (ISO 14721:2012), adapting it for a fine art museum environment.
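For readers unfamiliar with the term, an automated fixity check simply recomputes a file’s checksum and compares it against the value recorded at ingest, flagging any silent corruption. Here is a minimal Python sketch of the idea — my own illustration, not MoMA’s or Archivematica’s actual implementation:

```python
import hashlib


def checksum(path, algorithm="sha256"):
    """Compute a checksum for a file, reading in chunks to handle large media."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_fixity(path, recorded_checksum):
    """Return True if the file's current checksum matches the one recorded at ingest."""
    return checksum(path) == recorded_checksum
```

A repository would run a verification like this on a schedule across its holdings, logging any mismatches for conservators to review.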
Involve all stakeholders: Team members had initial conversations with five museum departments: conservation, collections technologies, imaging, IT applications and infrastructure, and AV. Kara referenced the opening session talk on LEED certification, in which we were cautioned against choosing an architect based on their reputation or how their other buildings look. The same goes for choosing software and/or a software developer for your repository project – what works for another museum won’t necessarily work for you, so it’s critical to articulate your institution’s specific needs and find or develop a system that will best serve those needs.
Determine system scope: Stakeholder conversations helped the MoMA DRMC team determine both the content scope – will the repository include just fine arts or also archival materials? – and the system scope – what should it do and how will it work with other systems already in place?
Define your requirements: Specifically, functional requirements. The DRMC team worked through scenarios representing a variety of different stages of the process in order to determine all of the functions the system is required to perform. A few of these functions include: staging, ingest, storage, description & access, conservation, and administration.
Articulate your use cases: Use cases describe interactions and help to outline the steps you might take in using a repository. The DRMC team worked through 22 different use cases, including search & browse, adding versions, and risk assessment. By defining their requirements and articulating use cases, the team was able to assess what systems they already had in place and what gaps would need to be filled with the new system.
At this point, Kara turned the mic over to Ben Fino-Radin, who was brought on as project manager for the development phase in mid-2012.
RFPs were issued for the project in April 2013; three drastically different vendors responded – the large vendor (LV), the small vendor (SV), and the very small vendor (VSV).
Vetting the vendors: The conversation about choosing the right vendor was, in this blogger’s opinion, one of the most important and interesting parts of the session. The LV, with an international team of thousands and extremely polished project management skills, was appealing in many ways. MoMA had worked with this particular vendor before, though not extensively on preservation or archives projects. The SV and VSV, on the other hand, did have preservation and archives domain expertise, which the DRMC team ultimately decided was one of the most important factors in choosing a vendor. So, in the end, MoMA, a very big institution, hired Artefactual Systems, the very small vendor. Ben acknowledged that this choice seemed risky at first, since the small, relatively new vendor was unproven in this particular kind of project, but the pitch meeting sold MoMA on the idea that Artefactual Systems would be a good fit. Reiterating Kara’s point from earlier, that you have to choose a software product/developer based on your own specific project needs, Ben pointed out that choosing a good software vendor wasn’t enough; choosing a vendor with domain expertise allowed for a shared vocabulary and a more nimble process and design.
Dan Gillean spoke next, offering background on Artefactual Systems and their approach to developing the DRMC.
Know your vendor: Artefactual Systems, which was founded in 2001 and employs 17 staff members, has two core products: AtoM and Archivematica. In addition to domain expertise in preservation and archives, Artefactual is committed to standards-based solutions and open source development. Dan highlighted the team’s use of agile development methodology, which involves a series of short term goals and concrete deliverables; agile development requires constant assessment, allowing for ongoing change and improvement.
Expect to be involved: One of the advantages of an agile approach, with its constant testing, feedback, and evolution, is that there are daily discussions among developers as well as frequent check-ins with the user/client. This was the first truly agile project Artefactual has done, so the process has been beneficial to them as well as to MoMA. As development progressed, the team conducted usability testing and convened various advisory groups; in late 2013 and early 2014, members of cultural heritage institutions and digital preservation experts were brought in to test and provide feedback on the DRMC.
Prepare for challenges: One challenge the team faced was learning how to avoid “scope creep.” They spent a lot of time developing one of the central features of the site – the context browser – but recognized that not every feature could go through so many iterations before the final project deadline. They had to keep their focus on the big picture, developing the building blocks now and allowing refinement to happen later.
At this point in the luncheon, the DRMC had its first public demo. Ben walked us through the various widgets on the dashboard as well as the context browser feature, highlighting the variety and depth of information available and the user-friendly interface.
Know your standards: Kara wrapped up the panel with a discussion of ‘trustworthiness’ and noted some tools available for assessing and auditing digital repositories, including the NDSA Levels of Digital Preservation and the Audit and Certification of Trustworthy Digital Repositories (ISO 16363:2012). MoMA is using these assessment tools as planning tools for the next phases of the DRMC project, which may include more software development as well as policy development.
Development of the DRMC is scheduled to be complete in June of this year, and an open source version of the code will be available after July.