The Potential of Augmented Reality (AR) in the Virtual Performance of Time-Based Media Art

Sasha Arden
Electronic Media Review, Volume Six: 2019-2020

ABSTRACT

Augmented reality is a technology that superimposes digital information on a view of the real world through a device such as a smartphone or tablet. Using an app or web page, one can point a device’s camera at a designated object and see video, images, three-dimensional digital renderings, and more, activated as a virtual layer while the real-life object is still in view. This research explores the potential of augmented reality as a tool to preserve the experience of time-based artworks no longer able to function in their original iteration owing to damage, obsolescence, or other barriers. Elements such as moving images or kinetic motion could exist as virtual visual layers integrated with the original object or alongside it. Current workflow options and practical considerations for creating an augmented reality project are discussed, and a case study illustrates the process. Ethical concerns and issues of authenticity are also addressed, and reflection is encouraged on how artworks are experienced and on conservation values.

INTRODUCTION

This research investigates the possibilities and limitations of using augmented reality, or AR, in the virtual performance of time-based media art. AR is a technology that superimposes digital information on a view of the real world through a device such as a smartphone or tablet. It has been in development in various forms since the 1960s, with wide adoption made possible only recently by the availability of consumer devices powerful enough to handle the real-time sensor responses and image rendering inherent to AR, along with software development tools to take advantage of these abilities. The technology has the potential to preserve the experience of time-based artworks no longer able to function consistently owing to fragility, damage, obsolescence, or other barriers. Conservators and curators have endeavored to maintain and repair these artworks while strategizing methods to display them and minimize risk of damage. The original objects are valuable in their materiality and connection to the artist, but a significant aspect of their meaning is lost to audiences who cannot witness their intended behavior. As an alternative to severely restricted performance, an exhibition copy, display of an object with documentation of its past function, or even indefinite storage, AR offers a unique method to connect time-based, work-defining elements to their physical anchors and to keep original artworks fully accessible to viewers.

DIGITAL CONTENT AND 3D MODELS 

Development of AR content may require some specialized equipment and software, depending on the artwork and goals of an AR project. A digital version of the artwork is used in an AR platform. In the case of a video playing virtually on a nonoperational CRT screen, the video may be natively digital or could be digitized. In the case of an object, such as a kinetic sculpture, a digital three-dimensional (3D) model could be captured with 3D scanning equipment or through photogrammetry, or built manually with 3D modeling software. 3D scanning and photogrammetry work only for solid objects that can be safely turned so that all sides are exposed to the equipment. An object with thin parts, such as wire or movable elements, may not be captured completely or might need manual intervention to complete the digital model. Well-executed 3D scanning or photogrammetry can produce an accurate, to-scale digital model. Manually building a digital model is possible, but the final digital model may not be an exact replica; interpretation is unavoidable when working from two-dimensional (2D) documentation to create 3D forms. Access to the artwork itself as a reference is recommended but could involve extra handling that increases the risk of damage. There are many software options to build or refine a digital model; the most popular are currently Blender, Autodesk Maya, and Autodesk 3ds Max. 3D modeling software is complex, so available options should be evaluated for features, price, format support, and compatibility with AR development software. As with any technology, 3D capture and modeling tools will undoubtedly shift over time; evaluation of current options is encouraged if an AR project is to be undertaken.

3D models are mathematical representations of an object’s surface in three dimensions, built by connecting points in space with geometric elements such as triangles, lines, and curves. 3D scanning and photogrammetry automatically generate geometry to represent the object in modeling software; when building an object manually, however, the palette includes planes and sets of primitive shapes such as spheres, cubes, and other polyhedra. A realistic digital form can be sculpted through reshaping, joining, and deforming to replicate an object that might have dents, tooling marks, or other evidence of handwork. Just as with 2D images, the resolution of a 3D object affects its perceived quality: the greater the number of points defining the object, the more realistic it will appear, and the larger the file size and demand on processing during export and display.
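To make this concrete, the short sketch below uses Blender’s Python API (bpy) to define a tiny mesh directly from points in space and the triangles that connect them; the object name and coordinates are illustrative only, not drawn from any workflow described in this article.

```python
import bpy

# A 3D model reduced to its essentials: points in space (vertices)
# joined by triangles (faces). Real models carry thousands of points.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
faces = [(0, 1, 2), (0, 2, 3)]  # two triangles forming a square patch

mesh = bpy.data.meshes.new("patch")
mesh.from_pydata(verts, [], faces)  # (vertices, edges, faces)
mesh.update()

obj = bpy.data.objects.new("patch", mesh)
bpy.context.collection.objects.link(obj)  # Blender 2.8+ API
```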

A digital 3D model is not complete without the addition of materials, which act like wrappers to give the model color and texture. There are several approaches to consider in relation to the particular artwork. 3D modeling software has native features to add color, roughness, and other parameters, such as metalness, which can be manipulated to resemble a range of materials; used alone, these result in a uniform wrapper for the object. Surface texture or topography can be added with a so-called “normal” map, a specially formatted image file that works behind the scenes to affect how light falls on the object. Normal maps that replicate the texture of common materials can be found online in user forums or through third-party developers; they can also be generated in Photoshop from an image of a texture. Packages of files to replicate many types of materials are available as well, which may make this aspect of creating a 3D model less daunting. If an object’s visual features must be replicated exactly, it is possible to capture images of its surface and map them to the 3D model by leveraging UV data, which defines how a 2D image wraps onto the 3D surface. Another approach would be customizing material wrapper files in Photoshop to recreate an object’s surface features.
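As a minimal sketch of these material parameters, the code below uses Blender’s Python API (2.8 and later) to build a uniform wrapper with color, roughness, and metalness values and then attach a normal map image; the file path, node names, and values are illustrative assumptions, not a record of any particular project.

```python
import bpy

# A uniform material "wrapper": color, roughness, and metalness set on
# Blender's Principled BSDF shader. Values are illustrative only.
mat = bpy.data.materials.new("red_matte")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Base Color"].default_value = (0.8, 0.1, 0.1, 1.0)  # RGBA
bsdf.inputs["Roughness"].default_value = 0.7  # mostly matte surface
bsdf.inputs["Metallic"].default_value = 0.0

# Surface texture via a normal map (hypothetical file), which affects
# how light falls on the object without changing its geometry.
img = bpy.data.images.load("//textures/hand_tooled_normal.png")
img.colorspace_settings.name = "Non-Color"  # normal maps are data, not color
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = img
nrm = mat.node_tree.nodes.new("ShaderNodeNormalMap")
mat.node_tree.links.new(tex.outputs["Color"], nrm.inputs["Color"])
mat.node_tree.links.new(nrm.outputs["Normal"], bsdf.inputs["Normal"])
```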

If the object includes time-based behaviors, such as motion, it is also necessary to add animation to the 3D model. Each software package has its own proprietary physics engine that calculates forces like gravity and accounts for logical interactions between objects. These engines can be used in coordination with constraints to produce realistic motion for an object and its parts. However, the physics engine information in a 3D modeling software package may not be compatible with the AR content delivery platform. With a model that is set up and exported correctly, physics can be added by some AR development software. A more laborious method, but one that is potentially more accurate and cross-platform compatible, is manual animation with keyframes. As in 2D animation, each part of an object can be independently repositioned, with location and rotation data stored in keyframes. The software interpolates motion between keyframes (more keyframes mean less interpolation); when played back, the object will appear to be in motion. Keyframing can become complex when managing several moving parts that must remain coordinated. The choice of animation method will likely be determined by the object’s behavior to be reproduced and by compatibility within the established workflow. At present, the diversity of for-profit companies, open-source foundations, public and private consortia, and individuals developing tools and standards for the related fields of 3D modeling, gaming, virtual reality, and augmented reality has resulted in fragmentation, owing to varying motivations and abilities to implement. There are already efforts to address cross-platform compatibility, such as the WebXR API; the trend may continue as these technologies mature.
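A minimal keyframing sketch in Blender’s Python API follows; the object name, frame numbers, and values are hypothetical. Location and rotation are stored at chosen frames, and the software fills in the motion between them.

```python
import bpy

# Keyframed motion: store rotation and location at chosen frames;
# Blender interpolates between keyframes on playback.
obj = bpy.data.objects["arm_upper"]  # hypothetical part of a mobile

obj.rotation_euler = (0.0, 0.0, 0.0)
obj.keyframe_insert(data_path="rotation_euler", frame=1)

obj.rotation_euler = (0.0, 0.0, 0.35)  # about 20 degrees, in radians
obj.keyframe_insert(data_path="rotation_euler", frame=60)

obj.location = (0.0, 0.0, 0.02)  # slight drift of the pivot point
obj.keyframe_insert(data_path="location", frame=60)
```

Adding more keyframes between these two reduces how much motion the software must interpolate, at the cost of more manual work.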

AUGMENTED REALITY PLATFORMS 

AR content can be developed and delivered with proprietary software or for browser-based web access. Unity (with the Vuforia AR Engine) is a popular and powerful game development software package that supports AR. It can use markers or designated objects to trigger AR content.1 Unity and Vuforia recently joined forces, although each company still requires its own user account to activate the feature sets. Both can be used for free to test projects but require paid developer licenses for public access to AR content. The final product can be compiled into an app that is downloaded from mobile device app stores, or it can be exported for access on the web with a browser. Unity relies on external libraries and asset packages from Vuforia’s AR asset management system, which are accessed easily while working on a project but must be maintained for future access to the project files to make changes or to redeploy. For web-based AR, the open-source framework A-Frame can be used with the AR.js and three.js JavaScript libraries. A-Frame supports only graphics-based markers to trigger AR content. Anyone can access the web page without plugins, although more recent browser versions will ensure compatibility and high-quality performance. A-Frame works best with the GL Transmission Format (glTF), which can be exported from most 3D modeling software, but animation support is currently not reliable with glTF. As AR libraries evolve, the functionality of project code may become obsolete and will need to be updated with current technologies to remain accessible.
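As one concrete point of reference in this shifting landscape, recent Blender releases (2.8 and later) include a built-in glTF exporter; earlier versions, such as the 2.79 series, require an add-on. A minimal sketch with illustrative settings and a hypothetical path:

```python
import bpy

# Export the current scene to glTF for web-based AR delivery
# (built-in exporter in Blender 2.8+; settings and path are illustrative).
bpy.ops.export_scene.gltf(
    filepath="/tmp/model.glb",
    export_format="GLB",     # single binary file: geometry plus materials
    export_animations=True,  # include keyframed motion, where supported
)
```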

CASE STUDY

To test a workflow and explore the question of whether AR could be used as a conservation tool, a case study artwork was identified: a slightly damaged, unprovenanced wire mobile in the style of Alexander Calder’s early period, part of the New York University (NYU) Conservation Center’s Study Collection (fig. 1).

Fig. 1. Case study object, from the Study Collection at the Conservation Center at New York University.

The wire construction ruled out 3D scanning and photogrammetry; thus, digital photo and video documentation of the mobile and its motion were captured as references for the process of manually building the digital components. Blender was chosen as the 3D modeling software: it is an open-source application with professional features that supports common 3D file formats and has a strong user base with good tutorials and active forums. One early challenge in the building process was the lack of direct correlation between 2D documentation and the software’s idealized 3D workspace. Perspective in Blender is a constantly shifting property, as one must zoom, pan, and orbit in order to correctly place objects in space, create curves and angles, and adjust rotation and scale. The digital model was consequently built proportionally, based on many 2D views, with adjustments made after comparison to the mobile when it was available. The resulting digital model was a good representation but not an exact replica of the original artwork (fig. 2).

Fig. 2. Completed digital model (image capture by the author).

The digital model’s wire forms were constructed with curved lines of varying weights and looped connection points that mimicked those found in the artwork. Deformation tools were used to manipulate spheres to resemble the red hand-sculpted shapes that are attached to the end of each wire line in the artwork (fig. 3).

Fig. 3. Details of digital model: Looped connection points mimic the original sculpture’s construction (left); deformation tools in Blender customize the handmade shapes at the end of each wire (right, top, and bottom).
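One way such wire forms can be produced in Blender is with curve objects, where a bevel depth gives the curve a round cross-section, that is, the visual weight of the wire. The sketch below, using Blender’s Python API with hypothetical names and dimensions, is illustrative rather than a record of the case study files.

```python
import bpy

# A wire-like form as a Blender curve: bevel_depth gives the line a
# round cross-section (the wire's "weight"). Values are illustrative.
curve = bpy.data.curves.new("wire_arm", type='CURVE')
curve.dimensions = '3D'
curve.bevel_depth = 0.002  # wire radius in scene units

spline = curve.splines.new('BEZIER')
spline.bezier_points.add(2)  # three control points in total
coords = [(0, 0, 0), (0.15, 0.02, 0.05), (0.3, 0, 0.1)]
for pt, co in zip(spline.bezier_points, coords):
    pt.co = co
    pt.handle_left_type = pt.handle_right_type = 'AUTO'  # smooth curvature

obj = bpy.data.objects.new("wire_arm", curve)
bpy.context.collection.objects.link(obj)
```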

The mobile’s parts are assembled like a chain; thus, parent-child relationships were assigned to the digital parts in order to recreate the logic of movement. These relationships were useful not only while navigating the 3D space to make adjustments to the model but also during animation. The metal material was achieved with the help of third-party files from Poliigon (https://www.poliigon.com), while the red material of the sculpted shapes was made using Blender’s built-in color and roughness parameters to create a primarily matte surface with a dominant color and subtle highlights. Animation was also done in Blender, working in parallel with Adobe Premiere to match frames in a chosen video documentation clip that isolated a representative motion sequence in which the wire mobile responds to strong airflow. Using keyframes for location and rotation, the mobile’s motion was sketched out, matching the connection points of the parent-child groups within each parent object’s animation timeline. The motion of the groups between keyframes was interpolated by the software. A known issue with Blender’s “child_of” constraint cut into the time available for animation; thus, a simple motion sequence was created.
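The parent-child logic can be expressed compactly in Blender’s Python API, sketched below with hypothetical object names; both direct parenting and the “child_of” constraint mentioned above are shown (in practice, one approach or the other would be used, since combining them would double the transform).

```python
import bpy

# Chain two parts so the child inherits the parent's motion.
parent = bpy.data.objects["arm_upper"]  # hypothetical names
child = bpy.data.objects["arm_lower"]

# Direct parenting; the inverse matrix keeps the child where it sits.
child.parent = parent
child.matrix_parent_inverse = parent.matrix_world.inverted()

# Alternative: the "Child Of" constraint referenced in the text.
con = child.constraints.new('CHILD_OF')
con.target = parent
```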

For the AR prototype, another open-source toolset was desirable. However, the A-Frame framework with AR.js and three.js did not easily load the 3D model file with embedded materials and animation that had been successfully exported from Blender.2 Unity with the Vuforia AR Engine was able to read the 3D model file, and a straightforward marker-based interface was designed. A side-by-side comparison test was set up, with a custom marker hung next to the installed mobile. The virtual mobile was scaled to match the original, and Unity’s built-in viewer with an external Logitech webcam provided a live scene of the AR interface (fig. 4).

Fig. 4. Side-by-side comparison of the original sculpture (left) to the AR digital model (right) using Unity’s preview function.

The virtual mobile changed perspective with the camera as it moved, and some slightly unrealistic distortion occurred because of the 3D model’s position beneath the marker.3 The virtual mobile also exhibited perceptible jitter, a result of the custom marker’s relative weakness and of the digital object’s very thin lines emphasizing small changes in position.

The workflows outlined here describe equipment, software, file formats, and programming tools available as of the writing of this article. The process of developing AR will continue to change quickly in the coming years. The specific tools may shift and the quality of the final product may improve over time, but this research shows that it cannot be assumed that AR will be an appropriate route for display. Each artwork under consideration for an AR project will present a unique set of requirements for digital representation and exhibition. Every decision in the sequence of executing a project has the potential either to align with the artwork or to compromise its integrity. Producing a worthwhile experience that respects the artwork necessitates understanding the behavior of the original work, creating an accurate model, reproducing motion with nuance and believable physical attributes, and displaying a virtual replica within a high-quality, accessible interface.

ETHICAL CONCERNS 

While ethical questions arise when considering replication of artworks, a digital experience does not carry the same liabilities as a physical replica. An artwork seen through an AR interface will not be confused with the original object and will not threaten its authenticity. As a field, time-based media conservation has developed ethical frameworks to accommodate variations in display as long as the integrity of an artwork’s identity is maintained. However, larger questions follow the adoption of and enthusiasm for technologies that achieve an illusion of realism. Celebration of technological progress does not have to obscure the presence of physical artworks, but shortsighted and poorly executed AR projects have the potential to change how artworks are perceived and understood. The basis of any conservation activity is maintaining connection to work-defining properties, which should be assessed throughout project development, from conception to user experience.

CONCLUSION

AR has the potential to preserve performative aspects of time-based artworks and to allow viewers to experience core intended meanings of those works that might otherwise be lost to obsolescence, fragility, or damage. AR cannot and should not replace artworks, but it may extend their accessibility beyond current exhibition strategies. The current state of available AR development tools and display methods warrants a thorough review of project goals and methods, along with consideration of project file management. A case-by-case approach is recommended, with decisions that respond to the artwork and to institutional needs.

NOTES

1. Markers are high-contrast images, usually black and white, but they can include color. Standard markers are available online to print, or custom images can be created that meet the requirements to be easily read by the AR system.

2. A .fbx (Filmbox) file was used because it supports embedded materials and animation when exporting from Blender (version 2.79b).

3. Perspective in the virtual AR layer is based on the relative position of the digital model to the marker. The most common setup assumes that the digital model appears on top of the marker; thus, off-center placement produces some inherent distortion.

CONTACT INFORMATION

Sasha Arden
Rachel and Jonathan Wilf/Andrew W. Mellon Fellow in Time-Based Media Conservation at the Conservation Center, Institute of Fine Arts
New York University
sasha.arden@nyu.edu