New Models of Digital Materialities

panel / roundtable
Authorship
  1. Jean-François Blanchette

     University of California, Los Angeles (UCLA)

  2. Johanna Drucker

     Information Studies - University of California, Los Angeles (UCLA)

  3. Matthew Kirschenbaum

     Department of English; Maryland Institute for Technology in the Humanities (MITH) - University of Maryland, College Park

Work text

New Models of Digital Materialities

Blanchette, Jean-François, Information Studies, UCLA, blanchette@gseis.ucla.edu
Drucker, Johanna, Information Studies, UCLA, drucker@gseis.ucla.edu
Kirschenbaum, Matthew, Department of English; Maryland Institute for Technology in the Humanities, University of Maryland, mgk@umd.edu

One persistent myth of the digital age is that it differs fundamentally from all previous information epochs because in digital form information has finally achieved the long-standing historical aspiration to unburden itself from the shackles of matter. As a mere collection of 0s and 1s, digital information is imagined to be independent of the particular media on which it is stored—hard drive, optical disk, etc.—and the particular signal carriers which encode bits, whether magnetic polarities, voltage intensities, or pulses of light. Digital information also achieves a separation of content and form that could only be partially realized with analog carriers. This fantasy has implications for the ways we think about design, preservation, storage, use, and every other aspect of digital media. What can the digital humanities learn from and contribute to an engagement with the many aspects of the materialities of information?

The authors of these three papers undertake a common questioning of this purported independence from matter, a concept that has two distinct and important consequences: (a) the idea that digital information can be reproduced and distributed at negligible cost and high speed, and thus is immune to the economics and logistics of analog media; and (b) that it can be accessed, used, or reproduced without the noise, corruption, and degradation that necessarily result from the handling of material carriers of information. Thus the concept of digital information as immaterial is fundamental to the ability of the digital to upend the analog world, and the foundation of a belief that any media that can be digitized or produced digitally will eventually succumb to the logics of digital information and its circulation through electronic networks—an argument powerfully encapsulated by Negroponte’s slogan, “from atoms to bits.”

Such widespread assumptions have obscured the specific material constraints that obtain in digital environments. Only recently has the issue emerged as a legitimate concern for scholarly enquiry—engaging concepts of materiality from literary, visual, media, and cultural studies and bringing them to bear on the analysis of digital environments. The papers in this session take up some of these issues by reading the machines, the specific properties of digital media from surface screen to deeper structures, as a demonstration of the ways the materiality of digital media can be engaged. The purpose of this work is to inform some of the basic tasks of digital humanists – the interpretation of digital media artifacts, the skill sets necessary to interrogate these artifacts, but also our responsibility for the preservation and use of these objects as part of our cultural legacy.

“Infrastructural Thinking” as Core Computing Skill

Blanchette, Jean-François, Information Studies, UCLA, blanchette@gseis.ucla.edu

It is often suggested that all digital humanists would benefit from learning programming. Through the acquisition of this core skill, they would engage with the practice that defines computing and directly experience its possibilities and constraints. Beyond mere mastery of a language, programming would expose them to formal methods for abstracting and modeling concepts and real-world phenomena. The current wave of interest in a “computational thinking” pedagogical paradigm mirrors this argument: computer science is primarily about modeling and abstraction of phenomena in ways amenable to algorithmic processing.

In this paper, I argue that programming, or its more complex formulation, “computational thinking,” provides only a partial picture of computing, and correspondingly, only a partial skill set. A fuller picture requires engagement with the material foundations of computing. I use “material” here in a very literal sense, to point to the physicality of bits (their encoding as magnetic polarities, voltages, etc.) and the material constraints of the devices that process, store, and transport them. While the material dimension of computing constantly informs the practices of the computing professions, this dimension is also repressed, in the context of a general discourse that has emphasized the abstract dimension of the digital over its material substrate. Yet this materiality, perhaps unexpectedly, holds the key to analyzing the shape and evolution of the computing infrastructure. And while digital humanists may well benefit from engaging in “computational thinking,” I will argue that the computing infrastructure implicitly performs much of that thinking before a single line of application code is written.

While programming deals with creating applications that provide services to users, infrastructure software provides services to applications, by mediating their access to computing resources, the physical devices that provide processing power, storage, and networking. Infrastructure software is most commonly encountered in the form of operating systems, but it is also embedded in hardware (the firmware in a hard drive) or in specialized computers (e.g., web servers or routers). Whatever its specific form, the role of infrastructure software is to provide a series of transformations whereby the signals that encode bits on some physical medium (optical fiber, magnetic drive, electrical wire) become accessible for symbolic manipulation by applications. Infrastructure software must be able to accommodate growth in size and traffic, technical evolution and decay, diversity of implementations, integration of new services to answer unanticipated needs, and emergent behaviors, among other things. It must provide programmers with stable interfaces to system resources in the face of continuously evolving computing hardware—processors, storage devices, networking technologies, etc.
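A minimal sketch of this mediation, in Python and assuming a hypothetical file name, is the ordinary act of writing and reading a file: the application names the file and supplies the bytes, while the operating system, the filesystem, and the drive firmware decide how those bytes become signals on whatever physical device happens to be present.

# The application sees only a stable interface (open, write, read); the layers
# of infrastructure software beneath it handle block allocation, caching, and
# the encoding of bits on the underlying storage device.
with open("field_notes.txt", "w", encoding="utf-8") as f:  # hypothetical file name
    f.write("Materiality is mediated by layers of infrastructure software.\n")

with open("field_notes.txt", "r", encoding="utf-8") as f:
    print(f.read())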

The computing industry manages to accomplish this feat through the design strategy of modularity, whereby a module’s implementation can be designed and revised without knowledge of other modules’ implementations. Modularity performs this magic by decoupling functional specification from implementation: operating systems, for example, enable applications to open, write to, and delete files without any knowledge of the specific storage devices on which these files reside. This decoupling provides the required freedom and flexibility for the management, coordination, and evolution of complex technical systems. However, in abstracting from specific implementations of physical resources, such decoupling necessarily involves efficiency trade-offs. The TCP/IP protocols, for example, provide abstractions of networks that favor resilience (the network can survive nuclear attacks) over quality of service (the network offers no guarantees on packet delivery delays). Applications sensitive to such delays (e.g., IP telephony or streaming media) must thus overcome the infrastructural bias of the protocols to secure the quality of service they require.
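The following sketch, with invented class names, illustrates the decoupling described above: a functional specification for storage is separated from two interchangeable implementations, and application code written against the specification is untouched when the implementation changes.

from abc import ABC, abstractmethod

class Store(ABC):
    """Functional specification: what any storage module must provide."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(Store):
    """One implementation: bytes held in RAM."""
    def __init__(self):
        self._blocks = {}
    def put(self, key, data):
        self._blocks[key] = data
    def get(self, key):
        return self._blocks[key]

class DiskStore(Store):
    """Another implementation: bytes written to the local filesystem."""
    def __init__(self, directory="."):
        self.directory = directory
    def put(self, key, data):
        with open(f"{self.directory}/{key}", "wb") as f:
            f.write(data)
    def get(self, key):
        with open(f"{self.directory}/{key}", "rb") as f:
            return f.read()

def archive(store: Store) -> bytes:
    """Application code sees only the specification, never the implementation."""
    store.put("poem", b"Agrippa")
    return store.get("poem")

print(archive(InMemoryStore()))  # the same call works against either module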

An important point is that efficiency trade-offs (or biases) embedded in a given modular organization become entrenched through their institutionalization in a variety of domains: standards, material infrastructure (e.g., routers), and social practices (e.g., technical training) may all provide for the endurance of particular sets of abstractions. This entrenchment is further enabled by the economies of scale such institutionalization affords. An immediate consequence is that the computing infrastructure, like all other infrastructures, is fundamentally conservative in character. Yet it is also constantly under pressure from the need to integrate changes in the material basis of computing: multi-core, cloud-based, and mobile computing are three emerging material changes that will register at almost every level of the infrastructure.

Computing, it turns out, is material through and through. But this materiality is diffuse, parceled out and distributed throughout the entire computing ecosystem. It is always in subtle flux, structured by the persistence of modular decomposition, yet pressured to evolve as new materials, requiring new tradeoffs, emerge. This paper thus argues that, in a very literal and fundamental sense, materiality is a key entry point for reading infrastructural change, for identifying opportunities for innovation that leverage such change, and for acquiring a deep understanding of the possibilities and constraints of computing. This understanding is not provided by exposure to programming languages alone. Rather, it requires familiarity with the conflicts and compromises of standardization, with the principles of modularity and layering, and with a material history of computing that largely remains to be written.

Performative Materiality and Interpretative Interface

Drucker, Johanna, Information Studies, UCLA, drucker@gseis.ucla.edu

Approaches to interface design have come mainly from the HCI community, with an emphasis on maximum efficiency in the user-centered experience. Since the days of Douglas Engelbart and Ivan Sutherland’s experiments with headsets, pedals, mice, and screens, in work that led to the development of the Graphical User Interface, the dominant paradigm in the human-machine relationship has come from an engineering sensibility. Leading practitioners in that field, such as Stuart Card and Ben Shneiderman, have defined basic principles for design methodology and display that are premised on a mechanistic analysis of users’ abilities to process information effectively. This approach, taken from flight simulators and applied to the vast numbers of tasks for searching, navigating, buying, and communicating online, is grounded in a user-as-consumer model. Criticisms from inside that community, such as the work of Jesse James Garrett (showing the confusion between information- and task-based approaches) or Aaron Marcus’s group (analyzing cultural differences and their connection to interface functionality), have provided useful insights and shifted design principles to be more nuanced. But the basic model of the user-centered approach to interface design remains in place. And it has been adopted by humanists, particularly when the resources to do so are available.

If we bring the legacy of critical theory to bear on this model, however, we see that the same critique leveled by post-structuralists against New Criticism is pertinent here. The “text” of an interface is not a thing, stable and self-evident, whose meaning can be fixed through a detailed reading of its elements. An interface is a site of provocation for reading, and, in the same manner as a film, literary work, or any other “text” (fashion magazine, instruction manual), it is a space for interpretation involving an individual subject, not a generic user. In critical parlance, both an enunciating and enunciated subject – the speaking and the spoken subject – are aspects of textual production. (Text here is meant broadly.) This concept of performativity, articulated by J. L. Austin in How to Do Things with Words, has echoes within the fields of anthropology, gender studies, and cultural studies. By situating texts and speakers within pragmatic circumstances of use, ritual, exchange, and communities of practice, performativity stripped away any foundation for thinking that meaning was inherent in a text or work. Performativity offered a sharp rebuke to notions of agency (of individuals) and autonomy (of texts).

How can we, that is, the community of digital humanists, take these critical insights from literary, cultural, and gender studies into our current practice? If the object is merely to demonstrate that one may read an interface with the same techniques we used to read Young Mr. Lincoln, or to follow Laura Mulvey’s arguments into a new realm of semiotic analysis, a rather tedious and predictable path would lie ahead. This might have some value in the undergraduate classroom, as the unpacking of ideological subtexts fascinates the young. But for those of us concerned with the design of environments for digital humanities and its research agendas, the questions that arise from this critical encounter are quite different. Can we conceive of models of interface that are genuine instruments for research? That are not merely queries within pre-set data that search and sort according to an immutable agenda? How can we imagine an interface that allows content modeling, intellectual argument, rhetorical engagement? In such an approach, the formal, graphical materiality of the interface might register the performative dimensions as well as support them. Such approaches would be distinct from those in the HCI community in terms of their fundamental values. In place of transparency and clarity, they would foreground ambiguity and uncertainty, unresolvable multiplicities in place of singularities and certainties. Sustained interpretative engagement, not efficient completion of tasks, would be the desired outcome.

This is not an argument in favor of bad design. Nor is it a perverse justification for the ways in which under-resourced projects create confusion, as if that were a value for humanists. Quite the contrary. The challenge of creating an interface in which the performative character of interpretation can be supported and registered builds on demonstrable principles: multiple points of view, correlatable displays, aggregated data, social mediation and networking as a feature of scholarly work, and some of the old, but still charming, qualities of games like Nomic, with their emerging rule sets.

My argument is that the humanities embody a set of values and approaches to knowledge as interpretation that cannot be supported by a mechanistic approach to design. This is not just a semantic exercise, but a point of departure for implementation. The concept of performative materiality has a double meaning here. In the first sense, materiality is understood to produce meaning as a performance, just as any other “text” is constituted through a reading. That notion is fundamental to humanistic approaches to interpretation as situated, partial, non-repeatable. In the second sense, performative materiality suggests an approach to design in which use registers in the substrate and structure so that the content model and its expressions evolve. The “structure of knowledge” becomes a “scheme of knowing” that inscribes use as well as provoking it. The idea of a user-consumer is replaced by a maker-producer, a performer, whose performance changes the game. This takes us back to some of the earlier theory of games, to the work of Brenda Laurel and others, whose theoretical training brought notions of subjectivity and performance into the study of online environments.

This paper does not claim to have a toolset of design solutions, since, by definition, that would put us right back into the HCI model. Instead, it is an attempt to lay out some basic ideas on which to imagine a performative approach to materiality and the design of an interpretative interface. Such an interface supports acts of interpretation (it does not merely return selected results from a pre-existing data set) and is also changed by acts of interpretation; it evolves. Performative materiality and interpretative interface are co-dependent and emergent.

Checksums: Digital Materiality in the Archive

Kirschenbaum, Matthew, Department of English; Maryland Institute for Technology in the Humanities, University of Maryland, mgk@umd.edu

The general conversation about “materiality” in digital media has been ongoing for quite some time (notably Markley 1997; Hayles 2002). A number of new foci, models, and constructions have also recently been introduced into the conversation around the term. These include Kirschenbaum’s “formal materiality” and accompanying work on digital forensics (2008), the “media archeology” paradigm emerging out of several key European writers following in the wake of Friedrich Kittler (Parikka 2007, Ernst 2005), and the platform studies approach developed and encouraged by Montfort and Bogost (2009). At the same time, in the archival community, practitioners are finding themselves confronted with the materiality of born-digital objects in palpable and often increasingly time-sensitive real-world ways: as more and more collections begin to receive and process digital storage media as elements in the acquisition of personal “papers” from writers, politicians, and other public figures, those charged with their long-term care are implementing their own working models of materiality as they make decisions about what to save, what to index, and what to provide access to. (Given the venue for this year’s conference, it’s worth noting that Stanford University Libraries has been a pioneer in this area, with such efforts as the Self-Archiving Legacy Toolkit (SALT), the AIMS project on born-digital collections (An Inter-Institutional Model for Stewardship), and their participation in the Preserving Virtual Worlds project.) This paper will therefore seek to evaluate recent developments in the theoretical conversation about digital materiality in the specific context of applied practice in the archival community. However, it will not assume that born-digital archival content can function only as a test-bed for the various theoretical models; instead, it will also look at how the decisions being made in archival settings have the potential to inform critical and theoretical discourse.
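The checksum of the paper’s title is also the working archivist’s basic instrument for registering the fixity of a born-digital object. A minimal sketch in Python follows, with a hypothetical disk-image file name standing in for an item from such a collection.

import hashlib

def fixity(path: str, algorithm: str = "md5") -> str:
    """Compute a checksum for a born-digital object so that any later
    change to the bitstream, however small, can be detected."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            digest.update(block)
    return digest.hexdigest()

# Hypothetical disk image made from one of a collection's 3.5-inch diskettes.
print(fixity("larsen_diskette_042.img"))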

The paper follows a case-study approach. Thus, it will consider media archeology, which one definition glosses as “histories of suppressed, neglected, and forgotten media [. . .] ones that do not point selectively and teleologically to the present cultural situation and currently dominant media as their ‘perfection’” (Huhtamo 2010), in light of the Deena Larsen Collection at the University of Maryland. Larsen, who is traditionally associated with the Eastgate stable of hypertext authors through works such as Marble Springs (1993) and Samplers (1997), has deposited a large and heterogeneous array of hardware, software, storage media (some eight hundred 3½-inch diskettes), notebooks, manuscripts, correspondence, and ephemera at the Maryland Institute for Technology in the Humanities. This material includes a number of items related to her best-known work, Marble Springs, written in HyperCard and published the same year, it turns out, as a much more famous work of digital storycraft, Cyan’s Myst, also authored in HyperCard. The Maryland collection includes the original text, with annotations, as a manuscript in a notebook, various electronic drafts and early implementations, installed copies of the work (which allows the user to add marginal notes, thus creating the capacity to render every copy unique), and, most unusually, a shower curtain which contains laminated screenshots of different nodes (lexia) pasted up and linked together with colored string to diagram the affective relations between the different elements of the text. How can media archeology, which, following Foucault’s venerable formulations, seeks to tease out “hitherto unnoticed continuities and ruptures” (Huhtamo 2010), inform curatorial practice around this work? Among other things, this paper will argue for a network-oriented model of access, whereby a user of the collection, through metadata packaging, is encouraged to access and evaluate items in relation to one another rather than in isolation, as atomized treasures sprung from a Hollinger box (or its digital equivalent, a FEDORA record).

Similarly, the platform studies approach advocated by Montfort and Bogost will be evaluated in relation to a second recent project from MITH, a site devoted to vintage computing which uses a considered metadata and modeling approach to computing hardware, whereby individual components of the vintage machines are documented, contextualized in relation to the system as a whole, and expressed using Dublin Core. While not “platform studies” in its own right, the extent to which formalized representations of vintage computing systems can serve as the basis for further work in that vein is important, since there are currently few precedents for cataloging actual artifacts of computer history, and in particular for considering specific components as individualized entities rather than as generic classes of mass-produced material. The paper will therefore ask what kind of documentation and metadata a scholar interested in the affordances and individuality of a particular computing system would need in order to use a cataloged instance as the basis for critical work in the platform studies model.
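As a hedged illustration of what a component-level Dublin Core description might look like (the element values and identifiers below are invented for the example, not drawn from the MITH site), consider:

# A hypothetical Dublin Core description of one component of a vintage
# machine, treated as an individualized entity rather than a generic class.
component_record = {
    "dc:title":          "Internal 3.5-inch floppy drive",
    "dc:creator":        "Unknown manufacturer",        # placeholder value
    "dc:date":           "1987",                        # approximate, for illustration
    "dc:type":           "PhysicalObject",
    "dc:format":         "3.5-inch double-density drive mechanism",
    "dc:identifier":     "mith-vintage-0001-drive-a",   # invented local identifier
    "dcterms:isPartOf":  "mith-vintage-0001",           # the host system's record
    "dc:description":    "Drive bears a hand-written label and wear consistent with heavy use.",
}

for element, value in component_record.items():
    print(f"{element}: {value}")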

Finally, the paper will consider Kirschenbaum’s interwoven strands of formal and forensic materiality in relation to an event which post-dated his 2008 book: the recovery, reconstitution, and successful emulation of the original software program for William Gibson’s famously self-effacing poem “Agrippa” (Kirschenbaum et al. 2009). Emulation in particular allows us to usefully consider the armature of “formal materiality,” since emulation functions precisely to naturalize or operationalize one baseline computing system as the virtual host for a working instantiation of another. Yet emulation is also an uncanny event, a kind of literalization of the age-old conceit of the “ghost in the machine” as a contemporary operating system becomes the proxy for one long (un)dead. “Agrippa” itself, with its themes of memory, media, and loss, is an apt vehicle for a meditation on formal materiality and the limits of absolute emulation, especially since there is one overridingly obvious fact about any virtual implementation or emulation of “Agrippa”: unlike the original, which, famously, was a one-off “run” encrypted with a military-grade one-way key, here one can spawn “images” of the original disk at will and pass them through the emulator time and again. Emulation, the paper concludes, is finally chimerical, as is, perhaps, the notion of formal materiality itself: virtualization will always be interrupted by the individuality of original events.

One last word: the paper has a broader agenda beyond putting theory through its paces, using archival content as a “checksum” for critical debate. Digital humanities itself, as a community, has much to offer to ongoing efforts in the archives world, yet thus far there has been a curious gulf between the two, with only a handful of individuals serving as ambassadors between the communities. If digital humanities can offer a forum in which the artifacts and objects of contemporary cultural heritage, many of which will be born-digital rather than digitized, can serve as the basis for critical and technical inquiry, then it will be well positioned to take part in the increasingly urgent societal conversation around the future of our digital and material present.


Conference Info

Complete

ADHO - 2011
"Big Tent Digital Humanities"

Hosted at Stanford University

Stanford, California, United States

June 19, 2011 - June 22, 2011

151 works by 361 authors indexed

XML available from https://github.com/elliewix/DHAnalysis (still needs to be added)

Conference website: https://dh2011.stanford.edu/

Series: ADHO (6)

Organizers: ADHO

Tags
  • Keywords: None
  • Language: English
  • Topics: None