Ambiguity, Technology, and Scholarly Communication

Authorship
  1. Wendell Piez

    Mulberry Technologies, Inc., Piez Consulting Services

  2. Julia Flanders

    Women Writers Project - Brown University

  3. John Lavagnino

    King's College London, National University of Ireland, Galway (NUI Galway)


PART I
Jerome McGann very subtly puts his finger on a point of stress between humanities computing (especially as
pursued through text encoding) and traditional literary studies:
[Computer-facilitated] methods, however, cannot concern themselves with aesthetic issues because they
forego any engagement with the 'minute particulars' of specific works. More crucially, while these approaches
view their materials of study as indeterminate and non-transparent, the critical instruments they deploy are
not. Computers and computer programs may be (and often are) extremely ‘complex’; nonetheless, their
functionality depends upon their determinate and self-transparent structures. [from “Radiant Textuality”,
http://www.iath.virginia.edu/public/jjm2f/radiant.html]
The apparent incongruity between computational approaches and encoded data, on the one hand, and
literary meaning on the other, becomes more poignant and more interesting as our digital tools become
steadily more powerful and nuanced, and as the community of scholars with access to such tools broadens to
include those with traditional “aesthetic” or critical, rather than linguistic, interest in the text. For this
community, the problem of ambiguity and its related terms—indeterminacy, multivalence, uncertainty,
disagreement—is central not only to their own work, but also to their perception of the new digital tools,
which will seem alien and beside the point unless they can accommodate themselves to these qualities. Partly
in recognition of this, at ALLC2002 in Tuebingen, Stephen Ramsay suggested a “ludic” approach (building
on McGann’s “deformative” criticism), in which the computer is not so much turned to the purposes of a
literary “panopticon” (if this may serve as a figure for the text encoder’s ideal of transparent access to a text,
indexed, concordanced, and marked up for any kind of processing or analysis), but is more like an instrument
of play, gambling or divination.
Yet the larger question remains. This assumption of incongruity bears reexamination, and not only
from the standpoint of digital tools and literary method but within the entire economy of scholarly research
and communication. When we ask precisely why—or whether—digital methods cannot accommodate the
detailed textual insight on which literary criticism is built, we also raise several larger issues which this
session will seek to articulate and address.
First, as Wendell Piez will argue, a careful inspection of the problem of “ambiguity” (and digital
technology’s presumed incapacity with it) reveals this is a problem that subsists not at a single level, but at
every level of the system. Just as we feel there to be a difference or stress between an “ambiguous” literary
text and a “disambiguated”, cleanly marked up representation thereof, so also we insist there is a difference
between how an electronic interface presents traditional textual scholarship and how a critical edition in
print does it—and so also we find our work is difficult to evaluate and credit by traditional norms. A close
consideration shows these ambiguities and destabilizations not to be a characteristic of electronic work per se,
but rather of poles within scholarly work in general, which is always dedicated both to ambiguities and their
resolution—poles whose magnetic tension is being energized by the solvent effect of the new technologies on
traditionally stable institutional roles.
Next, Julia Flanders will explore the assumption that text encoding cannot accommodate the kinds of
ambiguity that are essential to scholarly textual representation and study. For reasons stemming from the
history of scholarly textual study, text encoding is greeted with ambivalence as a tool for representing the
subtler aspects of textual meaning. But markup technologies can, in principle, describe a much wider and
less determinate range of textual phenomena than is presently acknowledged; moreover, they will need to do
so in order to respond to and represent the real thinking and work scholars do with texts. This
paper will consider how such a model of text encoding might fit within the larger environment of scholarly
communication.
Finally, John Lavagnino reflects on the “scholarly economy” and the always-ambiguous efforts, and
occasional successes, of scholars in reaching audiences outside their own narrow circles. It turns out that just
as we wonder whether there is any audience left at all, new forms of distribution and access create
new kinds of connections across boundaries. This, in turn, prompts one to consider the question of
“ambiguity” rather in light of how we make our own language(s) and concerns of interest to readers who do
not bring our own presuppositions to the work. Broader audiences have problems with ambiguous language or
oblique references, but to counteract leveling tendencies of the electronic medium we can expect scholarly
publications to feel impelled towards greater explicitness in some respects anyway.
REFERENCES
Empson, William. “Preface to the second edition”. Seven Types of Ambiguity. London: Hogarth, 1984.
Kling, Rob, Lisa Spector, and Geoff McKim. “The Guild Model”. Journal of Electronic Publishing 8:1,
August 2002.
McGann, Jerome. “Radiant Textuality”. http://www.iath.virginia.edu/public/jjm2f/radiant.html
PART II
SCHOLARLY TRANSGRESSIONS
Wendell Piez
One might imagine a number of questions we could pose at the intersection of electronic text (specifically,
electronic text that makes sophisticated use of markup technologies) and the traditional problems or areas of
interest of literary criticism. For this session we have agreed to consider the concept of “ambiguity” in the
light of e-text technologies and humanities computing projects (or the humanities computing project in
general), and/or vice versa: e-text in the light of approaches to “ambiguity”.
Examining a particular text (my paper selects a randomly-found snippet from a pulp horror short
story [example 1]) to see how ambiguity might manifest itself in literary language (that is to say, where even
the most traditional critic might turn for such an example), it is apparent that ambiguity comes with the
territory (as it were) of reading: indeed, reading itself (particularly the reading of narrative) is an engagement
with a continuous chain of ambiguities, ambiguities suggested, modulated, and ultimately resolved (or not).
This movement, in fact, can be observed at several levels at once in the course of reading, from the lexical
level of the senses of words on up through various figurative structures into the narrative itself (and
sometimes into wider contexts than that). It is intriguing to note that already, examining a text in this way, we
can discern a dimension of textual experience which is the very stuff of literary criticism, but which hitherto,
markup systems have not tried to describe. (To my knowledge. In part, this may be due to the predisposition
of markup to model a text as a synchronous artifact, whereas the shifts and eddies of senses in a sentence or
paragraph occur diachronously, i.e. through time, and subjectively and variably so.) And already we have
noticed something we can call “ambiguity”: ambiguity arises when two or more possibilities are in play, and
which of them will hold true depends on factors unresolved, unknown or unknowable. More often than not,
these factors are part of the context within which the ambiguity occurs.
This is why when we look at something on the opposite extreme—an example of a markup language
identifying ambiguities or uncertainties in or regarding its content (and as an example here I have a fragment
of a DTD for a biographical encyclopedia in which several forms of dates are given, including various kinds
of "unknown" dates [example 2]), we may laugh to suppose that this should be taken to be an example of
“ambiguity in markup”. It is the opposite: a systematic (and hence, unambiguous, at least if well-designed for
its purpose) representation of ambiguity. TEI certainty attributes fall into this same category.
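To give the flavor of such a fragment (a hedged sketch only, not the actual DTD cited as example 2; the element and attribute names here are invented for illustration), a schema of this kind might enumerate in advance every admissible way of not knowing a date:

  <!-- Hypothetical sketch, not the encyclopedia's actual DTD fragment. -->
  <!ELEMENT birthDate (#PCDATA)>
  <!ATTLIST birthDate
      precision (exact | circa | before | after | unknown) "exact"
      cert      (high | medium | low)                      #IMPLIED>

  <!-- A conforming instance, with a TEI-style certainty attribute: -->
  <birthDate precision="circa" cert="medium">1668</birthDate>

What stands out is that the uncertainty itself is drawn from a closed, predeclared list of values: a systematic, and in that sense unambiguous, representation of ambiguity.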
Yet when we look at electronic technologies (such as the encyclopedia that provides the context for
the “naturalized” or “domesticated” ambiguity just cited, a work accessible in all kinds of ways besides print)
nonetheless we see something deeply “ambiguous” about them. Considering a series of scans of print and
electronic scholarly productions (I’ll show scans of a literary anthology, a critical edition with commentary of
an Ancient Greek text, and of two or three not untypical electronic interfaces [examples 3-7]), they apparently
share some striking qualities: all of them have their particular ways of representing and “resolving”
ambiguities by drawing attention to them and finding some actual or supposed resolution. (This generally
involves some simple protocol to be followed by the reader, such as consulting footnotes or marginalia, or
selecting with a pointer.) In fact, this seems to be of the essence of scholarly work (perhaps in contrast to the
publishing of theoretical tracts). So why are electronic media so “hot”, so scary, so cool or so retrograde?
Why is it even an issue whether and how they represent the world (or scholarly research) differently from
more established media such as journal articles and bound monographs? What accounts for the “culture gap”
between traditional literary studies and “humanities computing”?
E-text technologies, it is apparent, both create new contexts for the reception, examination and study
of literary or historic artifacts, and represent missing contexts in new ways (for as it turns out, the
representation of missing contexts is much of what scholarship has always had to be concerned about). Yet I
don’t think this is the real reason why they raise questions with such apparent urgency, since more traditional
forms of scholarly work do much the same; at least it does not account for the urgency fully. Rather, if one
broadens one’s view again to yet a greater context, a deeper reason for both excitement and anxiety appears.
Scholars, publishers, marketers, audiences, and librarians have all played quite distinct roles in a
highly-developed, elaborate information economy [see attached diagram]. In reality, of course, this economy
is incredibly complex and layered—the diagram is only the most schematic representation of it and much
could be said to extend and qualify what it can only hint at—but even in this simple view, it is apparent how
both stresses and opportunities could arise with the introduction of technologies (such as the web) that both
accelerate the movement of information (“stuff”) through the system as a whole, and circumvent established
channels and relationships within it. A world in which librarians or scholars or writers can become,
effectively, publishers, using electronic media to go directly to an audience (and this is just the beginning of
the disruptions caused by e-text), is a very different world from the old one. No more is the scholarly
economy one in which value and status are directly based on the scarcity of a narrow set of resources: the
available inches of pages in the name journals, or the attentions of the marketing department in a university
press, who will assure that the product becomes “visible” to a readership or acquisitions department. Rather,
where everyone can be a publisher it will not be the fact that someone selects you that connects you with an
audience, but something else: quality, topicality, timeliness, record of success.
That is, in order to ask whether e-media can address or represent “ambiguity”, we might need to move
beyond a merely pedagogical, phenomenological, or aesthetic critique of an electronic resource or tool in
itself, to anchor our question within the larger context of the ambiguities that the very existence of such a tool
introduces into the dusty realm of scholarly practices and folkways: ambiguities that are heightened, not
reduced, to whatever extent the new electronic resource manages to stimulate and support something
recognizable as serious scholarly work. E-media as such may be no better, nor worse, at representing
ambiguity in general than any other media or format: yet they are problematic—we raise questions about
them—because they introduce ambiguities where before there were none (“Is this guy qualified for the job?”).
Indeed, we now come to a point when the explosion of available information is finally balanced by an
explosion of available ways of participating and contributing: a kind of gift economy. It could be we are
coming to a moment when the long-cultivated specialization of the literary scholar actually plays against
itself: what was once an advantage (as the institutions of academic departments and publishers grew better
defined and more rigid in their roles and categories) comes to be a liability in an age when the premium is on
(superficially) some facility with machines and (more deeply) the particular intellectual capacities that are
required to work with emerging media. As we know, these are capacities such as versatility, adaptability,
imagination, a bent for cooperation and teamwork, and the broad view towards new possibilities (even while
e-text also continues to support established media formats)—without ever requiring that a serious scholar
change what she or he fundamentally does, the questioning, searching and synthesizing.
REFERENCES
Epstein, Jason. Book Business. New York: W.W. Norton, 2002. [See also an excerpt, “Reading: The Digital
Future”, at http://www.nybooks.com/articles/14318]
PART III
AMBIGUITY AND TEXT ENCODING
Julia Flanders
Text encoding and technology enter as interlopers into the complex and tense arena of scholarly publication,
in which scholars are in fact conflicted about whether they want their technologies of representation to be
transparent or not. Within this arena, text encoding is either too factual, too empirical, to be useful to the
humanistic enterprise—or else, in trying to be otherwise, it trespasses on a domain in which it is seen as an
alien force. On the one hand, if markup is merely a means of representing true or at least widely accepted
facts about a text, then does it not also have the effect of stiffening the text, making it less supple, reducing its
fruitful human indeterminacy, limiting the reader's interaction with the text?
And on the other hand, if markup is more than this—if it is a means of intervening in the text,
mapping or mimicking scholarly subjectivity and experimentation—then does it not usurp or destabilize the
role of the scholar? If it mimics our own interventions, it does so with a difference: that is, the ambiguities
scholars want are the ones that emerge from our own human sensibilities, not the ones that come from some
other, non-human domain.
This conflicted relationship with digital tools for textual representation stems from the historical roots
of modern literary publication and textual editing. In this model of textual production, the text has
immanence: its meaning is the quintessence of human insight and wisdom, the distillation of what makes
humanity rich and deep and complex and morally sound. This is most obvious in the case of the poet, but
inasmuch as the scholarly editor is editing cultural texts which carry this weight, the editor is the modern
surrogate of the poet, bringing the poet’s wisdom back to vibrancy by restoring and representing his or her
text. The scholarly editor must have a wisdom and sensibility which matches that of the poet in order to be
able to fulfill this role—must have insight into the poet’s likely meaning, habitual language use, taste, and so
forth. The reconstruction of the original text (regardless of one’s preferences as to copy text, treatment of
variants, and so forth) depends on the deployment of expert judgment to both express and control the presence
of ambiguity in the text: to make that ambiguity the field of the editor’s expertise, rather than a challenge
thereto.
While text markup has been widely accepted as an editorial tool for the preparation of scholarly
editions of many sorts (as evidenced by the existence of efforts such as the Model Editions Partnership, the
Walt Whitman Archives, the Canterbury Tales Project, the Piers Plowman Archive, and many others), its
domain is assumed to be limited to expressing the text's determinacies, not its indeterminacies. Indeed, this
suitability for expressing textual structure and behavior in a consistent, rigorous way is taken as text
encoding’s chief virtue, and the development of high-quality encoding schemes focuses on establishing
methods that will minimize ambiguity and indeterminacy. This approach has brought text encoding
methodology to a high degree of effectiveness in representing the kinds and aspects of texts that lend
themselves to this treatment.
But what of the aspects of textual communication which on the contrary require an attentiveness to
ambiguity itself? And what if instead of simply representing the text we aim to provide a scaffolding or
additional musculature that can support our readerly and critical activities—a text encoding which more
actively intervenes in the textual economy being established? Consider the following brief list of concepts and
domains in which ambiguity or multiplicity of meaning could play a central role in our textual work (a
hypothetical encoding sketch follows the list):
• the representation of scholarly disagreement within a given edition (whether at the level of
the interpretation of a particular mark on the page, or the ordering of sections, or the
interpretation of the meaning of a given passage)
• the representation of aspects of the text for which a controlled (i.e. disambiguated)
vocabulary cannot provide sufficient nuance: aspects which, in effect, cannot be “digitized”,
which have infinitely fine granularity
• the representation of textual variation in a way which does not merely capture the existing
readings, but also the suspension of their resolution, the ways in which they do not simply
displace one another but coexist
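As one concrete illustration of the last item, a hedged sketch in the style of the TEI critical apparatus (the sigla, readings, and attributions are invented; the point is the pattern rather than the particular tags):

  <!-- Hypothetical sketch: two readings held in suspension, neither promoted to a lemma. -->
  <app>
    <rdg wit="#A" resp="#editorX" cert="medium">travail</rdg>
    <rdg wit="#B" resp="#editorY" cert="medium">travel</rdg>
  </app>
  <!-- Both readings, and both attributions, remain in play: the markup records
       the disagreement without forcing its resolution. -->

The open question is whether our tools and reading habits would let such a structure remain suspended for the reader, rather than quietly resolving it on the reader's behalf.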
If text markup seems to operate at a different level of abstraction from these domains, it is because of
what we assume about its empirical commitments, its alliances with determinacy and fixity, with ascertaining
and clarifying meaning rather than allowing it to hover before the reader. Our current understanding of text
encoding is as a powerful tool for presenting alternatives, for allowing us to choose, rather than for helping us
to probe a more complex domain in a more hesitant or searching manner.
Can text markup (and its companion tools for digital representation of text) be used to redirect our
attention and ambition towards a subtler textual economy? And would such a tool find acceptance within the
scholarly community? This paper will consider the possibilities within the current environment of humanistic
scholarly communication.
PART IV
AMBIGUITY, LANGUAGE, AND THE SCHOLARLY
ECONOMY
John Lavagnino
We hear a great deal about the difficulty of getting certain kinds of scholarship published, notably printed
monographs. Yet in other respects academic work is becoming more readily available to a large public,
reversing the trend of the period from the 1970s through the 1990s when there was a contraction of
availability. It’s not just that scholars, like many other people, can now publish on the web; it's that web
publications of current and back issues of journals can have the side effect of exposing scholarship to people
in other fields who wouldn't have thought of consulting them, and it happens whether the authors of the
scholarship thought they were doing Web publishing or not.
JSTOR is one example of this, as it seeks to cover a broad range of fields and makes it a simple
matter to decide that you’ll look for your topic in political-science journals as well as the literary-studies
journals you really expect to have what you want. But many online journal organizations are similar: in this
field you usually see an organization doing either one journal or dozens. Economically, the logic pushes such
an organization to do as many as possible once they’ve done a few; for the reader, it’s another instance of the
Web effect—if it’s quick and simple to take a look at something you may well do so.
These are journal systems that cater to academic audiences; but there is already some overlap with
wider audiences, who sometimes have access to such systems and who constitute another market for the
future. And from the point of view of any individual field most of the academic world is a popular audience,
of people who do not know a particular field's history and practices.
One of the many effects of the rise of the Web, then, is that our academic writing is likely to be
encountered by people in distant fields, or perhaps not even in academic life. They may take an interest in our
subject matter without knowing the history or conventions of our fields; they may find the presentation
offputting or puzzling.
One response to this situation is to change nothing. This is the arXiv.org approach: this very large
online collection of preprints of scientific papers adapts an existing practice in scientific communication to
the new medium, and does it well; but without changing the nature of scientific writing and without changing
the audience for it. (See Kling et al. for some astute discussion of this system and some other models for
academic publishing that aren’t journal-oriented.) The papers on a topic such as astrophysics are no more
comprehensible to those outside the field than the usual run of published papers, and indeed these are mostly
papers that will shortly be published in the usual journals. And it works: many people use it and it serves its
purpose within the field, though it doesn't make any effort to help other audiences understand the work.
We could adopt such an approach, but scholars in the humanities will find it harder to do, because
outsiders to most humanities fields still assume that the literature is (or ought to be) comprehensible to them:
that academic writing about astrophysics needn’t make any sense to outsiders but academic writing about
history or literature should. Where arXiv.org doesn’t need to make much of an effort to scare away outsiders,
the equivalent in the humanities would have to take positive steps in that direction.
We should also think about whether it’s in our interest to repel people interested in our work. There
is an ethical argument against such a practice: in the end our work is financed by the general public and it
ought to be available to them. And there is a strategic argument: we might see more funding for our work if it
was more visible; if our work is likely to be more visible anyway, then seeking to make effective
presentations of our work for this larger audience would be wise, as a way of inviting interest and sympathy
rather than hostility.
Discussions of problems that outsiders have in reading academic writing tend to focus on jargon, but
such discussions tend to be superficial; as in learning entirely foreign languages, acquisition of a particular
critical language involves gaining an understanding of suggestions and implications in words that are not
easily encapsulated in definitions. We can compare a comment that William Empson made: when teaching in
Japan and China he warned his students away from his famous book “Seven Types of Ambiguity”, because
without a native understanding of the language it was misleading; his approach was not based on tracking
down every possible ambiguity, but only on those that had some relevance in context (xii). It’s characteristic of
language use within a subculture that a lot is going on that the outsider just can’t see.
It is easy but ineffective to suggest that scholars should simply write with an eye to a larger audience;
it can diminish the effectiveness of a contribution for its scholarly audience (by diluting its originality through
the need to take up space explaining familiar things) and it is a difficult task to state the general principles
underlying one’s practice. We may see some movement in this direction, but this will be the aspect of
scholarly writing that is slowest to change.
But other aspects of the publication of scholarly work are more open to change, in part
because they’re changing already in the electronic world. We know that some conventional framing devices
that are part of the meaning of print publications, such as the authority of particular journals, are not readily
interpreted by outsiders. But that kind of framing device is already attenuated by electronic publication in many
cases: the visual effect of a particular journal’s typography is lost in many journals that are published online
in HTML conversions, for example, and there is much more of an impetus to focus on individual articles
rather than on whole numbers of a journal. Many influential journals in the humanities have had quite
specific agendas, but such agendas are only rarely spelled out explicitly; there is a
strong incentive for electronic journals to move in that direction, though, to counteract the leveling tendency
of the medium, and greater explicitness about aims helps neophytes.
When the electronic journal is encountered as part of a huge collection of journals, rather than as
something with a slant and subject matter that matter for a particular subfield, there is a strong impetus to be
explicit not just about overall aims but about individual offerings. Humanities journals have mostly not
included abstracts of articles or systematic cross-references to related articles, but we can expect to see more
of these because they are the kind of feature that makes it clearer what one journal has to offer that’s different
from what many other equally accessible journals have.
We are starting to see the effects of large-scale electronic publication, and while these may not be as
utopian as commonly predicted a decade ago, we can see ways in which they are making life better: not only
providing richer resources for scholarly work, but also tending to encourage wider access to the fruits of
scholarship and more mixing of the disciplines.
