Digital Manuscripts: Editions v. Archives

multipaper session
Authorship
  1. Manfred Thaller

    Max-Planck-Institut für Geschichte

  2. Elli Mylonas

    Brown University

Child sessions
  1. Digital Archives, Stefan Aumann
  2. Digital Editions: Variant Readings and Interpretations, Dino Buzzetti
  3. Text as a Data Type, Manfred Thaller
Work text
This plain text was ingested for the purpose of full-text search, not to preserve original formatting or readability. For the most complete copy, refer to the original conference program.

In recent years, manuscripts have increasingly become the object of presentation on the screen. This session, for whose contributions individual abstracts are appended, looks systematically at the problems raised by such digital manuscripts. It is assumed that such systems fall into one of three classes:
“Digital Facsimiles”
A digital facsimile will typically consist of an individual source, scanned at a resolution high enough to permit at least palaeographic work, but comprising only a few hundred or a few thousand pages. The purpose of such an edition is to make available one witness that resides in a specific location.
The components of such a digital facsimile should, at the very least, include:
– A complete transcription, accessible via a
fulltext system.
– A prosopographical catalogue of all persons
mentioned, in the form of a database.
– A topographical catalogue of all locations
mentioned, in the form of a database.
All these components are administered by retrieval systems, which allow a user to select those portions of the manuscript to which these elements of description or transcription pertain. This means that the user is able to call up, for example, the whole manuscript page(s) containing a reference to a specific person. High-end versions of digital facsimiles will, as a result of such a query, show the location in the manuscript where the selected person or phrase occurs.
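In present-day terms, this linkage between catalogue entries and page images can be pictured as a small lookup structure. The following sketch is purely illustrative; the record types, names, and shelf marks are hypothetical and are not drawn from any of the projects discussed here.

# Hypothetical sketch: catalogue entries (here, persons) are linked to the
# manuscript pages on which they occur, so that a query for a person
# returns the page images the facsimile system should display.

from dataclasses import dataclass, field

@dataclass
class Page:
    shelf_mark: str          # archival signature of the page
    image_file: str          # path to the high-resolution scan
    transcription: str = ""  # full-text transcription of the page

@dataclass
class PersonEntry:
    name: str                                        # normalised name from the prosopographical catalogue
    pages: list[Page] = field(default_factory=list)  # pages on which the person is mentioned

def pages_mentioning(catalogue: dict[str, PersonEntry], name: str) -> list[Page]:
    """Return every page on which the given person is mentioned."""
    entry = catalogue.get(name)
    return entry.pages if entry else []

# Usage: build a tiny catalogue and retrieve the facsimile pages for one person.
page = Page("Cod. 123, fol. 14r", "scans/cod123_014r.tif", "... Johannes de Monte ...")
catalogue = {"Johannes de Monte": PersonEntry("Johannes de Monte", [page])}
for p in pages_mentioning(catalogue, "Johannes de Monte"):
    print(p.shelf_mark, p.image_file)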
Beyond the basic tools given above, more advanced versions of digital facsimiles will typically include:
– Tools for producing maps from the topographical information contained in the source.
– Computer-accessible knowledge representations describing, as far as applicable:
  – calendar systems as used in the source,
  – coinage systems as used in the source,
  – a terminology database relating, for example, the legal terminology used in the source.
– Graphical representations of the alphabets used by identifiable scribes.
“Digital Editions”
Digital editions aim at presenting the same kind of corpus as digital facsimiles. Unlike the latter, however, they attempt to represent either all existing witnesses or at least a significant subset of them, thereby corresponding exactly to the concept of a critical edition.
In addition to the tools provided by digital facsimiles, they will include mechanisms to:
– Represent the individual witnesses dynamically. Popularly speaking: when you look at a text, it is the text of the reconstructed original; if you press “F1”, you see the text as occurring in witness “α”; if you press “F2”, you see the text as occurring in witness “β”; and so on (a minimal sketch follows the list).
– Link the transcription of the individual witnesses to their graphical representation, so that, if the user doubts a specific transcription of a given witness, (s)he can check the reading in the digital representation of that manuscript.
While quite a few projects exist which have either produced early versions of digital facsimiles or are in the process of doing so, digital editions have so far rarely been realized, with the notable exception of the Canterbury Tales project, though they are being actively explored by a number of projects.
“Digital Archives”
This form of presentation does not aim at a specific, relatively small “text”, but at the representation of archives as a whole. Sizes to be expected range from about 50,000 pages to several million.
To make these masses of information available, much more “shallow” descriptions will have to be used. While scanning operations converting 6–8 million pages have been performed successfully, most existing attempts at the large-scale conversion of archives have so far been hampered by less than convincing tools for accessing the huge amounts of material. These difficulties are probably exacerbated by the fact that many historians and archivists have a somewhat archaic concept of databases and believe they are forced to create highly formalized schemes to administer manuscript material. The future will probably show the usefulness of a more direct translation of traditional archival tools into computer-supported versions of those tools.
As the entering of descriptions / transcriptions of sources usually takes much more time than the digitization itself, a digital archive can, and should, successively pass through various levels of accessibility (a schematic sketch is given after the list). Such levels can, for example, be:
1) The digitized documents with nothing but
their archival shelf marks.
2) The same documents with short abstracts in
natural language.
3) The same documents with catalogues of prosopographical and / or topographical and / or
formulaic information.
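The successive levels just listed can be thought of as stages of enrichment of one and the same document record: the shelf mark is present from the start, while the abstract and the catalogue links are added later, without access ever being blocked at the earlier stages. The sketch below is a hypothetical illustration (all record fields and sample values are invented), not a description of any of the projects mentioned.

# Hypothetical sketch of the successive accessibility levels: a document
# record starts with nothing but its shelf mark (level 1) and is later
# enriched with an abstract (level 2) and catalogue links (level 3).

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ArchivalDocument:
    shelf_mark: str                                     # level 1: always present
    abstract: Optional[str] = None                      # level 2: short natural-language abstract
    persons: list[str] = field(default_factory=list)    # level 3: prosopographical links
    places: list[str] = field(default_factory=list)     # level 3: topographical links

    def level(self) -> int:
        """Current level of accessibility of this record."""
        if self.persons or self.places:
            return 3
        if self.abstract:
            return 2
        return 1

doc = ArchivalDocument("Urk. 1512 Nr. 7")
print(doc.level())                 # 1: shelf mark only
doc.abstract = "Sale of a meadow near the town mill."
print(doc.level())                 # 2: abstract added
doc.persons.append("Heinrich Müller")
print(doc.level())                 # 3: catalogue information added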
This “dynamic” character of collections of digitized manuscripts is actually one of their more fundamental differences from more traditional ways of making source material accessible.
The proposed session assumes that all three kinds of “products” of manuscript processing are related to a group of common technologies. The intention of the session is to show the interrelationship between the problems encountered in realizing the various types of product. For this purpose the following course will be chosen:
a) Starting from the experience gained in a series of projects dealing with the administration of various types of digital source material, a contribution by Manfred Thaller describes the relationship between the conceptual problems involved in creating databases that hold such source material and the requirements for software tools to facilitate the creation of such environments.
b) Dino Buzzetti will then show how these general techniques can be used to solve a typical problem in the administration of digital editions: the processing of variants.
c) To prove the generality of the proposed solutions, this contribution is confronted with one by Stefan Aumann, who will discuss an example from the creation of a 50,000-page archival database, showing how the general techniques described can also be used to realize access mechanisms for large collections.

If this content appears in violation of your intellectual property rights, or you see errors or omissions, please reach out to Scott B. Weingart to discuss removing or amending the materials.

Conference Info

In review

ACH/ALLC / ACH/ICCH / ALLC/EADH - 1996

Hosted at University of Bergen

Bergen, Norway

June 25, 1996 - June 29, 1996

147 works by 190 authors indexed

Scott Weingart has a print abstract book that needs to be scanned; certain abstracts are also available on the dh-abstracts GitHub page. (https://github.com/ADHO/dh-abstracts/tree/master/data)

Conference website: https://web.archive.org/web/19990224202037/www.hd.uib.no/allc-ach96.html

Series: ACH/ICCH (16), ALLC/EADH (23), ACH/ALLC (8)

Organizers: ACH, ALLC
