The MAAYA Project: Multimedia Analysis and Access for Documentation and Decipherment of Maya Epigraphy

Short paper

Authorship
  1. Daniel Gatica-Perez, Idiap Research Institute
  2. Carlos Pallán Gayol, University of Bonn
  3. Stephane Marchand-Maillet, University of Geneva
  4. Jean-Marc Odobez, Idiap Research Institute
  5. Edgar Roman-Rangel, University of Geneva
  6. Nikolai Grube, University of Bonn


1. Introduction
Archaeology and epigraphy have made significant progress toward deciphering the hieroglyphic writing of the Ancient Maya, which today can be found spread over space (at sites in Mexico and Central America and in museums in the US and Europe) and media types (stone, ceramics, and codices). While decipherment remains unfinished, technological advances in the automatic analysis of digital images and in large-scale information management systems are opening the possibility to analyze, organize, and visualize hieroglyphic data in ways that can ultimately support and accelerate the decipherment effort.

We present an overview of the MAAYA project (http://www.idiap.ch/project/maaya/), an interdisciplinary effort integrating the work of epigraphists and computer scientists with three goals:

(1) Design and development of computational tools for visual analysis and information management that effectively support the work of Maya hieroglyphic scholars;
(2) Advancement of the state of Maya epigraphy through the coupling of expert knowledge and the use of these tools; and
(3) Design and implementation of an online system that supports search and retrieval, annotation, and visualization tasks.
Our team approaches the above goals acknowledging that work needs to be conducted at multiple levels, including data preparation and modeling; epigraphic analysis; semi-automated and automated pattern analysis of visual and textual data; and information search, discovery, and visualization. In this abstract, we concisely describe three ongoing research threads, namely data sources and epigraphic analysis (Section 2), glyph visual analysis (Section 3), and data access and visualization (Section 4). We provide final remarks in Section 5.

2. Data sources and epigraphic analysis
The project focuses on Maya hieroglyphic inscriptions produced within the Yucatán Peninsula, in the northern Maya lowlands, which encompasses sites in the Mexican states of Yucatán and Campeche and parts of Quintana Roo, as well as the northernmost portion of Belize (see Fig. 1). Our research targets the three Maya books (codices) produced in the Yucatán Peninsula during the Postclassic period (1000-1521 AD). The first is the Dresden Codex, housed at the University Library of Dresden, Germany. For this data source, our project relies on published facsimiles (Förstemann, 1880; Codex Dresden, 1962; Codex Dresden, 1989) and on high-resolution, open-access images provided by the SLUB. The Madrid Codex is stored at the Museo de América in Madrid, Spain, and for its study our project relies on published facsimiles and line drawings (Codex Madrid, 1967; Villacorta and Villacorta, 1976). For the Paris Codex, the project relies on published facsimiles and on images provided online by the Bibliothèque nationale de France.

Fig. 1: Map indicating main archaeological sites under study by our project.

Codex pages were usually divided by red lines into sections known as t'ols (Fig. 2). Each t'ol is further subdivided into frames corresponding to the specific dates, texts, and imagery depicted. Frames contain several glyph blocks organized in a grid-like pattern of columns and rows, calendric glyphs, captions, and iconographic motifs. Briefly stated, t'ols are "segmented" into their main constituent elements (Fig. 2). The images are post-processed and, from these, high-quality, scale-independent vector images of the individual hieroglyphs and iconography are generated in three modes: (a) grayscale/color, (b) binary, and (c) reconstructed forms (marked in blue), which are based on epigraphic comparison of all available similar contexts (Figs. 3-4).
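As a rough illustration of the binary mode, the sketch below (our own, not the project's actual pipeline; the file name and threshold settings are hypothetical) binarizes a cropped glyph image and extracts its outer contours, a first step toward an outline-based vector form.

```python
# Minimal sketch: binarize a cropped glyph image and extract stroke contours.
# File name and parameters are illustrative only.
import cv2

# Load a cropped glyph image in grayscale (hypothetical file name).
img = cv2.imread("dresden_47c_block_A1.png", cv2.IMREAD_GRAYSCALE)

# Otsu thresholding separates ink strokes from the page background.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Polygonal contours approximate the scale-independent outline of the strokes.
contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
print(f"{len(contours)} contours extracted")
```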

Fig. 2: Page 47c (44c) of the Dresden Codex framing main individual constituent elements (by Carlos Pallán based on SLUB online open-access image)

The process of annotating the Codices entails an analysis comprising the following steps: (a) identification of individual signs in the (Thompson, 1964) catalog, e.g. T0588:0181; (b) identification of individual signs in the (Macri and Vail, 2008) catalog, e.g. SSL:ZU1; (c) identification of individual signs in the (Evrenov et al., 1961) catalog, e.g. 400-010-030; (d) identification of signs in the (Zimmermann, 1956) catalog, e.g. Z0702-0060; (e) transcription, specifying phonetic values for individual signs as syllables (lowercase bold) or logograms (uppercase bold), e.g. K'UH-OK-ki; (f) transliteration, conveying reconstructed Classic Maya speech (words) formed by the combination of individual signs, e.g. k'uhul ook; (g) morphological segmentation, a division into morphemes for later linguistic analysis, e.g. k'uh-ul Ok; (h) morphological analysis, assigning each of the previous segments to a definite linguistic category, e.g. god-ADJ step(s)/foot; (i) English translation, e.g. "Divine step(s)/foot". Taken together, the processing steps within this workflow provide the ground for more advanced multimedia analyses (Fig. 5).
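To make the multivariable annotation concrete, a record for a single glyph block might be organized as in the following sketch (field names are ours, not the project's actual schema; the example values are taken from the steps above).

```python
# Minimal sketch of an annotation record for one glyph block.
# Field names are illustrative; the project's actual schema may differ.
from dataclasses import dataclass

@dataclass
class GlyphBlockAnnotation:
    thompson: str         # (a) Thompson (1964) catalog code
    macri_vail: str       # (b) Macri and Vail (2008) catalog code
    evrenov: str          # (c) Evrenov et al. (1961) catalog code
    zimmermann: str       # (d) Zimmermann (1956) catalog code
    transcription: str    # (e) phonetic values of individual signs
    transliteration: str  # (f) reconstructed Classic Maya words
    segmentation: str     # (g) division into morphemes
    analysis: str         # (h) linguistic category per morpheme
    translation: str      # (i) English translation

example = GlyphBlockAnnotation(
    thompson="T0588:0181",
    macri_vail="SSL:ZU1",
    evrenov="400-010-030",
    zimmermann="Z0702-0060",
    transcription="K'UH-OK-ki",
    transliteration="k'uhul ook",
    segmentation="k'uh-ul Ok",
    analysis="god-ADJ step(s)/foot",
    translation="Divine step(s)/foot",
)
```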

Fig. 3: Process to generate vectorial representations of the Dresden Codex: a) color/grayscale; b) binary; c) reconstructed (blue) forms (by Carlos Pallán based on SLUB online open-access images)

Fig. 4: Vectorial representations of the Madrid Codex, Page (T'ol) 10b, Frame 1: a) color/grayscale; b) binary; c) reconstructed (blue) forms (by Guido Krempel based on (Codex Madrid, 1967))

Fig. 5: Multivariable fields used to annotate textual contents of Dresden Page (T'ol) 47c (44c) (by Carlos Pallán)

3. Visual analysis of glyphs
Modeling Maya glyph shape is challenging due to the complexity and high intra-class variability of glyphs. We are developing methods to characterize glyphs for visual matching and retrieval tasks. In previous work, we proposed a shape descriptor based on bags of visual words (HOOSC: Histogram of Orientation Shape Context) and used it for isolated glyph retrieval (Roman-Rangel, 2011). We are pursuing two research lines to extend our current capabilities.
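For readers unfamiliar with the bag-of-visual-words pipeline underlying such descriptors, the following sketch (using scikit-learn; it does not reproduce the HOOSC descriptor itself) shows how local descriptors are quantized into a visual vocabulary, each glyph is summarized as a normalized histogram, and retrieval ranks glyphs by histogram distance.

```python
# Generic bag-of-visual-words sketch (not the actual HOOSC implementation):
# local descriptors -> k-means vocabulary -> per-glyph histogram -> retrieval.
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_local_descriptors, k=500):
    """Cluster local descriptors (stacked over all glyphs) into k visual words."""
    return KMeans(n_clusters=k, n_init=10).fit(all_local_descriptors)

def bag_of_words(local_descriptors, vocabulary):
    """Quantize one glyph's local descriptors into a normalized histogram."""
    words = vocabulary.predict(local_descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def retrieve(query_hist, database_hists):
    """Rank database glyphs by L1 distance to the query histogram."""
    dists = np.abs(database_hists - query_hist).sum(axis=1)
    return np.argsort(dists)
```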

Improved shape representations. Three directions are being considered: (1) the improvement of bag representations to retrieve syllabic glyphs. In particular, we have developed a method to detect visual stop-words (Roman-Rangel, 2013a) and a statistical approach to construct robust bags-of-phrases (Roman-Rangel, 2013b); (2) the use of neural-network architectures such as auto-encoders (Ngiam, 2011) that automatically build representations from training data. These approaches are an alternative to handcrafted descriptors like HOOSC and provide a principled way to quickly adapt representations to different data sources (codex vs. monument glyphs); (3) the use of representations based on the decomposition of glyphs into graphs of segments, from which shape primitives can be extracted. This representation might be better suited than histogram-based descriptors like HOOSC for identifying which strokes of a shape are discriminative, potentially allowing comparisons with the so-called diagnostic features provided by epigraphers (Fig. 6).
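As a simplified illustration of visual stop-word detection (not the actual method of Roman-Rangel, 2013a), one could discard visual words whose document frequency over the glyph collection is too high or too low to be discriminative, and re-normalize the remaining histograms; the thresholds below are placeholders.

```python
# Simplified sketch: flag visual words that occur in nearly every glyph
# (or almost none) as stop-words and drop them from the histograms.
import numpy as np

def keep_mask(histograms, low=0.01, high=0.9):
    """histograms: (n_glyphs, n_words) array of bag-of-words counts."""
    doc_freq = (histograms > 0).mean(axis=0)      # fraction of glyphs containing each word
    return (doc_freq > low) & (doc_freq < high)   # True for words worth keeping

def filter_histograms(histograms, mask):
    kept = histograms[:, mask].astype(float)
    norms = kept.sum(axis=1, keepdims=True)
    return kept / np.maximum(norms, 1.0)          # re-normalize after removal
```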

Fig. 6: Three glyph instances of the same sign. Right: one diagnostic feature and variant.

Co-occurrence modeling. We are exploring ways to exploit the fact that glyphs do not occur in isolation within inscriptions, but in ordered groups (glyph blocks) (Fig. 2). To this end, we are studying options to build models relying on glyph co-occurrence statistics, or further accounting for the spatial position of glyphs within blocks. We plan to investigate how such information can be used in a retrieval system to improve performance and to help scholars deal with unknown or damaged glyphs. This has several dimensions, such as query types (e.g. a single glyph with known identities of the other glyphs within the block) and the contextual combination of shape similarity with text metadata.
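A minimal sketch of the co-occurrence idea (our own illustration, not the project's final model): count how often pairs of glyphs appear together within a block, then use those counts to re-score shape-based candidates for an unknown or damaged glyph whose neighbors are known. The mixing weight is a placeholder.

```python
# Minimal sketch: glyph co-occurrence counts within blocks, used to
# re-score shape-based candidates for an unknown or damaged glyph.
from collections import Counter
from itertools import combinations

def cooccurrence_counts(blocks):
    """blocks: list of lists of glyph identifiers (e.g. catalog codes)."""
    counts = Counter()
    for block in blocks:
        for a, b in combinations(sorted(set(block)), 2):
            counts[(a, b)] += 1
    return counts

def rescore(candidates, context_glyphs, counts, weight=0.5):
    """candidates: {glyph_id: shape similarity}; context_glyphs: known neighbors."""
    rescored = {}
    for glyph, sim in candidates.items():
        context = sum(counts[tuple(sorted((glyph, c)))] for c in context_glyphs)
        rescored[glyph] = (1 - weight) * sim + weight * context
    return sorted(rescored, key=rescored.get, reverse=True)
```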

4. Data access and visualization
Our work in this direction focuses on the visualization of, and effective access to, image databases with archaeological value. We are developing a repository that will serve further goals within the project. This database stores visual elements of the Madrid, Dresden, and Paris codices. It is complemented by an online system, shown in Fig. 7, which allows for the capture and annotation of the codices. More specifically, the repository contains relevant information regarding the composition of the codices, such as hierarchical relations between components and the bounding boxes of glyphs. It therefore allows querying visual elements at different levels of semantic structure, i.e., page, t'ol, glyph block, individual glyph, etc. The repository will also allow querying and studying statistics of the Maya writing system, e.g., hieroglyph co-occurrences.
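The hierarchy just described could be captured, in a minimal sketch (table and column names are ours, not the repository's actual schema), by tables linking codices, t'ols, glyph blocks, and individual glyphs with their bounding boxes, so that queries can address any level of the structure.

```python
# Minimal sketch of the hierarchical repository (illustrative schema only).
import sqlite3

conn = sqlite3.connect("maaya_sketch.db")
conn.executescript("""
CREATE TABLE codex (id INTEGER PRIMARY KEY, name TEXT);              -- Dresden, Madrid, Paris
CREATE TABLE tol   (id INTEGER PRIMARY KEY, codex_id INTEGER REFERENCES codex(id),
                    page TEXT, position TEXT);
CREATE TABLE block (id INTEGER PRIMARY KEY, tol_id INTEGER REFERENCES tol(id),
                    grid_row INTEGER, grid_col INTEGER);
CREATE TABLE glyph (id INTEGER PRIMARY KEY, block_id INTEGER REFERENCES block(id),
                    catalog_code TEXT,                                -- e.g. Thompson number
                    x INTEGER, y INTEGER, w INTEGER, h INTEGER);      -- bounding box
""")

# Example query: all glyphs of a given t'ol, reached through the block level.
rows = conn.execute("""
SELECT glyph.catalog_code, glyph.x, glyph.y, glyph.w, glyph.h
FROM glyph JOIN block ON glyph.block_id = block.id
WHERE block.tol_id = ?""", (1,)).fetchall()
```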

Fig. 7: Snapshot of the online tool that feeds the database with imagery and its corresponding annotations, e.g., codex name, t'ol, glyph-block reference, and Thompson and Macri-Looper catalog codes.

The second research line is the advancement of visualization techniques, more precisely the development of techniques for exploring the feature space of the visual shape descriptors used to represent Maya hieroglyphs for retrieval purposes. By relying on these visualization methods, our goals are to detect, understand, and interactively overcome some of the drawbacks associated with the shape descriptors currently in use (Vondrick, 2013).
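As a simple example of the kind of exploration we have in mind (a sketch using off-the-shelf tools, not the visualization techniques under development), glyph descriptors can be projected to two dimensions and plotted so that clusters, overlaps, and outliers of a shape representation become visible.

```python
# Minimal sketch: project glyph shape descriptors to 2D for visual inspection.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_descriptor_space(descriptors, labels):
    """descriptors: (n_glyphs, dim) array; labels: sign identities used for coloring."""
    coords = PCA(n_components=2).fit_transform(descriptors)
    labels = np.array(labels)
    for sign in np.unique(labels):
        pts = coords[labels == sign]
        plt.scatter(pts[:, 0], pts[:, 1], label=str(sign), s=12)
    plt.legend(fontsize=6)
    plt.title("2D projection of glyph shape descriptors")
    plt.show()
```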

5. Conclusions
We presented an overview of the MAAYA project's work in progress on epigraphic analysis, automatic visual analysis, and data access and visualization. Our close integration of work in computing and epigraphy is producing initial steps towards the design of computing methods tailored to epigraphic work, and can create opportunities to revisit findings in Maya epigraphy in the light of what computer-based methods can reveal (e.g., data-driven analyses of glyph diagnostic features). At the same time, several of our machine learning, computer vision, and information retrieval methods are applicable to other problems in the digital humanities. We would be interested in investigating applications of these methodologies to other sources of cultural heritage materials.

Acknowledgments
We gratefully acknowledge the support of the Swiss National Science Foundation (SNSF) and the German Research Foundation (DFG). We also thank all the members of the team (Rui Hu, Gulcan Can, April Morton, Oscar Dabrowski, and Peter Biro) for their contributions.

References
Codex Dresden (1962). Codex Dresdensis: Maya-Handschrift der Sächsischen Landesbibliothek Dresden. Edited by the Sächsische Landesbibliothek Dresden with Prof. Dr. phil. habil. Eva Lips. Akademie-Verlag GmbH, Berlin. (Copy 626 of 700 printed.)

Codex Dresden (1989). Die Dresdner Maya-Handschrift. Sonderausgabe des Kommentarbandes zur vollständigen Faksimile-Ausgabe des Codex Dresdensis. Akademische Druckerei- und Verlags-Anstalt, Graz 1989, including Helmut Deckert: Zur Geschichte der Dresdner Maya-Handschrift and Ferdinand Anders: Die Dresdner Maya-Handschrift.

Codex Madrid (1967). Codex Tro-Cortesianus. Museo de América, Madrid. Facsimile edition 1967, moderated by Francisco Sauer and Josepho Stummvoll. Introduction and summary by F. Anders. Akademische Druck- und Verlagsanstalt, Graz, Austria.

Evrenov, E.B., Kosarev, Y. and Ustinov, B.A. (1961). The Application of Electronic Computers in Research of the Ancient Maya Writing. USSR, Novosibirsk.

Förstemann, E. W. (1880). Die Maya-Handschrift der königlichen Bibliothek zu Dresden. Leipzig: Verlag der Naumann'schen Lichtdruckerei.

Macri, M. and Vail, G. (2008). The New Catalog of Maya Hieroglyphs, Volume Two: The Codical Texts. University of Oklahoma Press, 308 pp.

Ngiam, J. (2011). Unsupervised feature learning and deep learning tutorial. http://ufldl.stanford.edu/wiki/index.php/UFLDL_Tutorial

Roman-Rangel, E., Pallan, C., Odobez, J.-M. and Gatica-Perez, D. (2011). Analyzing Ancient Maya Glyph Collections with Contextual Shape Descriptors, Int. Journal of Computer Vision, Special Issue on e-Heritage, Vol. 94, No. 1, pp. 101-117, Aug. 2011.

Roman-Rangel, E. and Marchand-Maillet, S. (2013a). Stopwords Detection in Bag-of-Visual-Words: The Case of Retrieving Maya Hieroglyphs. International Workshop on Multimedia for Cultural Heritage (MM4CH), at the International Conference on Image Analysis and Processing.

Roman-Rangel, E. and Marchand-Maillet, S. (2013b). Bag-of-Visual-Phrases via Local Contexts. Workshop on Recent Advances in Computer Vision and Pattern Recognition (RACVPR), at the Asian Conference on Pattern Recognition.

Thompson, J. E. S. (1964). A Catalog of Maya Hieroglyphs. University of Oklahoma Press. Available online at: http://www.famsi.org/mayawriting/thompson/index.html

Villacorta, J. A. and Villacorta, C. A. (1976). Códices Mayas (reproducidos y desarrollados por). Sociedad de Geografía e Historia de Guatemala, Guatemala, C.A. (Second edition.)

Vondrick, C., Khosla, A., Malisiewicz, T. and Torralba, A. (2013). HOGgles: Visualizing Object Detection Features. International Conference on Computer Vision.

Zimmermann, G. (1956). Die Hieroglyphen der Maya-Handschriften. Abhandlungen aus dem Gebiet der Auslandskunde, Band 62, Reihe B. Universität Hamburg.

SLUB: Sächsische Landesbibliothek – Staats- und Universitätsbibliothek Dresden

http://digital.slub-dresden.de/werkansicht/dlf/2967/1/cache.off

http://gallica.bnf.fr/ark:/12148/btv1b8446947j/f1.zoom.r=Codex%20Peresianus.langDE

http://digital.slub-dresden.de/werkansicht/cache.off?id=5363&tx_dlf%5Bid%5D=2967&tx_dlf%5Bpage%5D=47


