Numerous high-quality primary text sources are scattered across the web or stored remotely; in the context of the curation project described here, this means full-text transcriptions (and corresponding image scans) of German works originating from the 15th to the 19th centuries. For example, transcriptions of historical sources are stored locally on degrading recording media and cannot be found, let alone accessed, by third parties. Additionally, idiosyncratic, project-specific markup conventions and uncommon, out-of-date, or inflexible storage formats often hinder further usage and analysis of the data. Textual resources are, moreover, often accompanied by scarce, insufficient, or inaccurate bibliographic information, which is one further reason why valuable resources, even if available on the web, remain undiscovered by and of little use to the wider research community. The integration of these dispersed primary text sources into the sustainable, web- and centres-based research infrastructure of CLARIN-D will be an important step towards solving this problem. The full paper illustrates an exemplary approach taken by the »Deutsches Textarchiv« (DTA; www.deutschestextarchiv.de) at the Berlin-Brandenburg Academy of Sciences and Humanities (BBAW) to integrate dispersed textual resources and corresponding image scans from various sources into a large historical text corpus of its own and to insert these into the CLARIN-D infrastructure.
Among mass digitization methods, double-keying is considered the one with the lowest error rate. This method requires two independent transcriptions of a text by two different operators. It is particularly well suited to historical texts, which often exhibit deficiencies such as poor master copies or other difficulties such as spelling variation or complex text structures. Providers of data entry services using the double-keying method generally advertise very high accuracy rates (around 99.95% to 99.98%). These percentages are typically estimated on the basis of small samples, and little if anything is said about the actual amount of text or the text genres that were proofread, about error types, proofreaders, etc. In order to obtain significant data on this problem, it is necessary to analyze a large amount of text representing a balanced sample of different text types, to distinguish the structural XML/TEI level from the typographical level, and to differentiate between various types of errors, which may originate from different sources and may not be equally severe. This paper presents an extensive and complex approach to the analysis and correction of double-keying errors, applied by the DFG-funded project "Deutsches Textarchiv" (German Text Archive, hereafter DTA) in order to evaluate and, ideally, increase the transcription and annotation accuracy of double-keyed DTA texts. Statistical analyses of the results gained from proofreading a large quantity of text are presented, which verify the commonly advertised accuracy rates for the double-keying method.
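As an illustration of the comparison step that underlies such agreement figures, the following sketch aligns two independently keyed transcriptions of the same line and reports their character-level agreement. The sample strings and the resulting ratio are hypothetical and are not taken from the DTA data; note that this measures agreement between the two keying passes, not accuracy against the master copy, which still requires proofreading.

```python
import difflib

# Two hypothetical, independently keyed transcriptions of the same line.
key_a = "Die Vernunft ist das Vermoegen der Principien."
key_b = "Die Vernunft ist das Vermogen der Principien."

matcher = difflib.SequenceMatcher(None, key_a, key_b)

# Every non-"equal" opcode marks a discrepancy that a proofreader must resolve.
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":
        print(f"{tag}: {key_a[i1:i2]!r} vs. {key_b[j1:j2]!r}")

# ratio() gives the share of matching characters between the two passes,
# a rough proxy for inter-keying agreement.
print(f"agreement: {matcher.ratio():.4f}")
```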
This paper is an updated presentation of the Ramses project, currently under development at the University of Liège. The first section outlines the main objectives and gives a technical description of the general architecture of the Ramses software. The second part describes the encoding procedures and reviews the current state of the annotation. The third section discusses, from an epistemological viewpoint, some changes brought about by the use of large-scale corpora. The paper ends with a presentation of some new avenues for research that will ensue from the use of a complex multilevel corpus.
The article summarizes the contents and the structural premises of the "Thesaurus Indogermanischer Text- und Sprachmaterialien" (TITUS), focusing on search functions and facilities and on questions of the encoding of ancient languages written in various scripts. Examples are taken from Tocharian, Greek, Vedic Sanskrit, and other ancient Indo-European languages covered by TITUS.
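One recurring encoding question for such multi-script material can be made concrete with Unicode normalization: the same polytonic Greek form may be stored as precomposed characters or as base letters plus combining diacritics, and the two representations do not compare equal byte for byte. The sketch below uses only the Python standard library; the example word is illustrative and not taken from TITUS.

```python
import unicodedata

# The same polytonic Greek word in precomposed (NFC) and decomposed (NFD) form.
word_nfc = unicodedata.normalize("NFC", "ἄνθρωπος")
word_nfd = unicodedata.normalize("NFD", word_nfc)

print(word_nfc == word_nfd)          # False: different code point sequences
print(len(word_nfc), len(word_nfd))  # NFD is longer: combining marks count separately

# Reliable search therefore requires normalizing both the indexed text
# and the query string to the same normalization form.
```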
Virtually all conventional text-based natural language processing techniques - from traditional information retrieval systems to full-fledged parsers - require reference to a fixed lexicon accessed by surface form, typically trained from or constructed for synchronic input text adhering strictly to contemporary orthographic conventions. Unconventional input such as historical text which violates these conventions therefore presents difficulties for any such system due to lexical variants present in the input but missing from the application lexicon. To facilitate the extension of synchronically-oriented natural language processing techniques to historical text while minimizing the need for specialized lexical resources, one may first attempt an automatic canonicalization of the input text. This paper provides an informal overview of the various canonicalization techniques currently employed by the Deutsches Textarchiv project at the Berlin-Brandenburg Academy of Sciences and Humanities to prepare a corpus of historical German text for part-of-speech tagging, lemmatization, and integration into a robust online information retrieval system.
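To make the idea of canonicalization concrete, the following sketch applies a few illustrative string-level rules for historical German (long s, superscript-e umlauts, th-spellings). These rules are simplified stand-ins and do not reproduce the DTA's actual canonicalization techniques, which the paper itself surveys.

```python
import re
import unicodedata

# A few simplified, illustrative rules; a real pipeline is far richer and
# typically combines rule-based, lexicon-based, and statistical components.
RULES = [
    (re.compile("ſ"), "s"),      # long s -> round s
    (re.compile("aͤ"), "ä"),      # a + combining small e -> a-umlaut
    (re.compile("oͤ"), "ö"),
    (re.compile("uͤ"), "ü"),
    (re.compile(r"\bth"), "t"),  # e.g. "thun" -> "tun" (overgeneralizes!)
]

def canonicalize(token: str) -> str:
    token = unicodedata.normalize("NFC", token)
    for pattern, repl in RULES:
        token = pattern.sub(repl, token)
    return token

for historical in ["Geſchichte", "thun", "muͤſſen"]:
    print(historical, "->", canonicalize(historical))
```

Rule-based substitution of this kind is only a first pass: as the `th` rule shows, naive rules overgeneralize, which is one reason lexicon lookups and statistical disambiguation are layered on top.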
For a fistful of blogs: Discovery and comparative benchmarking of republishable German content
(2014)
We introduce two corpora gathered from the web and related to computer-mediated communication: blog posts and blog comments. In order to build these corpora, we addressed the following issues: website discovery and crawling, content extraction constraints, and text quality assessment. The blogs were manually classified by license and content type. Our results show that it is possible to find German-language blogs under a Creative Commons license, and that text extraction and linguistic annotation can be performed efficiently enough to allow for a comparison with more traditional text types such as newspaper corpora and subtitles. The comparison offers insights into distributional properties of the processed web texts at the token and type levels. For example, quantitative analysis reveals that blog posts are close to written language, while comments are slightly closer to spoken language.
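A minimal way to reproduce the kind of token- and type-level comparison mentioned above is to compute type counts and type-token ratios per sample. The sketch below uses toy strings in place of the real blog, newspaper, or subtitle corpora; since the type-token ratio is sensitive to sample size, comparisons only make sense across samples of equal length.

```python
from collections import Counter

def type_token_stats(text: str) -> tuple[int, int, float]:
    """Return token count, type count, and type-token ratio for a sample."""
    tokens = text.lower().split()  # naive whitespace tokenization
    types = Counter(tokens)
    return len(tokens), len(types), len(types) / len(tokens)

# Toy stand-ins for equal-sized samples from two text types.
samples = {
    "blog_posts":    "das ist ein beitrag und das ist ein weiterer beitrag",
    "blog_comments": "ja genau ja finde ich auch ja wirklich total gut",
}

for name, text in samples.items():
    n_tok, n_typ, ttr = type_token_stats(text)
    print(f"{name}: {n_tok} tokens, {n_typ} types, TTR = {ttr:.2f}")
```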
Berlin Text System 3.1 User Manual : Editorial Software of the Thesaurus Linguae Aegyptiae Project
(2018)
The Berlin Text System (BTS) Version 3.1 manual introduces Java-based software designed for editing and annotating Ancient Egyptian texts. BTS integrates a CouchDB database and an Elasticsearch engine to support its main components: Text Editor, Lemma List, Thesaurus, and Abstract Text.
The Text Editor facilitates transliteration, translation, lemmatization, and annotations, allowing for detailed lexical and grammatical analysis. Hieroglyphic transcriptions can be entered via a specialized Hieroglyph Type Writer based on JSesh.
The Lemma List is designed to contain pre-Coptic lemmata, divided into Hieroglyphic/Hieratic and Demotic scripts, providing comprehensive entries with passport data, transliterations, and translations.
The Thesaurus allows for metadata enrichment of texts with controlled vocabulary for consistent data management, supporting contextual analysis through structured metadata.
The manual covers BTS's user interface, including the menu bar, toolbar, status bar, and workspace, which is divided into views for each main component. Features such as a Revision History for tracking and restoring versions, indexing, and search capabilities enhance user efficiency. BTS is a powerful tool for the study and preservation of Ancient Egyptian texts, integrating advanced database and search technologies with specialized textual analysis tools.
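As a generic illustration of the search layer in such an architecture, the sketch below runs a full-text query against an Elasticsearch index using the official Python client. The index name `lemmata` and the field names are hypothetical and do not reflect BTS's actual schema; the point is only how a component like the Lemma List can be served by an Elasticsearch backend.

```python
from elasticsearch import Elasticsearch

# Hypothetical local instance and index; BTS's real schema is not documented here.
es = Elasticsearch("http://localhost:9200")

# Find lemma entries whose transliteration matches the query string.
response = es.search(
    index="lemmata",
    query={"match": {"transliteration": "nfr"}},
    size=5,
)

for hit in response["hits"]["hits"]:
    source = hit["_source"]
    print(source.get("transliteration"), "-", source.get("translation"))
```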