Virtually all conventional text-based natural language processing techniques - from traditional information retrieval systems to full-fledged parsers - require reference to a fixed lexicon accessed by surface form, typically trained from or constructed for synchronic input text adhering strictly to contemporary orthographic conventions. Unconventional input such as historical text which violates these conventions therefore presents difficulties for any such system due to lexical variants present in the input but missing from the application lexicon. To facilitate the extension of synchronically-oriented natural language processing techniques to historical text while minimizing the need for specialized lexical resources, one may first attempt an automatic canonicalization of the input text. This paper provides an informal overview of the various canonicalization techniques currently employed by the Deutsches Textarchiv project at the Berlin-Brandenburg Academy of Sciences and Humanities to prepare a corpus of historical German text for part-of-speech tagging, lemmatization, and integration into a robust online information retrieval system.
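The canonicalization idea described above can be illustrated with a minimal sketch: a small set of string-rewrite rules mapping historical German spellings to their modern equivalents. The rules and helper below are illustrative assumptions only, not the actual rules or architecture of the Deutsches Textarchiv pipeline, which is considerably more sophisticated.

```python
# Minimal sketch of rule-based canonicalization of historical German spellings.
# The rewrite rules are toy examples, NOT the rules used by the DTA project.
RULES = [
    ("th", "t"),   # e.g. "Theil" -> "Teil"
    ("ey", "ei"),  # e.g. "seyn" -> "sein"
]

def canonicalize(token: str) -> str:
    """Apply each rewrite rule in order, preserving initial capitalization."""
    result = token.lower()
    for old, new in RULES:
        result = result.replace(old, new)
    if token[:1].isupper():
        result = result[:1].upper() + result[1:]
    return result

print(canonicalize("Theil"))  # -> Teil
print(canonicalize("seyn"))   # -> sein
```

A real system would replace the naive rule list with context-sensitive rewrites backed by a modern lexicon, so that only tokens absent from the lexicon are rewritten.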
The central subject of this contribution is the question of how topics, as fundamental aspects of linguistic communication, are connected with word usage, and how these connections - the thematic shaping of word usage - can also be made fruitful for the lexicographic-lexicological documentation of word usage. The paper draws on results from conversation analysis, text linguistics, and discourse research. Using examples (from topic areas including racial hygiene, nature conservation, sport/football, hygiene, technology, and the household), proposals are made for anchoring the topical reference of historical communication and the thematic shaping of lexical means more firmly in lexicographic descriptions. Two text examples further illustrate how thematic key texts can be exploited lexicographically and lexicologically in order to populate the historical system positions of topics and subtopics with the corresponding areas of the vocabulary.
For a fistful of blogs: Discovery and comparative benchmarking of republishable German content
(2014)
We introduce two corpora gathered on the web and related to computer-mediated communication: blog posts and blog comments. In order to build such corpora, we addressed the following issues: website discovery and crawling, content extraction constraints, and text quality assessment. The blogs were manually classified as to their license and content type. Our results show that it is possible to find blogs in German under a Creative Commons license, and that text extraction and linguistic annotation can be performed efficiently enough to allow for a comparison with more traditional text types such as newspaper corpora and subtitles. The comparison gives insights into the distributional properties of the processed web texts on the token and type level. For example, quantitative analysis reveals that blog posts are close to written language, while comments are slightly closer to spoken language.
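A token- and type-level comparison of the kind mentioned above can be sketched as follows. The corpus samples here are tiny placeholders standing in for full corpus files, and the type/token ratio is only one of several plausible distributional measures; the paper's actual metrics may differ.

```python
# Sketch: compare two text collections on the token and type level.
# The samples are placeholders, not data from the corpora described above.
from collections import Counter

def type_token_stats(tokens):
    """Return token count, type count, and type/token ratio for a token list."""
    types = Counter(tokens)
    return {"tokens": len(tokens),
            "types": len(types),
            "ttr": len(types) / len(tokens)}

blog_posts = "das ist ein ausfuehrlich geschriebener beitrag".split()
comments = "ja ja genau das das stimmt".split()

for name, toks in [("posts", blog_posts), ("comments", comments)]:
    print(name, type_token_stats(toks))
```

A higher type/token ratio indicates more lexical variety; repetitive, speech-like comment threads would tend toward lower values than edited blog posts.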