For a fistful of blogs: Discovery and comparative benchmarking of republishable German content
(2014)
We introduce two corpora gathered on the web and related to computer-mediated communication: blog posts and blog comments. In order to build these corpora, we addressed the following issues: website discovery and crawling, content extraction constraints, and text quality assessment. The blogs were manually classified according to their license and content type. Our results show that it is possible to find German-language blogs under a Creative Commons license, and that text extraction and linguistic annotation can be performed efficiently enough to allow for a comparison with more traditional text types such as newspaper corpora and subtitles. The comparison provides insights into the distributional properties of the processed web texts at the token and type level. For example, quantitative analysis reveals that blog posts are close to written language, while comments are slightly closer to spoken language.
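The token- and type-level comparison mentioned in this abstract can be made concrete with a minimal sketch. The naive tokenizer and the two tiny sample strings below are hypothetical stand-ins, not the corpora or the processing pipeline used in the paper.

```python
# Minimal sketch (not the authors' pipeline): compare two text collections
# at the token and type level, e.g. blog posts vs. blog comments.
from collections import Counter
import re

def token_type_stats(text):
    """Tokenize naively on word characters and return basic distributional figures."""
    tokens = re.findall(r"\w+", text.lower())
    types = Counter(tokens)
    return {
        "tokens": len(tokens),
        "types": len(types),
        "type_token_ratio": len(types) / len(tokens) if tokens else 0.0,
        "hapax_ratio": sum(1 for c in types.values() if c == 1) / len(types) if types else 0.0,
    }

# Hypothetical usage with two small samples standing in for whole corpora:
blog_posts = "Heute habe ich ausführlich über Korpuslinguistik und Webkorpora geschrieben."
comments = "super Beitrag danke dir echt spannend danke"
print(token_type_stats(blog_posts))
print(token_type_stats(comments))
```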
The article summarizes the contents and the structural premises of the “Thesaurus Indogermanischer Text- und Sprachmaterialien” (TITUS), focusing on search functions and facilities and on questions of encoding ancient languages written in various scripts. Examples are taken from Tocharian, Greek, Vedic Sanskrit, and other ancient Indo-European languages covered by TITUS.
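As a hedged illustration of the encoding question raised here (not of TITUS's actual implementation): the same polytonic Greek word can be represented either with precomposed characters or with base letters plus combining diacritics, so any search facility over such texts has to normalize before matching.

```python
# Illustrative only: precomposed (NFC) vs. decomposed (NFD) encodings of the
# same polytonic Greek word differ at the code-point level.
import unicodedata

word_nfc = "ἄνθρωπος"                               # precomposed form
word_nfd = unicodedata.normalize("NFD", word_nfc)   # base letters + combining diacritics

print(word_nfc == word_nfd)                                 # False: different code points
print(unicodedata.normalize("NFC", word_nfd) == word_nfc)   # True once normalized
```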
Even a reductionist attempt to define scholarship is clearly fraught with difficulty, but an idealised historical lexicographer-cum-scholar must obviously have – inter alia and at the very least – a profound linguistic and textual knowledge of the language being documented, an ability to understand texts in their historical context and to analyse the meaning or function of lexical items as used in context, an ability to synthesise the results through generalisation and abstraction and to formulate them in a way that is both accurate, i.e. reflects actual usage, and user- or reader-friendly, i.e. is comprehensible to the user/reader. S/he must have encyclopedic or world knowledge and literary skills in order to understand general content words and explain their meaning and their semantic shifts perhaps over many centuries, and technical expertise to understand specialist terms and define their use in specific contexts, again perhaps over time. In respect of etymology s/he must not only have knowledge of older stages of the language and an ability to reconstruct unattested forms, but also knowledge of the other languages that have impacted on the language being documented, or at least familiarity with the scholarly historical dictionaries of those languages. That is a tall order indeed, impossibly tall for any one person given today's demands on and expectations of lexicographers. Teams which include specialists in different areas, or at least have access to consultants in such areas alongside generalists, are needed if scholarly standards are to be met. The standard of scholarship is primarily a function of the number and range as well as the knowledge and experience of the lexicographers, as is in large measure the pace of production. In this regard, it cannot be emphasised enough that scholarly historical lexicography of high quality is and will remain very time consuming.
Norsk Ordbok is a 12-volume academic dictionary covering Norwegian Nynorsk literature and all Norwegian dialects from 1600 to the present. The dictionary is to be completed in 2014, the year of the bicentenary of the Norwegian constitution. The collection of data started in 1930 and the editing of the dictionary started in 1946. In the 1990s the Norwegian language collections were digitized, and from 2002 onwards Norsk Ordbok has been edited on a digital platform which communicates with a system of relational databases for manuscript storage. These databases include digitized slip archives, a draft manuscript from 1940, glossaries from the period between 1600 and 1850, canonical dictionaries from the period 1870-1910, a bibliography, local dictionaries, a text corpus (90 million words) etc. The source material is linked together in a Meta dictionary (MD). The MD is an electronic index with headwords in standard spelling, and it represents the hub of the language collections, where the source material from the databases is linked to headword nodes. This MD in turn communicates with the editing system and the dictionary database. The electronic linking of the source material to the dictionary entries ensures that the interpretation of the data, the product of scientific research, can be reproduced very easily. This is important for a scholarly dictionary. Further, the MD index system enables us to set a relative dimension for each dictionary entry and to make a master plan for setting alphabet dimensions for the whole dictionary. This is important for all modern dictionary projects with limited resources. The digitized source material, the digital editing platform and the digital dictionary product also point forward to new ways of presenting the data, and they point forward to future lexicographical research. The paper will present the digital resources of the Norsk Ordbok 2014 project, developed in close cooperation with the scientific programmers at the Unit of Digital Documentation at the University of Oslo. It will focus on the Norsk Ordbok 2014 experience of working on a fully digitized editing platform over the last 10 years, and it will also comment briefly on how the tools and resources developed point forward to the future of Norwegian lexicography.
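The linking role of the Meta dictionary described above can be sketched, very schematically, as a relational index in which headword nodes in standard spelling collect records from the various source databases. The table and column names below are hypothetical and do not reproduce the Norsk Ordbok 2014 schema.

```python
# Schematic sketch of a metadictionary-style index: headword nodes linked to
# records from different source collections. Names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE headword (id INTEGER PRIMARY KEY, lemma TEXT UNIQUE);
CREATE TABLE source_record (
    id INTEGER PRIMARY KEY,
    headword_id INTEGER REFERENCES headword(id),
    source TEXT,      -- e.g. 'slip archive', 'glossary 1600-1850', 'text corpus'
    reference TEXT    -- pointer into the source database
);
""")
con.execute("INSERT INTO headword (lemma) VALUES ('fjell')")
con.execute("""INSERT INTO source_record (headword_id, source, reference)
               VALUES (1, 'slip archive', 'slip #4711')""")

# All attested material for one headword, regardless of which source it comes from:
for row in con.execute("""SELECT h.lemma, s.source, s.reference
                          FROM headword h JOIN source_record s ON s.headword_id = h.id"""):
    print(row)
```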
In the last decade, interaction between scholarly lexicography and the public has grown enormously. Whereas in the past the lexicographer, and in particular the scholarly lexicographer, tended to describe the lexicon from an ivory tower, in a way that was rather inaccessible to the general public, a change has been evident for some time now. Interaction with the general public is now more and more appreciated and is even being encouraged within the lexicographic community. This also holds for the Algemeen Nederlands Woordenboek (ANW), a project of the Institute for Dutch Lexicology in Leiden. The ANW is an online scholarly dictionary of contemporary Dutch. In its periodization it is the successor of the Woordenboek der Nederlandsche Taal (WNT), which was completed in 2001 and covers the vocabulary of the Netherlands and Flanders up to around 1976. The editorial staff of the ANW would like to create a dictionary that is suitable for different audiences, ranging from language professionals and other academics to pupils, students and language enthusiasts in general. Consequently, interaction with the public is very important to the ANW editorial staff. It is realised in various ways. First, each dictionary article offers users the option to give feedback. Second, the editorial staff uses questions and comments gathered on internet forums, such as Meldpunt Taal (launched in June 2010) and Neo-term. The ANW staff also approaches the public directly through Twitter, with items such as ‘neologism of the week’, facts about spelling and answers to questions about language that have been received. A relatively new initiative is to call upon the public in the search for information for the dictionary, such as synonyms, pictures and the earliest use of words. Language games and word polls are other ways to increase the interest and involvement of the general public in the ANW.
The Swedish Academy Dictionary (SAOB) is one of the big national dictionary projects started in the 19th century. SAOB is still in production – another two volumes out of 38 are to be printed before 2018. The internal structure of the volumes (of course) varies. Ten chief editors and five generations of editors have been involved in the project. In the 1980s the SAOB was OCR-scanned. The result was used for a web version on the internet from 1997. The web version is very frequently used but has many shortcomings due to, among other things, the great typographic complexity of the dictionary and the scanning technology of the time. Now the editorial board is discussing the future: re-digitization (in China), updating the web version with new search tools, updating the dictionary itself, and some form of editing tool.
Among mass digitization methods, double-keying is considered to be the one with the lowest error rate. This method requires two independent transcriptions of a text by two different operators. It is particularly well suited to historical texts, which often exhibit deficiencies like poor master copies or other difficulties such as spelling variation or complex text structures. Providers of data entry services using the double-keying method generally advertise very high accuracy rates (around 99.95% to 99.98%). These advertised percentages are generally estimated on the basis of small samples, and little if anything is said about the actual amount of text or the text genres that have been proofread, about error types, proofreaders, etc. In order to obtain significant data on this problem it is necessary to analyze a large amount of text representing a balanced sample of different text types, to distinguish the structural XML/TEI level from the typographical level, and to differentiate between various types of errors which may originate from different sources and may not be equally severe. This paper presents an extensive and complex approach to the analysis and correction of double-keying errors which has been applied by the DFG-funded project “Deutsches Textarchiv” (German Text Archive, hereafter DTA) in order to evaluate and preferably to increase the transcription and annotation accuracy of double-keyed DTA texts. Statistical analyses of the results gained from proofreading a large quantity of text are presented, which verify the common accuracy rates for the double-keying method.
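To make the underlying measurement concrete, here is a minimal, hedged sketch of estimating character-level agreement between two independent transcriptions of the same page. It is not the DTA's actual analysis tooling, and the sample strings are invented.

```python
# Hedged sketch: estimate character-level agreement between two independently
# keyed transcriptions via a sequence alignment. Not the DTA's tooling.
import difflib

def char_accuracy(transcription_a, transcription_b):
    """Share of matching characters according to difflib's matching-block alignment."""
    matcher = difflib.SequenceMatcher(None, transcription_a, transcription_b)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / max(len(transcription_a), len(transcription_b), 1)

# Invented example: in a double-keying workflow, the disagreements found this
# way would be classified by error type and resolved by proofreaders.
a = "Die Vernunft ist das Vermögen der Prinzipien."
b = "Die Vernunfft ist das Vermögen der Principien."
print(f"{char_accuracy(a, b):.4f}")
```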
Numerous high-quality primary text sources – in the context of the curation project described here, full-text transcriptions (and corresponding image scans) of German works originating from the 15th to the 19th centuries – are scattered across the web or stored in locations that are difficult to access. For example, transcriptions of historical sources are stored locally on degrading recording media and cannot be found, let alone accessed, by third parties. Additionally, idiosyncratic, project-specific markup conventions and uncommon, out-of-date or inflexible storage formats often hinder further usage and analysis of the data. Moreover, textual resources are often accompanied by scarce, insufficient or inaccurate bibliographic information, which is one further reason why valuable resources, even if available on the web, remain undiscovered by, and are of little use to, the wider research community. The integration of these dispersed primary text sources into the sustainable, web- and centres-based research infrastructure of CLARIN-D will be an important step towards solving this problem. The full paper illustrates an exemplary approach taken by the »Deutsches Textarchiv« (DTA; www.deutschestextarchiv.de) at the Berlin-Brandenburg Academy of Sciences and Humanities (BBAW) to integrate dispersed textual resources and corresponding image scans from various sources into a large historical text corpus of its own and to insert these into the infrastructure of CLARIN-D.
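A schematic sketch of the kind of conversion such integration requires: wrapping a plain transcription and minimal bibliographic metadata in a TEI-style document so that it can be handled by a common corpus infrastructure. This is a hypothetical, simplified example, not the DTA's actual conversion workflow, and the output is not claimed to be schema-valid TEI.

```python
# Simplified, hypothetical sketch: wrap a transcription plus minimal metadata
# in a TEI-style XML document. Not the DTA's conversion workflow.
import xml.etree.ElementTree as ET

def to_tei(title, author, year, body_text):
    tei = ET.Element("TEI", xmlns="http://www.tei-c.org/ns/1.0")
    file_desc = ET.SubElement(ET.SubElement(tei, "teiHeader"), "fileDesc")
    title_stmt = ET.SubElement(file_desc, "titleStmt")
    ET.SubElement(title_stmt, "title").text = title
    ET.SubElement(title_stmt, "author").text = author
    bibl = ET.SubElement(ET.SubElement(file_desc, "sourceDesc"), "bibl")
    ET.SubElement(bibl, "date").text = year
    body = ET.SubElement(ET.SubElement(tei, "text"), "body")
    ET.SubElement(body, "p").text = body_text
    return ET.tostring(tei, encoding="unicode")

# Invented metadata and text, standing in for a dispersed historical resource:
print(to_tei("Hypothetical 18th-century tract", "N. N.", "1750", "Transcribed page text ..."))
```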