Norsk Ordbok is a 12-volume academic dictionary covering Norwegian Nynorsk literature and all Norwegian dialects from 1600 to the present. The dictionary is to be completed in 2014, the year of the bicentenary of the Norwegian constitution. The collection of data started in 1930, and the editing of the dictionary started in 1946. In the 1990s the Norwegian language collections were digitized, and from 2002 onwards Norsk Ordbok has been edited on a digital platform which communicates with a system of relational databases for manuscript storage. These databases include digitized slip archives, a draft manuscript from 1940, glossaries from the period 1600-1850, canonical dictionaries from the period 1870-1910, a bibliography, local dictionaries, a text corpus (90 million words), etc. The source material is linked together in a Meta dictionary (MD). The MD is an electronic index with headwords in standard spelling; it represents the hub of the language collections, where the source material from the databases is linked to headword nodes. The MD in turn communicates with the editing system and the dictionary database. Electronically linking the source material to the dictionary entries ensures that the interpretation of the data, and hence the product of scientific research, is easily reproducible, which is important for a scholarly dictionary. Further, the MD index system enables us to set a relative size for each dictionary entry and to draw up a master plan allocating space across the alphabet for the whole dictionary, which is important for any modern dictionary project with limited resources. The digitized source material, the digital editing platform and the digital dictionary product also point forward to new ways of presenting the data and to future lexicographical research.
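The hub-and-spoke structure described above can be sketched as a minimal in-memory index. This is an illustrative sketch only, not the project's actual schema: the class names, field names, and sample records are all invented for the example; the real MD lives in relational databases.

```python
from dataclasses import dataclass, field

@dataclass
class SourceRecord:
    """A record from one of the digitized source databases (hypothetical shape)."""
    database: str   # e.g. "slip_archive", "corpus", "glossary"
    record_id: str  # identifier within that database
    form: str       # attested form, possibly in dialect spelling

@dataclass
class HeadwordNode:
    """One standard-spelling headword with all source material linked to it."""
    headword: str
    sources: list[SourceRecord] = field(default_factory=list)

class MetaDictionary:
    """Hub index: maps standard-spelling headwords to nodes that
    aggregate records from the source databases."""

    def __init__(self) -> None:
        self.nodes: dict[str, HeadwordNode] = {}

    def link(self, headword: str, record: SourceRecord) -> None:
        """Attach one source record to the node for a headword."""
        node = self.nodes.setdefault(headword, HeadwordNode(headword))
        node.sources.append(record)

    def evidence(self, headword: str) -> list[SourceRecord]:
        """All source material behind one entry, keeping the editor's
        interpretation traceable back to its evidence."""
        node = self.nodes.get(headword)
        return node.sources if node else []

# Invented sample data: two attestations of the headword "fjell".
md = MetaDictionary()
md.link("fjell", SourceRecord("slip_archive", "SA-10293", "fjedl"))
md.link("fjell", SourceRecord("corpus", "C-884121", "fjell"))
print(len(md.evidence("fjell")))  # → 2
```

The point of the structure is the one made in the abstract: every entry in the dictionary database can be traced back, via its headword node, to the exact source records it rests on.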
The paper will present the digital resources of the Norsk Ordbok 2014 project, developed in close cooperation with the scientific programmers at the Unit of Digital Documentation at the University of Oslo. It will focus on the Norsk Ordbok 2014 experience of working on a fully digitized editing platform over the last 10 years, and it will also comment briefly on how the tools and resources developed point the way for Norwegian lexicography in the future.
Among mass digitization methods, double-keying is considered the one with the lowest error rate. The method requires two independent transcriptions of a text by two different operators. It is particularly well suited to historical texts, which often suffer from poor master copies or present other difficulties such as spelling variation or complex text structures. Providers of data entry services using the double-keying method generally advertise very high accuracy rates (around 99.95% to 99.98%). These advertised percentages are generally estimated on the basis of small samples, and little if anything is said about the actual amount of text or the text genres proofread, about error types, proofreaders, etc. To obtain significant data on this problem it is necessary to analyze a large amount of text representing a balanced sample of different text types, to distinguish the structural XML/TEI level from the typographical level, and to differentiate between various types of errors, which may originate from different sources and may not be equally severe. This paper presents an extensive and systematic approach to the analysis and correction of double-keying errors, applied by the DFG-funded project “Deutsches Textarchiv” (German Text Archive, hereafter DTA) in order to evaluate and, where possible, increase the transcription and annotation accuracy of double-keyed DTA texts. Statistical analyses of the results gained from proofreading a large quantity of text are presented, which verify the commonly advertised accuracy rates for the double-keying method.
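The core mechanics of double-keying — aligning two independent transcriptions, flagging disagreements for adjudication, and estimating agreement — can be sketched in a few lines. This is not the DTA's tooling, only an illustrative sketch using Python's standard `difflib`; the function names and the sample strings are invented for the example.

```python
from difflib import SequenceMatcher

def keying_agreement(key_a: str, key_b: str) -> float:
    """Estimate per-character agreement between two independent keyings.

    Positions where the two transcriptions differ must be resolved by a
    proofreader; the ratio of matching characters gives a rough measure
    of how often the two operators agree (errors made identically by
    both operators remain invisible to this comparison).
    """
    matcher = SequenceMatcher(None, key_a, key_b, autojunk=False)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / max(len(key_a), len(key_b))

def disagreements(key_a: str, key_b: str):
    """Yield the differing spans so a proofreader can adjudicate them."""
    matcher = SequenceMatcher(None, key_a, key_b, autojunk=False)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":
            yield (tag, key_a[i1:i2], key_b[j1:j2])

# Invented example: one operator misreads the long s ("ſ") as "f",
# a typical difficulty of historical master copies.
a = "Die Verfaſſung des Staates"
b = "Die Verfaffung des Staates"
print(f"{keying_agreement(a, b):.4f}")  # → 0.9231
print(list(disagreements(a, b)))        # → [('replace', 'ſſ', 'ff')]
```

Note the hedge built into the docstring: agreement between the two keyings only bounds the error rate, since errors committed identically by both operators never surface as disagreements — one reason the advertised accuracy figures deserve the independent verification the paper undertakes.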
The FEW is a huge dictionary, both in the sheer mass of its data (25 volumes, 16,000 pages) and in its exhaustive aims. Its purpose is to register and etymologize the entire lexicon, not only of French but also of earlier stages of the language and of Occitan; of every Gallo-Romance dialect; of every technical or professional genre; of every language register, including slang. In short, the FEW aims to include and describe every single lexical unit which exists or has existed in the territory of ancient Gaul. The sheer size of this undertaking means two things, which directly influence the digitization of the dictionary: firstly, there is a huge amount of data; secondly, the presentation and organization of the data is exceedingly complex. The reasons for digitizing the FEW are to make searching for units easy and to allow searches using criteria that cannot be applied to the printed version. However, pursuing these goals carries some risks and invites the cutting of corners, above all the temptation to renounce reading the text itself.