On 8 November 2016, four tutorials will take place on the premises of the Institut für Deutsche Sprache (R5 site). All tutorials are free of charge and are aimed both at regular conference participants and, as long as places are available, at anyone interested from within or outside the IDS.
To register for a tutorial, please click the "Register" button.
| TIME | TITLE | INSTRUCTOR(S) | LANGUAGE | ROOM | REGISTRATION |
|---|---|---|---|---|---|
| 09:00 - 10:30 | Working with Web Corpora (Part 1) | Felix Bildhauer and Roland Schäfer (Mannheim / Berlin) | English | Lecture hall | Register |
| 10:30 - 10:45 | COFFEE BREAK | | | | |
| 10:45 - 12:15 | Working with Web Corpora (Part 2) | Felix Bildhauer and Roland Schäfer (Mannheim / Berlin) | English | Lecture hall | |
| 12:15 - 14:00 | LUNCH BREAK | | | | |
| 14:00 - 15:30 | InterCorp: Exploring a Multilingual Parallel Corpus (Abstract, Presentation) | Alexandr Rosen (Prague) | English | Lecture hall | Register |
| 14:00 - 15:30 (parallel) | Introduction to Corpus Analysis with KorAP | Nils Diewald and Eliza Margaretha (Mannheim) | English / German | Room 1.28 | Register |
| 15:30 - 15:45 | COFFEE BREAK | | | | |
| 15:45 - 17:15 | Visualizing Linguistic Data with the Free Graphics and Statistics Environment R (Part 1) | Sandra Hansen-Morath and Sascha Wolfer (Mannheim) | English / German | Lecture hall | Register |
| 17:15 - 17:30 | COFFEE BREAK | | | | |
| 17:30 - 19:00 | Visualizing Linguistic Data with the Free Graphics and Statistics Environment R (Part 2) | Sandra Hansen-Morath and Sascha Wolfer (Mannheim) | English / German | Lecture hall | |
| 20:00 | GET-TOGETHER at Wirtshaus UHLAND! | | | | Register |
Web corpora (huge, post-processed collections of web pages) provide an increasingly important source of data for linguistic research, thanks to their size, content, and availability. The last decade has seen important developments in the construction of web corpora, and the current generation surpasses its predecessors in cleanliness, in the level and quality of linguistic annotation, and in enrichment with metadata. At the same time, web corpora have peculiarities (such as sampling biases, duplication, non-standard orthography and language, and missing metadata) that may discourage linguists from using them. Linguists working with web corpora should at all times be aware of these limitations.
This workshop will start with a brief introduction to the making of web corpora, discussing some of the most important questions of design and processing, including linguistic annotation. The main focus of the workshop, however, is on practical questions that frequently arise from a linguist's perspective. In particular, we will discuss what web corpora can (and cannot) do for linguists in their daily corpus-linguistic work, regarding issues such as the reliability of annotation, the availability of metadata, data integrity and representativeness, and the practical limitations of typical query engines. Much of the workshop will consist of hands-on examples and exercises, and we will introduce practical solutions and workarounds for a number of frequently encountered problems. For maximal benefit, participants should bring their own laptop computers.
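One of the frequently encountered problems mentioned above is duplication. Purely as an illustration (not part of the workshop materials), the following R sketch shows one possible workaround: removing duplicated hits from an exported concordance. The file name and the column names (`left`, `match`, `right`) are hypothetical.

```r
# Illustrative sketch only: drop duplicated hits from a (hypothetical)
# concordance export with columns "left", "match" and "right".
conc <- read.delim("concordance_export.tsv", stringsAsFactors = FALSE)

# Normalize whitespace and case so that trivially differing copies match.
key <- tolower(gsub("\\s+", " ", paste(conc$left, conc$match, conc$right)))

# Keep only the first occurrence of each normalized hit.
conc_dedup <- conc[!duplicated(key), ]

cat("Removed", nrow(conc) - nrow(conc_dedup), "duplicated hits\n")
```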
Roland Schäfer and Felix Bildhauer have been involved in building corpora from the web since 2011. They have created some of the world's largest web corpora for a variety of languages, including German.
After a brief introduction to parallel corpora, focusing on their specifics in comparison to standard monolingual corpora, and an overview of those that are publicly available, we take a closer look at InterCorp, a part of the Czech National Corpus. InterCorp has been online since 2008, growing steadily to its present size of 1.7 billion words in 40 languages, with a focus on Czech but also substantial shares of English, Spanish, German, French, Croatian, Polish, Dutch and a number of other languages. Its core part consists mainly of fiction, complemented by legal and journalistic texts, parliamentary proceedings and film subtitles. The texts are sentence-aligned, tagged (in 23 languages) and lemmatized (in 20 languages). In the practical, hands-on part of the tutorial, we learn how to:
Finally, we will discuss some challenges and prospects of the ongoing project. Experience with corpus search tools will be useful, as will registration as a user of the Czech National Corpus.
Presentation as a PDF file
In recent years, due to technical advances and the accessibility of resources through the World Wide Web, the field of corpus analysis has gained new attention, with a focus on tools that can deal with very large corpora. DeReKo, the German Reference Corpus, for example, has by itself grown beyond 25 billion words (Kupietz and Lüngen, 2014). Additional layers of linguistic annotation increase this amount of data even further, pushing popular applications for corpus analysis such as the IMS Corpus Workbench (Evert and Hardie, 2011), Annis (Zeldes et al., 2009) or COSMAS II (Bodmer, 1996) to their limits.
KorAP is a web-based corpus analysis platform, developed with a focus on scalability, flexibility, and sustainability, and with the intention of replacing COSMAS II as the main access point to DeReKo in the future. KorAP is capable of dealing with very large, multiply annotated, and heterogeneously licensed text collections. It supports researchers by providing a wide range of query constructs and the ad-hoc creation of virtual corpora. In this tutorial, the developers will introduce KorAP for corpus analysis. Starting with a brief description of the current state of development and the architecture of the system, the participants will be able to do their own research using KorAP in a hands-on session.
Following a short starting guide, all participants will be able to search for linguistic phenomena using KorAP, from simple sequences of words up to complex linguistic structures across multiple annotation layers. They will also be able to construct complex virtual corpora by means of metadata constraints and to make use of the built-in assisting tools. As KorAP supports multiple query languages such as COSMAS II, ANNIS QL (Rosenfeld, 2010), and Poliqarp (Przepiórkowski et al., 2004; a variant of the popular CQP language), users familiar with these languages will easily be able to work with the new system. However, previous knowledge of corpus analysis platforms or corpus query languages is not necessary. To close the session, the developers would like to gather feedback on the current version of the software and discuss further improvements. For those interested in technical details of the KorAP system, the developers are open to questions afterwards.
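To give a flavour of the query constructs mentioned above, here is a minimal R sketch that sends a Poliqarp-style query to a KorAP instance over its web API and reads back the number of matches. This is not part of the tutorial materials; the endpoint path, the parameter names (`q`, `ql`) and the response field are assumptions based on the public KorAP demo and may differ between installations and versions.

```r
# Minimal sketch: query a KorAP instance via its web API from R.
# Endpoint, parameter names and response fields are assumptions and may
# differ between KorAP installations and versions.
library(jsonlite)   # for fromJSON()

base_url <- "https://korap.ids-mannheim.de/api/v1.0/search"
query    <- "[tt/l=Baum]"        # Poliqarp-style query for a lemma annotation
url      <- paste0(base_url,
                   "?q=",  URLencode(query, reserved = TRUE),
                   "&ql=", "poliqarp")

res <- fromJSON(url)

# Number of matches reported by the server (field name assumed).
res$meta$totalResults
```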
The tutorial welcomes anyone interested in corpus analysis and corpus analysis software. Participants are requested to bring their own laptops for the hands-on session, with a current version of a common web browser (e.g. Mozilla Firefox, Google Chrome) pre-installed.
R is a flexible and free software environment for statistical analyses that offers numerous options for data visualization and is very well suited to large data sets. Our workshop provides a strongly application-oriented introduction to the program and, through many practical exercises and linguistic examples, lays the foundations for participants to develop their skills with the software independently. We will present elementary exploratory visualizations and introduce the logic of R's base graphics system. Beyond that, we will present inferential and multivariate statistical methods and show how their results can be displayed visually. We will also demonstrate how interactive graphics can be created in R.
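To give an impression of the kind of exploratory visualization covered in the workshop (the concrete examples used in the session may differ), here is a small, self-contained sketch that uses only R's base graphics system. It simulates sentence lengths for two text types and compares them with a boxplot and a histogram; the data are invented for illustration.

```r
# Small illustrative example of R's base graphics system; the workshop's own
# examples may differ. We compare simulated sentence lengths of two text types.
set.seed(1)
newspaper <- rpois(500, lambda = 18)   # simulated sentence lengths (in words)
fiction   <- rpois(500, lambda = 13)

# Arrange two plots side by side.
par(mfrow = c(1, 2))

boxplot(list(Newspaper = newspaper, Fiction = fiction),
        main = "Sentence length by text type",
        ylab = "Words per sentence")

hist(newspaper,
     main = "Newspaper texts",
     xlab = "Words per sentence",
     col  = "grey")
```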