Paper presentations are linked in the Publications section, and guest lectures in the Teaching section.
Invited talks & keynotes
- Beyond “noisy” text: How (and why) to process dialect data
  Verena Blaschke

  Processing data from non-standard dialects links two lines of research: creating NLP tools that are robust to “noisy” inputs, and extending the coverage of NLP tools to underserved language communities. In this talk, I will describe ways in which processing dialect data differs from processing standard-language data, and discuss some of the current challenges in dialect NLP research. For instance, I will talk about strategies to mitigate the effect of infelicitous subword tokenization caused by ad-hoc pronunciation spellings. Additionally, I argue that we should not only consider how to tackle dialectal variation in NLP, but also why. To this end, I will highlight the perspectives of some dialect speaker communities on which language technologies should (or should not) be able to process or produce dialectal input or output.
- Dialect NLP: How (and why) to process non-standard language varieties
  Verena Blaschke
  Invited talk at the Linguistics Graduate Colloquium, Passau University (12/2024) · Abstract · Slides
  Invited talk at the NLPnorth group, ITU Copenhagen (09/2024) · Slides

  Natural language processing (NLP) has improved by leaps and bounds when it comes to processing data from standardized languages with much available data, like German. However, NLP lags behind where closely related non-standard varieties (such as Bavarian dialects) are concerned. I will briefly discuss three challenges. Firstly, there is a general lack of high-quality data for statistical methods, and resources are not always shared outside their original research communities. Secondly, common ways of encoding textual data do not generalize well to ad-hoc dialect spellings. Lastly, I will discuss investigating which NLP technologies some dialect speaker communities are actually interested in.
Non-archival conference/workshop presentations
- Large language models and small language varieties: Challenges and current methods
  Verena Blaschke & Barbara Plank

- Natural dialect processing: NLP for non-standardized language varieties
  Verena Blaschke & Barbara Plank

- Configurable language-specific tokenization for CLDF databases
  Johannes Dellert & Verena Blaschke

  In any workflow for computational historical linguistics, tokenization of IPA sequences is a crucial preprocessing step, as it shapes the alignments which provide the input of algorithms for cognate detection and proto-form reconstruction. This is also true for EtInEn (Dellert 2019), our forthcoming integrated development environment for etymological theories. An EtInEn project can be created from any CLDF database, such as the ones that have been aggregated and unified by the Lexibank initiative (List et al. 2022). Whereas the tools for preparing CLDF databases (Forkel & List 2020) encourage the application of a uniform tokenization across all languages in a dataset, our view is that in many contexts, it is more natural to tokenize phonetic sequences in ways that differ between languages. To provide a simple example, many geminates in Italian need to be aligned to consonant clusters in other Romance languages (e.g. notte vs. Romanian noapte “night”), which is much easier if they are tokenized into two instances of the same consonant, whereas geminates in Swedish are best treated as cognate to their shortened counterparts in other Germanic languages.
  To provide comprehensive support for such cases, EtInEn includes configurable language-specific tokenizers as an additional abstraction layer that allows forms to be reshaped after import, and that also serves as a generic way to bridge phonetic surface forms and the underlying forms that historical linguists are primarily interested in. Each tokenizer is defined by a token alphabet which is used for greedy tokenization, a list of allophone sets which can be used to abstract over irrelevant subphonemic distinctions, and a list of non-IPA symbols that are defined in terms of phonetic features. The initial state of each tokenizer is based on an analysis of the tokens used by the imported CLDF database. Tokenizer definitions are stored in a human-editable plain-text format which we would like to propose as a new standard.
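  The greedy tokenization driven by such a per-language token alphabet can be sketched as follows. This is an illustrative longest-match-first implementation; the function name and the toy alphabets are hypothetical and not taken from EtInEn:

  ```python
  # Sketch of greedy (longest-match-first) tokenization over a per-language
  # token alphabet. Function name and alphabets are illustrative only.

  def greedy_tokenize(form, alphabet):
      """Split `form` by always taking the longest alphabet entry that
      matches at the current position."""
      tokens = []
      i = 0
      max_len = max(len(token) for token in alphabet)
      while i < len(form):
          for length in range(min(max_len, len(form) - i), 0, -1):
              candidate = form[i:i + length]
              if candidate in alphabet:
                  tokens.append(candidate)
                  i += length
                  break
          else:
              raise ValueError(f"no token matches at position {i} of {form!r}")
      return tokens

  # Italian-style tokenizer: geminates become two copies of the same
  # consonant, making "notte" alignable with clusters in related languages.
  print(greedy_tokenize("notte", {"n", "o", "t", "e"}))
  # -> ['n', 'o', 't', 't', 'e']

  # Swedish-style tokenizer: geminates kept as single units via a "tt" token.
  print(greedy_tokenize("notte", {"n", "o", "t", "tt", "e"}))
  # -> ['n', 'o', 'tt', 'e']
  ```

  Because the alphabet is the only language-specific ingredient, the same routine yields different segmentations per language, which is exactly what makes language-specific geminate handling possible.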
In EtInEn, tokenizer definitions are manipulated through a graphical editor in which the potential tokens for each language are arranged in the familiar layout of consonant and vowel charts, enhanced by additional panels for diphthongs and tones. Currently defined tokens are highlighted, and allophone sets are summarized under their canonical symbols. Basic edit operations serve to group several sounds into an allophone set, and to join or split a multi-symbol sequence, such as a diphthong or a sound with a coarticulation. More complex operations support workflows for parallel configuration of multiple tokenizers.
Additional non-IPA symbols can be given semantics in terms of a combination of phonetic features, and declared to be part of the token set for any language. On the representational level, this provides the option to use non-IPA symbols for form display, whereas underlyingly, the system will interpret the symbols in terms of their features. On the conceptual level, underspecified definitions provide support for metasymbols. In addition to some predefined metasymbols (such as V for vowels and C for consonants), the user can assign additional symbols to arbitrary classes of sounds. These are then available throughout EtInEn for various purposes, such as concisely representing the conditioning environments for a sound law, or summarizing the probabilistic output of an automated reconstruction module.
In addition to configurable tokenizers, EtInEn provides the option to define form-specific tokenization overrides, allowing the user to substitute the result of automated tokenization with any sequence over the current token alphabet for the relevant language. This is currently our strategy for handling otherwise challenging phenomena such as metathesis or root-pattern morphology, which we normalize into alignable and concatenative representations. This forms a bridge to existing standards for representing morphology in the CLDF framework (e.g. Schweikhard & List 2020), which currently only support the annotation of morpheme boundaries in terms of simple splits in phonetic IPA sequences.
References:
Dellert, Johannes (2019): “Interactive Etymological Inference via Statistical Relational Learning.” Workshop on Computer-Assisted Language Comparison at SLE-2019.
Forkel, Robert and Johann-Mattis List (2020): “CLDFBench. Give your Cross-Linguistic data a lift.” Proceedings of LREC 2020, 6997-7004.
List, Johann-Mattis, Robert Forkel, Simon J. Greenhill, Christoph Rzymski, Johannes Englisch and Russell Gray (2022): “Lexibank, A public repository of standardized wordlists with computed phonological and lexical features.” Scientific Data 9.316, 1-31.
Schweikhard, Nathanael E. and Johann-Mattis List (2020): “Developing an annotation framework for word formation processes in comparative linguistics.” SKASE Journal of Theoretical Linguistics 17(1), 2-26.

- Correlating borrowing events across concepts to derive a data-driven source of evidence for loanword etymologies
  Verena Blaschke & Johannes Dellert

  Computational methods for approximating various aspects of the reasoning of a historical linguist have great potential as components of a future generation of systems for more rapid machine-aided theory development (List 2019). One of the main challenges for such methods is that some of the heuristics and reasoning patterns commonly used in historical linguistics are difficult to formalize completely. Etymological arguments frequently appeal more to the shared experience of experts than to a fully developed theoretical framework. Computationally emulating this process will require experience in the shape of data with annotations that represent the heuristics and preferences employed within human expert communities.
Our first application of this general paradigm focuses on informal evidence used for establishing loanword etymologies. Classical arguments for assigning a loanword etymology to a word rely on deviations from the sound laws which would have applied if the word had been inherited, or borrowed at a different point in time. For instance, it is clear that the German word Person is a borrowing and not strictly cognate with Latin persona, because otherwise the initial p would have had to undergo a sound shift to f. Such a criterion would be rather straightforward to formalize based on a formal description of the expected sound laws. However, this criterion is only helpful if some known sound law would have applied to a part of the phonetic material of the word in question. In many cases, we are not in this comfortable position, and the etymological discussion will be based on more elusive evidence.
In some cases, historical, geographical or archaeological knowledge will help to make the decision, but the most systematically exploitable type of evidence builds on the tendency for loanwords to appear in batches. For instance, if some language has already been established as a donor language for some words, it becomes more likely as a candidate donor for other words as well, even if the evidence from the individual words alone would not warrant such a conclusion. Even more crucially, arguments often rely on the observation that words from the same semantic field tend to get borrowed together. This applies to obvious cases like numbers and month names as well as to less obviously connected sets of concepts such as tools belonging to a certain craft (Tadmor 2009, Carling et al. 2019).
A helpful automated method for inferring possible loanword relations will have to emulate at least some of these types of informal reasoning. As a first step in this direction, we develop data-driven measures of how much evidence establishing one borrowing event provides for assuming others. We also explore the extent to which such a correlation structure of borrowing events can be extracted from the limited amounts of existing cross-linguistic loanword data.
Given a set of parallel wordlists annotated with loanword status and semantic concept information, we extract how often each concept was borrowed and by which pairs of donor and target languages. To quantify the non-independence of borrowing events for each pair of concepts, we average the normalized pointwise mutual information across 1,000 bootstrap samples. In order to additionally retrieve some directional signal that can be interpreted as an approximation to implicational universals of borrowing, the same procedure is applied to the conditional probabilities of concept pairs given one of the concepts.
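  The bootstrapped NPMI computation can be sketched as follows. This toy version assumes each (donor, recipient) language pair maps to the set of concepts it borrowed; all names and data here are hypothetical illustrations, not the actual WOLD-based pipeline:

  ```python
  import math
  import random

  def npmi(p_xy, p_x, p_y):
      """Normalized pointwise mutual information, in [-1, 1]."""
      if p_xy == 0:
          return -1.0
      if p_xy == 1:
          return 1.0
      return math.log(p_xy / (p_x * p_y)) / -math.log(p_xy)

  def bootstrap_npmi(events, concept_a, concept_b, n_samples=1000, seed=0):
      """Average NPMI of two concepts being borrowed by the same
      (donor, recipient) pair, over bootstrap resamples of the pairs.

      `events` maps each (donor, recipient) pair to the set of concepts
      borrowed along that pair."""
      rng = random.Random(seed)
      pairs = list(events)
      scores = []
      for _ in range(n_samples):
          sample = [rng.choice(pairs) for _ in pairs]
          n = len(sample)
          n_a = sum(concept_a in events[p] for p in sample)
          n_b = sum(concept_b in events[p] for p in sample)
          n_ab = sum(concept_a in events[p] and concept_b in events[p]
                     for p in sample)
          scores.append(npmi(n_ab / n, n_a / n, n_b / n))
      return sum(scores) / n_samples

  # Toy data: EIGHT and NINE are always borrowed together here, so their
  # averaged NPMI comes out high (close to 1).
  events = {
      ("lat", "deu"): {"EIGHT", "NINE"},
      ("fra", "eng"): {"EIGHT", "NINE"},
      ("ara", "swa"): {"HAND"},
      ("zho", "jpn"): {"FOOT"},
  }
  print(bootstrap_npmi(events, "EIGHT", "NINE"))
  ```

  The directional variant would replace the symmetric NPMI with conditional probabilities of one concept given the other, averaged over the same bootstrap samples.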
We execute our methods on WOLD (Haspelmath and Tadmor 2009), and find that even from this limited sample of 41 languages, it is possible to extract quite a few of the expected within-domain correlations (such as the ones between numbers or between kinship terms), which validates our approach. In addition, we also obtain some more surprising cross-domain correlations (such as between NARROW and HOLE and between KNEEL and DEFEAT, but also between BEESWAX and KIDNEY) which require further investigation.
References:
Carling, Gerd, Sandra Cronhamn, Robert Farren, Elnur Aliyev, and Johan Frid. 2019. “The causality of borrowing: Lexical loans in Eurasian languages.” PloS one 14(10): e0223588.
Haspelmath, Martin and Uri Tadmor, eds. 2009. World Loanword Database. Leipzig: Max Planck Institute for Evolutionary Anthropology. Available at https://wold.clld.org/.
List, Johann-Mattis. 2019. “Automated methods for the investigation of language contact, with a focus on lexical borrowing.” Language and Linguistics Compass 13(10): e12355.
Tadmor, Uri. 2009. “Loanwords in the world’s languages: Findings and results.” In Martin Haspelmath and Uri Tadmor, eds. Loanwords in the world’s languages: A comparative handbook. Berlin: De Gruyter Mouton. 55-75.

- Clustering dialect varieties based on historical sound correspondences
  Verena Blaschke

  While information on historical sound shifts plays an important role in examining the relationships between related language varieties, it has rarely been used for computational dialectology. This thesis explores the performance of two algorithms for clustering language varieties based on sound correspondences between Proto-Germanic and modern continental West Germanic dialects. Our experiments suggest that the results of agglomerative clustering match common dialect groupings more closely than the results of (divisive) bipartite spectral graph co-clustering. We also observe that adding phonetic context information to the sound correspondences yields clusters that are more frequently associated with representative and distinctive sound correspondences.
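  The agglomerative clustering step can be illustrated with a small average-linkage sketch over varieties represented as vectors of sound-correspondence frequencies. The variety names, vectors, and function are made up for illustration and are not the thesis implementation:

  ```python
  # Toy average-linkage agglomerative clustering of dialect varieties,
  # each represented by a hypothetical vector of correspondence frequencies.

  def distance(a, b):
      """Euclidean distance between two frequency vectors."""
      return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

  def agglomerative(varieties, n_clusters):
      """Repeatedly merge the two closest clusters (average linkage)
      until only `n_clusters` remain; returns lists of variety names."""
      clusters = [[name] for name in varieties]

      def linkage(c1, c2):
          dists = [distance(varieties[a], varieties[b])
                   for a in c1 for b in c2]
          return sum(dists) / len(dists)

      while len(clusters) > n_clusters:
          i, j = min(
              ((i, j)
               for i in range(len(clusters))
               for j in range(i + 1, len(clusters))),
              key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]),
          )
          clusters[i] += clusters.pop(j)
      return clusters

  # Frequencies of three hypothetical Proto-Germanic-to-modern
  # sound correspondences per variety:
  varieties = {
      "Low German A": [0.9, 0.1, 0.0],
      "Low German B": [0.8, 0.2, 0.1],
      "Bavarian A": [0.1, 0.9, 0.8],
      "Bavarian B": [0.2, 0.8, 0.9],
  }
  print(agglomerative(varieties, 2))
  # -> [['Low German A', 'Low German B'], ['Bavarian A', 'Bavarian B']]
  ```

  Divisive co-clustering would instead start from a single bipartite graph of varieties and correspondences and split it top-down, which is the contrast the abstract's comparison is about.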