Scalable Computational Cognitive Models of the Bilingual Lexicon

This joint PhD project is based at the University of Melbourne, with a 12-month stay at the Hebrew University of Jerusalem.

Supervision Team: Dr Lea Frermann, University of Melbourne; Dr Omri Abend, Hebrew University of Jerusalem

Project Description:

Learning a second language (L2) is a major cognitive effort, yet humans reliably acquire languages beyond their native language (L1) with remarkable success, and decades of research have revealed intricate shifts in conceptual and linguistic representations caused by second language acquisition (SLA). This project will leverage machine learning (ML) and natural language processing (NLP) methods, together with large-scale naturalistic data sets of learner language, to investigate the structure and development of the bilingual lexicon. We will expose established models of SLA to large corpora of native and learner language. The project has a dual nature. First, it will contribute novel insights into the validity of different SLA models by exposing them to diverse, naturalistic data and testing them at a larger scale. Second, we will use our findings to inform cross-lingual transfer of NLP models, i.e., the automatic adaptation of a model trained on one language to a different one.

Scalable Models of Lexical and Conceptual Representations in SLA. We will draw on recent developments in distributional and contextual language modelling, and incorporate mechanisms of lexical structure and development derived from established psycholinguistic models of bilingualism. We will evaluate our models on a broader scale than has been done previously, with the aim of drawing more robust conclusions and separating universal phenomena from language-pair-specific ones. We will test our models both on predicting controlled behavioral data and on predicting naturalistic learner data (observed in L2 essays).
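To make the idea of testing psycholinguistic models against distributional representations concrete, the toy sketch below illustrates one hypothesis found in models of the bilingual lexicon: that early learners' L2 word representations are mediated by their L1 translation equivalents and become more native-like with proficiency. All vectors, the interpolation model, and the proficiency values are hypothetical stand-ins; actual experiments would use embeddings estimated from native and learner corpora.

```python
import numpy as np

# Toy illustration (all vectors hypothetical): how closely a learner's
# representation of an L2 word aligns with (a) its L1 translation
# equivalent and (b) the native-speaker L2 representation.

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
dim = 50

l1_vec = rng.normal(size=dim)         # L1 word form (stand-in vector)
native_l2_vec = rng.normal(size=dim)  # native-speaker L2 representation

def learner_rep(proficiency):
    """Model the learner's L2 representation as an interpolation between
    an L1-mediated representation and the native-like one (a simplifying
    assumption, not a claim from the project description)."""
    return proficiency * native_l2_vec + (1 - proficiency) * l1_vec

for p in (0.1, 0.5, 0.9):
    rep = learner_rep(p)
    print(f"proficiency={p}: sim to L1 = {cosine(rep, l1_vec):.2f}, "
          f"sim to native L2 = {cosine(rep, native_l2_vec):.2f}")
```

Under this sketch, similarity to the L1 vector decreases and similarity to the native L2 vector increases with proficiency; a scaled-up version of the same comparison, run over large learner corpora, is one way such model predictions could be evaluated.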

Informed Priors for Cross-lingual Model Transfer. We will incorporate our insights on global and language-pair-specific shifts in lexical representations as priors in cross-lingual model transfer, hypothesizing that they will enable more effective transfer models. Cross-lingual domain adaptation, where models trained on one (typically data-rich) language are transferred to a different (typically data-poor) language, is an important yet open research problem in NLP. We will experiment with ways of incorporating priors to constrain and guide the adaptation process in an informed way.
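One simple way a prior can constrain adaptation is as a regularizer: target-language parameters are fit to a task loss while a penalty keeps them close to a prior derived from source-language representations. The sketch below is a minimal, hypothetical illustration of that scheme with plain gradient descent; the data, the stand-in task gradient, and the hyperparameters are all invented for the example and do not reflect the project's actual models.

```python
import numpy as np

# Minimal sketch of prior-regularized cross-lingual transfer:
# minimize  task_loss(E) + lam * ||E - prior||^2
# where `prior` stands in for source-informed target embeddings.

rng = np.random.default_rng(1)
dim, n_words = 20, 100

prior = rng.normal(size=(n_words, dim))    # source-derived prior (hypothetical)
initial = rng.normal(size=(n_words, dim))  # initial target embeddings
target = initial.copy()

def task_grad(E):
    """Stand-in for the gradient of a downstream task loss (hypothetical)."""
    return 0.01 * rng.normal(size=E.shape)

lam, lr = 0.5, 0.1
for step in range(200):
    # Gradient of the regularized objective above.
    grad = task_grad(target) + 2 * lam * (target - prior)
    target -= lr * grad

# The penalty term pulls the target embeddings toward the prior.
print(np.linalg.norm(target - prior), "vs initial", np.linalg.norm(initial - prior))
```

The strength `lam` controls how strongly the source-informed prior constrains the target language model; in the project's terms, global versus language-pair-specific shifts could be encoded in how the prior itself is constructed.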