Course details


Robustness and generalisation in natural language processing

WS 2022 | Dr. Elia Bruni | Hybrid
B.Sc. modules:
CS-BWP-AI - Artificial Intelligence
CS-BWP-CL - (Computational) Linguistics
CS-BWP-NI - Neuroinformatics
KOGW-WPM-CL - Computational Linguistics
KOGW-WPM-KI - Artificial Intelligence
KOGW-WPM-NI - Neuroinformatics
M.Sc. modules:
CC-MWP-AI - Artificial Intelligence
CC-MWP-CL - Computational Linguistics
CC-MWP-NI - Neuroinformatics
CS-MWP-AI - Artificial Intelligence
CS-MWP-CL - (Computational) Linguistics
CS-MWP-NI - Neuroinformatics

CS-BW - Bachelor elective course
CS-MW - Master elective course
Doctorate program
Mon: 16-18
Tue: 14-16

Abstract

In natural language processing (NLP), we set out to solve language-related tasks (e.g., machine translation, question answering) but often evaluate on narrow, in-distribution test datasets. With recent advances in deep learning, modern systems have achieved high accuracy on many canonical datasets, but they still seem far from solving the general tasks. In this class, we will survey recent research on robustness and generalisation that studies this gap between in-distribution accuracy and task competency through out-of-distribution settings. We will learn about different settings in which NLP systems often fail to generalise well, including adversarial perturbations, settings that require compositional reasoning, and domain transfer. Across these topics, we will cover both methods for measuring these robustness and generalisation issues and ways to improve model robustness and generalisation.

Prerequisites

Familiarity with natural language processing and/or machine learning at the level of 8.3470 Deep learning for natural language processing (students enrolled in 8.3470 this semester are also eligible). Please email me if you want to enrol but are unsure whether you meet the prerequisites.