Natural Language Processing: Challenges and Next Frontiers

Description

Barbara Plank is a tenured Assistant Professor in Natural Language Processing at the University of Groningen, The Netherlands.

Her research focuses on cross-domain and cross-language NLP. She is interested in robust language technology, learning under sample selection bias (domain adaptation, transfer learning) and annotation bias (embracing annotator disagreements in learning), and, more generally, semi-supervised and weakly supervised machine learning for a variety of NLP tasks and applications, including syntactic processing, opinion mining, information and relation extraction, and personality prediction.

Natural Language Processing: Challenges and Next Frontiers

Despite many advances in Natural Language Processing (NLP) in recent years, largely due to the advent of deep learning approaches, many challenges remain in building successful NLP models. In this talk I will outline what makes NLP so challenging. Besides ambiguity, one major challenge is variability. In NLP we typically deal with data from a variety of sources, such as different domains, languages and media, while expecting our models to work well on a range of tasks, from classification to structured prediction. Data variability is an issue that affects all NLP models. I will then delineate one possible way to address it: combining recent successes in deep multi-task learning with fortuitous data sources, which allows learning from distinct views and distinct sources. This is one step towards one of the next frontiers: learning with limited (or no) annotated resources for a variety of NLP tasks.
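For readers unfamiliar with the multi-task setup mentioned above, here is a minimal sketch (not the speaker's actual model) of hard parameter sharing: one shared encoder feeds several task-specific output heads, so the tasks regularize each other. It assumes PyTorch; the task names, layer sizes and dummy data are illustrative only.

```python
# Minimal hard-parameter-sharing multi-task sketch (illustrative, not the talk's model).
import torch
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hidden=128,
                 n_pos_tags=17, n_sentiment_classes=2):
        super().__init__()
        # Shared layers: updated by gradients from all tasks and data sources.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        # Task-specific heads: a token-level tagger and a sentence-level classifier.
        self.pos_head = nn.Linear(hidden, n_pos_tags)
        self.sent_head = nn.Linear(hidden, n_sentiment_classes)

    def forward(self, token_ids, task):
        states, (h_n, _) = self.encoder(self.embed(token_ids))
        if task == "pos":                 # structured prediction: one label per token
            return self.pos_head(states)
        return self.sent_head(h_n[-1])    # classification: one label per sentence

model = SharedEncoderMTL()
tokens = torch.randint(0, 1000, (4, 10))       # dummy batch of 4 ten-token sentences
pos_logits = model(tokens, task="pos")          # shape: (4, 10, 17)
sent_logits = model(tokens, task="sentiment")   # shape: (4, 2)
```

In this scheme, "fortuitous" auxiliary data (e.g. cheaply available labels for a related task) can be routed through an extra head to improve the shared encoder even when annotated data for the main task is scarce.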

Link to Q&A: https://youtu.be/JtiCdsESuT0 (Second pyvideo tab)
