Description
Transfer learning has been changing the NLP landscape tremendously since the release of BERT one year ago. Transformers of all kinds have emerged; they dominate most research leaderboards and have made their way into industrial applications. In this talk we will dissect the paradigm of transfer learning and its effects on pipelines, modelling, and the engineer's mindset.
Sufficient training data is often a bottleneck for real-world machine learning applications. The computer vision community mitigated this problem by pretraining models on ImageNet and transferring that knowledge to the desired task. Thanks to a new class of deep language models, transfer learning has also become the new standard in NLP. In this talk we will share strategies, tips & tricks for every model phase: pretraining a language model from scratch, adapting it to domain-specific language, and fine-tuning it for the desired downstream task. We will demonstrate the practical implications by showing how models like BERT caused major breakthroughs for the task of Question Answering.
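To give a flavour of the fine-tuning and Question Answering part, here is a minimal sketch of extractive QA with a BERT model, assuming the Hugging Face transformers library and a checkpoint already fine-tuned on SQuAD; the model name, question, and context are illustrative, not the exact setup used in the talk.

```python
# Minimal sketch: extractive Question Answering with a fine-tuned BERT.
# Assumes the Hugging Face `transformers` library; checkpoint name is illustrative.
import torch
from transformers import BertTokenizerFast, BertForQuestionAnswering

model_name = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokenizer = BertTokenizerFast.from_pretrained(model_name)
model = BertForQuestionAnswering.from_pretrained(model_name)

question = "What is often a bottleneck for real-world machine learning?"
context = (
    "Sufficient training data is often a bottleneck for real-world "
    "machine learning applications."
)

# Encode question and context as a single sequence pair.
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# The QA head predicts start and end positions of the answer span.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
answer = tokenizer.decode(inputs["input_ids"][0][start:end])
print(answer)  # e.g. "sufficient training data"
```

The same pretrained encoder can be swapped for a domain-adapted one without changing the fine-tuning code, which is part of what makes the transfer learning workflow so practical.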