Description
With the advent of large pre-trained language models such as GPT and BERT, and their use in nearly every natural language understanding and generation application, it is important that we evaluate the fairness of these models and mitigate their biases. Because these models are trained on human-generated data (mostly from the web), they absorb human biases, and they can carry forward and even amplify those biases in their outputs. In this talk, we will discuss the motivation for fairness and bias research in NLP, survey different approaches for detecting and mitigating bias, and explore some available tools you can incorporate into your own models to help ensure fairness.