Description
Word embeddings are a family of Natural Language Processing (NLP) algorithms in which words are mapped to vectors in a low-dimensional space. Interest in word embeddings has been rising in recent years, because these techniques have been driving important improvements in many NLP applications such as text classification, sentiment analysis, and machine translation.
In this talk we’ll describe the intuitions behind this family of algorithms, explore some of the Python tools that allow us to implement modern NLP applications, and conclude with some practical considerations.
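As a taste of what the talk covers, here is a minimal sketch of training word embeddings in Python with gensim, one of the tools in this space (assuming gensim >= 4.0); the toy corpus and hyperparameters are purely illustrative, not a recommended setup:

    # Train a tiny Word2Vec model; real applications would use a large corpus.
    from gensim.models import Word2Vec

    # Each sentence is a list of tokens.
    corpus = [
        ["the", "movie", "was", "great"],
        ["the", "film", "was", "fantastic"],
        ["the", "plot", "was", "boring"],
    ]

    # Map each word to a dense vector in a low-dimensional space (here, 50 dimensions).
    model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, seed=42)

    print(model.wv["movie"].shape)          # (50,) -- the learned vector for "movie"
    print(model.wv.most_similar("movie"))   # nearest neighbours by cosine similarity

Once trained, the vectors can be queried for similarity, as in the last line above: words that appear in similar contexts end up close together in the vector space.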