Description
Word embeddings are a convenient and efficient way to extract semantic information from large collections of textual or text-like data. We will present an exploration and comparison of the performance of "traditional" embedding techniques such as word2vec and GloVe, as well as fastText and StarSpace, on NLP-related problems such as metaphor and sarcasm detection.