
Learning to rank with the Transformer

Description

Learning to Rank (LTR) is concerned with optimising the global ordering of a list of items according to their utility to users. In this talk, we present the results of ongoing research at Allegro.pl into applying the Transformer architecture, known from the Neural Machine Translation literature, to the LTR setting, and we introduce allRank, an open-source, PyTorch-based framework for LTR.

Self-attention based architectures fuelled recent breakthroughs in many NLP tasks. Models like the Transformer, GPT-2 and BERT pushed the boundaries of what's possible in NLP and made headlines along the way. The self-attention mechanism can be seen as an encoder for an unordered set of objects that takes into account the interactions between items in the set. This property makes self-attention an attractive choice for Learning to Rank (LTR) models, which usually struggle with modelling inter-item dependencies.
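
As a rough illustration of this idea (a sketch, not the exact model presented in the talk or in allRank), the snippet below scores every item in a result list jointly with a PyTorch Transformer encoder; the class name, dimensions and feature count are illustrative assumptions.

```python
from typing import Optional

import torch
import torch.nn as nn


class SelfAttentionRanker(nn.Module):
    """Illustrative listwise scorer: every item's score can depend on the
    other items in the same list via self-attention."""

    def __init__(self, n_features: int, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        # No positional encoding: the list of candidate items is treated as an unordered set.
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.score = nn.Linear(d_model, 1)

    def forward(self, items: torch.Tensor, padding_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
        # items: (batch, list_size, n_features); padding_mask: (batch, list_size), True marks padded slots.
        h = self.encoder(self.input_proj(items), src_key_padding_mask=padding_mask)
        return self.score(h).squeeze(-1)  # (batch, list_size) relevance scores


# Score a batch of two result lists, each with 10 candidate items of 136 features
# (136 is the WEB30K feature count; any per-item feature vector works the same way).
model = SelfAttentionRanker(n_features=136)
print(model(torch.randn(2, 10, 136)).shape)  # torch.Size([2, 10])
```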

In this talk, we present the results of ongoing research into applying self-attention based architectures to LTR. Our proposed model is a modification of the popular Transformer architecture, adapted to the LTR task. We guide the audience through both the LTR setting and its most popular algorithms, as well as the details of the self-attention mechanism and the Transformer architecture. We present results on both proprietary data from Allegro's clickthrough logs and the most popular public LTR dataset, WEB30K. We demonstrate considerable performance gains of self-attention based models over MLP baselines across popular pointwise, pairwise and listwise losses. Finally, we present allRank, an open-source, PyTorch-based framework for neural ranking models. After the talk, the audience will have a good understanding of the basics of LTR and its importance to the industry, and will see how to get started training state-of-the-art neural network models for learning to rank using allRank.
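
To make the pointwise/pairwise/listwise distinction concrete, here is a minimal, self-contained sketch of one loss from each family in plain PyTorch. These are standard textbook formulations, not allRank's implementations, and the function names are our own.

```python
import torch
import torch.nn.functional as F


def pointwise_mse(scores: torch.Tensor, relevance: torch.Tensor) -> torch.Tensor:
    """Pointwise: regress each item's score onto its relevance label independently."""
    return F.mse_loss(scores, relevance)


def pairwise_hinge(scores: torch.Tensor, relevance: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Pairwise: penalise pairs where a less relevant item is scored above a more relevant one."""
    score_diff = scores.unsqueeze(-1) - scores.unsqueeze(-2)      # s_i - s_j for all pairs in a list
    label_diff = relevance.unsqueeze(-1) - relevance.unsqueeze(-2)
    should_rank_higher = label_diff > 0                           # pairs where item i should beat item j
    return torch.clamp(margin - score_diff[should_rank_higher], min=0.0).mean()


def listwise_softmax(scores: torch.Tensor, relevance: torch.Tensor) -> torch.Tensor:
    """Listwise (ListNet-style): cross-entropy between the score and relevance distributions over the list."""
    return -(F.softmax(relevance, dim=-1) * F.log_softmax(scores, dim=-1)).sum(dim=-1).mean()


# scores and relevance both have shape (batch, list_size), e.g. from the ranker sketched above.
scores = torch.randn(2, 10, requires_grad=True)
relevance = torch.randint(0, 5, (2, 10)).float()
for loss_fn in (pointwise_mse, pairwise_hinge, listwise_softmax):
    print(loss_fn.__name__, loss_fn(scores, relevance).item())
```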

