Description
Scientific Python has historically relied on compiled extensions for
performance-critical parts of the code. In this talk, we outline how
to write Rust extensions for Python using the rust-numpy project.
We also discuss the advantages and limitations of this approach
compared to Cython or to wrapping Fortran, C, or C++.
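To give a flavour of the approach, here is a minimal sketch of what such an extension can look like with rust-numpy and PyO3 (the module name `my_ext` and function `double` are illustrative, not part of any real package; exact signatures vary between pyo3/rust-numpy versions):

```rust
use numpy::{IntoPyArray, PyArray1, PyReadonlyArray1};
use pyo3::prelude::*;

/// Double every element of a NumPy array, returning a new array.
/// `PyReadonlyArray1` gives safe, zero-copy read access to the
/// NumPy buffer from Rust.
#[pyfunction]
fn double<'py>(py: Python<'py>, x: PyReadonlyArray1<'py, f64>) -> &'py PyArray1<f64> {
    x.as_array().mapv(|v| 2.0 * v).into_pyarray(py)
}

/// The module definition; after building with maturin or setuptools-rust,
/// this would be importable from Python as `my_ext`.
#[pymodule]
fn my_ext(_py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(double, m)?)?;
    Ok(())
}
```

Compiled as a `cdylib`, the function could then be called from Python as `my_ext.double(np.arange(3.0))`. Note this sketch requires the `pyo3` and `numpy` crates and a Python interpreter to build, so it is not a standalone Rust program.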
In the second part, we introduce the vtext project, which enables fast text processing in Python using Rust. In particular, we consider the problems of text tokenization and (parallel) token counting, resulting in a sparse vector representation of documents that can then be used as input to machine learning or information retrieval applications. We outline the approach used in vtext and compare it to existing solutions to these problems in the Python ecosystem.
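The tokenize-then-count pipeline above can be sketched in plain Rust (this is a toy illustration of the idea, not vtext's actual implementation: a naive splitter plus a `HashMap`-based counter producing sparse `(column index, count)` pairs per document):

```rust
use std::collections::HashMap;

/// Naive tokenizer: split on non-alphanumeric characters and lowercase.
fn tokenize(doc: &str) -> Vec<String> {
    doc.split(|c: char| !c.is_alphanumeric())
        .filter(|t| !t.is_empty())
        .map(|t| t.to_lowercase())
        .collect()
}

/// Count tokens against a shared, growable vocabulary, returning one
/// sparse row as sorted (column index, count) pairs.
fn count_tokens(doc: &str, vocab: &mut HashMap<String, usize>) -> Vec<(usize, u32)> {
    let mut counts: HashMap<usize, u32> = HashMap::new();
    for tok in tokenize(doc) {
        let next = vocab.len();
        let idx = *vocab.entry(tok).or_insert(next);
        *counts.entry(idx).or_insert(0) += 1;
    }
    let mut row: Vec<(usize, u32)> = counts.into_iter().collect();
    row.sort_unstable();
    row
}

fn main() {
    let mut vocab = HashMap::new();
    let row = count_tokens("the cat saw the dog", &mut vocab);
    // "the" is assigned column 0 and appears twice.
    println!("{:?}", row);
}
```

The per-document rows are independent given a fixed vocabulary, which is what makes the counting step amenable to parallelisation.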
In this talk, we present some of the benefits of writing extensions for Python in Rust. We then illustrate this approach with the vtext project, which aims to be a high-performance library for text processing.