Summary
Author: Gael Varoquaux
Institution: INRIA, Parietal team
Track: Machine Learning
While big data spans many terabytes and requires distributed computing, most mere mortals deal with gigabytes. In this talk I will discuss our experience in applying machine learning efficiently to hundreds of gigabytes of data on commodity hardware. I will focus on patterns implemented in two Python libraries, joblib and scikit-learn, dissecting why they help address big data and how to implement them efficiently with simple tools.
In particular, I will cover:
- On-the-fly data reduction
- On-line algorithms and out-of-core computing
- Parallel computing patterns: performance outside of a framework
- Caching of common operations, with efficient hashing of arbitrary Python objects and a robust datastore relying on POSIX disk semantics

The talk will illustrate the high-level concepts introduced with detailed technical discussions of the Python implementations, based both on examples using scikit-learn and joblib and on an analysis of how these libraries work. The goal is less to sell the libraries themselves than to share the insights gained in using and developing them. Short code sketches of each of these patterns follow below.
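As a minimal sketch of on-the-fly data reduction (one possible technique, not necessarily the one covered in the talk), random projections shrink wide data to a tractable size before learning; scikit-learn ships this as SparseRandomProjection:

```python
import numpy as np
from sklearn.random_projection import SparseRandomProjection

rng = np.random.RandomState(0)
X = rng.randn(500, 10000)  # synthetic wide data standing in for a real dataset

# Project onto far fewer dimensions while approximately preserving
# pairwise distances (Johnson-Lindenstrauss lemma)
reducer = SparseRandomProjection(n_components=300, random_state=0)
X_small = reducer.fit_transform(X)  # shape (500, 300)
```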
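For on-line, out-of-core learning, scikit-learn's partial_fit API updates an estimator one chunk at a time, so the full dataset never has to fit in memory. A sketch, with a hypothetical iter_chunks generator standing in for streaming data off disk:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def iter_chunks(n_chunks=10, chunk_size=1000, n_features=20):
    # Hypothetical stand-in for reading chunks of a large file off disk
    rng = np.random.RandomState(0)
    for _ in range(n_chunks):
        X = rng.randn(chunk_size, n_features)
        yield X, (X[:, 0] > 0).astype(int)

clf = SGDClassifier()
for X_chunk, y_chunk in iter_chunks():
    # classes must be passed so the first chunk need not contain them all
    clf.partial_fit(X_chunk, y_chunk, classes=np.array([0, 1]))
```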
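Parallel computing without a heavyweight framework is what joblib.Parallel provides: a simple fork-join helper over a generator of delayed calls. A minimal sketch with a toy function:

```python
from joblib import Parallel, delayed

def costly_square(x):
    # Stand-in for an expensive, independent unit of work
    return x ** 2

# delayed() captures each call lazily; Parallel dispatches the calls
# to n_jobs worker processes and gathers the results in order
results = Parallel(n_jobs=2)(delayed(costly_square)(i) for i in range(10))
```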
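The caching pattern is embodied by joblib.Memory: function results are keyed by a hash of the arguments (including large NumPy arrays) and persisted to a directory on disk, so rerunning a script replays computations instead of redoing them. A sketch:

```python
import numpy as np
from joblib import Memory

# Results are stored under this directory, relying on POSIX disk semantics
memory = Memory('./joblib_cache', verbose=0)

@memory.cache
def expensive_transform(data):
    return np.sqrt(data)  # stand-in for a costly computation

a = np.arange(1e6)
expensive_transform(a)  # computed and written to the store
expensive_transform(a)  # arguments hash to the same key: loaded from disk
```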