Summary
Presenters: Gaël Varoquaux, Jake Vanderplas, Olivier Grisel
Description
Machine learning is the branch of computer science concerned with developing algorithms that learn from previously seen data in order to make predictions about future data; it has become an important part of research in many scientific fields. This set of tutorials will introduce the basics of machine learning and show how these learning tasks can be accomplished using scikit-learn, a machine learning library written in Python and built on NumPy, SciPy, and Matplotlib. By the end of the tutorials, participants will be poised to take advantage of scikit-learn's wide variety of machine learning algorithms to explore their own data sets. The tutorial comprises two sessions: Session I in the morning (intermediate track) and Session II in the afternoon (advanced track). Participants are free to attend either one or both, but to get the most out of the material, we encourage those attending the afternoon session to attend the morning session as well.
Session II will build on Session I and assume familiarity with the concepts covered there. The goal of Session II is to introduce more involved algorithms and techniques that are vital for successfully applying machine learning in practice. It will cover cross-validation and hyperparameter optimization, unsupervised algorithms, and pipelines, and it will go into depth on a few extremely powerful learning algorithms available in scikit-learn: Support Vector Machines, Random Forests, and Sparse Models. We will finish with an extended exercise applying scikit-learn to a real-world problem.
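To make that prerequisite concrete, the workflow Session I establishes is scikit-learn's uniform estimator API: instantiate a model, fit it to training data, then predict or score on held-out data. Below is a minimal sketch of that pattern, using the digits dataset bundled with scikit-learn, the 0.13-era module paths required below, and parameter values chosen only for illustration:

    from __future__ import print_function
    from sklearn.datasets import load_digits
    from sklearn.svm import SVC
    from sklearn.cross_validation import train_test_split, cross_val_score  # sklearn.model_selection in later releases

    digits = load_digits()  # 1797 8x8 images of hand-written digits
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    clf = SVC(gamma=0.001)        # every scikit-learn estimator follows fit/predict/score
    clf.fit(X_train, y_train)
    print(clf.score(X_test, y_test))

    # cross-validation gives a less noisy estimate than a single train/test split
    print(cross_val_score(clf, digits.data, digits.target, cv=3).mean())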
Outline
Tutorial 2 (advanced track)
0:00 - 0:30 -- Model validation and testing
    Bias, Variance, Over-fitting, Under-fitting
    Using validation curves & learning curves to improve your model
    Exercise: Tuning a random forest for the digits data (sketched below)
0:30 - 1:30 -- In depth with a few learners
    SVMs and kernels
    Trees and forests
    Sparse and non-sparse linear models
1:30 - 2:00 -- Unsupervised Learning
    Example of Dimensionality Reduction: hand-written digits
    Example of Clustering: Olivetti Faces
2:00 - 2:15 -- Pipelining learners
    Examples of unsupervised data reduction followed by supervised learning (sketched below)
2:15 - 2:30 -- Break (possibly in the middle of the previous section)
2:30 - 3:00 -- Learning on big data
    Online learning: MiniBatchKMeans
    Stochastic Gradient Descent for linear models (sketched below)
    Data-reducing transforms: random projections
3:00 - 4:00 -- Parallel Machine Learning with IPython
    IPython.parallel, a short primer (sketched below)
    Parallel Model Assessment and Selection
    Running a cluster on the EC2 cloud using StarCluster
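As a taste of the model-validation exercise on the digits data, here is a minimal sketch of tuning a random forest with a grid search; the parameter grid is hypothetical, and the sklearn.grid_search path matches the 0.13-era releases required below:

    from __future__ import print_function
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.grid_search import GridSearchCV  # sklearn.model_selection in later releases

    digits = load_digits()

    # hypothetical grid; the in-class exercise may explore different parameters
    param_grid = {'n_estimators': [10, 50, 100],
                  'max_features': [4, 8, 16]}

    grid = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=3)
    grid.fit(digits.data, digits.target)
    print(grid.best_params_, grid.best_score_)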
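The pipelining section combines an unsupervised data reduction with a supervised learner. A minimal sketch of that pattern, with PCA feeding an SVM and the number of components chosen arbitrarily for illustration:

    from __future__ import print_function
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.pipeline import Pipeline
    from sklearn.cross_validation import cross_val_score  # sklearn.model_selection in later releases

    digits = load_digits()

    # unsupervised dimensionality reduction followed by a supervised classifier
    pca_svm = Pipeline([('pca', PCA(n_components=30)),
                        ('svm', SVC(gamma=0.001))])
    print(cross_val_score(pca_svm, digits.data, digits.target, cv=3).mean())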
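For the learning-on-big-data section, the common thread is incremental fitting with partial_fit, which updates a model from one chunk of data at a time instead of loading everything into memory. A minimal sketch with SGDClassifier streaming the digits data in chunks (MiniBatchKMeans supports the same partial_fit pattern for clustering); chunk size and parameters are arbitrary:

    from __future__ import print_function
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import SGDClassifier

    digits = load_digits()
    all_classes = np.unique(digits.target)  # partial_fit needs the full label set up front

    clf = SGDClassifier(random_state=0)
    # stream the data in chunks of 200 samples and update the model incrementally
    for start in range(0, digits.data.shape[0], 200):
        X_chunk = digits.data[start:start + 200]
        y_chunk = digits.target[start:start + 200]
        clf.partial_fit(X_chunk, y_chunk, classes=all_classes)

    print(clf.score(digits.data, digits.target))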
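For the parallel session, IPython.parallel requires a running cluster (started locally with, for example, ipcluster start -n 4). A minimal, hypothetical sketch of distributing a small model-selection sweep across the engines; the evaluate_C helper and its parameter values are illustrative only:

    from __future__ import print_function
    from IPython.parallel import Client

    def evaluate_C(C):
        # imports live inside the function so it also runs on remote engines
        from sklearn.datasets import load_digits
        from sklearn.svm import SVC
        from sklearn.cross_validation import cross_val_score
        digits = load_digits()
        return cross_val_score(SVC(C=C, gamma=0.001), digits.data,
                               digits.target, cv=3).mean()

    rc = Client()                    # connect to the running controller
    lview = rc.load_balanced_view()  # dispatch tasks to whichever engine is free
    scores = lview.map_sync(evaluate_C, [0.1, 1.0, 10.0, 100.0])
    print(scores)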
Tutorial materials: https://github.com/jakevdp/sklearn_scipy2013
Required Packages
This tutorial will use Python 2.6 / 2.7 and requires recent versions of numpy (version 1.5+), scipy (version 0.10+), matplotlib (version 1.1+), scikit-learn (version 0.13.1+), and IPython (version 0.13.1+) with notebook support. The final requirement is particularly important: participants should be able to run the IPython notebook and create and manipulate notebooks in their web browser. The easiest way to install these requirements is to use a packaged distribution; we recommend Anaconda CE, a free package provided by Continuum Analytics (http://continuum.io/downloads.html), or the Enthought Python Distribution (http://www.enthought.com/products/epd_free.php).
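To check an installation ahead of time, a quick sketch that prints the installed version of each requirement (the import names are the standard ones for these packages):

    from __future__ import print_function
    import numpy, scipy, matplotlib, sklearn, IPython

    # each version printed should meet or exceed the minimums listed above
    for package in (numpy, scipy, matplotlib, sklearn, IPython):
        print(package.__name__, package.__version__)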