Description
Neural networks with many layers are known as "deep" neural networks. While the phrase "deep learning" is relatively new, the ideas behind these networks have been around for many years. Deep neural networks attempt to learn concise, hierarchical representations of complex data, and recent research advances have made these networks much easier to apply to many tasks, including computer vision. Thanks to massive increases in compute power, available data, and algorithmic techniques, neural networks have greatly improved the state of the art in many areas of computer vision, including object recognition and localization.
Two key Python packages (pylearn2 and Theano) enable machine learning researchers to easily develop new architectures and algorithms for neural networks that utilize GPU processing. This specialized hardware improves the computational feasibility of large neural networks, and GPU programming was a key driver (along with the development of a technique called dropout) in the resurgence of neural networks for machine learning tasks.
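To make the dropout idea concrete, here is a minimal NumPy sketch of "inverted" dropout, where each unit is zeroed with probability p during training and the survivors are rescaled so no change is needed at test time. The function name and the fixed seed are illustrative choices, not pylearn2's or Theano's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, train=True):
    """Zero each unit with probability p during training,
    scaling survivors by 1/(1-p) (inverted dropout)."""
    if not train:
        # At test time the layer is left unchanged.
        return activations
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

h = np.ones((4, 8))          # a toy batch of hidden activations
h_train = dropout(h, p=0.5)  # entries are either 0.0 or 2.0
h_test = dropout(h, train=False)
```

Because the scaling happens at training time, the expected activation is preserved and the network can be used unmodified for prediction.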
By utilizing networks trained on extremely large compute farms as "black-box" preprocessing, a consumer-grade desktop or laptop can approach state-of-the-art results using standard machine learning techniques such as logistic regression and support vector machines (SVM). This idea has serious potential in the embedded/FPGA/ASIC space as well.
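The workflow above can be sketched with scikit-learn. Here the `features` array is a random stand-in for activations extracted from a pre-trained deep network (in practice you would run your images through the network and keep a late layer's output); the shapes, labels, and variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for features from a pre-trained network: one 512-dim
# vector per image. Labels here are synthetic for illustration.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 512))
labels = (features[:, 0] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, random_state=0)

# A simple linear classifier on top of the deep features.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

The expensive part (training the deep network) is done once on large hardware; the cheap linear classifier on top is what runs on the laptop.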
The talk will focus on the Python interface to pylearn2, using the preprocessing features of scikit-learn for data preparation. There will also be discussion of recent advances in using pre-trained neural networks as preprocessing.
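As a taste of the data-preparation step, here is a short sketch using scikit-learn's `StandardScaler` to normalize a batch of flattened images so each pixel has zero mean and unit variance; the image sizes and random data are illustrative assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical batch of 32 flattened 8x8 grayscale images.
images = np.random.default_rng(0).integers(
    0, 256, size=(32, 64)).astype(float)

# Standardize each pixel position across the batch.
scaler = StandardScaler()
scaled = scaler.fit_transform(images)
```

The fitted scaler stores the per-pixel mean and variance, so the identical transform can later be applied to test images with `scaler.transform`.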
At the end of this talk, I hope you will understand what "deep learning" really means, how to apply these techniques to image data using Python, and how these techniques will shape the future of machine learning research and its applications.