Description
Filmed at PyData London 2017 www.pydata.org
In this highly interactive session, you will learn how to leverage Apache Spark, Amazon Elastic Map Reduce (EMR) and Apache Zeppelin to rapidly mine a large real-world data set. You will learn how to apply common Spark patterns to extract insights, along with useful performance and monitoring tips.
Abstract
You may have been hearing a lot of buzz around Big Data, Apache Spark, Amazon Elastic Map Reduce (EMR) and Apache Zeppelin. What's the fuss about, and how can you benefit from these state-of-the-art technologies?
In this highly interactive session, you will learn how to leverage Spark to rapidly mine a large real-world data set. We will conduct the entire analysis live in an IPython Notebook to show you how easy it can be to get to grips with these technologies.
In the first part of the session, we will use a sample of data from the Open Library dataset, and you will learn how to apply common Spark patterns to extract insights and aggregate data. In the second part of the session, you will see how to leverage Spark on Amazon EMR to scale your data processing queries over a cluster of machines and interactively analyse a large data set (100 GB) with a Zeppelin Notebook. Along the way, you will learn about common gotchas as well as useful performance and monitoring tips.
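To give a flavour of the "common Spark patterns" mentioned above, here is a minimal PySpark sketch of the kind of map/reduce-style aggregation the first part of the session applies to an Open Library sample. The file path, the assumption that each line is tab-separated with a JSON record in the last column, and the publish_date field are illustrative assumptions, not taken from the talk itself.

    # Hedged sketch: aggregate an Open Library sample with core Spark patterns.
    # Assumes tab-separated dump lines whose last column is a JSON record;
    # the path and field names below are illustrative, not from the talk.
    import json

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("open-library-sample").getOrCreate()
    sc = spark.sparkContext

    lines = sc.textFile("data/ol_dump_editions_sample.txt")  # hypothetical sample path

    def parse(line):
        """Return the JSON payload of a dump line, or nothing if it is malformed."""
        try:
            return [json.loads(line.split("\t")[-1])]
        except (ValueError, IndexError):
            return []

    records = lines.flatMap(parse)

    # Classic pattern: map each record to a (key, 1) pair, reduce by key to count,
    # then pull back only the top results to the driver.
    editions_per_year = (
        records
        .filter(lambda r: "publish_date" in r)
        .map(lambda r: (r["publish_date"][-4:], 1))  # crude year extraction
        .reduceByKey(lambda a, b: a + b)
    )

    print(editions_per_year.takeOrdered(10, key=lambda kv: -kv[1]))

The same pattern runs unchanged in a Zeppelin Notebook on an EMR cluster; for the 100 GB case you would typically point textFile at an S3 location instead of a local sample file.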