
Tuesday, 19 September 2017

Mastering Apache Spark 2.x!

Apache Spark is a fast and general engine for large-scale data processing. It is an in-memory, cluster-based parallel processing system that provides a wide range of functionality, including graph processing, machine learning, stream processing, and SQL.

If you are a developer with some experience with Spark and want to strengthen your knowledge of how to get around in the world of Spark, you need good guidance to take your proficiency to the next level.

A very good guide is Mastering Apache Spark 2.x.

What is this book about?

The book commences with an overview of the Spark ecosystem. It will introduce you to Project Tungsten and Catalyst, two of the major advancements of Apache Spark 2.x.

You will understand how memory management and binary processing, cache-aware computation, and code generation are used to speed things up dramatically. The book goes on to show how to incorporate H2O, SystemML, and Deeplearning4j for machine learning, and Jupyter Notebooks and Kubernetes/Docker for cloud-based Spark. Over the course of the book, you will learn about the latest enhancements in Apache Spark 2.x, such as interactive querying of live data and the unification of DataFrames and Datasets.

You will also learn about the updates to the APIs and how DataFrames and Datasets affect SQL, machine learning, graph processing, and streaming. You will learn to use Spark as a big data operating system, understand how to implement advanced analytics on the new APIs, and explore how easy it is to use Spark in day-to-day tasks.

Who will find this book useful?

If you are a developer with some experience with Spark and want to strengthen your knowledge of how to get around in the world of Spark, then this book is ideal for you. Knowledge of Linux, Hadoop, and Spark is a plus.
Reasonable knowledge of Scala is expected, although you will be able to catch up with knowledge of any object-oriented programming language such as Python or Java.

What you will learn from this book?

  • Examine advanced machine learning and deep learning with MLlib, SparkML, SystemML, H2O, and Deeplearning4j
  • Study highly optimised unified batch and real-time data processing using
    SparkSQL and Structured Streaming
  • Evaluate large-scale Graph Processing and Analysis using GraphX and GraphFrames
  • Apply Apache Spark in Elastic deployments using Jupyter and Zeppelin Notebooks, Docker, Kubernetes and the IBM Cloud
  • Understand internal details of cost based optimizers used in Catalyst,
    SystemML and GraphFrames
  • Learn how specific parameter settings affect overall performance of an
    Apache Spark cluster
  • Leverage Scala, R, and Python for your data science projects
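On the point about parameter settings affecting cluster performance, tuning typically happens through `spark-submit` flags and `spark.*` configuration properties. A hypothetical invocation (the values are illustrative assumptions, not recommendations from the book) might look like:

```shell
# Illustrative spark-submit showing the kind of knobs the book discusses:
# executor sizing and shuffle parallelism. Values depend on your cluster.
spark-submit \
  --master yarn \
  --executor-memory 4g \
  --executor-cores 2 \
  --conf spark.sql.shuffle.partitions=200 \
  my_job.py
```

Settings like shuffle partition count and executor memory interact with data volume and cluster size, which is why the book treats them as something to measure rather than fixed defaults.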

About the author

Romeo Kienzler is the Chief Data Scientist of the IBM Watson IoT Division and works as a Senior Architect, helping clients worldwide solve their data analysis problems.
He holds an M.Sc. in Information Systems, Bioinformatics, and Applied Statistics from the Swiss Federal Institute of Technology. He works as an Associate Professor for data mining at a Swiss university, and his current research focus is on cloud-scale data mining using open source technologies including R, Apache Spark, SystemML, Apache Flink, and Deeplearning4j. He also contributes to various open source projects. Additionally, he has written a chapter on Hyperledger for a book on blockchain technologies.
Romeo has spoken at the O'Reilly's Velocity conference.
In addition, Romeo is an elected member of the IBM Technical Expert Council and the IBM Academy of Technology.