
Big Data Processing with Apache Spark




Schedule


Oct 19 - Oct 20, 2020
09:00 - 17:00 (CEST)
Online

Dec 7 - Dec 8, 2020
09:00 - 17:00 (CEST)
Online

2 days  (Instructor Led Online)  |  Data Science

Course Details

Summary

Processing big data in real time is challenging because of scalability, data consistency, and fault-tolerance requirements. This Big Data Processing with Apache Spark course shows you how to use Spark to make your overall analysis workflow faster and more efficient. You’ll learn the core concepts and tools of the Spark ecosystem, such as Spark Streaming and its API, the machine learning extensions, and Structured Streaming.

You’ll begin by learning data processing fundamentals using the Resilient Distributed Dataset (RDD), SQL, Dataset, and DataFrame APIs. After grasping these fundamentals, you’ll move on to using the Spark Streaming API to consume data in real time from TCP sockets and integrate Amazon Web Services (AWS) for stream consumption.
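The RDD fundamentals above follow a map/filter/reduce pattern. As a rough plain-Python sketch of that pattern (no Spark cluster needed; the data and variable names are illustrative, and each step is annotated with the RDD operation it mirrors):

```python
from collections import Counter

# Simulated dataset: log lines that would normally live in an RDD.
lines = [
    "ERROR disk full",
    "INFO job started",
    "ERROR network timeout",
    "INFO job finished",
]

# map: turn each line into a (log_level, 1) pair, like rdd.map(...)
pairs = [(line.split()[0], 1) for line in lines]

# filter: keep only the ERROR records, like rdd.filter(...)
errors = [p for p in pairs if p[0] == "ERROR"]

# reduceByKey: sum the counts per key, like rdd.reduceByKey(add)
counts = Counter()
for key, n in errors:
    counts[key] += n

print(dict(counts))  # {'ERROR': 2}
```

In Spark, the same pipeline runs partitioned across a cluster, but the shape of the transformation chain is identical.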

By the end of this course, you’ll not only understand how to use the machine learning extensions and Structured Streaming, but you’ll also be able to apply Spark to your own upcoming big data projects.


Course Content

Lesson 1: Introduction to Spark Distributed Processing

  • Introduction to Spark and Resilient Distributed Datasets
  • Operations Supported by the RDD API
  • Self-Contained Python Spark Programs
  • Introduction to SQL, Datasets, and DataFrames

Lesson 2: Introduction to Spark Streaming

  • Streaming Architectures
  • Introduction to Discretized Streams
  • Windowing Operations
  • Introduction to Structured Streaming
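The windowing operations listed above slice a continuous stream into overlapping batches. A minimal count-based sketch in plain Python (real DStream windows are time-based; `sliding_windows` here is an illustrative helper, not a Spark API):

```python
from collections import deque

def sliding_windows(stream, window_size, slide):
    """Yield successive windows of window_size items, advancing by slide."""
    buf = deque(maxlen=window_size)
    for i, item in enumerate(stream, start=1):
        buf.append(item)
        # Emit a window once the buffer is full and we hit a slide boundary.
        if len(buf) == window_size and (i - window_size) % slide == 0:
            yield list(buf)

readings = [3, 1, 4, 1, 5, 9, 2, 6]
for w in sliding_windows(readings, window_size=4, slide=2):
    print(w, "mean:", sum(w) / len(w))
```

In Spark Streaming the equivalent call is `window(windowDuration, slideDuration)` on a DStream, with both parameters expressed as durations rather than element counts.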

Lesson 3: Spark Streaming Integration with AWS

  • Spark Integration with AWS Services
  • Integrating AWS Kinesis and Python
  • AWS S3 Basic Functionality

Lesson 4: Spark Streaming, ML, and Windowing Operations

  • Spark Integration with Machine Learning
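Machine learning on a stream, as covered in this lesson, typically updates model state incrementally as each micro-batch arrives. A minimal plain-Python sketch of that update pattern (a running mean standing in for a real estimator; `RunningMean` is illustrative, not course code):

```python
class RunningMean:
    """Incrementally updated mean: the pattern streaming estimators follow."""

    def __init__(self):
        self.count = 0
        self.total = 0.0

    def update(self, batch):
        # Fold a new micro-batch into the running statistics.
        self.count += len(batch)
        self.total += sum(batch)

    @property
    def mean(self):
        return self.total / self.count if self.count else 0.0

model = RunningMean()
for micro_batch in [[2.0, 4.0], [6.0], [8.0, 10.0]]:
    model.update(micro_batch)
print(model.mean)  # 6.0
```

Spark's streaming ML estimators apply the same idea at scale: state is carried between batches and refined as new data arrives, instead of retraining from scratch.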

Target Audience

Big Data Processing with Apache Spark is for you if you are a software engineer, architect, or IT professional who wants to explore distributed systems and big data analytics. You don’t need any prior knowledge of Spark, but experience working with Python is recommended.

Prerequisites

Hardware:

For an optimal experience with the hands-on labs and other practical activities, we recommend the following hardware configuration:

  • Processor: Intel Core i5 or equivalent
  • Memory: 4 GB RAM
  • Storage: 35 GB available space

 

Software:

  • OS: Windows 7 SP1 64-bit, Windows 8.1 64-bit, or Windows 10 64-bit
  • PostgreSQL 9.0 or above
  • Python 3.0 or above
  • Spark 2.3.0
  • Amazon Web Services (AWS) account