Apache Hadoop Basics

What is MapReduce?

MapReduce is a programming model for processing large datasets in parallel across a cluster of machines. It was popularized by Google and has been widely adopted in big data processing frameworks, most notably Apache Hadoop.

Structure of MapReduce

MapReduce consists of three phases (a small Python sketch follows the list below):

  • Map Phase - the input data is divided into smaller chunks, and each chunk is processed independently by a "mapper" function that applies a user-defined operation to emit intermediate key-value pairs.
  • Shuffle and Sort Phase - after the map phase, the intermediate key-value pairs are shuffled and sorted so that all values belonging to the same key end up grouped together, ready for the reduce phase.
  • Reduce Phase - the grouped key-value pairs are processed by "reducer" functions. Each reducer applies a user-defined reduce function to aggregate, summarize, or otherwise process the values for each key.
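To make the three phases concrete, here is a minimal, in-memory sketch in plain Python using the classic word-count example. This is only an illustration of the idea, not Hadoop itself; the `mapper` and `reducer` functions and the sample input are made up for the example.

```python
from collections import defaultdict

# Input: a few lines of text, standing in for the chunks (splits)
# that Hadoop would read from a distributed file system.
lines = [
    "big data needs big tools",
    "hadoop maps and reduces data",
    "big data big results",
]

# Map phase: a "mapper" emits intermediate (word, 1) pairs for its chunk.
def mapper(line):
    for word in line.split():
        yield (word, 1)

intermediate = [pair for line in lines for pair in mapper(line)]

# Shuffle and sort phase: sort the pairs and group all values by key,
# so every value for the same word lands in one place.
grouped = defaultdict(list)
for key, value in sorted(intermediate):
    grouped[key].append(value)

# Reduce phase: a "reducer" aggregates the grouped values for each key.
def reducer(key, values):
    return key, sum(values)

result = dict(reducer(k, v) for k, v in grouped.items())
print(result)  # e.g. {'and': 1, 'big': 4, 'data': 3, ...}
```

In a real Hadoop cluster the map and reduce steps run as many parallel tasks on different machines, and the shuffle moves data over the network, but the logical flow is the same as in this sketch.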
