Introduction to Big Data with Apache Spark in Python
Why Apache Spark?
What is Apache Spark?
Apache Spark is a fast, general-purpose engine for big data processing, capable of handling both batch and real-time workloads.
Spark was developed to overcome the limitations of traditional MapReduce frameworks and offers advanced features like in-memory processing, support for complex analytics, and seamless integration with various data sources.
Spark provides high-level APIs in Java, Scala, Python, and R.
Key Features
- In-Memory Computing - processes data in memory rather than on disk, which accelerates performance for iterative algorithms and interactive queries.
- Unified Analytics Engine - supports batch processing, interactive queries, streaming analytics, and machine learning through a single framework.
- Scalability - scales horizontally by adding more nodes to a cluster, making it capable of handling petabytes of data.
Data Structures
The primary abstractions in Spark are:
- Resilient Distributed Dataset (RDD);
- DataFrame;
- Dataset.
We will discuss each of them in more detail soon.
Section 2. Chapter 1