
Spark courses with Scala, Python and Java

Spark is the Big Data framework most widely used by companies around the world. For this reason, mastering it is a great advantage when looking for a job or moving into better positions within a technology company.

Learn Apache Spark for Big Data

In this section we have compiled courses that introduce the Apache Spark framework, teaching how to process large volumes of data in parallel across a distributed virtual cluster.

The tools used across the different courses are the UNIX or Windows terminal and the Python (via the PySpark API), Scala and Java languages.

Many of the courses include an introduction to other applications in the Hadoop ecosystem such as Hive, Sqoop, Flume or Kafka.

These other frameworks from the Hadoop ecosystem can connect to Spark, so by combining them we can build a solution for almost any Big Data problem.

Most of the courses we have chosen introduce the Spark core along with its components: SparkSQL, Spark Streaming, Spark MLlib and GraphX.

For each course we have written a description so the reader can judge whether it fits their needs. In addition, we have included the objectives, the syllabus and a summary of the ratings on the Udemy platform, where these courses are taught.

Apache Spark with Scala – Hands On with Big Data

This is the most popular Spark-with-Scala course on the Udemy platform. It is designed for people who want to enter the world of distributed processing using Spark and program in its native language: Scala.

In it you will find a short theoretical introduction to how Spark works internally and to the types of objects it uses to manipulate data distributed over a file system such as HDFS.

The course is built around practice with large data sets, which are processed using the different Spark components: SparkSQL, Spark Streaming, Spark ML and GraphX.
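To give a flavour of the kind of hands-on exercise such a course works through, here is a minimal sketch, not taken from the course itself, of the classic word count written against Spark's RDD API in Scala. The HDFS path is a placeholder; any text file will do.

```scala
import org.apache.spark.sql.SparkSession

object WordCount {
  def main(args: Array[String]): Unit = {
    // Local session for experimenting; on a real cluster the master
    // is supplied by spark-submit rather than hard-coded.
    val spark = SparkSession.builder()
      .appName("WordCount")
      .master("local[*]")
      .getOrCreate()

    // Placeholder path: any text file, local or on HDFS, will do.
    val lines = spark.sparkContext.textFile("hdfs:///data/books/*.txt")

    // Transformations are lazy; the take() action below triggers
    // the distributed computation.
    val counts = lines
      .flatMap(_.split("\\s+"))
      .map(word => (word.toLowerCase, 1))
      .reduceByKey(_ + _)

    counts.sortBy(-_._2).take(10).foreach(println)
    spark.stop()
  }
}
```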

Once the introductory course is finished, you are offered the option of taking further courses to reach a more advanced level in these Big Data technologies.

Course time: 9 hours

Devices: computer, mobile phones and TV

Warranty: 30 days

Language: English

The objectives of this Spark course are:

1. Tackle typical Big Data problems

2. Optimize Spark processes through techniques such as dataset partitioning (see the sketch after this list)

3. Process real-time data in Spark Streaming

4. Use machine learning techniques on distributed data through MLlib

5. Apply transformations on the data using the SparkSQL module
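To illustrate the kind of optimization mentioned in objective 2, the following Scala sketch (our own, not course material) pre-partitions a key-value RDD so that later per-key operations can reuse the layout instead of shuffling the data again:

```scala
import org.apache.spark.HashPartitioner
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("PartitionDemo")
  .master("local[*]")
  .getOrCreate()
val sc = spark.sparkContext

// Toy key-value RDD; in a real job this would come from a large source.
val events = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3), ("c", 4)))

// Hash-partitioning by key up front lets later per-key operations such
// as reduceByKey reuse the partitioning instead of shuffling again.
val partitioned = events.partitionBy(new HashPartitioner(8))
val totals = partitioned.reduceByKey(_ + _)

println(s"partitions = ${totals.getNumPartitions}")
totals.collect().foreach(println)
```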

The syllabus covers:

1. Initial programming course in Scala

2. Using Spark RDD objects

3. SparkSQL Module: Dataframes and Datasets

4. Spark usage examples

5. Run Spark on a cluster in a distributed way

6. Machine learning with the Spark ML component

7. Introduction to real-time data processing with Spark Streaming (see the sketch after this list)

8. Introduction to GraphX
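Regarding point 7, this minimal sketch (ours, not from the course) shows the micro-batch model of the classic Spark Streaming API: it counts words arriving on a TCP socket in 10-second batches. Host and port are placeholders; for a quick local test you can run `nc -lk 9999` in a terminal and type lines into it.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Two local threads: one receives the stream, one processes it.
val conf = new SparkConf().setAppName("StreamingSketch").setMaster("local[2]")
val ssc = new StreamingContext(conf, Seconds(10))

// Each 10-second micro-batch becomes an RDD of the lines received.
val lines = ssc.socketTextStream("localhost", 9999)
val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
counts.print()

ssc.start()
ssc.awaitTermination()
```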

This course is one of the best for starting to master Spark using Scala. Professor Frank Kane gives an extensive explanation of the most important Spark components, such as RDDs, DataFrames and Datasets.

In addition, various practical exercises are proposed to consolidate the concepts covered in the theory lessons.

Frank Kane is undoubtedly one of the best teachers with whom to start learning in the Big Data world, as he has a long track record of teaching many thousands of students from all over the world.

Master Apache Spark 2.0 with Scala

This Apache Spark with Scala program is designed to learn the fundamentals of Spark using Scala as a programming language.

Practical cases are used to teach the student to solve real problems using distributed Big Data technologies.

The course begins with instructions on how to install Java, Git and the other components necessary to run Spark. This is followed by a brief introduction to RDD objects and the advantages of using this Big Data technology.

Methods for manipulating data, such as filters, groupings and mappings, are demonstrated in a practical way.

Once the workings of the Spark core have been introduced, the course focuses on managing large amounts of data with the SparkSQL module, whose SQL-like statements let us work with large volumes of data in a distributed manner.
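As an illustration of that workflow (our own sketch, not course material), the following Scala snippet registers a small DataFrame as a temporary view and queries it with plain SQL; the data is a toy stand-in for a large distributed dataset:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("SqlSketch")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

// Toy DataFrame standing in for a large distributed dataset.
val sales = Seq(("ES", 120.0), ("FR", 80.0), ("ES", 45.5), ("DE", 200.0))
  .toDF("country", "amount")

// A temporary view lets us query the data with plain SQL while Spark
// still executes the query in parallel across partitions.
sales.createOrReplaceTempView("sales")
spark.sql(
  """SELECT country, SUM(amount) AS total
    |FROM sales
    |GROUP BY country
    |ORDER BY total DESC""".stripMargin
).show()
```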

Course time: 4 hours

Devices: computer, mobile phones and TV

Warranty: 30 days

Language: Spanish

Certificate of completion

The objectives of this course are:

1. Learn the architecture of the Spark core

2. Use of operations on RDD objects (Resilient Distributed Datasets)

3. Improve performance using caching and persistence (see the sketch after this list)

4. Be able to scale applications on a Hadoop cluster using Elastic MapReduce
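To picture what objective 3 refers to, here is a small Scala sketch (ours, not from the course) of caching an RDD with an explicit storage level:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

val spark = SparkSession.builder()
  .appName("CacheSketch")
  .master("local[*]")
  .getOrCreate()
val sc = spark.sparkContext

// Stand-in for an expensive transformation chain.
val expensive = sc.parallelize(1 to 1000000).map(n => math.sqrt(n.toDouble))

// Without persistence each action below would recompute the whole
// lineage; MEMORY_AND_DISK spills to disk if memory runs out.
expensive.persist(StorageLevel.MEMORY_AND_DISK)

println(expensive.sum())   // first action: computes and materializes the RDD
println(expensive.count()) // second action: served from the cache
expensive.unpersist()
```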

The syllabus covers:

1. Introduction to Apache Spark: project installation and configuration

2. Use of RDDs: transformations of RDDs through operations on data

3. Spark architecture and components

4. Introduction to SparkSQL

5. Spark Distributed Execution in Cluster

The content of this course is suitable for deepening your knowledge of Apache Spark and SparkSQL on distributed clusters. Most students who have taken it comment that basic prior knowledge of the Scala programming language and of operating systems such as Linux is necessary, since the course assumes the student already knows how to program in Scala.

Spark and Python on AWS for Big Data

This course is designed to introduce the student to using Amazon Web Services (AWS) with Apache Spark. In this case the language used is Python instead of Scala.

You start by creating an AWS account, launching an EC2 virtual machine, and configuring a Jupyter notebook to work on it; Spark is also installed and configured.

The course progresses with an introduction to Apache Spark. The transformations that allow data to be processed in a distributed manner, such as filters, groupings and mappings, are covered in detail.

Next, the use of SparkSQL is taught through statements similar to those of relational databases, such as aggregations and filters. This makes it very simple to process data in a distributed way with familiar SQL-style syntax.

Finally, MLlib is introduced: a Spark component for applying statistical and machine learning techniques to a distributed data set.
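Although this course works in Python through PySpark, the following sketch shows the equivalent idea in Scala for consistency with the other examples in this article: fitting a linear regression with the DataFrame-based MLlib API on a tiny synthetic dataset. All names and values here are illustrative, not course material.

```scala
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("MlSketch")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

// Tiny synthetic dataset: label is roughly 2*x + 1 plus noise.
val data = Seq((1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8), (5.0, 11.1))
  .toDF("x", "label")

// MLlib expects the features packed into a single vector column.
val assembled = new VectorAssembler()
  .setInputCols(Array("x"))
  .setOutputCol("features")
  .transform(data)

val model = new LinearRegression().fit(assembled)
println(s"coefficients = ${model.coefficients}, intercept = ${model.intercept}")
```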

Course time: 4.5 hours

Devices: computer, mobile phones and TV

Warranty: 30 days

Language: Spanish

Certificate of completion

The objectives of this course are:

1. Learn about Big Data and parallel/distributed computing

2. Use SparkSQL and DataFrame objects with PySpark

3. Use of the MLlib library to create statistical models

The syllabus covers:

1. Introduction to Big Data and Spark

2. Setting up Spark on AWS

3. Introduction to lambda expressions, transformations and actions

4. Importance of RDDs and key-value pairs

5. Performance optimization with caching and data persistence

6. Explanation and use of dataframes in SparkSQL

7. Explanation and examples of the use of the MLlib component

In general, the students who have taken the course are happy with the training received. They highlight that it is designed for people who are just starting out in the world of Big Data, since the basic concepts are explained in detail.

It is based mostly on practice, although some people mention that it would be interesting to add a little more theory in the initial part of the course.

The introductory section on Spark's machine learning library, MLlib, could be extended, since it only includes a usage example with linear regression, while the library also offers clustering models, decision trees and more.

Big Data course with Hadoop and Spark from scratch

This is a complete course that explains how to use Hadoop and different components of its ecosystem such as Spark, Sqoop, Pig or Flume, providing an extensive introduction to the technologies used in the Big Data sector.

The student will learn how to configure the Big Data application ecosystem in a virtualized Cloudera cluster. It is advisable to know the basics of the Java programming language, since these frameworks all run on the Java virtual machine.

Course time: 4.5 hours

Devices: computer, mobile phones and TV

Warranty: 30 days

Language: Spanish

Certificate of completion

The objectives of this course are:

1. Learn the basics of the main tools used in the world of data

2. Create Big Data applications by combining technologies such as Spark and Hive

3. Processing large amounts of information with MapReduce

4. Be able to process and manipulate data stored in a distributed file system using Spark

5. Introduction to the YARN (Yet Another Resource Negotiator) resource manager

6. Learn to store data in the Hadoop distributed file system (HDFS)

The syllabus covers:

1. Storing data in the Hadoop distributed file system (HDFS)

2. Manage data via HDFS

3. Data processing with MapReduce operations

4. Data query with Hive (see the sketch after this list)

5. Master data flows with Apache Flume

6. Data processing with Apache Pig

7. Real-time data processing with Spark Streaming
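As a hint of how Spark and Hive connect (point 4 of the syllabus), the sketch below enables Hive support in a SparkSession and queries a Hive table with SQL. This is our own illustration: it assumes a running Hive metastore, and the table name is a placeholder.

```scala
import org.apache.spark.sql.SparkSession

// Requires a Hive metastore reachable from Spark; enableHiveSupport()
// wires the session to it.
val spark = SparkSession.builder()
  .appName("HiveSketch")
  .enableHiveSupport()
  .getOrCreate()

// Spark can list and query Hive tables directly with SQL.
spark.sql("SHOW TABLES").show()
spark.sql("SELECT category, COUNT(*) AS n FROM logs GROUP BY category").show()
```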

More than 1,000 students have taken this course to date. They highlight that the teacher gets straight to the point, explaining the most important aspects of each technology, which is perfect for gaining a global view of a Big Data architecture.

The teacher introduces the Hadoop ecosystem through the Cloudera distribution, with a special focus on Spark. Some students mention that a few more practical examples would be welcome.

It is important to dedicate time to working through the scripts provided by the teacher, to make sure you absorb as much of the syllabus as possible.