Apache Spark Interview Questions and Answers
Intermediate-level questions & answers (1 to 5 years of experience)
Ques 1. Explain the difference between Spark transformations and actions.
Transformations (e.g., map, filter) are lazy operations that define a new RDD from an existing one without executing anything immediately. Actions (e.g., reduce, collect) trigger the actual computation and either return a value to the driver program or write data to an external storage system.
Example:
val mappedRDD = inputRDD.map(x => x * 2)        // transformation: lazily defines a new RDD
val result = mappedRDD.reduce((x, y) => x + y)  // action: triggers execution, returns the sum to the driver
Ques 2. What is the significance of Spark's lineage graph (DAG)?
Spark's lineage graph is a directed acyclic graph (DAG) recording the sequence of transformations that produced each RDD. Spark uses it to plan execution and, after a node failure, to recompute only the lost partitions rather than replicating the data.
Example:
val filteredRDD = inputRDD.filter(x => x > 0)  // transformation, not yet executed
println(filteredRDD.toDebugString)             // prints the RDD's lineage
Ques 3. Explain the concept of partitions in Apache Spark.
Partitions are basic units of parallelism in Spark. They represent the logical division of data across the nodes in a cluster, and each partition is processed independently.
Example:
val inputRDD = sc.parallelize(Seq(1, 2, 3, 4, 5), 2)
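A quick follow-up sketch (reusing the inputRDD above) showing how to inspect and change the partition count:
println(inputRDD.getNumPartitions)           // 2
val repartitioned = inputRDD.repartition(4)  // full shuffle into 4 partitions
val coalesced = inputRDD.coalesce(1)         // merges down to 1 partition, avoiding a full shuffle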
Ques 4. What is a Spark Executor and what role does it play in Spark applications?
A Spark Executor is a JVM process that runs tasks on a worker node and also provides in-memory storage for cached RDDs and DataFrames. Executors are launched at the start of a Spark application and keep running tasks until the application completes or fails.
Example:
spark-submit --master yarn --deploy-mode client --num-executors 3 mySparkApp.jar
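A sketch of how executor resources are commonly tuned on the same command line (the jar name and resource values are illustrative, not a recommendation):
spark-submit --master yarn --deploy-mode client \
  --num-executors 3 --executor-cores 4 --executor-memory 8g \
  mySparkApp.jar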
Ques 5. What is the Broadcast variable in Spark and when is it used?
A Broadcast variable is a read-only variable cached once on each executor rather than shipped with every task. It is used to efficiently distribute large read-only data structures, such as lookup tables, to all tasks in a Spark job.
Example:
val broadcastVar = sc.broadcast(Array(1, 2, 3))
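A minimal usage sketch, assuming the integer inputRDD from earlier (the lookup map itself is illustrative):
val lookup = sc.broadcast(Map(1 -> "one", 2 -> "two", 3 -> "three"))
val named = inputRDD.map(x => lookup.value.getOrElse(x, "unknown")) // tasks read the cached copy
named.collect().foreach(println)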
Ques 6. What is the purpose of the Spark SQL module?
Spark SQL is a Spark module for structured data processing. It provides a programming interface for data manipulation using SQL, as well as a DataFrame API for processing structured and semi-structured data.
Example:
val df = spark.sql("SELECT * FROM table")
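A sketch of the common pattern of registering a DataFrame as a temporary view before querying it (the file and column names are illustrative):
val people = spark.read.json("people.json")  // assumes records with name and age fields
people.createOrReplaceTempView("people")
val adults = spark.sql("SELECT name, age FROM people WHERE age >= 18")
adults.show()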
Ques 7. How can you persist an RDD in Apache Spark? Provide an example.
You can persist an RDD using the persist() or cache() method; cache() is shorthand for persist(StorageLevel.MEMORY_ONLY). Persisting stores the RDD's partitions in memory and/or on disk so that later actions can reuse them instead of recomputing the whole lineage.
Example:
import org.apache.spark.storage.StorageLevel

val cachedRDD = inputRDD.persist(StorageLevel.MEMORY_ONLY)
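When the cached data is no longer needed, it can be released explicitly:
cachedRDD.unpersist() // frees the stored partitions from memory/disk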
Ques 8. Explain the difference between narrow and wide transformations in Spark.
Narrow transformations are those where each output partition depends on at most one partition of the parent RDD, so the data can be processed in place. Wide transformations are those where an output partition may need data from many input partitions, which forces a shuffle across the cluster.
Example:
Narrow: map, filter
Wide: groupByKey, reduceByKey
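A small sketch contrasting the two on a pair RDD:
val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
val narrow = pairs.mapValues(_ * 10)  // narrow: each partition is processed in place, no shuffle
val wide = pairs.reduceByKey(_ + _)   // wide: values are shuffled across partitions by key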
Ques 9. What is the purpose of the Spark Streaming module?
Spark Streaming is an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant processing of live data streams. It divides the stream into micro-batches and processes them with Spark's batch engine.
Example:
val streamingContext = new StreamingContext(sparkContext, Seconds(1))
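A minimal word-count sketch over a socket source (the host and port are illustrative):
import org.apache.spark.streaming.{Seconds, StreamingContext}

val ssc = new StreamingContext(sparkContext, Seconds(1))
val lines = ssc.socketTextStream("localhost", 9999) // one micro-batch per second
val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
counts.print()
ssc.start()
ssc.awaitTermination()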
Ques 10. How does Spark handle data serialization and why is it important?
Spark serializes data whenever it moves between the driver and executors or across a shuffle. By default it uses Java serialization, but the faster and more compact Kryo serializer can be enabled instead. Efficient serialization is crucial because it directly affects network transfer, shuffle size, and the memory footprint of cached data.
Example:
sparkConf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
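Kryo performs best when application classes are registered with it; a short sketch (MyClass is a hypothetical application class):
sparkConf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
sparkConf.registerKryoClasses(Array(classOf[MyClass])) // MyClass is hypothetical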
Ques 11. What is the purpose of the accumulator in Spark?
An accumulator is a shared variable that tasks can only add to and that only the driver can read; Spark uses accumulators to implement counters and sums in a parallel, fault-tolerant manner. Updates made inside actions are applied exactly once per task, while updates inside transformations may be re-applied if a task is retried.
Example:
val accumulator = sc.longAccumulator("MyAccumulator")
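A minimal usage sketch (reusing the integer inputRDD from earlier):
val accumulator = sc.longAccumulator("MyAccumulator")
inputRDD.foreach(x => accumulator.add(x)) // updated on executors inside an action
println(accumulator.value)                // read back on the driver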
Ques 12. Explain the concept of Spark DAG (Directed Acyclic Graph).
The Spark DAG represents the logical execution plan of the transformations and actions in a Spark application. The scheduler splits it into stages at shuffle (wide-transformation) boundaries; each stage contains a sequence of tasks that run in parallel, one per partition.
Example:
println(inputRDD.map(x => x * 2).toDebugString) // prints the lineage the DAG is built from
Ques 13. Explain the concept of a Spark task.
A task is the smallest unit of work in Spark: it runs one stage's pipelined computation on a single partition of data. Tasks are created by the scheduler on the driver and executed on Spark Executors.
Example:
// Each action launches one task per partition; this collect() runs 4 tasks.
val results = sc.parallelize(1 to 100, 4).map(_ * 2).collect()
Ques 14. What is the purpose of the Spark MLlib library?
Spark MLlib is Spark's machine learning library, providing scalable implementations of various machine learning algorithms and tools for building and evaluating machine learning models.
Example:
val model = new RandomForestClassifier().fit(trainingData)
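A slightly fuller sketch using the DataFrame-based API; trainingData and testData are assumed DataFrames with "label" and "features" columns:
import org.apache.spark.ml.classification.RandomForestClassifier

val rf = new RandomForestClassifier()
  .setLabelCol("label")
  .setFeaturesCol("features")
  .setNumTrees(20)
val model = rf.fit(trainingData)
val predictions = model.transform(testData)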