PySpark Interview Questions and Answers
Freshers / Beginner level questions & answers
Ques 1. What is PySpark?
PySpark is the Python API for Apache Spark, a fast and general-purpose cluster computing system.
Example:
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('example').getOrCreate()
Ques 2. Explain the purpose of the 'groupBy' operation in PySpark.
'groupBy' is used to group the data based on one or more columns. It is often followed by aggregation functions to perform operations on each group.
Example:
grouped_data = df.groupBy('Category').agg({'Price': 'mean'})
Ques 3. Explain the concept of a SparkSession in PySpark.
SparkSession is the entry point to any PySpark functionality. It is used to create DataFrames, register DataFrames as tables, and execute SQL queries.
Example:
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('example').getOrCreate()
Ques 4. Explain the purpose of the 'collect' action in PySpark.
The 'collect' action retrieves all elements of a distributed dataset (RDD or DataFrame) and brings them to the driver program, so it should be used carefully on large datasets to avoid exhausting driver memory.
Example:
data = df.collect()
Ques 5. How can you perform a union operation on two DataFrames in PySpark?
You can use the 'union' method to combine two DataFrames with the same schema; columns are matched by position, not by name.
Example:
result = df1.union(df2)
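If the two DataFrames have the same columns but in a different order, a minimal alternative sketch is 'unionByName', which matches columns by name instead of position:
result = df1.unionByName(df2)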
Ques 6. How can you create a temporary view from a PySpark DataFrame?
You can use the 'createOrReplaceTempView' method to create a temporary view from a PySpark DataFrame.
Example:
df.createOrReplaceTempView('temp_view')
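Once registered, the view can be queried with Spark SQL. A minimal sketch, assuming a hypothetical 'Price' column:
result = spark.sql('SELECT * FROM temp_view WHERE Price > 100')
result.show()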
Ques 7. What is the purpose of the 'orderBy' operation in PySpark?
'orderBy' is used to sort the rows of a DataFrame based on one or more columns, in ascending order by default.
Example:
result = df.orderBy('column')
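For descending order, one possible sketch uses the 'desc' helper from pyspark.sql.functions:
from pyspark.sql.functions import desc
result = df.orderBy(desc('column'))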
Intermediate / 1 to 5 years experienced level questions & answers
Ques 8. Explain the concept of Resilient Distributed Datasets (RDD) in PySpark.
RDD is the fundamental data structure in PySpark, representing an immutable distributed collection of objects. Its partitions are processed in parallel across the cluster, and lost partitions can be recomputed from the RDD's lineage, which provides fault tolerance.
Example:
data = [1, 2, 3, 4, 5]
rdd = spark.sparkContext.parallelize(data)
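A brief follow-up sketch on the RDD above, showing a lazy transformation followed by an action that triggers execution:
squares = rdd.map(lambda x: x * x)  # transformation, evaluated lazily
print(squares.collect())  # action, returns [1, 4, 9, 16, 25]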
Ques 9. What is the difference between a DataFrame and an RDD in PySpark?
DataFrame is a higher-level abstraction on top of RDD, providing a structured and tabular representation of data. It supports various optimizations and operations similar to SQL.
Example:
df = spark.createDataFrame([(1, 'John'), (2, 'Jane')], ['ID', 'Name'])
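A short sketch of how the two relate: the DataFrame can be displayed as a table, and its underlying RDD of Row objects remains accessible for lower-level operations:
df.show()  # tabular output with named columns
rows_rdd = df.rdd  # the underlying RDD of Row objects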
Ques 10. What is the purpose of the 'cache' operation in PySpark?
The 'cache' operation is used to persist a DataFrame or RDD in memory, enhancing the performance of iterative algorithms or repeated operations. Caching is lazy: the data is materialized the first time an action runs on the cached DataFrame or RDD.
Example:
df.cache()
Ques 11. How can you handle missing or null values in a PySpark DataFrame?
You can use the 'na' functions like 'drop' or 'fill' to handle missing values in a PySpark DataFrame.
Example:
df.na.drop()
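A companion sketch for 'fill', assuming hypothetical 'Price' and 'Name' columns:
df_filled = df.na.fill({'Price': 0, 'Name': 'unknown'})  # replace nulls per column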
Ques 12. What is the purpose of the 'explode' function in PySpark?
The 'explode' function is used to transform a column with arrays or maps into multiple rows, duplicating the values of the other columns.
Example:
from pyspark.sql.functions import explode
exploded_df = df.select('ID', explode('items').alias('item'))
Ques 13. Explain the purpose of the 'persist' operation in PySpark.
'persist' is used to store a DataFrame or RDD in memory or on disk, allowing faster access to the data in subsequent operations.
Example:
df.persist()
Ques 14. Explain the difference between 'cache' and 'persist' operations in PySpark.
'cache' is shorthand for 'persist()' with the default storage level (MEMORY_AND_DISK for DataFrames, MEMORY_ONLY for RDDs), while 'persist' offers more flexibility by letting you specify a StorageLevel (memory only, memory and disk, disk only, etc.).
Example:
df.cache()
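For contrast, a minimal sketch of 'persist' with an explicit storage level (DISK_ONLY chosen purely for illustration):
from pyspark import StorageLevel
df.persist(StorageLevel.DISK_ONLY)  # keep the DataFrame on disk only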
Ques 15. What is the purpose of the 'agg' method in PySpark?
The 'agg' method is used for aggregating data in a PySpark DataFrame. It allows you to perform various aggregate functions like sum, avg, max, min, etc., on specified columns.
Example:
result = df.agg({'Sales': 'sum', 'Quantity': 'avg'})
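The same aggregation can also be written with column expressions from pyspark.sql.functions, which allows aliasing the results; a possible sketch:
from pyspark.sql import functions as F
result = df.agg(F.sum('Sales').alias('total_sales'), F.avg('Quantity').alias('avg_quantity'))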
Ques 16. Explain the purpose of the 'coalesce' method in PySpark.
The 'coalesce' method is used to reduce the number of partitions in a PySpark DataFrame. Unlike 'repartition', it avoids a full shuffle, which makes it a cheap way to fix an unnecessarily large number of partitions.
Example:
df_coalesced = df.coalesce(5)
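For comparison, a one-line sketch of 'repartition', which can change the partition count in either direction but always performs a full shuffle:
df_repartitioned = df.repartition(50)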
Experienced / Expert level questions & answers
Ques 17. How can you perform the join operation in PySpark?
You can use the 'join' method on DataFrames. For example, df1.join(df2, df1['key'] == df2['key'], 'inner') performs an inner join on 'key'.
Example:
result = df1.join(df2, df1['key'] == df2['key'], 'inner')
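When the join column has the same name in both DataFrames, a simpler sketch is to pass the column name directly, which also avoids a duplicated 'key' column in the result:
result = df1.join(df2, 'key', 'inner')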
Ques 18. What is the role of the 'broadcast' variable in PySpark?
A 'broadcast' variable caches a read-only value on every node of the cluster so tasks can reuse it without it being shipped with each task; the related 'broadcast' hint in the DataFrame API marks a small DataFrame to be sent to all executors, which speeds up joins.
Example:
from pyspark.sql.functions import broadcast
result = df1.join(broadcast(df2), 'key')
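The example above uses the DataFrame broadcast join hint; a classic broadcast variable is created through the SparkContext. A minimal sketch with a hypothetical lookup dictionary:
lookup = spark.sparkContext.broadcast({'A': 1, 'B': 2})  # shipped once to each executor
codes = spark.sparkContext.parallelize(['A', 'B', 'A'])
print(codes.map(lambda x: lookup.value.get(x, 0)).collect())  # [1, 2, 1]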
Ques 19. Explain the significance of the 'window' function in PySpark.
Window functions in PySpark operate over a window specification ('Window') that defines how rows are partitioned and ordered, and they are commonly combined with ranking or aggregate functions.
Example:
from pyspark.sql.window import Window
from pyspark.sql.functions import row_number
window_spec = Window.orderBy('column')
result = df.withColumn('row_num', row_number().over(window_spec))
Ques 20. Explain the concept of 'checkpointing' in PySpark.
'Checkpointing' is a mechanism in PySpark to truncate the lineage of an RDD or DataFrame by saving it to a reliable distributed file system such as HDFS.
Example:
spark.sparkContext.setCheckpointDir('hdfs://path/to/checkpoint')
df_checkpointed = df.checkpoint()
Ques 21. How can you handle skewed data in PySpark?
You can use techniques like salting, bucketing, or the 'broadcast' join hint to handle skewed data in PySpark.
Example:
from pyspark.sql.functions import concat, lit, floor, rand
# salting (sketch): append a random suffix to a hypothetical 'key' column so one hot key spreads across partitions
df_salted = df.withColumn('salted_key', concat(df['key'].cast('string'), lit('_'), floor(rand() * 10).cast('string')))
Ques 22. Explain the purpose of the 'window' function in PySpark.
The 'window' function is used for defining windows over data based on partitioning and ordering, often used with aggregation functions.
Example:
from pyspark.sql.window import Window
from pyspark.sql.functions import sum
window_spec = Window.partitionBy('category').orderBy('value')
result = df.withColumn('sum_value', sum('value').over(window_spec))
Ques 23. Explain the concept of 'broadcast' variables in PySpark.
'Broadcast' variables are read-only variables cached on each node of a cluster to efficiently distribute large read-only data structures.
Example:
from pyspark.sql.functions import broadcast
result = df1.join(broadcast(df2), 'key')
Ques 24. What is the purpose of the 'accumulator' in PySpark?
An 'accumulator' is a variable that can be used in parallel operations and is updated by multiple tasks. It is typically used for implementing counters or sums in distributed computing.
Example:
accumulator = spark.sparkContext.accumulator(0)
# Inside a transformation or action
accumulator.add(1)
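A slightly fuller sketch that uses 'foreach' (an action) so the updates are applied exactly once, with the result read back on the driver via '.value':
acc = spark.sparkContext.accumulator(0)
spark.sparkContext.parallelize([1, 2, 3, 4]).foreach(lambda x: acc.add(x))
print(acc.value)  # 10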
Ques 25. Explain the use of the 'broadcast' hint in PySpark.
The 'broadcast' hint is used to explicitly instruct PySpark to use a broadcast join strategy for better performance, especially when one DataFrame is significantly smaller than the other.
Example:
from pyspark.sql.functions import broadcast
result = df1.join(broadcast(df2), 'key')
Ques 26. How can you handle data skewness in PySpark?
Data skewness can be handled by using techniques like salting, bucketing, or using the 'broadcast' hint to distribute data more evenly across partitions.
Example:
# bucketing (sketch): write the data bucketed on a hypothetical 'key' column so matching keys land in the same bucket
df.write.bucketBy(8, 'key').sortBy('key').saveAsTable('bucketed_table')