Explain the concept of Resilient Distributed Datasets (RDD) in PySpark.
Example:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-example").getOrCreate()
data = [1, 2, 3, 4, 5]
rdd = spark.sparkContext.parallelize(data)  # distribute the local list as an RDD