Guide for Apache Spark Setup, Job Optimisation, AWS EMR Cluster Configuration, S3, YARN and HDFS Optimisation
How do you tune an Apache Spark job? How do you perform joins efficiently? How do you tune an AWS EMR cluster, S3, YARN and HDFS? And how do you fix Spark job, EMR cluster, S3, YARN and HDFS errors?
We answer all of those questions in this (super long) technical blog. 😅 Since this is an optimisation guide, I will assume you are already familiar with the basics.
Blog repo link for requesting changes: https://devendraap.github.io/Spark-job-and-AWS-EMR-cluster-S3-YARN-and-HDFS-tuning/
In the examples below we process our data in the AWS environment using Spark (on EMR) and use object storage (S3) for persistence.
While processing billions of records (TBs of data) we hit multiple hurdles. This wiki documents the errors the team faced while processing large datasets with Spark jobs and how to resolve them. The Spark job and cluster optimisations for processing large datasets are also explained below.
We should prefer Datasets for storing and processing data in memory for the following reasons:
- Static typing and compile-time type-safety. With a DataFrame you can select a nonexistent column and notice the mistake only when you run your code; with a Dataset you get a compile-time error.
- Datasets benefit from Catalyst optimisation and Tungsten's efficient bytecode generation thanks to the Encoders they use.
- Encoders are smart and efficient utilities that convert the data inside each user-defined object into a compact binary format. Because Spark understands the structure of the data in a Dataset, it can create a more optimal layout in memory when caching it. This reduces memory usage when a Dataset is cached, as well as the number of bytes Spark needs to transfer over the network during shuffles.
- Spark stores every row of a Dataset as a flat binary object using its internal encoders, which are reported to be more than 10x faster than Kryo serialization.
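A minimal sketch of the type-safety difference (the Person case class and the input path are hypothetical):

import org.apache.spark.sql.SparkSession

case class Person(name: String, age: Long) // hypothetical schema for illustration

val spark = SparkSession.builder.appName("dataset-demo").getOrCreate()
import spark.implicits._

val df = spark.read.json("s3://bucket/people.json") // DataFrame: untyped rows
// df.select("aeg")     // typo compiles; fails only at runtime
val ds = df.as[Person]  // Dataset[Person]: typed via encoders
// ds.map(_.aeg)        // the same typo is a compile-time error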
Note: Please go through the reference links provided to fully understand how each Spark option affects data processing.
Optimising Spark jobs for joins and other data-feature-based optimisations:
Let's assume we have two tables whose raw/CSV sizes are 3 TB and 500 GB respectively, which need to be joined on particular columns. The following techniques optimise the joins and prevent job failures as the data grows gradually after each refresh.
- Set spark.sql.files.maxPartitionBytes to 128MB, which repartitions the files on read so that the resulting partitions are each ~128MB.
- If the fill rate of the join column is not 100%, filter out the records where the column is null, perform the join on the non-null records only, and union the output with the null records (see the null-split sketch after this list).
- Set spark.sql.shuffle.partitions to re-partition the data and increase the number of tasks during the join, increasing parallel processing. Partition size should be ~128MB, matching the block size in EMRFS (ref. AWS docs).
- To see the amount of data processed and the time taken by each task, open the stage summary metrics in the Application Master.
- In the Application Master, if the MAX time taken by a task is greater than 5 minutes, try increasing the number of partitions.
- In the Application Master, if a few tasks take disproportionately long to execute, you are likely performing cartesian-like joins due to NULL or repeated values in the join column.
- In the Application Master, if the 25th percentile takes <100ms but the MAX task time is >5 minutes, the data is skewed. The data can be evenly distributed by adding a salt column:
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.IntegerType

// groupByFields and aggFields are assumed to be Seq[Column]
df.withColumn("salt", (rand * n).cast(IntegerType))
  .groupBy((col("salt") +: groupByFields): _*)
  .agg(aggFields.head, aggFields.tail: _*) // partial aggregate per salt bucket
  .groupBy(groupByFields: _*)
  .agg(aggFields.head, aggFields.tail: _*) // merge partials (valid for sum/min/max-style aggregates)
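As referenced above, a minimal sketch of the null-split join (left, right and the join column "k" are hypothetical; assumes a left join):

import org.apache.spark.sql.functions.{col, lit}

val leftNull = left.filter(col("k").isNull) // rows that can never match
val leftNotNull = left.filter(col("k").isNotNull)

// join only the non-null keys, so the nulls don't all land in one skewed partition
val joined = leftNotNull.join(right, Seq("k"), "left")

// re-attach the null-key rows, padding the right-side columns with nulls
val rightCols = right.columns.filterNot(_ == "k")
val padded = rightCols.foldLeft(leftNull)((d, c) => d.withColumn(c, lit(null)))
val result = joined.unionByName(padded)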
If the data processed in only a few ETL steps is too large, try the following options:
- Turn on the cluster's auto-scaling option to allocate more core nodes while those few large ETL steps are running.
- Partition the larger table by a column that distributes the records evenly (like year or quarter) and persist it to EMRFS.
- Read one partition at a time using a filter, perform the join, and write the output to EMRFS.
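A minimal sketch of this partition-and-iterate join (the paths, the year partition column and the join key id are hypothetical):

import org.apache.spark.sql.functions.col

// one-time: persist the larger table partitioned by an evenly distributing column
largeDf.write.partitionBy("year").parquet("hdfs:///tmp/large_table")

// then join one partition at a time and append the output
Seq(2018, 2019, 2020).foreach { year =>
  spark.read.parquet("hdfs:///tmp/large_table")
    .filter(col("year") === year) // partition pruning reads only this year's files
    .join(smallDf, Seq("id"))
    .write.mode("append").parquet("hdfs:///tmp/join_output")
}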
Executor resource calculation:
The r-series memory-optimised EMR instances have a 1:8 core-to-memory (GB) ratio. The optimal CPU count per executor is 5, so to prevent under-utilisation of CPU or memory, the optimal allocation per executor is 40 GB of memory and 5 CPUs. The following is an example for the r5.12xlarge instance type.
Specs per CORE or TASK node of the r5.12xlarge instance type:
- Cores = 48
- Memory = (384 GiB * 1000) / 1024 ≈ 375 GB
Assigning 1 core and 1 GB of memory to YARN leaves 47 cores and 374 GB per node.
We allocate 5 cores per executor for maximum HDFS throughput.
MemoryStore and BlockManagerMaster consume about 12 GB each per node.
- Number of executors = (48 - 1) / 5 ≈ 9
- Memory per executor = (374 - 12 - 12) / 9 ≈ 40 GB
Note: If the EMR cluster is configured to use TASK nodes, do not exceed a CORE node to TASK node ratio of 2:1 (task nodes have no HDFS storage, so also allocate more HDFS storage on core nodes to compensate).
Spark submit options:
Spark executor memory allocation layout and calculations:
spark.yarn.executor.memoryOverhead = 3 * 1024 = 3072 MB
spark.executor.memory = 33 * 1024 = 33792 MB
spark.memory.fraction (execution + storage) = 0.8 * 34816 ≈ 27853 MB
spark.memory.storageFraction (cache, broadcast, accumulators) = 0.5 of the unified region, i.e. 0.5 * 0.8 * 34816 ≈ 13926 MB
User memory = (1.0 - 0.8) * 34816 ≈ 6963 MB
yarn.nodemanager.resource.memory-mb allowance per executor stays around ~40 GB
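A minimal sketch of wiring the values above into a session (the app name is a placeholder; the same settings can equally be passed as --conf flags to spark-submit):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .appName("emr-tuned-job") // placeholder
  .config("spark.executor.cores", "5") // 5 cores per executor for HDFS throughput
  .config("spark.executor.memory", "33g") // heap, from the layout above
  .config("spark.yarn.executor.memoryOverhead", "3072") // off-heap overhead in MB
  .config("spark.memory.fraction", "0.8") // unified execution + storage region
  .config("spark.memory.storageFraction", "0.5") // half of the unified region for storage
  .getOrCreate()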
Parameter: spark.memory.fraction
Reference: (explained above)
Explanation: Approx. (spark.memory.fraction * spark.executor.memory) memory for task execution, shuffle, join, sort and aggregate
Parameter: spark.memory.storageFraction
Explanation: Approx. (spark.memory.storageFraction * spark.executor.memory) memory for cache, broadcast and accumulators
Parameter: spark.dynamicAllocation.enabled
Explanation: Allocates executors dynamically; combine with yarn.scheduler.capacity.resource-calculator = org.apache.hadoop.yarn.util.resource.DominantResourceCalculator so that containers are sized by both CPU and memory
Benefits: Scales the number of executors based on CPU and memory requirements
Parameter: spark.shuffle.service.enabled
Explanation: The Spark shuffle service maintains the shuffle files generated by all Spark executors that ran on a node; the executors write the shuffle data and the service serves it
Benefits: The shuffle service preserves the shuffle files written by executors so that executors can be safely removed. Resolves error: java.io.IOException: All datanodes are bad.
Parameter: spark.executor.extraJavaOptions
Value: -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=35 -XX:OnOutOfMemoryError='kill -9 %p'
Explanation: -XX:+UseG1GC selects the G1 garbage collector (the default is -XX:+UseParallelGC). To understand the frequency and execution time of garbage collection, add -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps. Setting InitiatingHeapOccupancyPercent to 35 (the default is 45) initiates garbage collection sooner, helping to avoid a collection over the total memory, which can take a significant amount of time
Benefits: Better garbage collection, as G1 is suitable for large heaps; resolves out-of-memory issues and reduces GC pause times, high latency and low throughput
Parameter: spark.driver.maxResultSize
Explanation: Keep spark.sql.autoBroadcastJoinThreshold < spark.driver.maxResultSize < spark.driver.memory
Benefits: Resolves error: serialized results of x tasks is bigger than spark.driver.maxResultSize
Parameter: spark.yarn.maxAppAttempts
Explanation: Maximum number of attempts for running the application
Parameter: spark.rpc.message.maxSize
Explanation: Increases the maximum remote procedure call message size
Benefits: Resolves error: exceeds max allowed: spark.rpc.message.maxSize
Parameter: spark.worker.timeout
Explanation: Gives tasks working on skewed data more time to execute. Proper re-partitioning (with salting) on the join or groupBy column reduces execution time
Resolves error: Lost executor xx on slave1.cluster: Executor heartbeat timed out after xxxxx ms / WARN TransportChannelHandler: Exception in connection from /172.31.3.245:46014
Parameter: spark.network.timeout
Benefits: Resolves error: java.io.IOException: Connection reset by peer
Parameter: spark.shuffle.file.buffer
Explanation: A larger buffer reduces the number of times shuffle data overflows to disk during the shuffle write process, reducing disk IO and improving performance
Parameter: spark.locality.wait
Explanation: Waiting longer for data-local task placement reduces large amounts of data transfer over the network (shuffling)
Parameter: spark.shuffle.io.connectionTimeout
Benefits: Resolves error: org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]
Parameter: spark.shuffle.io.retryWait
Resolves error: org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 1
Parameter: spark.reducer.maxReqsInFlight
Explanation: Limits the number of remote fetch requests in flight during shuffle reads
Parameter: spark.shuffle.io.maxRetries
Explanation: Number of times to retry a failed shuffle fetch before giving up
Parameter: spark.scheduler.maxRegisteredResourcesWaitingTime
Explanation: Maximum time to wait for executors to register before scheduling begins
Benefits: Resolves error: Application_xxxxx_xxx failed 2 times due to AM container for appattempt_xxxx_xxxxx. Exception from container-launch.
Parameter: spark.dynamicAllocation.executorIdleTimeout
Explanation: Removes an executor if it has been idle for more than this duration
Parameter: spark.dynamicAllocation.cachedExecutorIdleTimeout
Explanation: Removes an executor with cached data blocks if it has been idle for more than this duration
Parameter: spark.sql.broadcastTimeout
Explanation: Timeout in seconds for the broadcast wait time in broadcast joins
Benefits: Resolves error: ERROR yarn.ApplicationMaster: User class threw exception: java.util.concurrent.TimeoutException: Futures timed out after
Parameter: spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version
Explanation: The major difference between mapreduce.fileoutputcommitter.algorithm.version = 1 and 2 is which side performs mergePaths(): the Application Master (v1) or the reducers (v2)
Benefits: Version 2 allows the reducers to do mergePaths() and move their files to the final output directory themselves
Parameter: spark.sql.autoBroadcastJoinThreshold
Explanation: Maximum size of a table that will be broadcast for a join; Spark caps broadcast tables at 8 GB by default
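A minimal sketch of forcing a broadcast join regardless of the threshold (factDf and dimDf are hypothetical):

import org.apache.spark.sql.functions.broadcast

// explicitly broadcast the small table; without the hint Spark decides
// based on spark.sql.autoBroadcastJoinThreshold
val joined = factDf.join(broadcast(dimDf), Seq("id"))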
Parameter: spark.io.compression.codec
Explanation: Reduces serialized data size by ~50%, resulting in less spill (memory and disk), storage IO and network IO, while increasing CPU overhead by 2-5%, which is acceptable when processing large datasets
Benefits: Used by spark.sql.inMemoryColumnarStorage.compressed, spark.rdd.compress, spark.shuffle.compress, spark.shuffle.spill.compress, spark.checkpoint.compress and spark.broadcast.compress, allowing us to broadcast tables with 2x the records, spill less (memory and disk), and reduce disk and network IO
Parameter: spark.io.compression.zstd.level
Explanation: Compression level for the zstd codec; higher levels trade extra CPU for a better compression ratio
Parameter: spark.sql.inMemoryColumnarStorage.compressed
Explanation: Enables compression of the in-memory columnar cache via spark.io.compression.codec, reducing network IO and memory usage
Parameter: spark.rdd.compress
Parameter: spark.shuffle.compress
Parameter: spark.shuffle.spill.compress
Parameter: spark.checkpoint.compress
Parameter: spark.broadcast.compress
Parameter: spark.storage.level
Explanation: Spills partitions that don't fit in executor memory; serialized levels use less space (memory in RAM or storage on SSD)
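A minimal sketch of choosing a storage level when persisting (df is hypothetical):

import org.apache.spark.storage.StorageLevel

// serialized in memory, spilling to disk when memory is full
df.persist(StorageLevel.MEMORY_AND_DISK_SER)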
Parameter: spark.serializer
Value: org.apache.spark.serializer.KryoSerializer
Explanation: Faster and more compact than the default Spark serializer
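A minimal sketch of enabling Kryo together with the buffer setting covered below (the buffer value is illustrative):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .config("spark.kryoserializer.buffer.max", "512m") // illustrative; see spark.kryoserializer.buffer.max below
  .getOrCreate()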
Parameter: spark.hadoop.s3.multipart.committer.conflict-mode
Explanation: Setting for the new Hadoop Parquet magic committer
Parameter: spark.shuffle.consolidateFiles
Explanation: Optimisation for the custom ShuffleHash join implementation. Note that the sort-merge join is the default method and is better for large datasets due to memory limitations
Parameter: spark.reducer.maxSizeInFlight
Explanation: A larger value lets reducers request data from "map" task outputs in bigger chunks, improving performance
Parameter: spark.kryoserializer.buffer.max
Benefits: Resolves error: com.esotericsoftware.kryo.KryoException: Buffer overflow. Available: 0, required: 57197
Parameter: spark.sql.shuffle.partitions
Explanation: Number of shuffle partitions during join operations
Parameter: spark.sql.files.maxPartitionBytes
Explanation: Repartitions files on read into ~128 MB partitions
Parameter: spark.scheduler.listenerbus.eventqueue.capacity
Explanation: Capacity of the Spark listener bus event queue
Benefits: Resolves error: ERROR scheduler.LiveListenerBus: Dropping SparkListenerEvent because no remaining room in event queue. This likely means one of the SparkListeners is too slow and cannot keep up with the rate at which tasks are being started by the scheduler
EMR cluster tuning:
Configuration Property: dfs.replication
Usage: HDFS data replication factor for EMR with auto-scaling enabled for core nodes
Configuration Property: fs.s3.enableServerSideEncryption
Usage: Enables S3 AES256 data encryption
Usage: Workaround for S3's eventual-consistency missing-file errors caused by replication across multiple AZs (availability zones)
Configuration Property: fs.s3a.committer.magic.enabled
Usage: Setting for the new Hadoop Parquet magic committer
Configuration Property: fs.s3a.connection.maximum
Usage: Increases S3 IO speed
Configuration Property: fs.s3a.fast.upload
Usage: Increases S3 IO speed
Configuration Property: fs.s3a.server-side-encryption-algorithm
Usage: Enables S3 AES256 data encryption
Configuration Property: fs.s3a.threads.core
Usage: Increases S3 IO speed
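A minimal sketch of applying these S3A settings from inside a job (values are illustrative; on EMR they are usually set through the cluster configuration instead):

val hadoopConf = spark.sparkContext.hadoopConfiguration
hadoopConf.set("fs.s3a.connection.maximum", "200") // illustrative value
hadoopConf.set("fs.s3a.fast.upload", "true")
hadoopConf.set("fs.s3a.threads.core", "100") // illustrative value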
Configuration Property: yarn.log-aggregation.retain-seconds
Usage: How long to retain aggregated logs
Configuration Property: yarn.log-aggregation-enable
Usage: Aggregates logs at the driver (master) node
Configuration Property: yarn.nm.liveness-monitor.expiry-interval-ms
Usage: Increases the time to wait until a node manager is considered dead
Configuration Property: yarn.nodemanager.pmem-check-enabled
Usage: Disables the physical memory limit check that kills containers (note: prefer re-partitioning data in the job based on size)
Configuration Property: yarn.nodemanager.vmem-check-enabled
Usage: Disables the virtual memory limit check, a hard memory restriction that causes OOM (out of memory) JVM errors
Configuration Property: yarn.resourcemanager.decommissioning.timeout
Usage: Increases the timeout interval before a node is blacklisted
Configuration Property: yarn.scheduler.capacity.resource-calculator
Value: org.apache.hadoop.yarn.util.resource.DominantResourceCalculator
Usage: The default resource calculator, org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator, uses only memory information when allocating containers; CPU scheduling is not enabled by default
Configuration Property: yarn.scheduler.capacity.root.default.capacity
Usage: Uses all resources of the dedicated cluster
Configuration Property: yarn.scheduler.capacity.root.default.maximum-capacity
Usage: Uses all resources of the dedicated cluster
Spark option summary
If the data size is in the GB range, i.e. millions of records.
If the data size is in the TB range, i.e. billions of records. Note: not tested above 15 billion records or above 15 TB.
--conf "spark.executor.extraJavaOptions=-XX:+UseG1GC -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark -XX:InitiatingHeapOccupancyPercent=35 -XX:OnOutOfMemoryError='kill -9 %p'"
Spark development setup using containers:
- Using Docker images: setting up Kubernetes or Docker for Spark, HDFS, YARN, Hue, MapReduce, Hive and WebHCat development
- Using Kubernetes and Helm charts: set up the Spark development environment with Helm charts on Kubernetes.
- Install Docker, Kubernetes and Helm on the cluster and run the following commands:
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install bitnami/spark --generate-name  # Helm 3 requires a release name; --generate-name creates one