
---
layout: "guides"
page_title: "Apache Spark Integration - Dynamic Executors"
sidebar_current: "guides-spark-dynamic"
description: |-
  Learn how to dynamically scale Spark executors based on the queue of pending
  tasks.
---

# Dynamically Allocate Spark Executors

By default, a Spark application uses a fixed number of executors. Setting
`spark.dynamicAllocation.enabled` to `true` enables Spark to add and remove
executors during execution, depending on the number of Spark tasks scheduled to
run. As described in [Dynamic Resource Allocation](http://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation), dynamic allocation also requires that `spark.shuffle.service.enabled` be set to `true`.
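As a sketch, both settings can be passed per application with `--conf` flags on `spark-submit`; the Nomad master URL, example jar, and executor bounds below are illustrative placeholders, not values from this guide:

```shell
# Enable dynamic allocation and the shuffle service for a single application.
# nomad.example.com and spark-examples.jar are placeholders.
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master nomad:http://nomad.example.com:4646 \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=10 \
  spark-examples.jar 100
```

`spark.dynamicAllocation.minExecutors` and `spark.dynamicAllocation.maxExecutors` bound how far Spark may scale the executor count up or down.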

On Nomad, this adds an additional shuffle service task to the executor
task group. This results in a one-to-one mapping of executors to shuffle
services.

When the executor exits, the shuffle service continues running so that it can
serve any results produced by the executor. Due to the nature of resource
allocation in Nomad, the resources allocated to the executor tasks are not
freed until the shuffle service (and the application) has finished.

## Next Steps

Learn how to [integrate Spark with HDFS](/guides/spark/hdfs.html).