---
layout: "intro"
page_title: "Nomad vs. Mesos with Aurora, Marathon, etc"
sidebar_current: "vs-other-mesos"
description: |-
  Comparison between Nomad and Mesos with Aurora, Marathon, etc
---

# Nomad vs. Mesos with Aurora, Marathon

Mesos is a resource manager: it pools together the resources of a datacenter
and exposes an API for frameworks that implement scheduling and job management
logic. Mesos depends on ZooKeeper to provide both coordination and storage.

There are many different frameworks that integrate with Mesos;
popular general-purpose ones include Aurora and Marathon.
These frameworks allow users to submit jobs and implement scheduling
logic. They depend on Mesos for resource management and on external
systems like ZooKeeper for coordination and storage.

Nomad is architecturally much simpler. It ships as a single binary, used for
both clients and servers, and requires no external services for coordination or storage.
Nomad combines features of both resource managers and schedulers into a single system.
This makes Nomad operationally simpler and enables more sophisticated
optimizations.
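
To make the single-binary point concrete, here is a minimal sketch of two agent
configuration files (the paths, addresses, and counts are placeholders): the same
`nomad agent` binary runs in server mode on a few machines and in client mode on
the rest, with no external coordination or storage service involved.

```hcl
# server.hcl -- placeholder values; run with `nomad agent -config=server.hcl`
data_dir = "/var/lib/nomad"

server {
  enabled          = true
  bootstrap_expect = 3   # wait for three servers before electing a leader
}
```

```hcl
# client.hcl -- placeholder values; run with `nomad agent -config=client.hcl`
data_dir = "/var/lib/nomad"

client {
  enabled = true
  servers = ["10.0.0.10:4647"]   # address of one of the servers
}
```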

Nomad is designed to be a global state, optimistically concurrent scheduler.
Global state means schedulers get access to the entire state of the cluster when
making decisions, enabling richer constraints, job priorities, resource preemption,
and faster placements. Optimistic concurrency allows Nomad to make scheduling
decisions in parallel, which increases throughput, reduces latency, and increases
the scale that can be supported.
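
As a rough illustration of the constraints and priorities mentioned above, the
job specification fragment below (all names and values are placeholders) sets a
priority and a constraint that the scheduler evaluates against its view of the
entire cluster when choosing a placement.

```hcl
job "cache" {
  datacenters = ["dc1"]

  # Priority ranges from 1 to 100 (default 50); higher-priority jobs are
  # scheduled ahead of lower-priority ones.
  priority = 80

  # Only place this job on nodes whose fingerprinted attributes match.
  constraint {
    attribute = "${attr.kernel.name}"
    value     = "linux"
  }

  group "cache" {
    task "redis" {
      driver = "docker"

      config {
        image = "redis:3.2"
      }

      resources {
        cpu    = 500  # MHz
        memory = 256  # MB
      }
    }
  }
}
```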

Mesos does not support federation or multiple failure isolation regions.
Nomad supports multi-datacenter and multi-region configurations for failure
isolation and scalability.
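
In practice these failure-isolation units appear directly in configuration:
each agent is assigned to one region and one datacenter, and a job is submitted
to a single region with a list of datacenters it may run in. A minimal sketch
follows; the region and datacenter names are placeholders.

```hcl
# Agent configuration: this agent belongs to the "us" region and the
# "us-east-1" datacenter. Servers in different regions can be federated.
region     = "us"
datacenter = "us-east-1"
```

```hcl
# Job specification: a job belongs to one region and may be placed in any
# of the listed datacenters within that region.
job "web" {
  region      = "us"
  datacenters = ["us-east-1", "us-west-1"]
  # ... task groups omitted for brevity
}
```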