
---
layout: intro
page_title: 'Nomad vs. Mesos with Aurora, Marathon, etc'
sidebar_title: Mesos & Marathon
description: Comparison between Nomad and Mesos with Marathon
---

# Nomad vs. Mesos with Marathon

Mesos is a resource manager: it pools together the resources of a
datacenter and exposes an API for frameworks that implement scheduling
and job management logic. Mesos depends on ZooKeeper to provide both
coordination and storage.

There are many different frameworks that integrate with Mesos;
popular general-purpose ones include Aurora and Marathon.
These frameworks allow users to submit jobs and implement scheduling
logic. They depend on Mesos for resource management, and on external
systems like ZooKeeper for coordination and storage.

Nomad is architecturally much simpler. It ships as a single binary, for both
clients and servers, and requires no external services for coordination or storage.
Nomad combines the features of both resource managers and schedulers into a single
system. This makes Nomad operationally simpler and enables more sophisticated
optimizations.

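Because servers and clients ship in the same binary, a single agent
configuration file selects the role. A minimal sketch of an agent
configuration that enables both roles on one machine (the paths and values
here are illustrative; a production cluster would run servers and clients
on separate machines):

```hcl
# agent.hcl -- illustrative single-node setup
data_dir   = "/opt/nomad/data"
datacenter = "dc1"

server {
  enabled          = true
  bootstrap_expect = 1
}

client {
  enabled = true
}
```

The agent is then started with `nomad agent -config agent.hcl`. By
contrast, a Mesos cluster requires separate master and agent processes
plus a ZooKeeper ensemble before a framework like Marathon can register.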
Nomad is designed to be a global state, optimistically concurrent scheduler.
Global state means schedulers get access to the entire state of the cluster when
making decisions, enabling richer constraints, job priorities, resource preemption,
and faster placements. Optimistic concurrency allows Nomad to make scheduling
decisions in parallel, increasing throughput, reducing latency, and increasing
the scale that can be supported.

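The optimistic path can be sketched in Go: each scheduler plans against a
versioned snapshot of cluster state, and a plan commits only if the state
has not advanced since the snapshot was taken; a losing scheduler retries
against fresh state. The types and names below are illustrative, not
Nomad's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// cluster holds a versioned view of free resources. The index plays the
// role of a monotonically increasing state version (akin to a log index).
type cluster struct {
	mu    sync.Mutex
	index uint64
	free  map[string]int // node name -> unreserved CPU (MHz)
}

// snapshot returns the current version plus a copy of the state, so a
// scheduler can plan without holding any lock.
func (c *cluster) snapshot() (uint64, map[string]int) {
	c.mu.Lock()
	defer c.mu.Unlock()
	cp := make(map[string]int, len(c.free))
	for node, mhz := range c.free {
		cp[node] = mhz
	}
	return c.index, cp
}

// submitPlan commits a placement only if the scheduler planned against
// the latest state; a stale plan is rejected and must be retried.
func (c *cluster) submitPlan(version uint64, node string, mhz int) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if version != c.index || c.free[node] < mhz {
		return false
	}
	c.free[node] -= mhz
	c.index++
	return true
}

func main() {
	c := &cluster{free: map[string]int{"node1": 1000}}

	// Two schedulers plan in parallel from the same snapshot version.
	vA, _ := c.snapshot()
	vB, _ := c.snapshot()

	fmt.Println("A commits:", c.submitPlan(vA, "node1", 600)) // first plan wins
	fmt.Println("B commits:", c.submitPlan(vB, "node1", 600)) // stale: rejected

	// B retries against fresh state; only 400 MHz remain.
	vB2, free := c.snapshot()
	fmt.Println("free:", free["node1"], "B retry:", c.submitPlan(vB2, "node1", 400))
}
```

No global lock is held while planning, so many schedulers can evaluate
placements concurrently; conflicts are detected and resolved only at
commit time.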
Mesos does not support federation or multiple failure isolation regions.
Nomad supports multi-datacenter and multi-region configurations for failure
isolation and scalability.