---
layout: "intro"
page_title: "Consul vs. Eureka"
sidebar_current: "vs-other-eureka"
description: |-
  Eureka is a service discovery tool that provides a best-effort registry and discovery service. It uses central servers and clients, which typically integrate natively via an SDK. Consul provides a superset of features, such as health checking, key/value storage, ACLs, and multi-datacenter awareness.
---

# Consul vs. Eureka

Eureka is a service discovery tool. The architecture is primarily client/server,
with a set of Eureka servers per datacenter, usually one per availability zone.
Typically, clients of Eureka use an embedded SDK to register and discover services.
For clients that are not natively integrated, a sidecar such as Ribbon is used
to transparently discover services via Eureka.
Eureka provides a weakly consistent view of services, using best-effort replication.
When a client registers with a server, that server will attempt to replicate
to the other servers but provides no guarantee. Service registrations have a short
Time-To-Live (TTL), requiring clients to heartbeat with the servers. Unhealthy services
or nodes stop heartbeating, causing them to time out and be removed from the registry.
Discovery requests can be routed to any server, which can serve stale or missing data due to
the best-effort replication. This simplified model allows for easy cluster administration
and high scalability.

Consul provides a superset of features, including richer health checking, a key/value store,
and multi-datacenter awareness. Consul requires a set of servers in each datacenter, along
with an agent on each client, similar to using a sidecar like Ribbon. The Consul agent allows
most applications to be Consul unaware, performing service registration via configuration
files and discovery via DNS or load balancer sidecars.

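For example, a service definition placed in the agent's configuration directory registers a service along with a health check; the service name, port, and endpoint below are illustrative:

```json
{
  "service": {
    "name": "web",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
```

Other applications can then discover the service via DNS (for example, `web.service.consul`) without any Consul-specific code.
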
Consul provides a strong consistency guarantee, since servers replicate state using the
[Raft protocol](/docs/internals/consensus.html). Consul supports a rich set of health checks,
including TCP, HTTP, Nagios/Sensu-compatible scripts, or TTL-based checks like Eureka's. Client nodes
participate in a [gossip-based health check](/docs/internals/gossip.html), which distributes
the work of health checking, unlike centralized heartbeating, which becomes a scalability challenge.
Discovery requests are routed to the elected Consul leader, which allows them to be strongly consistent
by default. Clients that allow stale reads enable any server to process their request, allowing
for linear scalability like Eureka.

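The routing trade-off can be sketched as a toy model. This only illustrates the two read paths, not Consul's implementation; the server names and replication indexes are made up:

```go
package main

import "fmt"

// server is a toy stand-in for a Consul server: the leader holds the
// latest state, while followers may lag slightly behind.
type server struct {
	name     string
	isLeader bool
	index    uint64 // how far this server's replicated state has advanced
}

// route picks a server for a read. Default (consistent) reads go to the
// leader; stale reads may go to any server, trading freshness for the
// ability to spread load across the whole cluster.
func route(cluster []server, allowStale bool) server {
	if !allowStale {
		for _, s := range cluster {
			if s.isLeader {
				return s
			}
		}
	}
	// Stale read: any server will do; pick the first for simplicity.
	return cluster[0]
}

func main() {
	cluster := []server{
		{name: "follower-1", isLeader: false, index: 41}, // slightly behind
		{name: "leader", isLeader: true, index: 42},
		{name: "follower-2", isLeader: false, index: 42},
	}
	fmt.Println(route(cluster, false).name) // leader
	fmt.Println(route(cluster, true).name)  // follower-1
}
```
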
The strongly consistent nature of Consul means it can be used as a locking service for leader
elections and cluster coordination. Eureka does not provide similar guarantees, and typically
requires running ZooKeeper for services that need to perform coordination or have stronger
consistency needs.

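Leader election on top of such a lock can be sketched as a toy acquire/release over a single key, loosely mirroring Consul's session-based KV acquire semantics. The session names are illustrative, and real Consul ties lock ownership to sessions backed by health checks, so a failed leader's lock is released automatically:

```go
package main

import "fmt"

// kvLock is a toy sketch of lock-based leader election: a key can be
// acquired by at most one session at a time, and the current holder
// acts as the elected leader.
type kvLock struct {
	holder string // session currently holding the lock, "" if free
}

// acquire succeeds only if the key is free or already held by the caller.
func (l *kvLock) acquire(session string) bool {
	if l.holder == "" || l.holder == session {
		l.holder = session
		return true
	}
	return false
}

// release frees the key if the caller holds it.
func (l *kvLock) release(session string) {
	if l.holder == session {
		l.holder = ""
	}
}

func main() {
	lock := &kvLock{}
	fmt.Println(lock.acquire("node-a")) // true: node-a becomes leader
	fmt.Println(lock.acquire("node-b")) // false: key is already held
	lock.release("node-a")              // leader steps down (or its session fails)
	fmt.Println(lock.acquire("node-b")) // true: node-b takes over
}
```
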
Consul provides a toolkit of features needed to support a service-oriented architecture.
This includes service discovery, but also rich health checking, locking, a key/value store, multi-datacenter
federation, an event system, and ACLs. Both Consul and the ecosystem of tools like consul-template
and envconsul try to minimize the application changes required for integration, to avoid needing
native integration via SDKs. Eureka is part of a larger Netflix OSS suite, which expects applications
to be relatively homogeneous and tightly integrated. As a result, Eureka only solves a limited
subset of problems, expecting other tools such as ZooKeeper to be used alongside it.