github.com/enmand/kubernetes@v1.2.0-alpha.0/docs/admin/multi-cluster.md

<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->

<!-- BEGIN STRIP_FOR_RELEASE -->

<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">

<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>

If you are using a released version of Kubernetes, you should
refer to the docs that go with that version.

<strong>
The latest 1.0.x release of this document can be found
[here](http://releases.k8s.io/release-1.0/docs/admin/multi-cluster.md).

Documentation for other releases can be found at
[releases.k8s.io](http://releases.k8s.io).
</strong>
--

<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Considerations for running multiple Kubernetes clusters

You may want to set up multiple Kubernetes clusters, both to
have clusters in different regions to be nearer to your users, and to tolerate failures and/or invasive maintenance.
This document describes some of the issues to consider when making a decision about doing so.

Note that at present,
Kubernetes does not offer a mechanism to aggregate multiple clusters into a single virtual cluster. However,
we [plan to do this in the future](../proposals/federation.md).

## Scope of a single cluster

On IaaS providers such as Google Compute Engine or Amazon Web Services, a VM exists in a
[zone](https://cloud.google.com/compute/docs/zones) or [availability
zone](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html).
We suggest that all the VMs in a Kubernetes cluster should be in the same availability zone, because:
  - compared to having a single global Kubernetes cluster, there are fewer single points of failure.
  - compared to a cluster that spans availability zones, it is easier to reason about the availability properties of a
    single-zone cluster.
  - when the Kubernetes developers are designing the system (e.g. making assumptions about latency, bandwidth, or
    correlated failures) they assume that all the machines are in a single data center, or otherwise closely connected.

It is okay to have multiple clusters per availability zone, though on balance we think fewer is better.
Reasons to prefer fewer clusters are:
  - improved bin packing of Pods in some cases with more nodes in one cluster (less resource fragmentation).
  - reduced operational overhead (though the advantage is diminished as ops tooling and processes mature).
  - reduced costs for per-cluster fixed resource costs, e.g. apiserver VMs (but small as a percentage
    of overall cluster cost for medium to large clusters).

Reasons to have multiple clusters include:
  - strict security policies requiring isolation of one class of work from another (but, see Partitioning Clusters
    below).
  - test clusters to canary new Kubernetes releases or other cluster software.

## Selecting the right number of clusters

The selection of the number of Kubernetes clusters may be a relatively static choice, only revisited occasionally.
By contrast, the number of nodes in a cluster and the number of pods in a service may change frequently according to
load and growth.

To pick the number of clusters, first, decide which regions you need to be in to have adequate latency to all your end users, for services that will run
on Kubernetes (if you use a Content Distribution Network, the latency requirements for the CDN-hosted content need not
be considered).  Legal issues might influence this as well. For example, a company with a global customer base might decide to have clusters in US, EU, AP, and SA regions.
Call the number of regions to be in `R`.

Second, decide how many clusters should be able to be unavailable at the same time, while the service overall remains
available.  Call the number that can be unavailable `U`.  If you are not sure, then 1 is a fine choice.

If it is allowable for load-balancing to direct traffic to any region in the event of a cluster failure, then
you need at least the larger of `R` or `U + 1` clusters.  If it is not (e.g. you want to ensure low latency for all users in the event of a
cluster failure), then you need to have `R * (U + 1)` clusters (`U + 1` in each of `R` regions).  In any case, try to put each cluster in a different zone.

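The two cluster counts can be sketched as a short calculation. This assumes you must serve `R` regions and tolerate `U` simultaneous cluster failures; the example values (four regions, one tolerable failure) are illustrative, not from any particular deployment:

```shell
#!/bin/sh
# Sketch: compute how many clusters are needed, for example values of
# R (regions you must serve) and U (simultaneous cluster failures to tolerate).
R=4
U=1

# If traffic may fail over to any region, you need U + 1 clusters in total
# (so at least one survives U failures), but no fewer than R (one per region).
if [ "$R" -gt $(( U + 1 )) ]; then
  ANY_REGION=$R
else
  ANY_REGION=$(( U + 1 ))
fi

# If every region must keep serving on its own, each of the R regions
# needs its own U + 1 clusters.
PER_REGION=$(( R * (U + 1) ))

echo "failover to any region: $ANY_REGION clusters"
echo "per-region redundancy:  $PER_REGION clusters"
```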
Finally, if any of your clusters would need more than the maximum recommended number of nodes for a Kubernetes cluster, then
you may need even more clusters.  Kubernetes v1.0 currently supports clusters up to 100 nodes in size, but we are targeting
1000-node clusters by early 2016.

## Working with multiple clusters

When you have multiple clusters, you would typically create services with the same config in each cluster and put each of those
service instances behind a load balancer (AWS Elastic Load Balancer, GCE Forwarding Rule or HTTP Load Balancer) spanning all of them, so that
failures of a single cluster are not visible to end users.

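In practice, creating the same service everywhere often reduces to running one `kubectl` command per cluster context. A minimal sketch, assuming kubeconfig contexts named `us`, `eu`, `ap`, and `sa` and a shared `service.yaml` (all of these names are hypothetical); the function only prints the commands it would run, so you can review them before piping to `sh`:

```shell
#!/bin/sh
# Dry-run sketch: print the kubectl command that would create the same
# Service in each cluster, one context per argument.
# Context names and service.yaml are hypothetical placeholders.
create_everywhere() {
  for ctx in "$@"; do
    printf 'kubectl --context=%s create -f service.yaml\n' "$ctx"
  done
}

create_everywhere us eu ap sa
```

Each per-cluster service's external endpoint would then be registered as a backend of the load balancer that spans the clusters.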
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/multi-cluster.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->