
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->

<!-- BEGIN STRIP_FOR_RELEASE -->

<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">

<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>

If you are using a released version of Kubernetes, you should
refer to the docs that go with that version.

<strong>
The latest 1.0.x release of this document can be found
[here](http://releases.k8s.io/release-1.0/docs/admin/limitrange/README.md).

Documentation for other releases can be found at
[releases.k8s.io](http://releases.k8s.io).
</strong>
--

<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

Limit Range
========================================
By default, pods run with unbounded CPU and memory limits.  This means that any pod in the
system is able to consume as much CPU and memory as is available on the node that executes it.

Users may want to impose restrictions on the amount of resources a single pod in the system may consume
for a variety of reasons.

For example:

1. Each node in the cluster has 2GB of memory.  The cluster operator does not want to accept pods
that require more than 2GB of memory, since no node in the cluster can satisfy the requirement.  To prevent a
pod from remaining permanently unscheduled, the operator instead chooses to reject pods that request more than 2GB
of memory as part of admission control.
2. A cluster is shared by two communities in an organization that run production and development workloads
respectively.  Production workloads may consume up to 8GB of memory, while development workloads may consume up
to 512MB of memory.  The cluster operator creates a separate namespace for each workload and applies limits to
each namespace.
3. Users may create a pod which consumes resources just below the capacity of a machine.  The leftover space
may be too small to be useful, but big enough for the waste to be costly across the entire cluster.  As a result,
the cluster operator may want to require that a pod consume at least 20% of the memory and CPU of the
average node size, in order to provide for more uniform scheduling and to limit waste.

This example demonstrates how limits can be applied to a Kubernetes namespace to control
min/max resource limits per pod.  In addition, this example demonstrates how you can
apply default resource limits to pods in the absence of an end-user specified value.

See the [LimitRange design doc](../../design/admission_control_limit_range.md) for more information.  For a detailed description of the Kubernetes resource model, see [Resources](../../../docs/user-guide/compute-resources.md).

Step 0: Prerequisites
-----------------------------------------
This example requires a running Kubernetes cluster.  See the [Getting Started guides](../../../docs/getting-started-guides/) for how to get started.

Change to the `<kubernetes>` directory if you're not already there; the paths used below are relative to the repository root.

Step 1: Create a namespace
-----------------------------------------
This example will work in a custom namespace to demonstrate the concepts involved.

Let's create a new namespace called limit-example:

```console
$ kubectl create -f docs/admin/limitrange/namespace.yaml
namespaces/limit-example
$ kubectl get namespaces
NAME            LABELS             STATUS
default         <none>             Active
limit-example   <none>             Active
```
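
The contents of `namespace.yaml` are not reproduced in this walkthrough; a minimal manifest for this step would look roughly like the following (a sketch, not necessarily the exact file shipped in the repository):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: limit-example    # namespace referenced by --namespace=limit-example below
```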

Step 2: Apply a limit to the namespace
-----------------------------------------
Let's create a simple limit in our namespace.

```console
$ kubectl create -f docs/admin/limitrange/limits.yaml --namespace=limit-example
limitranges/mylimits
```

Let's describe the limits that we have imposed in our namespace.

```console
$ kubectl describe limits mylimits --namespace=limit-example
Name:   mylimits
Type      Resource  Min  Max Default
----      --------  ---  --- ---
Pod       memory    6Mi  1Gi -
Pod       cpu       250m   2 -
Container memory    6Mi  1Gi 100Mi
Container cpu       250m   2 250m
```

In this scenario, we have said the following:

1. The total memory usage of a pod across all of its containers must fall between 6Mi and 1Gi.
2. The total cpu usage of a pod across all of its containers must fall between 250m and 2 cores.
3. A container in a pod may consume between 6Mi and 1Gi of memory.  If a container does not
specify an explicit resource limit, it will be given a default limit of 100Mi of memory.
4. A container in a pod may consume between 250m and 2 cores of cpu.  If a container does
not specify an explicit resource limit, it will be given a default limit of 250m of cpu.
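
The `limits.yaml` used above is not reproduced here; a `LimitRange` manifest that expresses the constraints described in this list would look roughly like the following (a sketch reconstructed from the `kubectl describe` output; the actual file may be organized differently):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mylimits
spec:
  limits:
  # Constraints applied to the pod as a whole (sum over its containers).
  - type: Pod
    min:
      cpu: 250m
      memory: 6Mi
    max:
      cpu: "2"
      memory: 1Gi
  # Constraints and defaults applied to each individual container.
  - type: Container
    min:
      cpu: 250m
      memory: 6Mi
    max:
      cpu: "2"
      memory: 1Gi
    default:
      cpu: 250m
      memory: 100Mi
```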

Step 3: Enforcing limits at point of creation
-----------------------------------------
The limits enumerated in a namespace are only enforced when a pod is created or updated in
the cluster.  If you change the limits to a different value range, the change does not affect pods that
were previously created in the namespace.

If a resource (cpu or memory) is being restricted by a limit, the user will get an error at pod
creation time explaining why.
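
These checks are performed in the API server's admission-control phase by the `LimitRanger` admission controller; if that plug-in is not enabled, LimitRange objects are stored but not enforced.  Exactly how the plug-in is enabled depends on how your cluster was deployed; as an illustration only, the API server is typically started with it listed in the admission-control chain:

```console
# Illustrative only -- flag names and the recommended plug-in list vary by release.
$ kube-apiserver --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
```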

Let's first spin up a replication controller that creates a single-container pod to demonstrate
how default values are applied to each pod.

```console
$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS
nginx        nginx          nginx      run=nginx   1
$ kubectl get pods --namespace=limit-example
POD           IP           CONTAINER(S)   IMAGE(S)   HOST          LABELS      STATUS    CREATED          MESSAGE
nginx-ykj4j   10.246.1.3                             10.245.1.3/   run=nginx   Running   About a minute
                           nginx          nginx                                Running   54 seconds
$ kubectl get pods nginx-ykj4j --namespace=limit-example -o yaml | grep resources -C 5
```

```yaml
  containers:
  - capabilities: {}
    image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources:
      limits:
        cpu: 250m
        memory: 100Mi
    terminationMessagePath: /dev/termination-log
    volumeMounts:
```

Note that our nginx container has picked up the namespace default cpu and memory resource limits.

Let's create a pod that exceeds our allowed limits, with a container that requests 3 CPU cores.

```console
$ kubectl create -f docs/admin/limitrange/invalid-pod.yaml --namespace=limit-example
Error from server: Pod "invalid-pod" is forbidden: Maximum CPU usage per pod is 2, but requested 3
```
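
The `invalid-pod.yaml` file is not reproduced here; a pod manifest along the following lines would be rejected with the same error, because its single container (and therefore the pod as a whole) asks for 3 CPU cores while the pod-level maximum is 2 (a sketch; the actual file and container name may differ):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: invalid-pod
spec:
  containers:
  - name: invalid-container          # hypothetical container name
    image: gcr.io/google_containers/serve_hostname
    resources:
      limits:
        cpu: "3"        # exceeds the pod-level max of 2 cores
        memory: 100Mi
```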

Let's create a pod that falls within the allowed limit boundaries.

```console
$ kubectl create -f docs/admin/limitrange/valid-pod.yaml --namespace=limit-example
pods/valid-pod
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 5 resources
```

```yaml
  containers:
  - capabilities: {}
    image: gcr.io/google_containers/serve_hostname
    imagePullPolicy: IfNotPresent
    name: nginx
    resources:
      limits:
        cpu: "1"
        memory: 512Mi
    securityContext:
      capabilities: {}
```

Note that this pod specifies explicit resource limits, so it did not pick up the namespace default values.
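
For comparison, a complete manifest consistent with the output above would look roughly like this (a sketch; the actual `valid-pod.yaml` may differ in metadata and container naming):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: valid-pod
spec:
  containers:
  - name: nginx                      # container name as shown in the output above
    image: gcr.io/google_containers/serve_hostname
    resources:
      limits:
        cpu: "1"       # within the 250m-2 core range, so no default is applied
        memory: 512Mi  # within the 6Mi-1Gi range
```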

Step 4: Cleanup
----------------------------
To remove the resources used by this example, you can just delete the limit-example namespace.

```console
$ kubectl delete namespace limit-example
namespaces/limit-example
$ kubectl get namespaces
NAME      LABELS    STATUS
default   <none>    Active
```

Summary
----------------------------
Cluster operators that want to restrict the amount of resources a single container or pod may consume
are able to define allowable ranges per Kubernetes namespace.  If a pod does not specify explicit hard limits,
the Kubernetes system is able to apply default resource limits in order to constrain the amount of
resources the pod consumes on a node.


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/limitrange/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->