.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    http://docs.cilium.io

.. _kata-gce:

******************************
Kata with Cilium on Google GCE
******************************

Kata Containers is an open source project that provides a secure container
runtime with lightweight virtual machines that feel and perform like containers,
but provide stronger workload isolation using hardware virtualization technology
as a second layer of defense.
Similar to the OCI runtime ``runc`` provided by Docker, Cilium can be used with
Kata Containers, providing a higher degree of security at the network layer and
at the compute layer with Kata.
This guide provides a walkthrough of installing Kata with Cilium on GCE.
Kata Containers on Google Compute Engine (GCE) makes use of nested virtualization.
At the time of this writing, nested virtualization support was not yet available
on GKE.

GCE Requirements
================

1. Install the Google Cloud SDK (``gcloud``); see `Installing Google Cloud SDK <https://cloud.google.com/sdk/install>`_.
   Verify your gcloud installation and configuration:

.. code:: bash

    gcloud info || { echo "ERROR: no Google Cloud SDK"; exit 1; }

2. Create a project or use an existing one

.. code:: bash

   export GCE_PROJECT=kata-with-cilium
   gcloud projects create $GCE_PROJECT

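Make the project active so that the image-creation steps below operate on it:

.. code:: bash

   gcloud config set project $GCE_PROJECT
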

Create an image on GCE with Nested Virtualization support
=========================================================

As mentioned before, Kata Containers on Google Compute Engine (GCE) makes use of
nested virtualization. As a prerequisite you need to create an image with
nested virtualization enabled in your currently active GCE project.

1. Choose a base image

Officially supported images are automatically discoverable with:

.. code:: bash

  gcloud compute images list
  NAME                                                  PROJECT            FAMILY                            DEPRECATED  STATUS
  centos-6-v20190423                                    centos-cloud       centos-6                                      READY
  centos-7-v20190423                                    centos-cloud       centos-7                                      READY
  coreos-alpha-2121-0-0-v20190423                       coreos-cloud       coreos-alpha                                  READY
  cos-69-10895-211-0                                    cos-cloud          cos-69-lts                                    READY
  ubuntu-1604-xenial-v20180522                          ubuntu-os-cloud    ubuntu-1604-lts                               READY
  ubuntu-1804-bionic-v20180522                          ubuntu-os-cloud    ubuntu-1804-lts                               READY

Select an image by project and family rather than by name. This ensures that
any scripts or other automation always use a non-deprecated image, complete
with security updates, updates to GCE-specific scripts, etc.

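For example, to see which image the ``ubuntu-1804-lts`` family currently
resolves to:

.. code:: bash

  gcloud compute images describe-from-family ubuntu-1804-lts \
      --project ubuntu-os-cloud
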
2. Create the image with nested virtualization support

.. code:: bash

  SOURCE_IMAGE_PROJECT=ubuntu-os-cloud
  SOURCE_IMAGE_FAMILY=ubuntu-1804-lts
  IMAGE_NAME=${SOURCE_IMAGE_FAMILY}-nested

  gcloud compute images create \
      --source-image-project $SOURCE_IMAGE_PROJECT \
      --source-image-family $SOURCE_IMAGE_FAMILY \
      --licenses=https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx \
      $IMAGE_NAME

If successful, gcloud reports that the image was created.

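You can optionally confirm that the nested-virtualization license was attached
to the new image:

.. code:: bash

  gcloud compute images describe $IMAGE_NAME --format="value(licenses)"
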
3. Verify VMX is enabled

Verify that a virtual machine created with the previous image has VMX enabled.

.. code:: bash

  gcloud compute instances create \
    --image $IMAGE_NAME \
    --machine-type n1-standard-2 \
    --min-cpu-platform "Intel Broadwell" \
    kata-testing

  gcloud compute ssh kata-testing
  # While ssh'd into the VM:
  $ [ -z "$(lscpu | grep GenuineIntel)" ] && { echo "ERROR: Need an Intel CPU"; exit 1; }

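A more direct check is to confirm that the ``vmx`` CPU flag is actually exposed
inside the VM:

.. code:: bash

  # Still inside the kata-testing VM:
  grep -cw vmx /proc/cpuinfo || echo "ERROR: nested virtualization not enabled"
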
Setup Kubernetes with CRI
=========================

The Kata Containers runtime is an OCI-compatible runtime and cannot interact
with the CRI API directly. For this reason, we rely on a CRI implementation
to translate CRI requests into OCI operations. Two implementations are
supported: CRI-O and containerd with the CRI plugin. You have to pick one.

If you select CRI-O, follow the "CRI-O Tutorial" instructions
`here <https://github.com/cri-o/cri-o/blob/master/tutorial.md>`__ to properly install it.
If you select containerd with the CRI plugin, follow the "Getting Started for Developers"
instructions `here <https://github.com/containerd/cri#getting-started-for-developers>`__ to properly install it.

Setup your Kubernetes environment and make sure the following requirements are met:

* Kubernetes >= 1.12
* Linux kernel >= 4.9
* Kubernetes in CNI mode
* Running kube-dns/coredns (when using the etcd-operator installation method)
* BPF filesystem mounted on all worker nodes (see the check below)
* PodCIDR allocation (``--allocate-node-cidrs``) enabled in the ``kube-controller-manager`` (recommended)

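The BPF filesystem requirement, for example, can be checked (and, if missing,
satisfied) on each worker node with:

.. code:: bash

  mount | grep /sys/fs/bpf || mount bpffs /sys/fs/bpf -t bpf
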
Refer to the section :ref:`k8s_requirements` for detailed instructions on how to
prepare your Kubernetes environment.

.. note::
   Kubernetes 1.12 is the minimum version required for the RuntimeClass feature
   used with the Kata Container runtime as described below. It is possible to
   use Kubernetes <= 1.10 with Kata, but that requires a slightly different
   setup which has since been deprecated.

Kubernetes talks to CRI implementations through a container-runtime-endpoint,
also called a CRI socket. The socket path differs depending on which CRI
implementation you chose, and the kubelet service has to be updated accordingly.

Configure Kubernetes for CRI-O
------------------------------

Add ``/etc/systemd/system/kubelet.service.d/0-crio.conf``

::

  [Service]
  Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///var/run/crio/crio.sock"

Configure Kubernetes for containerd
-----------------------------------

Add ``/etc/systemd/system/kubelet.service.d/0-cri-containerd.conf``

::

  [Service]
  Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"

After you update your kubelet service based on the CRI implementation you are
using, reload and restart kubelet.
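
For example, on a systemd-based host:

.. code:: bash

  systemctl daemon-reload
  systemctl restart kubelet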

Deploy Cilium
=============

.. include:: k8s-install-download-release.rst

Generate the required YAML file and deploy it:

.. code:: bash

   helm template cilium \
     --namespace kube-system \
     --set global.containerRuntime.integration=crio \
     > cilium.yaml
   kubectl create -f cilium.yaml

.. note::

   If you are using ``containerd``, set ``global.containerRuntime.integration=containerd``.

Validate Cilium
===============

You can monitor the progress as Cilium and all required components are being installed:

.. parsed-literal::

    kubectl -n kube-system get pods --watch
    NAME                                    READY   STATUS              RESTARTS   AGE
    cilium-cvp8q                            0/1     Init:0/1            0          53s
    cilium-operator-788c55554-gkpbf         0/1     ContainerCreating   0          54s
    cilium-tdzcx                            0/1     Init:0/1            0          53s
    coredns-77b578f78d-km6r4                1/1     Running             0          11m
    coredns-77b578f78d-qr6gq                1/1     Running             0          11m
    kube-proxy-l47rx                        1/1     Running             0          6m28s
    kube-proxy-zj6v5                        1/1     Running             0          6m28s

It may take a couple of minutes for the etcd-operator to bring up the necessary
number of etcd pods to achieve quorum. Once it reaches quorum, all components
should be healthy and ready:

.. parsed-literal::

   kubectl -n kube-system get pods
   NAME                                    READY   STATUS    RESTARTS   AGE
   cilium-cvp8q                            1/1     Running   0          42s
   cilium-operator-788c55554-gkpbf         1/1     Running   2          43s
   cilium-tdzcx                            1/1     Running   0          42s
   coredns-77b578f78d-2khwp                1/1     Running   0          13s
   coredns-77b578f78d-bs6rp                1/1     Running   0          13s
   kube-proxy-l47rx                        1/1     Running   0          6m
   kube-proxy-zj6v5                        1/1     Running   0          6m

For troubleshooting any issues, please refer to :ref:`k8s_install_etcd_operator`.

Install Kata on a running Kubernetes Cluster
============================================

Kubernetes configured with a CRI runtime uses the ``runc`` runtime by default for
running workloads. You will need to configure Kubernetes to be able to use an alternate runtime.

`RuntimeClass <https://kubernetes.io/docs/concepts/containers/runtime-class/>`_
is a Kubernetes feature first introduced in Kubernetes 1.12 as alpha. It provides
a mechanism for selecting the container runtime configuration used to run a
pod's containers.
To use Kata Containers, ensure the RuntimeClass feature gate is enabled on
Kubernetes versions prior to 1.14; it is enabled by default as of Kubernetes 1.14.
See `Feature Gates <https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/>`_
for an explanation of enabling feature gates.
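
On those older versions, a sketch of enabling the gate on the kubelet, via a
drop-in like the ones shown earlier (merge it with any existing
``KUBELET_EXTRA_ARGS``; the API server needs the same flag):

::

  [Service]
  Environment="KUBELET_EXTRA_ARGS=--feature-gates=RuntimeClass=true"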

To install Kata Containers and configure CRI to use Kata in a single step, use
the `kata-deploy <https://github.com/kata-containers/packaging/tree/master/kata-deploy>`_
tool as shown below.

1) Install Kata on a running k8s cluster

.. code:: bash

  kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/4bb97ef14a4ba8170b9d501b3e567037eb0f9a41/kata-deploy/kata-rbac.yaml
  kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/4bb97ef14a4ba8170b9d501b3e567037eb0f9a41/kata-deploy/kata-deploy.yaml

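kata-deploy runs as a DaemonSet in the ``kube-system`` namespace; you can watch
it roll out before proceeding:

.. code:: bash

  kubectl -n kube-system get pods | grep kata-deploy
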
This will install all the required Kata binaries under ``/opt/kata`` and configure
the CRI implementation with the RuntimeClass handlers for the Kata runtime binaries.
Kata Containers can leverage either the QEMU or the Firecracker hypervisor for running
the lightweight VM: the ``kata-fc`` binary runs a Firecracker-isolated Kata Container,
while ``kata-qemu`` runs a QEMU-isolated Kata Container.

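On a node, you can quickly confirm the installation by listing the files placed
under the path mentioned above:

.. code:: bash

  ls /opt/kata
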
2) Create the RuntimeClass resource for Kata-containers

To add a RuntimeClass for QEMU-isolated Kata Containers:

.. tabs::
  .. group-tab:: K8s 1.14

    .. parsed-literal::

      kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/4bb97ef14a4ba8170b9d501b3e567037eb0f9a41/kata-deploy/k8s-1.14/kata-qemu-runtimeClass.yaml

  .. group-tab:: K8s 1.13

    .. parsed-literal::

      kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/4bb97ef14a4ba8170b9d501b3e567037eb0f9a41/kata-deploy/k8s-1.13/kata-qemu-runtimeClass.yaml

  .. group-tab:: K8s 1.12

    .. parsed-literal::

      kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/4bb97ef14a4ba8170b9d501b3e567037eb0f9a41/kata-deploy/k8s-1.13/kata-qemu-runtimeClass.yaml

To add a RuntimeClass for Firecracker-isolated Kata Containers:

.. tabs::
  .. group-tab:: K8s 1.14

    .. parsed-literal::

      kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/4bb97ef14a4ba8170b9d501b3e567037eb0f9a41/kata-deploy/k8s-1.14/kata-fc-runtimeClass.yaml

  .. group-tab:: K8s 1.13

    .. parsed-literal::

      kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/4bb97ef14a4ba8170b9d501b3e567037eb0f9a41/kata-deploy/k8s-1.13/kata-fc-runtimeClass.yaml

  .. group-tab:: K8s 1.12

    .. parsed-literal::

      kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/4bb97ef14a4ba8170b9d501b3e567037eb0f9a41/kata-deploy/k8s-1.13/kata-fc-runtimeClass.yaml

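Verify that both RuntimeClass resources were created:

.. code:: bash

  kubectl get runtimeclass
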
Run Kata Containers with Cilium CNI
===================================

Now that Kata is installed on the k8s cluster, you can run an untrusted workload
with Kata Containers, with Cilium as the CNI.

The following YAML snippet shows how to specify that a workload should use Kata with QEMU:

::

  spec:
    template:
      spec:
        runtimeClassName: kata-qemu

The following YAML snippet shows how to specify that a workload should use Kata with Firecracker:

::

  spec:
    template:
      spec:
        runtimeClassName: kata-fc

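Putting it together, a minimal self-contained Deployment that runs under Kata
with QEMU might look like this (the name and image are illustrative):

.. code:: bash

  kubectl apply -f - <<EOF
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-kata
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: nginx-kata
    template:
      metadata:
        labels:
          app: nginx-kata
      spec:
        runtimeClassName: kata-qemu
        containers:
        - name: nginx
          image: nginx
  EOF
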
To run an example pod with kata-qemu:

.. code:: bash

  kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/4bb97ef14a4ba8170b9d501b3e567037eb0f9a41/kata-deploy/examples/test-deploy-kata-qemu.yaml

To run an example with kata-fc:

.. code:: bash

  kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/4bb97ef14a4ba8170b9d501b3e567037eb0f9a41/kata-deploy/examples/test-deploy-kata-fc.yaml
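
To convince yourself that a pod really runs inside a Kata VM, compare kernels:
inside a Kata Container, ``uname -r`` reports the guest VM kernel rather than
the host kernel (``POD`` below is a placeholder for a pod created by one of the
example deployments):

.. code:: bash

  # Kernel seen by the Kata pod (guest VM kernel):
  kubectl exec POD -- uname -r
  # Kernel on the worker node (run on the node itself), for comparison:
  uname -r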