.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _k8s_install_kubespray:

****************************
Installation using Kubespray
****************************

This guide walks through using Kubespray to create an AWS Kubernetes cluster
running Cilium as the CNI. The guide uses:

  - Kubespray v2.6.0
  - Latest `Cilium released version`_ (instructions for selecting the version are given below)

Please consult `Kubespray Prerequisites <https://github.com/kubernetes-sigs/kubespray#requirements>`__ and Cilium :ref:`admin_system_reqs`.
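
Before proceeding, it can help to sanity-check the tooling on your workstation.
The commands below are purely illustrative; consult the links above for the
actual version requirements:

.. code-block:: shell-session

  $ python --version     # pip is used to install Kubespray's requirements.txt
  $ ansible --version    # installed in the next step if missing
  $ terraform version    # used for provisioning below
  $ aws --version        # optional, handy for verifying credentials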

.. _Cilium released version: `latest released Cilium version`_

Installing Kubespray
====================

.. code-block:: shell-session

  $ git clone --branch v2.6.0 https://github.com/kubernetes-sigs/kubespray

Install the dependencies from ``requirements.txt``:

.. code-block:: shell-session

  $ cd kubespray
  $ sudo pip install -r requirements.txt
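
If you prefer not to install the dependencies system-wide, a virtual
environment works just as well. A minimal sketch, assuming a Python 3
environment is acceptable for your setup (the environment name is arbitrary):

.. code-block:: shell-session

  $ python3 -m venv kubespray-venv     # hypothetical environment name
  $ source kubespray-venv/bin/activate
  $ pip install -r requirements.txt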

Infrastructure Provisioning
===========================

We will use Terraform for provisioning AWS infrastructure.

Configure AWS credentials
-------------------------

Export the variables for your AWS credentials:

.. code-block:: shell-session

  export AWS_ACCESS_KEY_ID="www"
  export AWS_SECRET_ACCESS_KEY="xxx"
  export AWS_SSH_KEY_NAME="yyy"
  export AWS_DEFAULT_REGION="zzz"
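
If you have the AWS CLI installed, you can confirm the exported credentials
and SSH key pair before provisioning anything:

.. code-block:: shell-session

  $ aws sts get-caller-identity                                 # verify the credentials work
  $ aws ec2 describe-key-pairs --key-names "$AWS_SSH_KEY_NAME"  # key pair must exist in the region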

Configure Terraform Variables
-----------------------------

We will start by specifying the infrastructure needed for the Kubernetes cluster.

.. code-block:: shell-session

  $ cd contrib/terraform/aws
  $ cp terraform.tfvars.example terraform.tfvars

Open the file and change any defaults, particularly the number of master,
etcd, and worker nodes. You can set the master and etcd counts to 1 for
deployments that don't need high availability. By default, this tutorial
will create:

  - A VPC with 2 public and 2 private subnets
  - Bastion hosts and NAT gateways in the public subnets
  - Three each of master, etcd, and worker nodes in the private subnets
  - An AWS ELB in the public subnets for accessing the Kubernetes API from
    the internet
  - Terraform scripts using ``CoreOS`` as the base image

Example ``terraform.tfvars`` file:

.. code-block:: bash

  #Global Vars
  aws_cluster_name = "kubespray"

  #VPC Vars
  aws_vpc_cidr_block = "XXX.XXX.192.0/18"
  aws_cidr_subnets_private = ["XXX.XXX.192.0/20","XXX.XXX.208.0/20"]
  aws_cidr_subnets_public = ["XXX.XXX.224.0/20","XXX.XXX.240.0/20"]

  #Bastion Host
  aws_bastion_size = "t2.medium"

  #Kubernetes Cluster

  aws_kube_master_num = 3
  aws_kube_master_size = "t2.medium"

  aws_etcd_num = 3
  aws_etcd_size = "t2.medium"

  aws_kube_worker_num = 3
  aws_kube_worker_size = "t2.medium"

  #Settings AWS ELB

  aws_elb_api_port = 6443
  k8s_secure_api_port = 6443
  kube_insecure_apiserver_address = "0.0.0.0"
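
Optionally, after editing, ``terraform fmt`` normalizes the file's formatting
and will error out if the file is not valid HCL:

.. code-block:: shell-session

  $ terraform fmt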

Apply the configuration
-----------------------

Run ``terraform init`` to initialize the following modules:

  - ``module.aws-vpc``
  - ``module.aws-elb``
  - ``module.aws-iam``

.. code-block:: shell-session

  $ terraform init

Once initialized, execute:

.. code-block:: shell-session

  $ terraform plan -out=aws_kubespray_plan

This generates a file, ``aws_kubespray_plan``, describing the execution plan
for the infrastructure that will be created on AWS. To apply the plan, execute:

.. code-block:: shell-session

  $ terraform apply "aws_kubespray_plan"

Terraform automatically creates an Ansible inventory file at ``inventory/hosts``.
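
You can inspect the generated inventory (from the root of the Kubespray
repository) to check the hosts Terraform created:

.. code-block:: shell-session

  $ cat ./inventory/hosts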

Installing Kubernetes cluster with Cilium as CNI
================================================

Kubespray uses Ansible as its substrate for provisioning and orchestration. Once the infrastructure is created, you can run the Ansible playbook to install Kubernetes and all the required dependencies. Execute the command below from the cloned Kubespray repository, providing the correct path to the AWS EC2 SSH private key in ``ansible_ssh_private_key_file=<path to EC2 SSH private key file>``.

We recommend using the `latest released Cilium version`_ by passing the ``cilium_version`` variable when running the ``ansible-playbook`` command. For example, you could add the following flag to the command below: ``-e cilium_version=v1.11.0``.

.. code-block:: shell-session

  $ ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=core -e bootstrap_os=coreos -e kube_network_plugin=cilium -b --become-user=root --flush-cache -e ansible_ssh_private_key_file=<path to EC2 SSH private key file>

.. _latest released Cilium version: https://github.com/cilium/cilium/releases

If you want to customize your Kubernetes cluster setup, consider copying the sample inventory. You can then edit the variables in the relevant file in the ``group_vars`` directory.

.. code-block:: shell-session

  $ cp -r inventory/sample inventory/my-inventory
  $ cp ./inventory/hosts ./inventory/my-inventory/hosts
  $ echo 'cilium_version: "v1.11.0"' >> ./inventory/my-inventory/group_vars/k8s_cluster/k8s-net-cilium.yml
  $ ansible-playbook -i ./inventory/my-inventory/hosts ./cluster.yml -e ansible_user=core -e bootstrap_os=coreos -e kube_network_plugin=cilium -b --become-user=root --flush-cache -e ansible_ssh_private_key_file=<path to EC2 SSH private key file>

Validate Cluster
================

To verify that the cluster was created successfully, SSH into the bastion host as the user ``core``.

.. code-block:: shell-session

  $ # Get information about the bastion host
  $ cat ssh-bastion.conf
  $ ssh -i ~/path/to/ec2-key-file.pem core@public_ip_of_bastion_host

Execute the commands below from the bastion host. If ``kubectl`` isn't installed on the bastion host, you can log in to a master node and run the commands there. You may need to copy the private key to the bastion host to access the master node.
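
A minimal sketch of that hop, using the same placeholder key and host names
as above:

.. code-block:: shell-session

  $ # Copy the key to the bastion host, then use it to reach a master node
  $ scp -i ~/path/to/ec2-key-file.pem ~/path/to/ec2-key-file.pem core@public_ip_of_bastion_host:~/
  $ ssh -i ~/path/to/ec2-key-file.pem core@public_ip_of_bastion_host
  $ ssh -i ~/ec2-key-file.pem core@private_ip_of_master_node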

.. include:: k8s-install-validate.rst

Delete Cluster
==============

.. code-block:: shell-session

  $ cd contrib/terraform/aws
  $ terraform destroy
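
``terraform destroy`` prompts for confirmation before removing anything. If
you want to preview what would be deleted first, you can generate a destroy
plan:

.. code-block:: shell-session

  $ terraform plan -destroy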