
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    http://docs.cilium.io

.. _k8s_install_kubespray:

****************************
Installation using Kubespray
****************************

This guide walks through using Kubespray to create an AWS Kubernetes cluster
running Cilium as the CNI. The guide uses:

  - Kubespray v2.6.0
  - The latest `Cilium released version <https://github.com/cilium/cilium/releases>`__ (instructions for selecting the version are given below)

Please consult the `Kubespray Prerequisites <https://github.com/kubernetes-incubator/kubespray#requirements>`__ and the Cilium :ref:`admin_system_reqs`.

Installing Kubespray
====================

.. code:: bash

  $ git clone --branch v2.6.0 https://github.com/kubernetes-incubator/kubespray

Install the dependencies from ``requirements.txt``:

.. code:: bash

  $ cd kubespray
  $ sudo pip install -r requirements.txt

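If you prefer not to install the dependencies system-wide, they can go into a
Python virtual environment instead. This is an optional sketch, not part of
the official Kubespray instructions; it assumes Python 3 with the ``venv``
module is available:

.. code:: bash

  # Optional: keep the Ansible dependencies isolated in a virtualenv
  $ python3 -m venv kubespray-venv
  $ source kubespray-venv/bin/activate
  $ pip install -r requirements.txt
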
Infrastructure Provisioning
===========================

We will use Terraform to provision the AWS infrastructure.

-------------------------
Configure AWS credentials
-------------------------

Export your AWS credentials as environment variables:

.. code:: bash

  export AWS_ACCESS_KEY_ID="www"
  export AWS_SECRET_ACCESS_KEY="xxx"
  export AWS_SSH_KEY_NAME="yyy"
  export AWS_DEFAULT_REGION="zzz"

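Optionally, you can confirm that the exported credentials are valid before
provisioning anything. This is a sketch that assumes the AWS CLI is
installed:

.. code:: bash

  # Should print the account and caller ARN if the credentials work
  $ aws sts get-caller-identity
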
-----------------------------
Configure Terraform Variables
-----------------------------

We will start by specifying the infrastructure needed for the Kubernetes cluster.

.. code:: bash

  $ cd contrib/terraform/aws
  $ cp terraform.tfvars.example terraform.tfvars

Open the file and change any defaults, particularly the number of master,
etcd, and worker nodes. You can set the master and etcd counts to 1 for
deployments that don't need high availability; a minimal example is shown
after the default ``terraform.tfvars`` below. By default, this tutorial will
create:

  - A VPC with 2 public and 2 private subnets
  - Bastion hosts and NAT gateways in the public subnets
  - Three of each (master, etcd, and worker nodes) in the private subnets
  - An AWS ELB in the public subnets for accessing the Kubernetes API from
    the internet
  - Terraform scripts using ``CoreOS`` as the base image

Example ``terraform.tfvars`` file:

.. code:: bash

  #Global Vars
  aws_cluster_name = "kubespray"

  #VPC Vars
  aws_vpc_cidr_block = "XXX.XXX.192.0/18"
  aws_cidr_subnets_private = ["XXX.XXX.192.0/20","XXX.XXX.208.0/20"]
  aws_cidr_subnets_public = ["XXX.XXX.224.0/20","XXX.XXX.240.0/20"]

  #Bastion Host
  aws_bastion_size = "t2.medium"

  #Kubernetes Cluster
  aws_kube_master_num = 3
  aws_kube_master_size = "t2.medium"

  aws_etcd_num = 3
  aws_etcd_size = "t2.medium"

  aws_kube_worker_num = 3
  aws_kube_worker_size = "t2.medium"

  #Settings AWS ELB
  aws_elb_api_port = 6443
  k8s_secure_api_port = 6443
  kube_insecure_apiserver_address = "0.0.0.0"

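For a non-HA deployment, the node counts can be reduced as mentioned above.
A sketch of the relevant overrides, with all other values as in the example:

.. code:: bash

  #Kubernetes Cluster (minimal, non-HA)
  aws_kube_master_num = 1
  aws_etcd_num = 1
  aws_kube_worker_num = 1
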
-----------------------
Apply the configuration
-----------------------

Run ``terraform init`` to initialize the following modules:

  - ``module.aws-vpc``
  - ``module.aws-elb``
  - ``module.aws-iam``

.. code:: bash

  $ terraform init

Once initialized, execute:

.. code:: bash

  $ terraform plan -out=aws_kubespray_plan

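You can optionally inspect the saved plan before applying it; ``terraform
show`` accepts a plan file:

.. code:: bash

  # Review the resources Terraform intends to create
  $ terraform show aws_kubespray_plan
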
This will generate a file, ``aws_kubespray_plan``, describing the execution
plan for the infrastructure that will be created on AWS. To apply it, execute:

.. code:: bash

  $ terraform apply "aws_kubespray_plan"

Terraform automatically creates an Ansible inventory file at ``inventory/hosts``.

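It is worth a quick sanity check that the inventory was generated before
running Ansible. A sketch:

.. code:: bash

  # The file should list the bastion, master, etcd, and worker hosts
  $ cat inventory/hosts
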
Installing Kubernetes cluster with Cilium as CNI
================================================

Kubespray uses Ansible for provisioning and orchestration. Once the
infrastructure is created, you can run the Ansible playbook to install
Kubernetes and all the required dependencies.

We recommend using the `latest released Cilium version <https://github.com/cilium/cilium/releases>`__
by editing ``roles/download/defaults/main.yml``. Open the file, search for
``cilium_version``, and replace the value with the latest release. As an
example, the updated entry will look like: ``cilium_version: "v1.2.0"``.

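If you prefer to script the edit, a one-liner along these lines works; this
is a sketch, the version shown is only an example, and it assumes
``cilium_version`` is an unindented top-level key in that file:

.. code:: bash

  # Pin cilium_version to a specific release (example value)
  $ sed -i 's/^cilium_version:.*/cilium_version: "v1.2.0"/' roles/download/defaults/main.yml
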
Execute the command below from the cloned Kubespray repository, providing the
correct path of the AWS EC2 SSH private key in
``ansible_ssh_private_key_file=<path to EC2 SSH private key file>``:

.. code:: bash

  $ ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=core -e bootstrap_os=coreos -e kube_network_plugin=cilium -b --become-user=root --flush-cache -e ansible_ssh_private_key_file=<path to EC2 SSH private key file>

Validate Cluster
================

To check that the cluster was created successfully, SSH into the bastion host
as the user ``core``.

.. code:: bash

  # Get information about the bastion host
  $ cat ssh-bastion.conf
  $ ssh -i ~/path/to/ec2-key-file.pem core@public_ip_of_bastion_host

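The generated ``ssh-bastion.conf`` is a regular SSH configuration file, so
you can also let SSH route through the bastion for you. A sketch, assuming
you take the internal IP of a master node from ``inventory/hosts``:

.. code:: bash

  # Jump through the bastion to a cluster node using the generated config
  $ ssh -F ssh-bastion.conf core@<internal ip of a master node>
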
Execute the commands below from the bastion host. If ``kubectl`` isn't
installed on the bastion host, you can log in to a master node and run the
commands there. You may need to copy the private key to the bastion host to
access the master node.

.. code:: bash

  $ kubectl get nodes
  $ kubectl get pods -n kube-system

You should see that the nodes are in ``Ready`` state and the Cilium pods are
in ``Running`` state.

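To look at the Cilium pods specifically, you can filter on their label; a
sketch, assuming the standard ``k8s-app=cilium`` label used by the Cilium
DaemonSet:

.. code:: bash

  # One cilium pod per node should be Running
  $ kubectl get pods -n kube-system -l k8s-app=cilium -o wide
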
Demo Application
================

Follow this `link <https://cilium.readthedocs.io/en/stable/gettingstarted/minikube/#step-2-deploy-the-demo-application>`__ to deploy a demo application and verify the correctness of the installation.

Delete Cluster
==============

.. code:: bash

  $ cd contrib/terraform/aws
  $ terraform destroy