
# Provisioning Machines

## <a name="get"></a>Get the installer

You will need to run the installer from either a Linux machine or a Darwin (OSX) machine. This machine must be able to reach, via SSH, all of the machines that will become nodes of the Kubernetes cluster.

The installer can run from a machine that will become a node on the cluster, but since the installer's machine holds secrets (such as SSH/SSL keys and certificates), it's best to run it from a machine with limited user access and an encrypted disk.

The machine the installer is run from should remain available for future modifications to the cluster (adding and removing nodes, upgrading Kubernetes).

The binaries are published to the repository's [releases page](https://github.com/apprenda/kismatic/releases). Once downloaded, extract the archive with `tar` or any other archive extraction utility.
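
For example, on a Linux machine the download and extraction might look like the following. The release asset name here is illustrative; check the releases page for the exact file name for your platform and version:
```
# Download a release tarball and extract it (file name is an example; see the releases page)
curl -LO https://github.com/apprenda/kismatic/releases/download/v1.12.0/kismatic-v1.12.0-linux-amd64.tar.gz
tar -xzf kismatic-v1.12.0-linux-amd64.tar.gz
```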

## Generate A Plan File

From the machine you installed Kismatic to, run the following:

`./kismatic install plan`

You will be asked a few questions regarding the decisions you made in the Plan section above. The kismatic installer will then generate a **kismatic-cluster.yaml** file.

As machines are being provisioned, you must record their identity and credentials in this file.

## Create Machines

### <a name="access"></a>Providing access to the Installer

Kismatic deploys packages on each node, so you will need a user with remote passwordless sudo access and an ssh public key added to each node. The same username and keypair must be used for all nodes. This account should only be used by the kismatic installer.

We suggest creating a default user, **kismaticuser**, via:
```
sudo useradd -d /home/kismaticuser -m kismaticuser
sudo passwd kismaticuser
```

The user can be given full, passwordless sudo privileges via:
```
echo "kismaticuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/kismaticuser
sudo chmod 0440 /etc/sudoers.d/kismaticuser
```

We also suggest placing the corresponding private key, **kismaticuser.key**, in the directory you're running the installer from. That directory is also a convenient place to generate the keypair:

`ssh-keygen -t rsa -b 4096 -f kismaticuser.key -P ""`

The resulting public key, **kismaticuser.key.pub**, will need to be copied to each node. `ssh-copy-id` can be convenient for this, or you can simply append its contents to `~/.ssh/authorized_keys` for the **kismaticuser** account on each node.
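
As a minimal sketch, distributing the key with `ssh-copy-id` might look like this, where node1.example.com stands in for one of your node addresses:
```
# Append the public key to kismaticuser's ~/.ssh/authorized_keys on the node
ssh-copy-id -i kismaticuser.key.pub kismaticuser@node1.example.com
```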

There are four pieces of information we will need to be able to address each node (a sketch of how these appear in the plan file follows the table):

<table>
  <tr>
    <td><b>hostname</b></td>
    <td>A short name that machines in the cluster can use to reach each other. If you opt for Kismatic to manage host files for your cluster, this name will be copied to the host files.</td>
  </tr>
  <tr>
    <td><b>ip</b></td>
    <td>The IP address the installer should use to connect to the node. If you don't specify a separate internal_ip for the node, this IP will be used for cluster traffic as well.</td>
  </tr>
  <tr>
    <td><b>internal_ip</b><br/> (optional)</td>
    <td>In many cases nodes will have more than one physical network card or more than one IP address. Specifying an internal IP address allows you to route cluster traffic over a specific network. It's best for Kubernetes components to communicate with each other over a local network, rather than over the internet.</td>
  </tr>
  <tr>
    <td><b>labels</b> <br/> (optional)</td>
    <td>On worker nodes, labels let you identify details of the hardware that you may want to expose to Kubernetes to aid in scheduling decisions. For example, if you have worker nodes with GPUs and worker nodes without, you may want to label the nodes that have GPUs.</td>
  </tr>
</table>
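
To illustrate how this information ends up in the plan file, here is a rough sketch of the SSH credentials and a single worker entry in **kismatic-cluster.yaml**. The values are placeholders, and the exact field names and nesting are determined by the file the installer generates, so treat the generated file as the authority:
```
cluster:
  ssh:
    user: kismaticuser       # the account created above
    ssh_key: kismaticuser.key
    ssh_port: 22
worker:
  expected_count: 1
  nodes:
  - host: worker1            # short hostname
    ip: 203.0.113.10         # address the installer connects to
    internal_ip: 10.0.0.10   # optional: address used for cluster traffic
    labels:
      gpu: "true"            # optional: example hardware label
```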


### Pre-Install Configuration

By default, Kismatic will attempt to install any of the software packages below if they are missing. Installing packages during the run can take significantly more time and bandwidth than pre-baking an image, and it requires internet access to the Kismatic package repository, Docker Hub, and a package repository for your operating system.

If you are building a large cluster, or one that won't have access to these repositories, you will want to [cache the necessary packages](packages.md#synchronizing-a-local-repo) in a repository on your network.

<table>
  <tr>
    <th>Requirement</th>
    <th>Required for</th>
    <th>etcd</th>
    <th>master</th>
    <th>worker</th>
  </tr>
  <tr>
    <td>user and public key installed on all nodes</td>
    <td>Access from kismatic to manage node</td>
    <td>yes</td>
    <td>yes</td>
    <td>yes</td>
  </tr>
  <tr>
    <td>/etc/ssh/sshd_config contains `PubkeyAuthentication yes`</td>
    <td>Access from kismatic to manage node</td>
    <td>yes</td>
    <td>yes</td>
    <td>yes</td>
  </tr>
  <tr>
    <td>Access to an apt or yum repository</td>
    <td>Retrieving binaries over the internet during installation</td>
    <td>yes</td>
    <td>yes</td>
    <td>yes</td>
  </tr>
  <tr>
    <td>Python 2.7</td>
    <td>Kismatic management of nodes</td>
    <td>yes</td>
    <td>yes</td>
    <td>yes</td>
  </tr>
  <tr>
    <td>Kismatic package of Docker 1.11.2</td>
    <td>hosting containers</td>
    <td></td>
    <td>yes</td>
    <td>yes</td>
  </tr>
  <tr>
    <td>Kismatic package of Etcd 3.1.13</td>
    <td>inter-pod networking</td>
    <td>yes</td>
    <td></td>
    <td></td>
  </tr>
  <tr>
    <td>Kismatic package of Kubernetes kubelet 1.10.5</td>
    <td>Kubernetes</td>
    <td></td>
    <td>yes</td>
    <td>yes</td>
  </tr>
  <tr>
    <td>Kismatic package of Kubernetes kubectl 1.10.5</td>
    <td>Kubernetes</td>
    <td></td>
    <td>yes</td>
    <td>yes</td>
  </tr>
</table>
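
Before running the installer, you can spot-check a few of these requirements on each prospective node with commands like the following. This is only a quick sanity check, not a replacement for the inspector described below:
```
# sshd must allow public key authentication
grep -i '^PubkeyAuthentication' /etc/ssh/sshd_config

# Python 2.7 must be installed for Kismatic to manage the node
python2.7 --version

# The package manager must be able to reach its repositories (yum shown; use apt on Debian/Ubuntu)
sudo yum repolist
```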

### Inspector

To double-check that your nodes are fit for purpose, you can run the kismatic inspector. This tool is run on each node as part of validating your cluster and network fitness prior to installation.
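
The installer runs the inspector for you when it validates the plan. If your version of Kismatic supports running the pre-flight checks on their own, you can do so from the directory containing **kismatic-cluster.yaml** with:

`./kismatic install validate`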

## Networking

Enter your network settings in the plan file, including:

* pod networking technique (**routed** or **overlay**)
* CIDR ranges for pod and services networks
* whether the Kismatic installer should manage hosts files for your cluster

Create your DNS CNAME or load balancer alias for your Kubernetes master nodes based on their hostnames.
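
Once the alias exists, a quick sanity check from the installer machine is to confirm that it resolves to your master nodes. The FQDN below is a placeholder for whatever name you created:
```
dig +short kubernetes-master.example.com
```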