
     1  <!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
     2  
     3  <!-- BEGIN STRIP_FOR_RELEASE -->
     4  
     5  <img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     6       width="25" height="25">
     7  <img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     8       width="25" height="25">
     9  <img src="http://kubernetes.io/img/warning.png" alt="WARNING"
    10       width="25" height="25">
    11  <img src="http://kubernetes.io/img/warning.png" alt="WARNING"
    12       width="25" height="25">
    13  <img src="http://kubernetes.io/img/warning.png" alt="WARNING"
    14       width="25" height="25">
    15  
    16  <h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>
    17  
    18  If you are using a released version of Kubernetes, you should
    19  refer to the docs that go with that version.
    20  
    21  <strong>
    22  The latest 1.0.x release of this document can be found
    23  [here](http://releases.k8s.io/release-1.0/docs/getting-started-guides/coreos/bare_metal_offline.md).
    24  
    25  Documentation for other releases can be found at
    26  [releases.k8s.io](http://releases.k8s.io).
    27  </strong>
    28  --
    29  
    30  <!-- END STRIP_FOR_RELEASE -->
    31  
    32  <!-- END MUNGE: UNVERSIONED_WARNING -->
    33  Bare Metal CoreOS with Kubernetes (OFFLINE)
    34  ------------------------------------------
Deploy a CoreOS environment running Kubernetes. This particular guide is written to help those on an OFFLINE system, whether you are testing a POC before the real deal or your applications are restricted to being totally offline.
    36  
    37  **Table of Contents**
    38  
    39  - [Prerequisites](#prerequisites)
    40  - [High Level Design](#high-level-design)
- [This guide's variables](#this-guides-variables)
    42  - [Setup PXELINUX CentOS](#setup-pxelinux-centos)
    43  - [Adding CoreOS to PXE](#adding-coreos-to-pxe)
    44  - [DHCP configuration](#dhcp-configuration)
    45  - [Kubernetes](#kubernetes)
    46  - [Cloud Configs](#cloud-configs)
    47      - [master.yml](#masteryml)
    48      - [node.yml](#nodeyml)
    49  - [New pxelinux.cfg file](#new-pxelinuxcfg-file)
    50  - [Specify the pxelinux targets](#specify-the-pxelinux-targets)
    51  - [Creating test pod](#creating-test-pod)
    52  - [Helping commands for debugging](#helping-commands-for-debugging)
    53  
    54  
    55  ## Prerequisites
    56  
1. A *CentOS 6* host installed as the PXE server
    58  2. At least two bare metal nodes to work with
    59  
    60  ## High Level Design
    61  
1. Manage the tftp directory
  * /tftpboot/(coreos)(centos)(RHEL)
  * /tftpboot/pxelinux.0/(MAC) -> linked to Linux image config file
2. Update the pxelinux link on each install
3. Update the DHCP config to reflect the host needing deployment
4. Set up nodes to deploy CoreOS, creating an etcd cluster
5. Work without access to the public [etcd discovery tool](https://discovery.etcd.io/)
6. Install the CoreOS slaves to become Kubernetes nodes
    70  
## This guide's variables
    72  
    73  | Node Description              | MAC               | IP          |
    74  | :---------------------------- | :---------------: | :---------: |
    75  | CoreOS/etcd/Kubernetes Master | d0:00:67:13:0d:00 | 10.20.30.40 |
    76  | CoreOS Slave 1                | d0:00:67:13:0d:01 | 10.20.30.41 |
    77  | CoreOS Slave 2                | d0:00:67:13:0d:02 | 10.20.30.42 |
    78  
    79  
    80  ## Setup PXELINUX CentOS
    81  
    82  To setup CentOS PXELINUX environment there is a complete [guide here](http://docs.fedoraproject.org/en-US/Fedora/7/html/Installation_Guide/ap-pxe-server.html). This section is the abbreviated version.
    83  
    84  1. Install packages needed on CentOS
    85  
    86          sudo yum install tftp-server dhcp syslinux
    87  
2. Enable the tftp service by editing `/etc/xinetd.d/tftp` and changing `disable` to `no`:

        disable = no
    90  
    91  3. Copy over the syslinux images we will need.
    92  
    93          su -
    94          mkdir -p /tftpboot
    95          cd /tftpboot
    96          cp /usr/share/syslinux/pxelinux.0 /tftpboot
    97          cp /usr/share/syslinux/menu.c32 /tftpboot
    98          cp /usr/share/syslinux/memdisk /tftpboot
    99          cp /usr/share/syslinux/mboot.c32 /tftpboot
   100          cp /usr/share/syslinux/chain.c32 /tftpboot
   101  
   102          /sbin/service dhcpd start
   103          /sbin/service xinetd start
   104          /sbin/chkconfig tftp on
   105  
   106  4. Setup default boot menu
   107  
   108          mkdir /tftpboot/pxelinux.cfg
   109          touch /tftpboot/pxelinux.cfg/default
   110  
   111  5. Edit the menu `vi /tftpboot/pxelinux.cfg/default`
   112  
   113          default menu.c32
   114          prompt 0
   115          timeout 15
   116          ONTIMEOUT local
   117          display boot.msg
   118  
   119          MENU TITLE Main Menu
   120  
   121          LABEL local
   122                  MENU LABEL Boot local hard drive
   123                  LOCALBOOT 0
   124  
   125  Now you should have a working PXELINUX setup to image CoreOS nodes. You can verify the services by using VirtualBox locally or with bare metal servers.
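Before testing a network boot, it can help to confirm that the files above all landed under the TFTP root. A minimal sketch (`check_tftpboot` is a hypothetical helper; the file list matches the copy steps above):

```shell
# Verify the PXELINUX files copied above are present under the TFTP root.
# usage: check_tftpboot /tftpboot
check_tftpboot() {
  root=$1
  for f in pxelinux.0 menu.c32 memdisk mboot.c32 chain.c32 pxelinux.cfg/default; do
    if [ -e "$root/$f" ]; then
      echo "ok: $f"
    else
      echo "MISSING: $f"
    fi
  done
}
```

Run `check_tftpboot /tftpboot` on the PXE server; any `MISSING` line points at a skipped copy step.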
   126  
   127  ## Adding CoreOS to PXE
   128  
   129  This section describes how to setup the CoreOS images to live alongside a pre-existing PXELINUX environment.
   130  
1. Find or create the TFTP root directory that everything will be based on.
    * For this document we will assume `/tftpboot/` is our root directory.
2. Inside the tftp root, create a new directory structure for the CoreOS images.
3. Download the CoreOS PXE files provided by the CoreOS team.
   135  
   136          MY_TFTPROOT_DIR=/tftpboot
   137          mkdir -p $MY_TFTPROOT_DIR/images/coreos/
   138          cd $MY_TFTPROOT_DIR/images/coreos/
   139          wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz
   140          wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz.sig
   141          wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz
   142          wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz.sig
        # Import the CoreOS Image Signing Key first, or these verifications will fail
        gpg --verify coreos_production_pxe.vmlinuz.sig
        gpg --verify coreos_production_pxe_image.cpio.gz.sig
   145  
   146  4. Edit the menu `vi /tftpboot/pxelinux.cfg/default` again
   147  
   148          default menu.c32
   149          prompt 0
   150          timeout 300
   151          ONTIMEOUT local
   152          display boot.msg
   153  
   154          MENU TITLE Main Menu
   155  
   156          LABEL local
   157                  MENU LABEL Boot local hard drive
   158                  LOCALBOOT 0
   159  
   160          MENU BEGIN CoreOS Menu
   161  
   162              LABEL coreos-master
   163                  MENU LABEL CoreOS Master
   164                  KERNEL images/coreos/coreos_production_pxe.vmlinuz
   165                  APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<xxx.xxx.xxx.xxx>/pxe-cloud-config-single-master.yml
   166  
   167              LABEL coreos-slave
   168                  MENU LABEL CoreOS Slave
   169                  KERNEL images/coreos/coreos_production_pxe.vmlinuz
   170                  APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<xxx.xxx.xxx.xxx>/pxe-cloud-config-slave.yml
   171          MENU END
   172  
This configuration will now boot from the local drive by default, with menu options to PXE-image CoreOS as a master or a slave.
   174  
   175  ## DHCP configuration
   176  
   177  This section covers configuring the DHCP server to hand out our new images. In this case we are assuming that there are other servers that will boot alongside other images.
   178  
1. Add the `filename` to the _host_ or _subnet_ sections. Note that the path is relative to the TFTP root (`/tftpboot`):

        filename "/pxelinux.0";
   182  
2. Declare each CoreOS host in the _subnet_ section so that it receives a fixed address and boots `pxelinux.0`. The pxelinux configuration files that act as templates for the different CoreOS deployments are created later in this guide.
   184  
   185          subnet 10.20.30.0 netmask 255.255.255.0 {
   186                  next-server 10.20.30.242;
   187                  option broadcast-address 10.20.30.255;
   188                  filename "<other default image>";
   189  
   190                  ...
   191                  # http://www.syslinux.org/wiki/index.php/PXELINUX
   192                  host core_os_master {
   193                          hardware ethernet d0:00:67:13:0d:00;
   194                          option routers 10.20.30.1;
   195                          fixed-address 10.20.30.40;
   196                          option domain-name-servers 10.20.30.242;
   197                          filename "/pxelinux.0";
   198                  }
   199                  host core_os_slave {
   200                          hardware ethernet d0:00:67:13:0d:01;
   201                          option routers 10.20.30.1;
   202                          fixed-address 10.20.30.41;
   203                          option domain-name-servers 10.20.30.242;
   204                          filename "/pxelinux.0";
   205                  }
   206                  host core_os_slave2 {
   207                          hardware ethernet d0:00:67:13:0d:02;
   208                          option routers 10.20.30.1;
   209                          fixed-address 10.20.30.42;
   210                          option domain-name-servers 10.20.30.242;
   211                          filename "/pxelinux.0";
   212                  }
   213                  ...
   214          }
   215  
   216  We will be specifying the node configuration later in the guide.
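To cross-check the host stanzas against the MAC/IP table at the top of this guide, the pairs can be pulled back out of the config. A sketch assuming the stanza layout above (`list_dhcp_hosts` is a hypothetical helper; the path is the CentOS 6 default):

```shell
# Print "MAC IP" pairs from the host stanzas in a dhcpd config.
# usage: list_dhcp_hosts /etc/dhcp/dhcpd.conf
list_dhcp_hosts() {
  awk '/hardware ethernet/ { mac = $3; sub(/;/, "", mac) }
       /fixed-address/     { ip  = $2; sub(/;/, "", ip); print mac, ip }' "$1"
}
```

Each output line should match a row of the table in [This guide's variables](#this-guides-variables).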
   217  
   218  ## Kubernetes
   219  
To deploy our configuration we need to create an `etcd` master. To do so we want to PXE-boot CoreOS with a specific cloud-config.yml. There are two options here:

1. Template the cloud-config file and programmatically create new static configs for different cluster setups.
2. Run a service-discovery protocol in our stack to do auto discovery.

For this demo we will just create a single static `etcd` server to host the Kubernetes and `etcd` master services.

Since we are OFFLINE, most of the helper services CoreOS and Kubernetes normally rely on are unavailable. To do our setup we will have to download the Kubernetes binaries and serve them from our local environment.
   227  
   228  An easy solution is to host a small web server on the DHCP/TFTP host for all our binaries to make them available to the local CoreOS PXE machines.
   229  
To get this up and running we are going to set up a simple `apache` server to serve the binaries needed to bootstrap Kubernetes.

This is on the PXE server from the previous section; install and start Apache first if it is not already present:

    yum install -y httpd
    /sbin/service httpd start
    /sbin/chkconfig httpd on
    rm /etc/httpd/conf.d/welcome.conf
    cd /var/www/html/
   236      wget -O kube-register  https://github.com/kelseyhightower/kube-register/releases/download/v0.0.2/kube-register-0.0.2-linux-amd64
   237      wget -O setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment
   238      wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubernetes --no-check-certificate
   239      wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-apiserver --no-check-certificate
   240      wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-controller-manager --no-check-certificate
   241      wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-scheduler --no-check-certificate
   242      wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubectl --no-check-certificate
   243      wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubecfg --no-check-certificate
   244      wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubelet --no-check-certificate
   245      wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-proxy --no-check-certificate
   246      wget -O flanneld https://storage.googleapis.com/k8s/flanneld --no-check-certificate
   247  
This provides the binaries we need to run Kubernetes. In the future this process would need to be enhanced to download updated binaries from the Internet.
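The run of `wget` calls for the Kubernetes binaries can also be generated from a loop; this sketch only prints the URLs so the list is easy to audit before downloading:

```shell
# Print the download URL for each Kubernetes v0.15.0 binary fetched above.
K8S_URL=https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64
for bin in kubernetes kube-apiserver kube-controller-manager kube-scheduler \
           kubectl kubecfg kubelet kube-proxy; do
  echo "$K8S_URL/$bin"
done
```

Pipe the output into `xargs -n1 wget --no-check-certificate` to perform the actual downloads.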
   249  
   250  Now for the good stuff!
   251  
   252  ## Cloud Configs
   253  
   254  The following config files are tailored for the OFFLINE version of a Kubernetes deployment.
   255  
   256  These are based on the work found here: [master.yml](cloud-configs/master.yaml), [node.yml](cloud-configs/node.yaml)
   257  
   258  To make the setup work, you need to replace a few placeholders:
   259  
   260   - Replace `<PXE_SERVER_IP>` with your PXE server ip address (e.g. 10.20.30.242)
   261   - Replace `<MASTER_SERVER_IP>` with the Kubernetes master ip address (e.g. 10.20.30.40)
   262   - If you run a private docker registry, replace `rdocker.example.com` with your docker registry dns name.
   263   - If you use a proxy, replace `rproxy.example.com` with your proxy server (and port)
   264   - Add your own SSH public key(s) to the cloud config at the end
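These substitutions are easy to script; a minimal sketch using `sed` with this guide's example addresses (`fill_placeholders` is a hypothetical helper):

```shell
# Replace the <PXE_SERVER_IP> and <MASTER_SERVER_IP> placeholders on stdin.
PXE_SERVER_IP=10.20.30.242
MASTER_SERVER_IP=10.20.30.40
fill_placeholders() {
  sed -e "s/<PXE_SERVER_IP>/${PXE_SERVER_IP}/g" \
      -e "s/<MASTER_SERVER_IP>/${MASTER_SERVER_IP}/g"
}
# example: fill_placeholders < master-template.yml > /var/www/html/coreos/pxe-cloud-config-master.yml
```

The registry, proxy, and SSH-key edits still have to be made by hand.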
   265  
   266  ### master.yml
   267  
On the PXE server, create the file and fill in the variables: `vi /var/www/html/coreos/pxe-cloud-config-master.yml`.
   269  
   270  
   271      #cloud-config
   272      ---
   273      write_files:
   274        - path: /opt/bin/waiter.sh
   275          owner: root
   276          content: |
   277            #! /usr/bin/bash
   278            until curl http://127.0.0.1:4001/v2/machines; do sleep 2; done
   279        - path: /opt/bin/kubernetes-download.sh
   280          owner: root
   281          permissions: 0755
   282          content: |
   283            #! /usr/bin/bash
   284            /usr/bin/wget -N -P "/opt/bin" "http://<PXE_SERVER_IP>/kubectl"
   285            /usr/bin/wget -N -P "/opt/bin" "http://<PXE_SERVER_IP>/kubernetes"
   286            /usr/bin/wget -N -P "/opt/bin" "http://<PXE_SERVER_IP>/kubecfg"
   287            chmod +x /opt/bin/*
   288        - path: /etc/profile.d/opt-path.sh
   289          owner: root
   290          permissions: 0755
   291          content: |
   292            #! /usr/bin/bash
          PATH=$PATH:/opt/bin
   294      coreos:
   295        units:
   296          - name: 10-eno1.network
   297            runtime: true
   298            content: |
   299              [Match]
   300              Name=eno1
   301              [Network]
   302              DHCP=yes
   303          - name: 20-nodhcp.network
   304            runtime: true
   305            content: |
   306              [Match]
   307              Name=en*
   308              [Network]
   309              DHCP=none
   310          - name: get-kube-tools.service
   311            runtime: true
   312            command: start
   313            content: |
   314              [Service]
   315              ExecStartPre=-/usr/bin/mkdir -p /opt/bin
   316              ExecStart=/opt/bin/kubernetes-download.sh
   317              RemainAfterExit=yes
   318              Type=oneshot
   319          - name: setup-network-environment.service
   320            command: start
   321            content: |
   322              [Unit]
   323              Description=Setup Network Environment
   324              Documentation=https://github.com/kelseyhightower/setup-network-environment
   325              Requires=network-online.target
   326              After=network-online.target
   327              [Service]
   328              ExecStartPre=-/usr/bin/mkdir -p /opt/bin
   329              ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/setup-network-environment
   330              ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
   331              ExecStart=/opt/bin/setup-network-environment
   332              RemainAfterExit=yes
   333              Type=oneshot
   334          - name: etcd.service
   335            command: start
   336            content: |
   337              [Unit]
   338              Description=etcd
   339              Requires=setup-network-environment.service
   340              After=setup-network-environment.service
   341              [Service]
   342              EnvironmentFile=/etc/network-environment
   343              User=etcd
   344              PermissionsStartOnly=true
   345              ExecStart=/usr/bin/etcd \
   346              --name ${DEFAULT_IPV4} \
   347              --addr ${DEFAULT_IPV4}:4001 \
   348              --bind-addr 0.0.0.0 \
   349              --cluster-active-size 1 \
   350              --data-dir /var/lib/etcd \
   351              --http-read-timeout 86400 \
   352              --peer-addr ${DEFAULT_IPV4}:7001 \
   353              --snapshot true
   354              Restart=always
   355              RestartSec=10s
   356          - name: fleet.socket
   357            command: start
   358            content: |
   359              [Socket]
   360              ListenStream=/var/run/fleet.sock
   361          - name: fleet.service
   362            command: start
   363            content: |
   364              [Unit]
   365              Description=fleet daemon
   366              Wants=etcd.service
   367              After=etcd.service
   368              Wants=fleet.socket
   369              After=fleet.socket
   370              [Service]
   371              Environment="FLEET_ETCD_SERVERS=http://127.0.0.1:4001"
   372              Environment="FLEET_METADATA=role=master"
   373              ExecStart=/usr/bin/fleetd
   374              Restart=always
   375              RestartSec=10s
   376          - name: etcd-waiter.service
   377            command: start
   378            content: |
   379              [Unit]
   380              Description=etcd waiter
   381              Wants=network-online.target
   382              Wants=etcd.service
   383              After=etcd.service
   384              After=network-online.target
   385              Before=flannel.service
   386              Before=setup-network-environment.service
   387              [Service]
   388              ExecStartPre=/usr/bin/chmod +x /opt/bin/waiter.sh
   389              ExecStart=/usr/bin/bash /opt/bin/waiter.sh
   390              RemainAfterExit=true
   391              Type=oneshot
   392          - name: flannel.service
   393            command: start
   394            content: |
   395              [Unit]
   396              Wants=etcd-waiter.service
   397              After=etcd-waiter.service
   398              Requires=etcd.service
   399              After=etcd.service
   400              After=network-online.target
   401              Wants=network-online.target
   402              Description=flannel is an etcd backed overlay network for containers
   403              [Service]
   404              Type=notify
   405              ExecStartPre=-/usr/bin/mkdir -p /opt/bin
   406              ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/flanneld
   407              ExecStartPre=/usr/bin/chmod +x /opt/bin/flanneld
   408              ExecStartPre=-/usr/bin/etcdctl mk /coreos.com/network/config '{"Network":"10.100.0.0/16", "Backend": {"Type": "vxlan"}}'
   409              ExecStart=/opt/bin/flanneld
   410          - name: kube-apiserver.service
   411            command: start
   412            content: |
   413              [Unit]
   414              Description=Kubernetes API Server
   415              Documentation=https://github.com/kubernetes/kubernetes
   416              Requires=etcd.service
   417              After=etcd.service
   418              [Service]
   419              ExecStartPre=-/usr/bin/mkdir -p /opt/bin
   420              ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-apiserver
   421              ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-apiserver
   422              ExecStart=/opt/bin/kube-apiserver \
   423              --address=0.0.0.0 \
   424              --port=8080 \
   425              --service-cluster-ip-range=10.100.0.0/16 \
   426              --etcd-servers=http://127.0.0.1:4001 \
   427              --logtostderr=true
   428              Restart=always
   429              RestartSec=10
   430          - name: kube-controller-manager.service
   431            command: start
   432            content: |
   433              [Unit]
   434              Description=Kubernetes Controller Manager
   435              Documentation=https://github.com/kubernetes/kubernetes
   436              Requires=kube-apiserver.service
   437              After=kube-apiserver.service
   438              [Service]
   439              ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-controller-manager
   440              ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-controller-manager
   441              ExecStart=/opt/bin/kube-controller-manager \
   442              --master=127.0.0.1:8080 \
   443              --logtostderr=true
   444              Restart=always
   445              RestartSec=10
   446          - name: kube-scheduler.service
   447            command: start
   448            content: |
   449              [Unit]
   450              Description=Kubernetes Scheduler
   451              Documentation=https://github.com/kubernetes/kubernetes
   452              Requires=kube-apiserver.service
   453              After=kube-apiserver.service
   454              [Service]
   455              ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-scheduler
   456              ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-scheduler
   457              ExecStart=/opt/bin/kube-scheduler --master=127.0.0.1:8080
   458              Restart=always
   459              RestartSec=10
   460          - name: kube-register.service
   461            command: start
   462            content: |
   463              [Unit]
   464              Description=Kubernetes Registration Service
   465              Documentation=https://github.com/kelseyhightower/kube-register
   466              Requires=kube-apiserver.service
   467              After=kube-apiserver.service
   468              Requires=fleet.service
   469              After=fleet.service
   470              [Service]
   471              ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-register
   472              ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-register
   473              ExecStart=/opt/bin/kube-register \
   474              --metadata=role=node \
   475              --fleet-endpoint=unix:///var/run/fleet.sock \
   476              --healthz-port=10248 \
   477              --api-endpoint=http://127.0.0.1:8080
   478              Restart=always
   479              RestartSec=10
   480        update:
   481          group: stable
   482          reboot-strategy: off
   483      ssh_authorized_keys:
   484        - ssh-rsa AAAAB3NzaC1yc2EAAAAD...
   485  
   486  
   487  ### node.yml
   488  
On the PXE server, create the file and fill in the variables: `vi /var/www/html/coreos/pxe-cloud-config-slave.yml`.
   490  
   491      #cloud-config
   492      ---
   493      write_files:
   494        - path: /etc/default/docker
   495          content: |
   496            DOCKER_EXTRA_OPTS='--insecure-registry="rdocker.example.com:5000"'
   497      coreos:
   498        units:
   499          - name: 10-eno1.network
   500            runtime: true
   501            content: |
   502              [Match]
   503              Name=eno1
   504              [Network]
   505              DHCP=yes
   506          - name: 20-nodhcp.network
   507            runtime: true
   508            content: |
   509              [Match]
   510              Name=en*
   511              [Network]
   512              DHCP=none
   513          - name: etcd.service
   514            mask: true
   515          - name: docker.service
   516            drop-ins:
   517              - name: 50-insecure-registry.conf
   518                content: |
   519                  [Service]
   520                  Environment="HTTP_PROXY=http://rproxy.example.com:3128/" "NO_PROXY=localhost,127.0.0.0/8,rdocker.example.com"
   521          - name: fleet.service
   522            command: start
   523            content: |
   524              [Unit]
   525              Description=fleet daemon
   526              Wants=fleet.socket
   527              After=fleet.socket
   528              [Service]
   529              Environment="FLEET_ETCD_SERVERS=http://<MASTER_SERVER_IP>:4001"
   530              Environment="FLEET_METADATA=role=node"
   531              ExecStart=/usr/bin/fleetd
   532              Restart=always
   533              RestartSec=10s
   534          - name: flannel.service
   535            command: start
   536            content: |
   537              [Unit]
   538              After=network-online.target
   539              Wants=network-online.target
   540              Description=flannel is an etcd backed overlay network for containers
   541              [Service]
   542              Type=notify
   543              ExecStartPre=-/usr/bin/mkdir -p /opt/bin
   544              ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/flanneld
   545              ExecStartPre=/usr/bin/chmod +x /opt/bin/flanneld
   546              ExecStart=/opt/bin/flanneld -etcd-endpoints http://<MASTER_SERVER_IP>:4001
   547          - name: docker.service
   548            command: start
   549            content: |
   550              [Unit]
   551              After=flannel.service
   552              Wants=flannel.service
   553              Description=Docker Application Container Engine
   554              Documentation=http://docs.docker.io
   555              [Service]
   556              EnvironmentFile=-/etc/default/docker
   557              EnvironmentFile=/run/flannel/subnet.env
   558              ExecStartPre=/bin/mount --make-rprivate /
   559              ExecStart=/usr/bin/docker -d --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} -s=overlay -H fd:// ${DOCKER_EXTRA_OPTS}
   560              [Install]
   561              WantedBy=multi-user.target
   562          - name: setup-network-environment.service
   563            command: start
   564            content: |
   565              [Unit]
   566              Description=Setup Network Environment
   567              Documentation=https://github.com/kelseyhightower/setup-network-environment
   568              Requires=network-online.target
   569              After=network-online.target
   570              [Service]
   571              ExecStartPre=-/usr/bin/mkdir -p /opt/bin
   572              ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/setup-network-environment
   573              ExecStartPre=/usr/bin/chmod +x /opt/bin/setup-network-environment
   574              ExecStart=/opt/bin/setup-network-environment
   575              RemainAfterExit=yes
   576              Type=oneshot
   577          - name: kube-proxy.service
   578            command: start
   579            content: |
   580              [Unit]
   581              Description=Kubernetes Proxy
   582              Documentation=https://github.com/kubernetes/kubernetes
   583              Requires=setup-network-environment.service
   584              After=setup-network-environment.service
   585              [Service]
   586              ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kube-proxy
   587              ExecStartPre=/usr/bin/chmod +x /opt/bin/kube-proxy
   588              ExecStart=/opt/bin/kube-proxy \
   589              --etcd-servers=http://<MASTER_SERVER_IP>:4001 \
   590              --logtostderr=true
   591              Restart=always
   592              RestartSec=10
   593          - name: kube-kubelet.service
   594            command: start
   595            content: |
   596              [Unit]
   597              Description=Kubernetes Kubelet
   598              Documentation=https://github.com/kubernetes/kubernetes
   599              Requires=setup-network-environment.service
   600              After=setup-network-environment.service
   601              [Service]
   602              EnvironmentFile=/etc/network-environment
   603              ExecStartPre=/usr/bin/wget -N -P /opt/bin http://<PXE_SERVER_IP>/kubelet
   604              ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet
   605              ExecStart=/opt/bin/kubelet \
   606              --address=0.0.0.0 \
   607              --port=10250 \
   608              --hostname-override=${DEFAULT_IPV4} \
   609              --api-servers=<MASTER_SERVER_IP>:8080 \
   610              --healthz-bind-address=0.0.0.0 \
   611              --healthz-port=10248 \
   612              --logtostderr=true
   613              Restart=always
   614              RestartSec=10
   615        update:
   616          group: stable
   617          reboot-strategy: off
   618      ssh_authorized_keys:
   619        - ssh-rsa AAAAB3NzaC1yc2EAAAAD...
   620  
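Before rebooting nodes against these files, a quick local sanity check can catch two common cloud-config mistakes. A minimal sketch (`check_cloud_config` is a hypothetical helper, not a full YAML validator):

```shell
# Sanity-check a cloud-config file: it must start with the "#cloud-config"
# header, and YAML forbids hard tabs anywhere in the file.
# usage: check_cloud_config /var/www/html/coreos/pxe-cloud-config-master.yml
check_cloud_config() {
  head -n 1 "$1" | grep -q '^#cloud-config' || {
    echo "missing #cloud-config header"; return 1
  }
  if grep -q "$(printf '\t')" "$1"; then
    echo "hard tabs found (YAML forbids tabs)"; return 1
  fi
  echo "looks sane"
}
```

Run it against both the master and slave files on the PXE server.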
   621  
   622  ## New pxelinux.cfg file
   623  
   624  Create a pxelinux target file for a _slave_ node: `vi /tftpboot/pxelinux.cfg/coreos-node-slave`
   625  
   626      default coreos
   627      prompt 1
   628      timeout 15
   629  
   630      display boot.msg
   631  
   632      label coreos
   633        menu default
   634        kernel images/coreos/coreos_production_pxe.vmlinuz
   635        append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<pxe-host-ip>/coreos/pxe-cloud-config-slave.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0
   636  
   637  And one for the _master_ node: `vi /tftpboot/pxelinux.cfg/coreos-node-master`
   638  
   639      default coreos
   640      prompt 1
   641      timeout 15
   642  
   643      display boot.msg
   644  
   645      label coreos
   646        menu default
   647        kernel images/coreos/coreos_production_pxe.vmlinuz
   648        append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<pxe-host-ip>/coreos/pxe-cloud-config-master.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0
   649  
   650  ## Specify the pxelinux targets
   651  
Now that we have our new targets set up for master and slave, we want to point specific hosts at those targets. We will do this using the pxelinux mechanism of mapping a specific MAC address to a specific pxelinux.cfg file.
   653  
   654  Refer to the MAC address table in the beginning of this guide. Documentation for more details can be found [here](http://www.syslinux.org/wiki/index.php/PXELINUX).
   655  
   656      cd /tftpboot/pxelinux.cfg
   657      ln -s coreos-node-master 01-d0-00-67-13-0d-00
   658      ln -s coreos-node-slave 01-d0-00-67-13-0d-01
   659      ln -s coreos-node-slave 01-d0-00-67-13-0d-02
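The link names follow PXELINUX's MAC-based lookup: a `01-` prefix (the ARP hardware type for Ethernet) followed by the MAC with colons replaced by dashes, all lower-case. A sketch of deriving the name (`pxelinux_cfg_name` is a hypothetical helper):

```shell
# Derive the pxelinux.cfg file name PXELINUX requests for a given MAC address.
pxelinux_cfg_name() {
  echo "01-$(echo "$1" | tr ':' '-' | tr 'A-F' 'a-f')"
}
pxelinux_cfg_name d0:00:67:13:0d:00   # -> 01-d0-00-67-13-0d-00
```

This makes it easy to script the symlinks for a larger MAC table.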
   660  
   661  
   662  Reboot these servers to get the images PXEd and ready for running containers!
   663  
   664  ## Creating test pod
   665  
Now that CoreOS with Kubernetes installed is up and running, let's spin up some Kubernetes pods to demonstrate the system.
   667  
   668  See [a simple nginx example](../../../docs/user-guide/simple-nginx.md) to try out your new cluster.
   669  
   670  For more complete applications, please look in the [examples directory](../../../examples/).
   671  
   672  ## Helping commands for debugging
   673  
   674  List all keys in etcd:
   675  
    etcdctl ls --recursive
   677  
List fleet machines:
   679  
   680      fleetctl list-machines
   681  
   682  Check system status of services on master:
   683  
   684      systemctl status kube-apiserver
   685      systemctl status kube-controller-manager
   686      systemctl status kube-scheduler
   687      systemctl status kube-register
   688  
   689  Check system status of services on a node:
   690  
   691      systemctl status kube-kubelet
   692      systemctl status docker.service
   693  
List Kubernetes pods and nodes:
   695  
   696      kubectl get pods
   697      kubectl get nodes
   698  
   699  
   700  Kill all pods:
   701  
   702      for i in `kubectl get pods | awk '{print $1}'`; do kubectl stop pod $i; done
   703  
   704  
   705  <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
   706  [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/coreos/bare_metal_offline.md?pixel)]()
   707  <!-- END MUNGE: GENERATED_ANALYTICS -->