<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->

<!-- BEGIN STRIP_FOR_RELEASE -->

<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">

<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>

If you are using a released version of Kubernetes, you should
refer to the docs that go with that version.

<strong>
The latest 1.0.x release of this document can be found
[here](http://releases.k8s.io/release-1.0/docs/getting-started-guides/fedora/fedora_manual_config.md).

Documentation for other releases can be found at
[releases.k8s.io](http://releases.k8s.io).
</strong>

--

<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

Getting started on [Fedora](http://fedoraproject.org)
-----------------------------------------------------

**Table of Contents**

- [Prerequisites](#prerequisites)
- [Instructions](#instructions)

## Prerequisites

1. You need 2 or more machines with Fedora installed.

## Instructions

This is a getting started guide for Fedora. It is a manual configuration, so that you understand all the underlying packages, services, ports, and so on.

This guide will only get ONE node (previously called minion) working. Multiple nodes require a functional [networking configuration](../../admin/networking.md) done outside of Kubernetes, although the additional Kubernetes configuration requirements should be obvious.
The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, and kube-proxy. These services are managed by systemd and their configuration resides in a central location: /etc/kubernetes. We will split the services between the hosts. The first host, fed-master, will be the Kubernetes master and will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host, but this guide assumes that _etcd_ and the Kubernetes master run on the same host). The remaining host, fed-node, will be the node and will run the kubelet, proxy, and docker.

**System Information:**

Hosts:

```
fed-master = 192.168.121.9
fed-node = 192.168.121.65
```

**Prepare the hosts:**

* Install Kubernetes on all hosts (fed-{master,node}). This will also pull in docker. Also install etcd on fed-master. This guide has been tested with kubernetes-0.18 and beyond.
* The [--enablerepo=updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
* If you want the very latest Kubernetes release, [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below.

```sh
yum -y install --enablerepo=updates-testing kubernetes
```

* Install etcd and iptables:

```sh
yum -y install etcd iptables
```

* Add the master and node to /etc/hosts on all machines (not needed if the hostnames are already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping.
```sh
echo "192.168.121.9	fed-master
192.168.121.65	fed-node" >> /etc/hosts
```

* Edit /etc/kubernetes/config, which will be the same on all hosts (master and node), to contain:

```sh
# Comma separated list of nodes in the etcd cluster
KUBE_MASTER="--master=http://fed-master:8080"

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
```

* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on a default Fedora Server install.

```sh
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
```

**Configure the Kubernetes services on the master.**

* Edit /etc/kubernetes/apiserver to appear as such. The service-cluster-ip-range must be an unused block of addresses, not used anywhere else. The addresses do not need to be routed or assigned to anything.

```sh
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:4001"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Add your own!
KUBE_API_ARGS=""
```

* Edit /etc/etcd/etcd.conf so that etcd listens on all IP addresses instead of only 127.0.0.1; otherwise, you will get errors like "connection refused". Note that Fedora 22 uses etcd 2.0; one of the changes in etcd 2.0 is that it now uses ports 2379 and 2380 (as opposed to etcd 0.46, which used 4001 and 7001).
```sh
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
```

* Create /var/run/kubernetes on the master:

```sh
mkdir /var/run/kubernetes
chown kube:kube /var/run/kubernetes
chmod 750 /var/run/kubernetes
```

* Start the appropriate services on the master:

```sh
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
	systemctl restart $SERVICES
	systemctl enable $SERVICES
	systemctl status $SERVICES
done
```

* Addition of nodes:

* Create the following node.json file on the Kubernetes master node:

```json
{
    "apiVersion": "v1",
    "kind": "Node",
    "metadata": {
        "name": "fed-node",
        "labels": { "name": "fed-node-label" }
    },
    "spec": {
        "externalID": "fed-node"
    }
}
```

Now create a node object internally in your Kubernetes cluster by running:

```console
$ kubectl create -f ./node.json

$ kubectl get nodes
NAME                LABELS                  STATUS
fed-node            name=fed-node-label     Unknown
```

Please note that the above only creates a representation for the node
_fed-node_ internally. It does not provision the actual _fed-node_. Also, it
is assumed that _fed-node_ (as specified in `name`) can be resolved and is
reachable from the Kubernetes master node. This guide will discuss how to provision
a Kubernetes node (fed-node) below.
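The same node.json pattern extends to additional nodes. As a minimal sketch (the extra host names fed-node2 and fed-node3 are hypothetical examples, not part of this guide's setup), a small loop can generate one manifest per host:

```sh
# Hypothetical example: generate a Node manifest for each additional host.
for n in fed-node2 fed-node3; do
  cat > "./${n}.json" <<EOF
{
    "apiVersion": "v1",
    "kind": "Node",
    "metadata": {
        "name": "${n}",
        "labels": { "name": "${n}-label" }
    },
    "spec": {
        "externalID": "${n}"
    }
}
EOF
done

# Each generated manifest would then be registered the same way as node.json:
#   kubectl create -f ./fed-node2.json
```

As with node.json above, this only registers a representation of each node; each host still has to be provisioned separately.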
**Configure the Kubernetes services on the node.**

***We need to configure the kubelet on the node.***

* Edit /etc/kubernetes/kubelet to appear as such:

```sh
###
# Kubernetes kubelet (node) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=fed-node"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://fed-master:8080"

# Add your own!
#KUBELET_ARGS=""
```

* Start the appropriate services on the node (fed-node):

```sh
for SERVICES in kube-proxy kubelet docker; do
	systemctl restart $SERVICES
	systemctl enable $SERVICES
	systemctl status $SERVICES
done
```

* Check that the cluster can now see fed-node on fed-master, and that its status changes to _Ready_.

```console
kubectl get nodes
NAME                LABELS                  STATUS
fed-node            name=fed-node-label     Ready
```

* Deletion of nodes:

To delete _fed-node_ from your Kubernetes cluster, you would run the following on fed-master (please do not actually do it, it is just for information):

```sh
kubectl delete -f ./node.json
```

*You should be finished!*

**The cluster should be running! Launch a test pod.**

You should have a functional cluster; check out [101](../../../docs/user-guide/walkthrough/README.md)!


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
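As a quick smoke test for the finished cluster, a pod manifest can be created and submitted with the same `kubectl create -f` workflow used for node.json. This is a minimal sketch; the pod name `test-nginx` and the `nginx` image are illustrative assumptions, not part of this guide:

```sh
# Hypothetical smoke-test pod manifest; name and image are illustrative.
cat > ./test-pod.json <<EOF
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "test-nginx",
        "labels": { "app": "test-nginx" }
    },
    "spec": {
        "containers": [
            {
                "name": "nginx",
                "image": "nginx"
            }
        ]
    }
}
EOF

# On the running cluster above you would then create it and watch its status:
#   kubectl create -f ./test-pod.json
#   kubectl get pods
```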