# Basic Ingress for Kismatic

Status: Implemented

The very first question anybody has upon installing Kubernetes is “how do I access my workloads?”

With overlay or policy-enforcing networking in play, this question becomes even more pressing. Answers such as “join the Kubernetes pod network,” “SSH to one of the nodes,” or “use a NodePort service” all have flaws from a security and usability perspective.

For Layer 7 (HTTP/S) services, the best answer available is “use an Ingress.” [Ingress](http://kubernetes.io/docs/user-guide/ingress/) allows an HTTP server to be used along with port and path mapping to present an HTTP service. Ingress nodes sit between the pod network and the local network, brokering HTTP/S traffic from the local network into the pod network, with the ability to terminate TLS.

For Kismatic to support Ingress, we will introduce a new class of node: `ingress`. This node will contain:
* `kubelet`, making the node part of the Kubernetes cluster; by default the **kubelet will be unschedulable** on the ingress nodes
* The certificates required to communicate with the Kubernetes cluster
* A [default backend](https://github.com/kubernetes/contrib/tree/master/404-server) required by the ingress controller
  * The backend will run as a [DaemonSet](http://kubernetes.io/docs/admin/daemons/) on the `ingress` nodes, with a Kubernetes service fronting it
* An [Nginx Ingress Controller](https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx) that will listen on ports **80** and **443** (a sketch of such a DaemonSet follows this list)
  * The controller will run as a [DaemonSet](http://kubernetes.io/docs/admin/daemons/) on the `ingress` nodes with `hostPort: 80` and `hostPort: 443`
  * The controller will run with `hostNetwork: true`, see [issue 23920](https://github.com/kubernetes/kubernetes/issues/23920)
  * The controller has a `/healthz` endpoint that will return a `200` status when it is alive
  * The controller will respond with a `404` when a requested endpoint is not mapped by an Ingress resource
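
Putting those requirements together, a controller DaemonSet would look roughly like the sketch below. This is a minimal illustration rather than the exact manifest Kismatic installs; the `kube-system` namespace, the `role: ingress` node label, the image tag, and the controller's `10254` health port are assumptions here.

```
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      # Only run on nodes designated as ingress nodes (label name is illustrative).
      nodeSelector:
        role: ingress
      # Run in the host network namespace, see kubernetes issue 23920.
      hostNetwork: true
      containers:
      - name: nginx-ingress-controller
        image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        args:
        - /nginx-ingress-controller
        # Points at the service fronting the 404 default backend DaemonSet.
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        # Listen directly on the node's ports 80 and 443.
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
          initialDelaySeconds: 10
          timeoutSeconds: 1
```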

For HA configurations it is recommended to have **2 or more** ingress nodes and a load balancer configured with the nodes' addresses, using the `/healthz` endpoint to maintain a list of healthy nodes.

### Plan File changes

```
...
worker:
  expected_count: 3
  nodes:
  - host: node1.somehost.com
    ip: 8.8.8.1
    internalip: 8.8.8.1
  - host: node2.somehost.com
    ip: 8.8.8.2
    internalip: 8.8.8.2
  - host: node3.somehost.com
    ip: 8.8.8.3
    internalip: 8.8.8.3
ingress:
  expected_count: 2
  nodes:
  - host: node4.somehost.com
    ip: 8.8.8.4
    internalip: 8.8.8.4
  - host: node1.somehost.com
    ip: 8.8.8.1
    internalip: 8.8.8.1
```

To support the new node type, an optional `ingress` section will be added to the plan file.  
When an `ingress` section is not provided, the ingress controller will NOT be set up.  
`ingress` can have 1 or more nodes; these nodes can be dedicated to the role or shared with other roles:
* On a dedicated `ingress` node the kubelet will be **unschedulable**, e.g. `node4.somehost.com` from the example
* If the node is shared only with `etcd` and/or `master`, the kubelet will be **unschedulable**
* If the `ingress` node is also a `worker`, the kubelet will be **schedulable**, e.g. `node1.somehost.com` from the example

### Example Ingress Resources
Assumptions:
* At least 1 `ingress` node was provided when setting up the cluster
* A service named `echoserver` with `port: 80` is running in the cluster (a minimal sketch of such a service is shown after this list)
* Replace `mydomain.com` with your actual domain
* You configured `mydomain.com` to resolve to your ingress node(s)
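
In case such a service is not already running, a minimal sketch of one is shown below; the `echoserver` image, its container port, and the target namespace are illustrative assumptions rather than something Kismatic installs for you.

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echoserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - name: echoserver
        # Simple HTTP server that echoes request details back to the client.
        image: gcr.io/google_containers/echoserver:1.4
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  selector:
    app: echoserver
  ports:
  # The Ingress examples below reference this service on port 80.
  - port: 80
    targetPort: 8080
```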

To expose the service via HTTP on port 80 of the ingress nodes, `kubectl apply` the following:
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echoserver
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /echoserver
        backend:
          serviceName: echoserver
          servicePort: 80
```
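
Requests that reach the ingress nodes for paths not mapped by any Ingress resource will receive a `404` from the default backend.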

To expose the service via HTTPS on port 443 of the ingress nodes, first create a TLS Secret:
```
echo "
apiVersion: v1
kind: Secret
metadata:
  namespace: echoserver
  name: mydomain.com-tls
data:
  tls.crt: `base64 -w 0 /tmp/tls.crt`
  tls.key: `base64 -w 0 /tmp/tls.key`
" | kubectl create -f -
```
where `/tmp/tls.crt` and `/tmp/tls.key` are the certificate and private key generated with `mydomain.com` as the CN (`base64 -w 0` keeps the encoded output on a single line). Note that the Secret, the Ingress, and the `echoserver` service must all live in the same namespace.
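
The same Secret can also be created without manually base64-encoding the files: `kubectl create secret tls mydomain.com-tls --cert=/tmp/tls.crt --key=/tmp/tls.key --namespace=echoserver`. With the Secret in place, `kubectl apply` an Ingress that references it in its `tls` section: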
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echoserver
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - mydomain.com
    secretName: mydomain.com-tls
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /echoserver
        backend:
          serviceName: echoserver
          servicePort: 80
```

After applying the above, your service will be accessible via `http://mydomain.com/echoserver` and `https://mydomain.com/echoserver`.

### Out of Scope
* Integrating with any cloud provider for load balancer functionality - this enhancement should be added along with Kubernetes API server HA
* Automatic HTTPS certificate generation; the domain owner will either already have certificates or an existing workflow for creating new ones
  * [kube-lego](https://github.com/jetstack/kube-lego) was evaluated as a possible integration point with Let's Encrypt, but the domain needs to already be configured for [ACME](https://letsencrypt.github.io/acme-spec/) to function
* Any functionality after setting up the ingress controller; the user of the cluster will still need to create Ingress resources