---
layout: "docs"
page_title: "Minikube"
sidebar_current: "docs-guides-minikube"
description: |-
  Consul can be installed on a local Kubernetes cluster created with Minikube for local development.
---

# Consul Installation to Minikube via Helm

<script src="https://fast.wistia.com/embed/medias/qwhi1gvkeq.jsonp" async></script><script src="https://fast.wistia.com/assets/external/E-v1.js" async></script><div class="wistia_responsive_padding" style="padding:56.25% 0 0 0;position:relative;"><div class="wistia_responsive_wrapper" style="height:100%;left:0;position:absolute;top:0;width:100%;"><div class="wistia_embed wistia_async_qwhi1gvkeq videoFoam=true" style="height:100%;position:relative;width:100%"><div class="wistia_swatch" style="height:100%;left:0;opacity:0;overflow:hidden;position:absolute;top:0;transition:opacity 200ms;width:100%;"><img src="https://fast.wistia.com/embed/medias/qwhi1gvkeq/swatch" style="filter:blur(5px);height:100%;object-fit:contain;width:100%;" alt="" onload="this.parentNode.style.opacity=1;" /></div></div></div></div>

In this guide, you'll start a local Kubernetes cluster with minikube. You'll install Consul with only a few commands, then deploy two custom services that use Consul to discover each other over TLS-encrypted connections via Consul Connect. Finally, you'll tighten down Consul Connect so that only approved applications can communicate with each other.

[Demo code](https://github.com/hashicorp/demo-consul-101) is available.

- [Task 1: Start Minikube and Install Consul with Helm](#task-1-start-minikube-and-install-consul-with-helm)
- [Task 2: Deploy a Consul-aware Application to the Cluster](#task-2-deploy-a-consul-aware-application-to-the-cluster)
- [Task 3: Use Consul Connect](#task-3-use-consul-connect)

## Prerequisites

Let's install Consul on Kubernetes with minikube. This is a relatively quick and easy way to try out Consul on your local machine without the need for any cloud credentials. You'll be able to use most Consul features right away.

First, you'll need to follow the directions for [installing minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/), including VirtualBox or a similar hypervisor.

You'll also need to install `kubectl` and `helm`.

Mac users can install `helm` and `kubectl` with Homebrew.

```sh
$ brew install kubernetes-cli
$ brew install kubernetes-helm
```

Windows users can use Chocolatey with the same package names:

```sh
$ choco install kubernetes-cli
$ choco install kubernetes-helm
```
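
Whichever package manager you use, it's worth confirming that both tools are on your `PATH` before continuing. The exact version numbers reported will vary with your installation; this is only a sanity check.

```sh
$ kubectl version --client
$ helm version --client
```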

For more on Helm, see [helm.sh](https://helm.sh/).

## Task 1: Start Minikube and Install Consul with Helm

### Step 1: Start Minikube

Start minikube. You can use the `--memory` option with the equivalent of 4GB to 8GB so there is plenty of memory for all the pods we will run. This may take several minutes, as minikube will download 100-300MB of dependencies and container images.

```
$ minikube start --memory 4096
```
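
Before moving on, you can confirm that the cluster is up and that `kubectl` is talking to it. The single minikube node should report a `Ready` status within a minute or two.

```
$ minikube status

$ kubectl get nodes
```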

Next, let's view the local Kubernetes dashboard with `minikube dashboard`. Even if the previous step completed successfully, you may have to wait a minute or two for minikube to be available. If you see an error, try again after a few minutes.

Once it spins up, you'll see the dashboard in your web browser. You can view pods, nodes, and other resources.

```
$ minikube dashboard
```

![Minikube Dashboard](/assets/images/guides/minikube-dashboard.png "Minikube Dashboard")

### Step 2: Install the Consul Helm Chart to the Cluster

To perform the steps in this lab exercise, clone the [hashicorp/demo-consul-101](https://github.com/hashicorp/demo-consul-101) repository from GitHub, then go into the `demo-consul-101/k8s` directory.

```
$ git clone https://github.com/hashicorp/demo-consul-101.git

$ cd demo-consul-101/k8s
```

Now we're ready to install Consul on the cluster using the `helm` tool. Initialize Helm with `helm init`. You'll see a note that Tiller (the server-side component) has been installed. You can ignore the policy warning.

```
$ helm init

$HELM_HOME has been configured at /Users/geoffrey/.helm.
```
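
You can confirm that Tiller came up by listing the pods in the `kube-system` namespace. A `tiller-deploy` pod should reach the `Running` state within a minute or so.

```
$ kubectl get pods --namespace kube-system
```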

Now we need to install Consul with Helm. To get the freshest copy of the Helm chart, clone the [hashicorp/consul-helm](https://github.com/hashicorp/consul-helm) repository.

```
$ git clone https://github.com/hashicorp/consul-helm.git
```

The chart works on its own, but we'll override a few values to help things go more smoothly with minikube and to enable useful features.

We've created `helm-consul-values.yaml` for you with overrides. See `values.yaml` in the Helm chart repository for other possible values.

We've given a name to the datacenter running this Consul cluster. We've enabled the Consul web UI via a `NodePort`. When deploying to a hosted cloud that implements load balancers, we could use `LoadBalancer` instead. We'll enable secure communication between pods with Connect. We also need to enable `grpc` on the client for Connect to work properly. Finally, we'll specify that this Consul cluster should run only one server (suitable for local development).

```yaml
# Choose an optional name for the datacenter
global:
  datacenter: minidc

# Enable the Consul Web UI via a NodePort
ui:
  service:
    type: "NodePort"

# Enable Connect for secure communication between nodes
connectInject:
  enabled: true

client:
  enabled: true
  grpc: true

# Use only one Consul server for local development
server:
  replicas: 1
  bootstrapExpect: 1
  disruptionBudget:
    enabled: true
    maxUnavailable: 0
```

Now, run `helm install` together with our overrides file and the cloned `consul-helm` chart. It will print a list of all the resources that were created.

```
$ helm install -f helm-consul-values.yaml --name hedgehog ./consul-helm
```

~> NOTE: If no `--name` is provided, Helm will generate a random name for the release. To reduce confusion, consider specifying a `--name`.
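
Before continuing, you can check that the chart's pods have started. The exact pod names depend on the release name and chart version, but you should see roughly a Consul server, a Consul client, and the Connect injector, all prefixed with `hedgehog-consul`. It can take a minute or two for everything to reach the `Running` state.

```
$ kubectl get pods
```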

## Task 2: Deploy a Consul-aware Application to the Cluster

### Step 1: View the Consul Web UI

Verify the installation by going back to the Kubernetes dashboard in your web browser. Find the list of services. Several include `consul` in the name and have the `app: consul` label.

![Minikube Dashboard with Consul](/assets/images/guides/minikube-dashboard-consul.png "Minikube Dashboard with Consul")

There are a few differences between running Kubernetes on a hosted cloud versus locally with minikube. You may find that LoadBalancer resources don't work as expected on a local cluster, but we can still view the Consul UI and other deployed resources.

Run `minikube service list` to see your services. Find the one with `consul-ui` in the name.

```
$ minikube service list
```

Run `minikube service` with the `consul-ui` service name as the argument. It will open the service in your web browser.

```
$ minikube service hedgehog-consul-ui
```

You can now view the Consul web UI with a list of Consul's services, nodes, and other resources.

![Minikube Consul UI](/assets/images/guides/minikube-consul-ui.png "Minikube Consul UI")
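
If `minikube service` doesn't open a browser in your environment, a `kubectl` port-forward to the UI service is an alternative. This is only a sketch: the service name is prefixed with your Helm release name, and the port mapping can vary by chart version, so check `kubectl get svc` for the actual ports.

```
$ kubectl get svc

$ kubectl port-forward svc/hedgehog-consul-ui 8500:80
```

Then visit http://localhost:8500 in your browser.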

### Step 2: Deploy Custom Applications

Now let's deploy our application. It consists of two services: a backend data service that returns a number (the `counting` service) and a front-end `dashboard` that pulls from the `counting` service over HTTP and displays the number. The Kubernetes part is a single command: `kubectl create -f 04-yaml-connect-envoy`. This applies a directory of YAML files, each defining one or more resources (pods, containers, etc.).

```
$ kubectl create -f 04-yaml-connect-envoy
```

The output shows that the resources have been created. In reality, they may take a few seconds to spin up. Refresh the Kubernetes dashboard a few times and you'll see that the `counting` and `dashboard` services are running. You can also click a resource to view more data about it.

![Services](/assets/images/guides/minikube-services.png "Services")
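
You can check the same thing from the command line. Both application pods should reach the `Running` state, and the `READY` column will typically show more than one container per pod because Connect injects an Envoy sidecar alongside each application container.

```
$ kubectl get pods
```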

### Step 3: View the Web Application

For the last step in this initial task, use the Kubernetes `port-forward` feature for the dashboard service running on port `9002`. We already know that the pod is named `dashboard` thanks to the metadata specified in the YAML we deployed.

```
$ kubectl port-forward dashboard 9002:9002
```
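
With the port-forward running, you can confirm from a second terminal that something is answering on that port before opening a browser. The exact response body depends on the demo application.

```
$ curl http://localhost:9002/
```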

Visit http://localhost:9002 in your web browser. You'll see the `dashboard` container running in the Kubernetes cluster, displaying a number retrieved from the `counting` service using Consul service discovery and secured over the network by TLS via an Envoy proxy.

![Application Dashboard](/assets/images/guides/minikube-app-dashboard.png "Application Dashboard")

### Addendum: Review the Code

Let's take a peek at the code. Relevant to this Kubernetes deployment are two YAML files in the `04-yaml-connect-envoy` directory. The `counting` service defines a `connect-inject` annotation in the `metadata` section that instructs Consul to spin up a Consul Connect proxy for this service. The relevant port number is found in the `containerPort` section (`9001`). This Pod registers a Consul service that will be available via a secure proxy.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counting
  annotations:
    "consul.hashicorp.com/connect-inject": "true"
spec:
  containers:
  - name: counting
    image: hashicorp/counting-service:0.0.2
    ports:
    - containerPort: 9001
      name: http
# ...
```

The other side is the `dashboard` service. It declares the same `connect-inject` annotation but also adds another: `connect-service-upstreams`. This annotation configures Connect so that this Pod has access to the `counting` service on `localhost` port `9001`. All the rest of the configuration and communication is taken care of by Consul and the Consul Helm chart.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dashboard
  labels:
    app: "dashboard"
  annotations:
    "consul.hashicorp.com/connect-inject": "true"
    "consul.hashicorp.com/connect-service-upstreams": "counting:9001"
spec:
  containers:
  - name: dashboard
    image: hashicorp/dashboard-service:0.0.3
    ports:
    - containerPort: 9002
      name: http
    env:
    - name: COUNTING_SERVICE_URL
      value: "http://localhost:9001"
# ...
```

Within our `dashboard` application, we can access the `counting` service by communicating with `localhost:9001`, as seen on the last line of this snippet. Here we are looking at an environment variable that is specific to the Go application running in a container in this Pod. Instead of providing an IP address or even a Consul service URL, we tell the application to talk to `localhost:9001`, where our local end of the proxy is ready and listening. Because of the `counting:9001` upstream annotation above, we know that an instance of the `counting` service is on the other end.

This is what is happening in the cluster and over the network when we view the `dashboard` service in the browser.
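
If you're curious what the injector actually added, you can list the containers in the `dashboard` Pod. Alongside the application container you should see an injected Envoy sidecar; its exact name (something like `consul-connect-envoy-sidecar`) depends on the version of the injector.

```
$ kubectl get pod dashboard -o jsonpath='{.spec.containers[*].name}'
```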

-> TIP: The full source code for the Go-based web services and all code needed to build the Docker images are available in the [repo](https://github.com/hashicorp/demo-consul-101).

## Task 3: Use Consul Connect

### Step 1: Create an Intention that Denies All Service Communication by Default

For a final task, let's take this a step further by restricting service communication with intentions. We don't want every service to be able to communicate with every other service; only the pairs we explicitly allow.

Begin by navigating to the _Intentions_ screen in the Consul web UI. Click the "Create" button and define an initial intention that blocks all communication between any services by default. Choose `*` as the source and `*` as the destination. Choose the _Deny_ radio button and add an optional description. Click "Save."

![Connect Deny](/assets/images/guides/minikube-connect-deny.png "Connect Deny")
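
If you prefer the command line, the same catch-all intention can be created with the Consul CLI by exec-ing into a Consul server pod (skip this if you already created it in the UI; the pod name below assumes the `hedgehog` release name, so adjust it to match `kubectl get pods`).

```
$ kubectl exec hedgehog-consul-server-0 -- consul intention create -deny '*' '*'
```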

Verify this by returning to the application dashboard where you will see that the "Counting Service is Unreachable."

![Application is Unreachable](/assets/images/guides/minikube-connect-unreachable.png "Application is Unreachable")

### Step 2: Allow the Application Dashboard to Connect to the Counting Service

Finally, the easy part. Click the "Create" button again and create an intention that allows the `dashboard` source service to talk to the `counting` destination service. Ensure that the "Allow" radio button is selected. Optionally add a description. Click "Save."

![Allow](/assets/images/guides/minikube-connect-allow.png "Allow")
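
The CLI equivalent, again assuming the `hedgehog-consul-server-0` pod name and skipping it if you used the UI, would be:

```
$ kubectl exec hedgehog-consul-server-0 -- consul intention create dashboard counting
```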

This action does not require a reboot. It takes effect so quickly that by the time you visit the application dashboard, you'll see that it's successfully communicating with the backend `counting` service again.

And there we have Consul running on a Kubernetes cluster, as demonstrated by two services which communicate with each other via Consul Connect and an Envoy proxy.

![Success](/assets/images/guides/minikube-connect-success.png "Success")

## Reference

For more on Consul's integration with Kubernetes (including multi-cloud, service sync, and other features), see the [Consul with Kubernetes](/docs/platform/k8s/index.html) documentation.