# Fabrikate

[![Build Status][azure-devops-build-status]][azure-devops-build-link]
[![Go Report Card][go-report-card-badge]][go-report-card]

Fabrikate helps make operating Kubernetes clusters with a
[GitOps](https://www.weave.works/blog/gitops-operations-by-pull-request)
workflow more productive. It allows you to write
[DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) resource
definitions and configuration for multiple environments while leveraging the
broad [Helm chart ecosystem](https://github.com/helm/charts), capture
higher-level definitions in abstracted and shareable components, and enable a
[GitOps](https://www.weave.works/blog/gitops-operations-by-pull-request)
deployment workflow that both simplifies deployments and makes them more
auditable.

In particular, Fabrikate simplifies the frontend of the GitOps workflow: it
takes a high-level description of your deployment, a target environment
configuration (e.g. `qa` or `prod`), and renders the Kubernetes resource
manifests for that deployment utilizing templating tools like
[Helm](https://helm.sh). It is intended to run as part of a CI/CD pipeline such
that every commit to your Fabrikate deployment definition triggers the
generation of Kubernetes resource manifests, which an in-cluster GitOps pod like
[Weaveworks' Flux](https://github.com/weaveworks/flux) watches and reconciles
with the current set of applied resource manifests in your Kubernetes cluster.
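
For example, the CI step can be as simple as regenerating the manifests on
every commit and pushing them to the repository that Flux watches. A minimal
sketch, assuming a `prod` environment and an illustrative branch and commit
message:

```sh
# Regenerate manifests for the prod environment and publish them to the
# manifest repository that Flux reconciles from (names are illustrative).
fab install
fab generate prod
git add ./generated
git commit -m "Regenerate resource manifests"
git push origin master
```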

## Installation

You have a couple of options:

### Official Release

Grab the latest release from the
[releases page](https://github.com/microsoft/fabrikate/releases) and drop it
into your `$PATH`.

### Building From Source

There are two ways to build from source:

**Using `go get`:**

Use `go get` to build and install the bleeding edge (i.e. `develop`) version
into `$GOPATH/bin`:

```bash
(cd && GO111MODULE=on go get github.com/microsoft/fabrikate/cmd/fab@develop)
```
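
Note that `go get` places the binary in `$GOPATH/bin` (by default
`$HOME/go/bin`), so make sure that directory is on your `PATH`:

```bash
# Add Go's binary directory to PATH so the fab binary is found.
export PATH="$(go env GOPATH)/bin:$PATH"
```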

**Cloning locally:**

```bash
git clone https://github.com/microsoft/fabrikate
cd fabrikate
go build -o fab cmd/fab/main.go
```
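
Then place the resulting `fab` binary somewhere on your `PATH`; the location
below is just one common choice:

```bash
# Install the freshly built binary system-wide (illustrative location).
sudo mv fab /usr/local/bin/
```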

## Getting Started

First, install the latest `fab` cli on your local machine from
[our releases](https://github.com/microsoft/fabrikate/releases), unzipping the
appropriate binary and placing `fab` in your path. The `fab` cli tool, `helm`,
and `git` are the only tools you need to have installed.

**NOTE**: Fabrikate supports Helm 3; do not use Helm 2.
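
You can check which major version of Helm is on your path with:

```sh
$ helm version --short
```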

Let's walk through building an example Fabrikate definition to see how it works
in practice. First off, let's create a directory for our cluster definition:

```sh
$ mkdir mycluster
$ cd mycluster
```

The first thing I want to do is pull in a common set of observability and
service mesh platforms so I can operate this cluster. My organization has
settled on a
[cloud-native](https://github.com/microsoft/fabrikate-definitions/tree/master/definitions/fabrikate-cloud-native)
stack, and Fabrikate makes it easy to leverage reusable stacks of infrastructure
like this:

```sh
$ fab add cloud-native --source https://github.com/microsoft/fabrikate-definitions --path definitions/fabrikate-cloud-native
```

Since our directory was empty, this creates a `component.yaml` file in the
current directory:

```yaml
name: mycluster
subcomponents:
  - name: cloud-native
    type: component
    source: https://github.com/microsoft/fabrikate-definitions
    method: git
    path: definitions/fabrikate-cloud-native
    branch: master
```

A Fabrikate definition, like this one, always contains a `component.yaml` file
in its root that defines how to generate the Kubernetes resource manifests for
its directory tree scope.

The `cloud-native` component we added is a remote component backed by the git
repo
[fabrikate-cloud-native](https://github.com/microsoft/fabrikate-definitions/tree/master/definitions/fabrikate-cloud-native).
Fabrikate definitions use remote components like this one to enable multiple
deployments to reuse common components (like this cloud-native infrastructure
stack) from a centrally updated location.

Looking inside this component at its own root `component.yaml` definition, you
can see that it, in turn, uses a set of remote components:

```yaml
name: "cloud-native"
generator: "static"
path: "./manifests"
subcomponents:
  - name: "elasticsearch-fluentd-kibana"
    source: "../fabrikate-elasticsearch-fluentd-kibana"
  - name: "prometheus-grafana"
    source: "../fabrikate-prometheus-grafana"
  - name: "istio"
    source: "../fabrikate-istio"
  - name: "kured"
    source: "../fabrikate-kured"
```

Fabrikate recursively iterates component definitions, so as it processes this
lower-level component definition, it will in turn iterate the remote component
definitions used in its implementation. Being able to mix in remote components
like this makes Fabrikate definitions composable and reusable across
deployments.

Let's look at the component definition for the
[elasticsearch-fluentd-kibana component](https://github.com/microsoft/fabrikate-definitions/tree/master/definitions/fabrikate-elasticsearch-fluentd-kibana):

```json
{
  "name": "elasticsearch-fluentd-kibana",
  "generator": "static",
  "path": "./manifests",
  "subcomponents": [
    {
      "name": "elasticsearch",
      "generator": "helm",
      "source": "https://github.com/helm/charts",
      "method": "git",
      "path": "stable/elasticsearch"
    },
    {
      "name": "elasticsearch-curator",
      "generator": "helm",
      "source": "https://github.com/helm/charts",
      "method": "git",
      "path": "stable/elasticsearch-curator"
    },
    {
      "name": "fluentd-elasticsearch",
      "generator": "helm",
      "source": "https://github.com/helm/charts",
      "method": "git",
      "path": "stable/fluentd-elasticsearch"
    },
    {
      "name": "kibana",
      "generator": "helm",
      "source": "https://github.com/helm/charts",
      "method": "git",
      "path": "stable/kibana"
    }
  ]
}
```

First, we see that components can be defined in JSON as well as YAML (as you
prefer).

Second, we see that this component generates resource definitions. In
particular, it will emit a set of static manifests from the path `./manifests`
and generate the set of resource manifests specified by the inlined
[Helm template](https://helm.sh/) definitions as it iterates your deployment
definitions.

With generalized Helm charts like the ones used here, it's often necessary to
provide them with configuration values that vary by environment. This component
provides a reasonable set of defaults for its subcomponents in
`config/common.yaml`. Since this component provides these four logging
subsystems together as a "stack", or preconfigured whole, we can provide
configuration to higher-level parts based on this knowledge:

```yaml
config:
subcomponents:
  elasticsearch:
    namespace: elasticsearch
    injectNamespace: true
    config:
      client:
        resources:
          limits:
            memory: "2048Mi"
  elasticsearch-curator:
    namespace: elasticsearch
    injectNamespace: true
    config:
      cronjob:
        successfulJobsHistoryLimit: 0
      configMaps:
        config_yml: |-
          ---
          client:
            hosts:
              - elasticsearch-client.elasticsearch.svc.cluster.local
            port: 9200
            use_ssl: False
  fluentd-elasticsearch:
    namespace: fluentd
    injectNamespace: true
    config:
      elasticsearch:
        host: "elasticsearch-client.elasticsearch.svc.cluster.local"
  kibana:
    namespace: kibana
    injectNamespace: true
    config:
      files:
        kibana.yml:
          elasticsearch.url: "http://elasticsearch-client.elasticsearch.svc.cluster.local:9200"
```

This `common` configuration, which applies to all environments, can be mixed
with more specific configuration. For example, let's say that we were deploying
this in Azure and wanted to utilize its `managed-premium` SSD storage class for
Elasticsearch, but only in `azure` deployments. We can build an `azure`
configuration that does exactly that, and Fabrikate has a convenience command
called `set` that enables us to do so:

```sh
$ fab set --environment azure --subcomponent cloud-native.elasticsearch data.persistence.storageClass="managed-premium" master.persistence.storageClass="managed-premium"
```

This creates a file called `config/azure.yaml` that looks like this:

```yaml
subcomponents:
  cloud-native:
    subcomponents:
      elasticsearch:
        config:
          data:
            persistence:
              storageClass: managed-premium
          master:
            persistence:
              storageClass: managed-premium
```
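
At generate time, Fabrikate merges these configuration layers, with
environment-specific values taking precedence over `common`. Conceptually, the
effective configuration for the `elasticsearch` subcomponent in an `azure`
generation combines both files; the merged view below is just an illustration,
not a file Fabrikate writes:

```yaml
# Illustrative merged config: the memory limit comes from common.yaml,
# the storage classes from azure.yaml.
config:
  client:
    resources:
      limits:
        memory: "2048Mi"
  data:
    persistence:
      storageClass: managed-premium
  master:
    persistence:
      storageClass: managed-premium
```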

Naturally, an observability stack is just the base infrastructure we need, and
our real goal is to deploy a set of microservices. Furthermore, let's assume
that we want to be able to split the incoming traffic for these services between
`canary` and `stable` tiers with [Istio](https://istio.io) so that we can more
safely launch new versions of the service.

There is a Fabrikate component for that as well, called
[fabrikate-istio-service](https://github.com/microsoft/fabrikate-definitions/tree/master/definitions/fabrikate-istio),
that we'll leverage to add this service, so let's do just that:

```sh
$ fab add simple-service --source https://github.com/microsoft/fabrikate-definitions --path definitions/fabrikate-istio
```

This component creates the traffic-split services using the config applied to
it. Let's create a `prod` config for a `prod` cluster by creating
`config/prod.yaml` and placing the following in it:

```yaml
subcomponents:
  simple-service:
    namespace: services
    config:
      gateway: my-ingress.istio-system.svc.cluster.local
      service:
        dns: simple.mycompany.io
        name: simple-service
        port: 80
      configMap:
        PORT: 80
      tiers:
        canary:
          image: "timfpark/simple-service:441"
          replicas: 1
          weight: 10
          port: 80
          resources:
            requests:
              cpu: "250m"
              memory: "256Mi"
            limits:
              cpu: "1000m"
              memory: "512Mi"

        stable:
          image: "timfpark/simple-service:440"
          replicas: 3
          weight: 90
          port: 80
          resources:
            requests:
              cpu: "250m"
              memory: "256Mi"
            limits:
              cpu: "1000m"
              memory: "512Mi"
```

This defines a service that is exposed on the cluster via a particular gateway,
DNS name, and port. It also defines a traffic split between two backend tiers:
`canary` (10%) and `stable` (90%). Within these tiers, we also define the
number of replicas and the resources they are allowed to use, along with the
container that is deployed in them. Finally, it also defines a ConfigMap for
the service, which passes along an environment variable called `PORT` to our
app.

From here we could add definitions for all of our microservices in a similar
manner, but in the interest of keeping this short, we'll just do one of the
services here.
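
At this point, the root `component.yaml` in `mycluster` should look roughly
like this, with both subcomponents recorded by the two `fab add` commands:

```yaml
name: mycluster
subcomponents:
  - name: cloud-native
    type: component
    source: https://github.com/microsoft/fabrikate-definitions
    method: git
    path: definitions/fabrikate-cloud-native
    branch: master
  - name: simple-service
    type: component
    source: https://github.com/microsoft/fabrikate-definitions
    method: git
    path: definitions/fabrikate-istio
    branch: master
```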

With this, we have a functionally complete Fabrikate definition for our
deployment. Let's now see how we can use Fabrikate to generate resource
manifests for it.

First, let's install the remote components and Helm charts:

```sh
$ fab install
```

This installs all of the required components and charts locally, and we can now
generate the manifests for our deployment with:

```sh
$ fab generate prod azure
```

This will iterate through our deployment definition, collect configuration
values from `azure`, `prod`, and `common` (in that priority order), and
generate manifests as it descends breadth-first. You can see the generated
manifests in `./generated/prod-azure`, which has the same logical directory
structure as your deployment definition.
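
For orientation, the generated tree mirrors the logical structure of the
definition, roughly like this (the exact directories depend on the versions of
the components you installed):

```
generated/
└── prod-azure/
    ├── cloud-native/
    │   ├── elasticsearch-fluentd-kibana/
    │   ├── prometheus-grafana/
    │   ├── istio/
    │   └── kured/
    └── simple-service/
```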

Fabrikate is meant to be used as part of a CI/CD pipeline that commits the
generated manifests to a repo so that they can be applied from a pod within the
cluster like [Flux](https://github.com/weaveworks/flux), but if you have a
Kubernetes cluster up and running you can also apply them directly with:

```sh
$ cd generated/prod-azure
$ kubectl apply --recursive -f .
```

This will cause a very large number of containers to spin up (which will take
time to start completely as Kubernetes provisions persistent storage and pulls
the container images), but after three or four minutes, you should see the full
observability stack and microservices running in your cluster.
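
You can watch the rollout progress with:

```sh
$ kubectl get pods --all-namespaces --watch
```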

## Documentation

We have complete details about how to use and contribute to Fabrikate in the
following documentation:

- [Component Definitions](./docs/component.md)
- [Config Definitions](./docs/config.md)
- [Command Reference](./docs/commands.md)
- [Authentication / Personal Access Tokens (PAT) / `access.yaml`](./docs/auth.md)
- [Contributing](./docs/contributing.md)
- [Comparisons against other release management tools](./docs/comparisons.md)

## Community

[Please join us on Slack](https://join.slack.com/t/bedrockco/shared_invite/enQtNjIwNzg3NTU0MDgzLWRiYzQxM2ZmZjQ2NGE2YjA2YTJmMjg3ZmJmOTQwOWY0MTU3NDVkNDJkZDUyMDExZjIxNTg5NWY3MTI3MzFiN2U)
for discussion and/or questions.

## Bedrock

We maintain a sister project called
[Bedrock](https://github.com/microsoft/bedrock). Bedrock makes operationalizing
Kubernetes clusters with a GitOps deployment workflow easier: it automates a
[GitOps](https://www.weave.works/blog/gitops-operations-by-pull-request)
deployment model leveraging [Flux](https://github.com/weaveworks/flux) and
provides automation for building a CI/CD pipeline that automatically builds
resource manifests from Fabrikate definitions.

<!-- refs -->

[azure-devops-build-status]:
  https://tpark.visualstudio.com/fabrikate/_apis/build/status/microsoft.fabrikate?branchName=master
[azure-devops-build-link]:
  https://tpark.visualstudio.com/fabrikate/_build/latest?definitionId=35&branchName=master
[go-report-card]: https://goreportcard.com/report/github.com/microsoft/fabrikate
[go-report-card-badge]:
  https://goreportcard.com/badge/github.com/microsoft/fabrikate