# Fabrikate

[![Build Status][azure-devops-build-status]][azure-devops-build-link]
[![Go Report Card][go-report-card-badge]][go-report-card]

Fabrikate helps make operating Kubernetes clusters with a
[GitOps](https://www.weave.works/blog/gitops-operations-by-pull-request)
workflow more productive. It allows you to write
[DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) resource
definitions and configuration for multiple environments while leveraging the
broad [Helm chart ecosystem](https://github.com/helm/charts), capture higher
level definitions in abstracted and shareable components, and enable a
[GitOps](https://www.weave.works/blog/gitops-operations-by-pull-request)
deployment workflow that both simplifies deployments and makes them more
auditable.

In particular, Fabrikate simplifies the frontend of the GitOps workflow: it
takes a high level description of your deployment, a target environment
configuration (e.g. `qa` or `prod`), and renders the Kubernetes resource
manifests for that deployment using templating tools like
[Helm](https://helm.sh). It is intended to run as part of a CI/CD pipeline so
that every commit to your Fabrikate deployment definition triggers the
generation of Kubernetes resource manifests, which an in-cluster GitOps pod
like [Weaveworks' Flux](https://github.com/weaveworks/flux) watches and
reconciles with the set of resource manifests currently applied in your
Kubernetes cluster.

## Getting Started

First, install the latest `fab` CLI on your local machine from
[our releases](https://github.com/evanlouie/fabrikate/releases), unzipping the
appropriate binary and placing `fab` in your path. The `fab` CLI, `helm`, and
`git` are the only tools you need to have installed.

Let's walk through building an example Fabrikate definition to see how it
works in practice.
First off, let's create a directory for our cluster definition:

```sh
$ mkdir mycluster
$ cd mycluster
```

The first thing I want to do is pull in a common set of observability and
service mesh platforms so I can operate this cluster. My organization has
settled on a
[cloud-native](https://github.com/evanlouie/fabrikate-definitions/tree/master/definitions/fabrikate-cloud-native)
stack, and Fabrikate makes it easy to leverage reusable stacks of
infrastructure like this:

```sh
$ fab add cloud-native --source https://github.com/evanlouie/fabrikate-definitions --path definitions/fabrikate-cloud-native
```

Since our directory was empty, this creates a `component.yaml` file in this
directory:

```yaml
name: mycluster
subcomponents:
  - name: cloud-native
    type: component
    source: https://github.com/evanlouie/fabrikate-definitions
    method: git
    path: definitions/fabrikate-cloud-native
    branch: master
```

A Fabrikate definition, like this one, always contains a `component.yaml` file
in its root that defines how to generate the Kubernetes resource manifests for
its directory tree scope.

The `cloud-native` component we added is a remote component backed by the git
repo
[fabrikate-cloud-native](https://github.com/evanlouie/fabrikate-definitions/tree/master/definitions/fabrikate-cloud-native).
Fabrikate definitions use remote definitions like this one to enable multiple
deployments to reuse common components (like this cloud-native infrastructure
stack) from a centrally updated location.
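Conceptually, `fab add` just records a new subcomponent entry in the definition above. The following Python sketch of that bookkeeping is purely illustrative (Fabrikate itself is written in Go, and these helper names are hypothetical):

```python
# Illustrative sketch of what `fab add` records in component.yaml:
# it appends a subcomponent entry to the component definition.
# This mirrors the YAML shown above; it is not Fabrikate's actual code.

def add_subcomponent(component, name, source, path, method="git", branch="master"):
    """Record a remote subcomponent on a component definition."""
    entry = {
        "name": name,
        "type": "component",
        "source": source,
        "method": method,
        "path": path,
        "branch": branch,
    }
    component.setdefault("subcomponents", []).append(entry)
    return component

mycluster = {"name": "mycluster"}
add_subcomponent(
    mycluster,
    "cloud-native",
    "https://github.com/evanlouie/fabrikate-definitions",
    "definitions/fabrikate-cloud-native",
)
print(mycluster["subcomponents"][0]["path"])  # definitions/fabrikate-cloud-native
```

Serialized to YAML, this dictionary corresponds to the `component.yaml` shown above.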
Looking inside this component at its own root `component.yaml` definition, you
can see that it itself uses a set of remote components:

```yaml
name: "cloud-native"
generator: "static"
path: "./manifests"
subcomponents:
  - name: "elasticsearch-fluentd-kibana"
    source: "../fabrikate-elasticsearch-fluentd-kibana"
  - name: "prometheus-grafana"
    source: "../fabrikate-prometheus-grafana"
  - name: "istio"
    source: "../fabrikate-istio"
  - name: "kured"
    source: "../fabrikate-kured"
```

Fabrikate recursively iterates component definitions, so as it processes this
lower level component definition, it will in turn iterate the remote component
definitions used in its implementation. Being able to mix in remote components
like this makes Fabrikate deployments composable and reusable across
deployments.

Let's look at the component definition for the
[elasticsearch-fluentd-kibana component](https://github.com/evanlouie/fabrikate-definitions/tree/master/definitions/fabrikate-elasticsearch-fluentd-kibana):

```json
{
  "name": "elasticsearch-fluentd-kibana",
  "generator": "static",
  "path": "./manifests",
  "subcomponents": [
    {
      "name": "elasticsearch",
      "generator": "helm",
      "source": "https://github.com/helm/charts",
      "method": "git",
      "path": "stable/elasticsearch"
    },
    {
      "name": "elasticsearch-curator",
      "generator": "helm",
      "source": "https://github.com/helm/charts",
      "method": "git",
      "path": "stable/elasticsearch-curator"
    },
    {
      "name": "fluentd-elasticsearch",
      "generator": "helm",
      "source": "https://github.com/helm/charts",
      "method": "git",
      "path": "stable/fluentd-elasticsearch"
    },
    {
      "name": "kibana",
      "generator": "helm",
      "source": "https://github.com/helm/charts",
      "method": "git",
      "path": "stable/kibana"
    }
  ]
}
```

First, we see that components can be defined in
JSON as well as YAML (as you prefer).

Secondly, we see that this component generates resource definitions. In
particular, it will emit a set of static manifests from the path `./manifests`
and generate the set of resource manifests specified by the inlined
[Helm template](https://helm.sh/) definitions as it iterates your deployment
definitions.

With generalized Helm charts like the ones used here, it's often necessary to
provide them with configuration values that vary by environment. This
component provides a reasonable set of defaults for its subcomponents in
`config/common.yaml`. Since this component provides these four logging
subsystems together as a "stack", or preconfigured whole, we can configure
its higher level parts based on this knowledge:

```yaml
config:
subcomponents:
  elasticsearch:
    namespace: elasticsearch
    injectNamespace: true
    config:
      client:
        resources:
          limits:
            memory: "2048Mi"
  elasticsearch-curator:
    namespace: elasticsearch
    injectNamespace: true
    config:
      cronjob:
        successfulJobsHistoryLimit: 0
      configMaps:
        config_yml: |-
          ---
          client:
            hosts:
              - elasticsearch-client.elasticsearch.svc.cluster.local
            port: 9200
            use_ssl: False
  fluentd-elasticsearch:
    namespace: fluentd
    injectNamespace: true
    config:
      elasticsearch:
        host: "elasticsearch-client.elasticsearch.svc.cluster.local"
  kibana:
    namespace: kibana
    injectNamespace: true
    config:
      files:
        kibana.yml:
          elasticsearch.url: "http://elasticsearch-client.elasticsearch.svc.cluster.local:9200"
```

This `common` configuration, which applies to all environments, can be mixed
with more specific configuration.
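The mixing of `common` configuration with more specific environment configuration can be pictured as a recursive merge in which the more specific value wins and nested maps are combined key by key. Here is a Python sketch of that idea; it is an illustration of the concept, not Fabrikate's actual (Go) implementation, and the sample values are trimmed from the configs above:

```python
def merge_config(base, override):
    """Recursively merge two config mappings. Values in `override`
    win over `base`; nested dicts are merged key by key."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)
        else:
            merged[key] = value
    return merged

# Trimmed-down stand-ins for config/common.yaml and config/azure.yaml:
common = {
    "elasticsearch": {
        "namespace": "elasticsearch",
        "config": {"client": {"resources": {"limits": {"memory": "2048Mi"}}}},
    }
}
azure = {
    "elasticsearch": {
        "config": {"data": {"persistence": {"storageClass": "managed-premium"}}}
    }
}

merged = merge_config(common, azure)
# The azure-specific storageClass is layered in without disturbing
# the common defaults for namespace and client resources.
print(merged["elasticsearch"]["config"]["data"]["persistence"]["storageClass"])
print(merged["elasticsearch"]["namespace"])
```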
For example, let's say that we were deploying this in Azure and wanted to
utilize its `managed-premium` SSD storage class for Elasticsearch, but only in
`azure` deployments. We can build an `azure` configuration that does exactly
that, and Fabrikate has a convenience command called `set` that makes it easy:

```sh
$ fab set --environment azure --subcomponent cloud-native.elasticsearch data.persistence.storageClass="managed-premium" master.persistence.storageClass="managed-premium"
```

This creates a file called `config/azure.yaml` that looks like this:

```yaml
subcomponents:
  cloud-native:
    subcomponents:
      elasticsearch:
        config:
          data:
            persistence:
              storageClass: managed-premium
          master:
            persistence:
              storageClass: managed-premium
```

Naturally, an observability stack is just the base infrastructure we need; our
real goal is to deploy a set of microservices. Furthermore, let's assume that
we want to be able to split the incoming traffic for these services between
`canary` and `stable` tiers with [Istio](https://istio.io) so that we can more
safely launch new versions of the service.

There is a Fabrikate component for that as well, called
[fabrikate-istio-service](https://github.com/evanlouie/fabrikate-definitions/tree/master/definitions/fabrikate-istio),
that we'll leverage to add this service, so let's do just that:

```sh
$ fab add simple-service --source https://github.com/evanlouie/fabrikate-definitions --path definitions/fabrikate-istio
```

This component creates the traffic split services using the config applied to
it.
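As an aside, the heart of a `set` invocation like the one above is expanding each dotted path (e.g. `data.persistence.storageClass`) into nested YAML keys before writing the file. A small Python sketch of that expansion, offered as an illustration rather than Fabrikate's actual implementation:

```python
def set_path(config, dotted_path, value):
    """Expand a dotted path like 'data.persistence.storageClass' into
    nested mappings rooted at `config`, and set the leaf value."""
    keys = dotted_path.split(".")
    node = config
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value
    return config

# Build the elasticsearch config fragment that `fab set` would write:
elasticsearch = {}
set_path(elasticsearch, "data.persistence.storageClass", "managed-premium")
set_path(elasticsearch, "master.persistence.storageClass", "managed-premium")
print(elasticsearch)
# {'data': {'persistence': {'storageClass': 'managed-premium'}},
#  'master': {'persistence': {'storageClass': 'managed-premium'}}}
```

Serialized to YAML under the `cloud-native.elasticsearch` subcomponent, this matches the `config/azure.yaml` shown above.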
Let's create a `prod` config that does this for a `prod` cluster by creating
`config/prod.yaml` and placing the following in it:

```yaml
subcomponents:
  simple-service:
    namespace: services
    config:
      gateway: my-ingress.istio-system.svc.cluster.local
      service:
        dns: simple.mycompany.io
        name: simple-service
        port: 80
      configMap:
        PORT: 80
      tiers:
        canary:
          image: "timfpark/simple-service:441"
          replicas: 1
          weight: 10
          port: 80
          resources:
            requests:
              cpu: "250m"
              memory: "256Mi"
            limits:
              cpu: "1000m"
              memory: "512Mi"
        stable:
          image: "timfpark/simple-service:440"
          replicas: 3
          weight: 90
          port: 80
          resources:
            requests:
              cpu: "250m"
              memory: "256Mi"
            limits:
              cpu: "1000m"
              memory: "512Mi"
```

This defines a service that is exposed on the cluster via a particular
gateway, DNS name, and port. It also defines a traffic split between two
backend tiers: `canary` (10%) and `stable` (90%). Within these tiers, we also
define the number of replicas, the resources they are allowed to use, and the
container that is deployed in them. Finally, it defines a ConfigMap for the
service, which passes an environment variable called `PORT` along to our app.

From here we could add definitions for all of our microservices in a similar
manner, but in the interest of keeping this short, we'll just do one of the
services here.

With this, we have a functionally complete Fabrikate definition for our
deployment. Let's now see how we can use Fabrikate to generate resource
manifests for it.
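Before we generate manifests, a quick aside on how the `weight` values above play out: a weighted router effectively picks a tier by where a uniform draw over the total weight falls among the cumulative tier weights. A Python sketch of the idea (Istio's actual routing happens in its Envoy proxies, so this is purely conceptual):

```python
def pick_tier(tiers, draw):
    """Pick a tier for one request. `tiers` maps tier name -> weight;
    `draw` is a uniform sample in [0, total_weight)."""
    threshold = 0
    for name, weight in tiers.items():
        threshold += weight
        if draw < threshold:
            return name
    raise ValueError("draw outside total weight")

# The weights from config/prod.yaml above: 10% canary, 90% stable.
tiers = {"canary": 10, "stable": 90}
print(pick_tier(tiers, 5))   # canary (falls within the first 10 units)
print(pick_tier(tiers, 50))  # stable
```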
First, let's install the remote components and Helm charts:

```sh
$ fab install
```

This installs all of the required components and charts locally, and we can
now generate the manifests for our deployment with:

```sh
$ fab generate prod azure
```

This will iterate through our deployment definition, collect configuration
values from `azure`, `prod`, and `common` (in that priority order), and
generate manifests as it descends breadth first. You can see the generated
manifests in `./generated/prod-azure`, which has the same logical directory
structure as your deployment definition.

Fabrikate is meant to be used as part of a CI/CD pipeline that commits the
generated manifests to a repo so that they can be applied from a pod within
the cluster by a tool like [Flux](https://github.com/weaveworks/flux), but if
you have a Kubernetes cluster up and running you can also apply them directly
with:

```sh
$ cd generated/prod-azure
$ kubectl apply --recursive -f .
```

This will cause a very large number of containers to spin up (which will take
time to complete as Kubernetes provisions persistent storage and pulls the
container images), but after three or four minutes you should see the full
observability stack and microservices running in your cluster.
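The breadth-first descent over the component tree can be sketched with a simple queue. The component names below come from this walkthrough; the code illustrates the traversal order only and is not Fabrikate's implementation:

```python
from collections import deque

def walk_breadth_first(component):
    """Yield component names level by level, mirroring how generation
    descends the definition tree breadth first."""
    queue = deque([component])
    while queue:
        node = queue.popleft()
        yield node["name"]
        queue.extend(node.get("subcomponents", []))

# The definition tree built in this walkthrough:
mycluster = {
    "name": "mycluster",
    "subcomponents": [
        {"name": "cloud-native", "subcomponents": [
            {"name": "elasticsearch-fluentd-kibana"},
            {"name": "prometheus-grafana"},
            {"name": "istio"},
            {"name": "kured"},
        ]},
        {"name": "simple-service"},
    ],
}
print(list(walk_breadth_first(mycluster)))
# ['mycluster', 'cloud-native', 'simple-service',
#  'elasticsearch-fluentd-kibana', 'prometheus-grafana', 'istio', 'kured']
```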
## Documentation

We have complete details about how to use and contribute to Fabrikate in these
documentation items:

- [Component Definitions](./docs/component.md)
- [Config Definitions](./docs/config.md)
- [Command Reference](./docs/commands.md)
- [Authentication / Personal Access Tokens (PAT) / `access.yaml`](./docs/auth.md)
- [Contributing](./docs/contributing.md)

## Community

[Please join us on Slack](https://join.slack.com/t/bedrockco/shared_invite/enQtNjIwNzg3NTU0MDgzLWRiYzQxM2ZmZjQ2NGE2YjA2YTJmMjg3ZmJmOTQwOWY0MTU3NDVkNDJkZDUyMDExZjIxNTg5NWY3MTI3MzFiN2U)
for discussion and/or questions.

## Bedrock

We maintain a sister project called
[Bedrock](https://github.com/microsoft/bedrock). Bedrock makes
operationalizing Kubernetes clusters with a
[GitOps](https://www.weave.works/blog/gitops-operations-by-pull-request)
deployment workflow easier: it automates a GitOps deployment model leveraging
[Flux](https://github.com/weaveworks/flux) and provides automation for
building a CI/CD pipeline that automatically builds resource manifests from
Fabrikate definitions.

<!-- refs -->

[azure-devops-build-status]:
  https://tpark.visualstudio.com/fabrikate/_apis/build/status/microsoft.fabrikate?branchName=master
[azure-devops-build-link]:
  https://tpark.visualstudio.com/fabrikate/_build/latest?definitionId=35&branchName=master
[go-report-card]: https://goreportcard.com/report/github.com/evanlouie/fabrikate
[go-report-card-badge]:
  https://goreportcard.com/badge/github.com/evanlouie/fabrikate