# Development

**Table of contents:**

1. [Getting started](#getting-started)
1. [Build the project](#build-the-project)
1. [Generating step framework](#generating-step-framework)
1. [Best practices for writing piper-go steps](#best-practices-for-writing-piper-go-steps)
1. [Testing](#testing)
1. [Debugging](#debugging)
1. [Release](#release)
1. [Pipeline Configuration](#pipeline-configuration)
1. [Security Setup](#security-setup)

## Getting started

1. [Ramp up your development environment](#ramp-up)
1. [Get familiar with the Go language](#go-basics)
1. Create [a GitHub account](https://github.com/join)
1. Set up [GitHub access via SSH](https://help.github.com/articles/connecting-to-github-with-ssh/)
1. [Create and check out a repo fork](#checkout-your-fork)
1. Optional: [Get a Jenkins related environment](#jenkins-environment)
1. Optional: [Get familiar with Jenkins Pipelines as Code](#jenkins-pipelines)

### Ramp up

First you need to set up an appropriate development environment:

1. Install Go, see [Go Getting Started](https://golang.org/doc/install)
1. Install an IDE with Go plugins, see for example [Go in Visual Studio Code](https://code.visualstudio.com/docs/languages/go)

### Go basics

In order to get yourself started, there is a lot of useful information out there.

As a first step we highly recommend the [Go documentation](https://golang.org/doc/), especially [A Tour of Go](https://tour.golang.org/welcome/1).

We have a strong focus on high-quality software, and contributions without adequate tests will not be accepted.
There is an excellent resource which teaches Go using a test-driven approach: [Learn Go with Tests](https://github.com/quii/learn-go-with-tests).

### Checkout your fork

The project uses [Go modules](https://blog.golang.org/using-go-modules).
Thus please make sure to **NOT** check out the project into your [`GOPATH`](https://github.com/golang/go/wiki/SettingGOPATH).

To check out this repository:

1. Create your own
   [fork of this repo](https://help.github.com/articles/fork-a-repo/)
1. Clone it to your machine, for example like:

    ```shell
    mkdir -p ${HOME}/projects/jenkins-library
    cd ${HOME}/projects
    git clone git@github.com:${YOUR_GITHUB_USERNAME}/jenkins-library.git
    cd jenkins-library
    git remote add upstream git@github.com:sap/jenkins-library.git
    git remote set-url --push upstream no_push
    ```

### Jenkins environment

If you want to contribute also to the Jenkins-specific parts like

* Jenkins library steps
* Jenkins pipeline integration

you need to do the following in addition:

* [Install Groovy](https://groovy-lang.org/install.html)
* [Install Maven](https://maven.apache.org/install.html)
* Get a local Jenkins installed: Use for example [cx-server](https://github.com/SAP/devops-docker-cx-server)

### Jenkins pipelines

The Jenkins related parts depend on

* [Jenkins Pipelines as Code](https://jenkins.io/doc/book/pipeline-as-code/)
* [Jenkins Shared Libraries](https://jenkins.io/doc/book/pipeline/shared-libraries/)

You should get familiar with these concepts for contributing to the Jenkins-specific parts.

## Build the project

### Build the executable suitable for the CI/CD Linux target environments

Use Docker:

`docker build -t piper:latest .`

You can extract the binary using Docker means to your local filesystem:

```sh
docker create --name piper piper:latest
docker cp piper:/build/piper .
docker rm piper
```

## Generating step framework

The steps are generated based on the yaml files in `resources/metadata/` with the following command from the root of the project:

```bash
go generate
```

The yaml format is kept pretty close to Tekton's [task format](https://github.com/tektoncd/pipeline/blob/master/docs/tasks.md).
Where the Tekton format was not sufficient, some extensions have been made.

Examples are:

* metadata - longDescription
* spec - inputs - secrets
* spec - containers
* spec - sidecars

There are certain extensions:

* **aliases** allow alternative parameter names, also supporting deeper configuration structures. [Example](https://github.com/SAP/jenkins-library/blob/master/resources/metadata/kubernetesDeploy.yaml)
* **resources** allow to read, for example, from a shared `commonPipelineEnvironment` which contains information that has been provided by a previous step in the pipeline via an output. [Example](https://github.com/SAP/jenkins-library/blob/master/resources/metadata/githubPublishRelease.yaml)
* **secrets** allow to specify references to Jenkins credentials which can be used in the `groovy` library. [Example](https://github.com/SAP/jenkins-library/blob/master/resources/metadata/kubernetesDeploy.yaml)
* **outputs** allow to write to dedicated outputs like

  * Influx metrics. [Example](https://github.com/SAP/jenkins-library/blob/master/resources/metadata/checkmarxExecuteScan.yaml)
  * Sharing data via `commonPipelineEnvironment` which can be used by another step as input

* **conditions** allow, for example, to specify in which case a certain container is used (depending on a configuration parameter). [Example](https://github.com/SAP/jenkins-library/blob/master/resources/metadata/kubernetesDeploy.yaml)

## Best practices for writing piper-go steps

1. [Logging](#logging)
1.
   [Error handling](#error-handling)
1. [HTTP calls](#http-calls)

Implementing a new step starts by adding a new yaml file in `resources/metadata/` and running
the [step generator](#generating-step-framework). This creates most of the boilerplate code for the
step's implementation in `cmd/`. There are four files per step, based on the name given within the yaml:

1. `cmd/<step>.go` - contains the skeleton of your step implementation.
1. `cmd/<step>_test.go` - write your unit tests here.
1. `cmd/<step>_generated.go` - contains the generated boilerplate code, and a dedicated type definition for your step's options.
1. `cmd/<step>_generated_test.go` - contains a simple unit test for the generated part.

You never edit the generated parts. If you need to make changes, you make them in the yaml and re-run the step
generator (which will of course not overwrite your implementation).

The file `cmd/<step>.go` initially contains two functions:

```golang
func step(options stepOptions, telemetryData *telemetry.CustomData) {
    err := runStep(&options, telemetryData)
    if err != nil {
        log.Entry().WithError(err).Fatal("step execution failed")
    }
}

func runStep(options *stepOptions, telemetryData *telemetry.CustomData) error {
}
```

The separation into these two functions facilitates unit tests and mocking. From your tests, you can call
`runStep()` with mock instances of the needed objects, while inside `step()` you create runtime instances of these
objects.

### Logging

Logging is done via the [sirupsen/logrus](https://github.com/sirupsen/logrus) framework.
It can conveniently be accessed through:

```golang
import (
    "github.com/SAP/jenkins-library/pkg/log"
)

func myStep ...
    ...
    log.Entry().Info("This is my info.")
    ...
}
```

If a fatal error occurs, your code should act similar to:

```golang
...
if err != nil {
    log.Entry().
        WithError(err).
        Fatal("failed to execute step ...")
}
```

Calling `Fatal` results in an `os.Exit(1)`, and before exiting some cleanup actions (e.g. writing output data,
writing telemetry data if not deactivated by the user, ...) are performed.

### Error handling

In order to better understand the root cause of errors that occur, we wrap errors like

```golang
f, err := os.Open(path)
if err != nil {
    return errors.Wrapf(err, "open failed for %v", path)
}
defer f.Close()
```

We use [github.com/pkg/errors](https://github.com/pkg/errors) for that.

It has proven a good practice to bubble up errors until the runtime entry function and only
there exit via the logging framework (see also [Logging](#logging)).

### Error categories

For errors, we have a convenience function to set a pre-defined category once an error occurs:

```golang
log.SetErrorCategory(log.ErrorCompliance)
```

Error categories are defined in [`pkg/log/ErrorCategory`](pkg/log/errors.go).

When writing a fatal error

```golang
log.Entry().WithError(err).Fatal("the error message")
```

the category will be written into the file `errorDetails.json` and can be used from there in the further pipeline flow.
Writing the file is handled by [`pkg/log/FatalHook`](pkg/log/fatalHook.go).

### HTTP calls

All HTTP(S) interactions with other systems should leverage [`pkg/http`](pkg/http) to enable centrally provided capabilities
like automatic retries in case of intermittent HTTP errors, or individual and optimized timeout and logging capabilities.
The HTTP package provides a thin wrapper around the standard Go `net/http` package, adding just the right bit of sugar on top to
have more control over common behaviors.

### Automatic retries

Automatic retries have been implemented based on [HashiCorp's retryable HTTP client for Go](https://github.com/hashicorp/go-retryablehttp),
with some extensions and customizations to the HTTP status codes being retried, as well as to improve some service-specific error situations.
By default, the client retries 15 times until it gives up and regards a specific communication event as not recoverable. If you know that
your service is much more stable and could live without retry handling, or with a specifically lower amount of retries, you can easily customize the behavior via the
`ClientOptions` as shown in the sample below:

```golang
clientOptions := piperhttp.ClientOptions{}
clientOptions.MaxRetries = -1
httpClient.SetOptions(clientOptions)
```

## Testing

1. [Mocking](#mocking)
1. [Mockable Interface](#mockable-interface)
1. [Global function pointers](#global-function-pointers)
1. [Test Parallelization](#test-parallelization)

Unit tests are done using basic `golang` means.

Additionally, we encourage you to use [github.com/stretchr/testify/assert](https://github.com/stretchr/testify/assert)
in order to have slimmer assertions if you like. A good pattern to follow is this:

```golang
func TestNameOfFunctionUnderTest(t *testing.T) {
    t.Run("A description of the test case", func(t *testing.T) {
        // init
        // test
        // assert
    })
    t.Run("Another test case", func(t *testing.T) {
        // init
        // test
        // assert
    })
}
```

This will also structure the test output for better readability.
### Mocking

Tests should be written only for the code of your step implementation, while any
external functionality should be mocked, in order to test all code paths including
the error cases.

There are (at least) two approaches for this:

#### Mockable Interface

In this approach you declare an interface that contains every external function
used within your step that you need to be able to mock. In addition, you declare a struct
which holds the data you need during runtime, and implement the interface with the "real"
functions. Here is an example to illustrate:

```golang
import (
    "github.com/SAP/jenkins-library/pkg/piperutils"
)

type myStepUtils interface {
    fileExists(path string) (bool, error)
    fileRead(path string) ([]byte, error)
}

type myUtilsData struct {
    fileUtils piperutils.Files
}

func (u *myUtilsData) fileExists(path string) (bool, error) {
    return u.fileUtils.FileExists(path)
}

func (u *myUtilsData) fileRead(path string) ([]byte, error) {
    return u.fileUtils.FileRead(path)
}
```

Then you create the runtime version of the utils data in your top-level entry function and
pass it to your `run*()` function:

```golang
func step(options stepOptions, _ *telemetry.CustomData) {
    utils := myUtilsData{
        fileUtils: piperutils.Files{},
    }
    err := runStep(&options, &utils)
    ...
}

func runStep(options *stepOptions, utils myStepUtils) error {
    ...
    exists, err := utils.fileExists(path)
    ...
}
```

In your tests, you would provide a mocking implementation of this interface and pass
instances of that to the functions under test.
To better illustrate this, here is an example
for the interface above implemented in the `<step>_test.go` file:

```golang
type mockUtilsBundle struct {
    files map[string][]byte
}

func newMockUtilsBundle() mockUtilsBundle {
    utils := mockUtilsBundle{}
    utils.files = map[string][]byte{}
    return utils
}

func (m *mockUtilsBundle) fileExists(path string) (bool, error) {
    content := m.files[path]
    return content != nil, nil
}

func (m *mockUtilsBundle) fileRead(path string) ([]byte, error) {
    content := m.files[path]
    if content == nil {
        return nil, fmt.Errorf("could not read '%s': %w", path, os.ErrNotExist)
    }
    return content, nil
}

// This is how it would be used in tests:

func TestSomeFunction(t *testing.T) {
    t.Run("Happy path", func(t *testing.T) {
        // init
        utils := newMockUtilsBundle()
        utils.files["some/path/file.xml"] = []byte(`content of the file`)
        // test
        err := someFunction(&utils)
        // assert
        assert.NoError(t, err)
    })
    t.Run("Error path", func(t *testing.T) {
        // init
        utils := newMockUtilsBundle()
        // test
        err := someFunction(&utils)
        // assert
        assert.EqualError(t, err, "could not read 'some/path/file.xml': file does not exist")
    })
}
```

#### Global Function Pointers

An alternative approach are global function pointers:

```golang
import (
    FileUtils "github.com/SAP/jenkins-library/pkg/piperutils"
)

var fileUtilsExists = FileUtils.FileExists

func someFunction(options *stepOptions) error {
    ...
    exists, err := fileUtilsExists(path)
    ...
}
```

In your tests, you can then simply set the function pointer to a mocking implementation:

```golang
func TestSomeFunction(t *testing.T) {
    t.Run("Happy path", func(t *testing.T) {
        // init
        originalFileExists := fileUtilsExists
        fileUtilsExists = func(filename string) (bool, error) {
            return true, nil
        }
        defer func() { fileUtilsExists = originalFileExists }()
        // test
        err := someFunction(...)
        // assert
        assert.NoError(t, err)
    })
    t.Run("Error path", func(t *testing.T) {
        // init
        originalFileExists := fileUtilsExists
        fileUtilsExists = func(filename string) (bool, error) {
            return false, errors.New("something happened")
        }
        defer func() { fileUtilsExists = originalFileExists }()
        // test
        err := someFunction(...)
        // assert
        assert.EqualError(t, err, "something happened")
    })
}
```

Both approaches have their own benefits. Global function pointers require less preparation
in the actual implementation and give great flexibility in the tests, while mocking interfaces
tend to result in more code re-use and slim down the tests. The mocking implementation of a
utils interface can facilitate implementations of related functions based on shared data.

### Test Parallelization

Tests that can be executed in parallel should be marked as such.
With the command `t.Parallel()` the test framework can be notified that this test can run in parallel, and it can start running the next test.
([Example on Stack Overflow](https://stackoverflow.com/questions/44325232/are-tests-executed-in-parallel-in-go-or-one-by-one))
This command shall therefore be called at the beginning of a test method **and also** in each `t.Run()` sub-test.
See also the [documentation](https://golang.org/pkg/testing/#T.Parallel) for `t.Parallel()` and `t.Run()`.
```go
func TestMethod(t *testing.T) {
    t.Parallel() // indicates that this method can run parallel to other methods

    t.Run("sub test 1", func(t *testing.T) {
        t.Parallel() // indicates that this sub test can run parallel to other sub tests
        // execute test
    })

    t.Run("sub test 2", func(t *testing.T) {
        t.Parallel() // indicates that this sub test can run parallel to other sub tests
        // execute test
    })
}
```

Go will first execute the non-parallelized tests in sequence and afterwards execute all the parallel tests in parallel, limited by the default number of parallel executions.

It is important that tests executed in parallel use the variable values actually meant to be visible to them.
Especially in table tests, it can easily happen that a variable injected into the `t.Run()` closure via the outer scope is changed before or while the closure executes.
To prevent this, it is possible to create shadowing instances of variables in the body of the test loop.
(See [this blog post about it](https://eleni.blog/2019/05/11/parallel-test-execution-in-go/).)
At the minimum, you need to capture the test case value from the loop iteration variable by shadowing this variable in the loop body.
Inside the `t.Run()` closure, this shadow copy is visible and cannot be overwritten by later loop iterations.
If you do not make this shadowing copy, what is visible in the closure is the variable which gets re-assigned a new value in each loop iteration.
The value of this variable is then not fixed for the test run.
```go
func TestMethod(t *testing.T) {
    t.Parallel() // indicates that this method can run parallel to other methods

    testCases := []struct {
        Name string
    }{
        {
            Name: "Name1",
        },
        {
            Name: "Name2",
        },
    }

    for _, testCase := range testCases { // the testCase defined here is re-assigned in each iteration
        testCase := testCase // define a new variable within the loop to detach from the outer testCase variable, which is overwritten by the next loop iteration
        // The same variable name "testCase" is used for convenience.
        t.Run(testCase.Name, func(t *testing.T) {
            t.Parallel() // indicates that this sub test can run parallel to other sub tests
            // execute test
        })
    }
}
```

### Test pipeline for your fork (Jenkins)

Piper executes the steps of each stage within a container. If you want to test your developments, you have to ensure they are part of the image which is used in your test pipeline.

#### Testing Pipeline or Stage Definition changes (Jenkins)

As the pipeline and stage definitions (e.g. the \*Pipeline\*Stage\*.groovy files in the `vars` folder) are directly executed, you can easily test them just by referencing your repo/branch/tag in the Jenkinsfile.

```groovy
@Library('my-piper-lib-os-fork@MyTest') _

abapEnvironmentPipeline script: this
```

#### Testing changes on Step Level (Jenkins)

To trigger the creation of a "custom" container with your changes, you can reuse a feature in piper which is originally meant for executing the integration tests. If the environment variables `REPOSITORY_UNDER_TEST` (pointing to your forked repo) and `LIBRARY_VERSION_UNDER_TEST` (pointing to a tag in your forked repo) are set, a corresponding container gets created on the fly upon first usage in the pipeline. The drawback is that this takes extra time (1-2 minutes) you have to spend for every execution of the pipeline.
```groovy
@Library('piper-lib-os') _

env.REPOSITORY_UNDER_TEST = 'myfork' // e.g. 'myUser/jenkins-library'
env.LIBRARY_VERSION_UNDER_TEST = 'MyTag'

abapEnvironmentPipeline script: this
```

#### Using Parameterized Pipelines (Jenkins)

For test purposes it can be useful to utilize a parameterized pipeline, e.g. to toggle creation of the custom container:

```groovy
@Library('my-piper-lib-os-fork@MyTest') _

properties([
    parameters([
        booleanParam(name: 'toggleSomething', defaultValue: false, description: 'dito'),
        booleanParam(name: 'testPiperFork', defaultValue: false, description: 'dito'),
        string(name: 'repoUnderTest', defaultValue: '<MyUser>/jenkins-library', description: 'dito'),
        string(name: 'tag', defaultValue: 'MyTest', description: 'dito')
    ])
])

if (params.testPiperFork == true) {
    env.REPOSITORY_UNDER_TEST = params.repoUnderTest
    env.LIBRARY_VERSION_UNDER_TEST = params.tag
}

abapEnvironmentPipeline script: this
```

or skipping steps/stages with the help of extensions:

```groovy
void call(Map piperParams) {
    echo "Start - Extension for stage: ${piperParams.stageName}"

    if (params.toggleSomething == true) {
        // do something
        echo "now execute original stage as defined in the template"
        piperParams.originalStage()
    } else {
        // do something else
        // e.g. only this single step of the stage
        somePiperStep( script: piperParams.script, someConfigParameter: '<...>' )
    }

    echo "End - Extension for stage: ${piperParams.stageName}"
}
return this
```

## Debugging

Debugging can be initiated with VS Code fairly easily. Compile the binary with specific compiler flags to turn off optimizations: `go build -gcflags "all=-N -l" -o piper.exe`.
Modify the `launch.json` located in the folder `.vscode` of your project root so that `program` points exactly to the binary that you just built with the above command - this must be an absolute path. Add any arguments required for the execution of the Piper step to `args`. What is separated with a blank on the command line must go into a separate string.

```javascript
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Launch",
            "type": "go",
            "request": "launch",
            "mode": "exec",
            "program": "C:/CF@HCP/git/jenkins-library-public/piper.exe",
            "env": {},
            "args": ["checkmarxExecuteScan", "--password", "abcd", "--username", "1234", "--projectName", "testProject4711", "--serverUrl", "https://cx.server.com/"]
        }
    ]
}
```

Finally, set your breakpoints and use the `Launch` button in the VS Code UI to start debugging.

## Release

Releases are performed using [Project "Piper" Action](https://github.com/SAP/project-piper-action).
We release on schedule (once a week) and on demand.
To perform a release, the respective action must be invoked, for which a convenience script is available in `contrib/perform-release.sh`.
It requires a personal access token for GitHub with `repo` scope.
Example usage: `PIPER_RELEASE_TOKEN=THIS_IS_MY_TOKEN contrib/perform-release.sh`.

## Pipeline Configuration

The pipeline configuration is organized in a hierarchical manner and configuration parameters are incorporated from multiple sources.
In general, there are four sources for configurations:

1. Directly passed step parameters
1. Project-specific configuration placed in `.pipeline/config.yml`
1.
   Custom default configuration provided in the `customDefaults` parameter of the project config, or passed as a parameter to the step `setupCommonPipelineEnvironment`
1. Default configuration from the Piper library

For more information and examples on how to configure a project, please refer to the [configuration documentation](https://sap.github.io/jenkins-library/configuration/).

### Groovy vs. Go step configuration

The configuration of a project is, as of now, resolved separately for Groovy and Go steps.
There are, however, dependencies between the steps responsible for resolving the configuration.
The following provides an overview of the central components and their dependencies.

#### setupCommonPipelineEnvironment (Groovy)

The step `setupCommonPipelineEnvironment` initializes the `commonPipelineEnvironment` and the `DefaultValueCache`.
Custom default configurations can be provided as parameters to `setupCommonPipelineEnvironment` or via the `customDefaults` parameter in the project configuration.

#### DefaultValueCache (Groovy)

The `DefaultValueCache` caches the resolved (custom) default pipeline configuration and the list of configurations that contributed to the result.
On initialization, it merges the provided custom default configurations with the default configuration from the Piper library, as per the hierarchical order.

Note that the list of configurations cached by `DefaultValueCache` is used to pass the paths to the (custom) default configurations to each Go step.
It only contains the paths of configurations which are **not** provided via the `customDefaults` parameter of the project configuration, since the Go layer already resolves configurations provided via the `customDefaults` parameter independently.
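The hierarchical precedence described in this section can be illustrated with a small sketch. This is a simplification for illustration only, not the actual piper merge logic, and the configuration keys used below are made up:

```golang
package main

import "fmt"

// mergeConfig merges configuration layers; later layers have higher precedence
// and override values from earlier ones, matching the source order above
// (library defaults lowest, directly passed step parameters highest).
func mergeConfig(layers ...map[string]string) map[string]string {
    effective := map[string]string{}
    for _, layer := range layers {
        for key, value := range layer {
            effective[key] = value
        }
    }
    return effective
}

func main() {
    libraryDefaults := map[string]string{"dockerImage": "maven:3", "verbose": "false"}
    customDefaults := map[string]string{"dockerImage": "my-company/maven:3"}
    projectConfig := map[string]string{"verbose": "true"}
    stepParameters := map[string]string{"dockerImage": "my-team/maven:3-jdk11"}

    // lowest precedence first, directly passed step parameters last
    effective := mergeConfig(libraryDefaults, customDefaults, projectConfig, stepParameters)
    fmt.Println(effective["dockerImage"], effective["verbose"])
    // prints: my-team/maven:3-jdk11 true
}
```

The step parameter wins for `dockerImage`, while `verbose` falls through to the project configuration because no higher-precedence source sets it.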
## Additional Developer Hints

You can find additional hints at [documentation/developer-hints](./documentation/developer_hints).

## Security Setup

Here some hints and tricks are described to enhance the security within the development process.

1. [Signing Commits](#signing-commits)

### Signing Commits

In git, commits can be [signed](https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work) to guarantee that the changes were made by the person named in the commit.
The name and email used for commits can easily be modified in the local git setup, and afterwards it cannot be distinguished anymore whether the commit was done by the real person or by some potential attacker.

On Windows, this can be done via [GnuPG](https://www.gnupg.org/(en)/download/index.html).
Download and install the tool.
Via the manager tool *Kleopatra*, a new key pair can easily be created with a little wizard.
Make sure that the name and email are the ones used in your git setup.

The public key must then be added to the GPG keys section of your GitHub settings.
The private key should be kept in a backup, as this signature is bound to you and not your machine.

The only thing left are some changes in the *.gitconfig* file.
The file should be located in your user directory.
It might look something like the following.
All parts that are not relevant for signing were removed.

```
[user]
    name = My Name
    email = my.name@sap.com
    # Hash or email of your GPG key
    signingkey = D3CF72CC4006DE245C049566242831AEEE9DA2DD
[commit]
    # enable signing for commits
    gpgsign = true
[tag]
    # enable signing for tags (note the capital S)
    gpgSign = true
[gpg]
    # Windows was not able to find the private key. Setting the gpg command to use solved this.
    program = C:\\Program Files (x86)\\GnuPG\\bin\\gpg.exe
```

Add these three to four lines to your git config and this will do the necessary such that all your commits will be signed.
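Alternatively, the same settings can be applied from the command line instead of editing the file by hand. The key ID below is the sample value from the snippet above; replace it with the hash or email of your own GPG key:

```shell
# point git at your own key (hash or email of your GPG key)
git config --global user.signingkey D3CF72CC4006DE245C049566242831AEEE9DA2DD

# enable signing for commits and tags
git config --global commit.gpgsign true
git config --global tag.gpgSign true
```

Afterwards, `git log --show-signature` can be used to verify that new commits are signed.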