# Hacking on snapd

Hacking on snapd is fun and straightforward. The code is extensively
unit tested and we use the [spread](https://github.com/snapcore/spread)
integration test framework for the integration/system level tests.

## Development

### Supported Go versions

From snapd 2.38, snapd supports Go 1.9 and onwards. For earlier snapd
releases, snapd supports Go 1.6.

### Setting up a GOPATH

When working with the source of Go programs, you should define a path within
your home directory (or other workspace) which will be your `GOPATH`. `GOPATH`
is similar to Java's `CLASSPATH` or Python's `~/.local`. `GOPATH` is documented
[online](http://golang.org/pkg/go/build/) and inside the go tool itself:

    go help gopath

Various conventions exist for naming the location of your `GOPATH`, but it
should exist, and be writable by you. For example:

    export GOPATH=${HOME}/work
    mkdir $GOPATH

will define and create `$HOME/work` as your local `GOPATH`. The `go` tool
itself will create three subdirectories inside your `GOPATH` when required:
`src`, `pkg` and `bin`, which hold the source of Go programs, compiled packages
and compiled binaries, respectively.

Setting `GOPATH` correctly is critical when developing Go programs. Set and
export it as part of your login script.

Add `$GOPATH/bin` to your `PATH`, so you can run the go programs you install:

    PATH="$PATH:$GOPATH/bin"

(note `$GOPATH` can actually point to multiple locations, like `$PATH`, so if
your `$GOPATH` is more complex than a single entry you'll need to adjust the
above).

### Getting the snapd sources

The easiest way to get the source for `snapd` is to use the `go get` command.

    go get -d -v github.com/snapcore/snapd/...

This command will check out the source of `snapd` and inspect it for any unmet
Go package dependencies, downloading those as well. The `-d` flag restricts
`go get` to downloading; omit it to also build and install `snapd` and its
dependencies into `$GOPATH/bin`. More details on the `go get` flags are
available using

    go help get

At this point you will have the git local repository of the `snapd` source at
`$GOPATH/src/github.com/snapcore/snapd`. The source for any
dependent packages will also be available inside `$GOPATH`.

### Dependency handling

Go dependencies are handled via `govendor`. Get it via:

    go get -u github.com/kardianos/govendor

After a fresh checkout, move to the snapd source directory:

    cd $GOPATH/src/github.com/snapcore/snapd

And then, run:

    govendor sync

You can use the script `get-deps.sh` to run the two previous steps.

If a dependency needs updating:

    govendor fetch github.com/path/of/dependency

Other dependencies are handled via distribution packages and you should ensure
that dependencies for your distribution are installed. For example, on Ubuntu,
run:

    sudo apt-get build-dep ./

### Building

To build, once the sources are available and `GOPATH` is set, you can just run

    go build -o /tmp/snap github.com/snapcore/snapd/cmd/snap

to get the `snap` binary in `/tmp` (or without `-o` to get it in the current
working directory). Alternatively:

    go install github.com/snapcore/snapd/cmd/snap/...

to have it available in `$GOPATH/bin`.
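Putting the steps above together, here is a minimal sketch of the full
fresh-checkout-to-binary flow (it assumes a clean shell and an Ubuntu host for
the `build-dep` step):

    # set up the workspace
    export GOPATH=${HOME}/work
    mkdir -p $GOPATH
    export PATH="$PATH:$GOPATH/bin"

    # fetch the sources and the vendored Go dependencies
    go get -d -v github.com/snapcore/snapd/...
    cd $GOPATH/src/github.com/snapcore/snapd
    ./get-deps.sh

    # build the snap command line tool and check it runs
    go build -o /tmp/snap github.com/snapcore/snapd/cmd/snap
    /tmp/snap version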
Similarly, to build the `snapd` REST API daemon, you can run

    go build -o /tmp/snapd github.com/snapcore/snapd/cmd/snapd

### Contributing

Contributions are always welcome! Please make sure that you sign the
Canonical contributor license agreement at
http://www.ubuntu.com/legal/contributors

Snapd can be found on GitHub, so in order to fork the source and contribute,
go to https://github.com/snapcore/snapd. Check out [GitHub's help
pages](https://help.github.com/) to find out how to set up your local branch,
commit changes and create pull requests.

We value good tests, so when you fix a bug or add a new feature we highly
encourage you to create a test in `$source_test.go`. See also the section
about Testing.

### Testing

To run the various tests that we have to ensure a high quality source just run:

    ./run-checks

This will check that the source format is consistent, that it builds, that all
tests work as expected and that `go vet` has nothing to complain about.

The source format follows the `gofmt -s` formatting. Please run this on your
source files if `run-checks` complains about the format.

You can run an individual test for a sub-package by changing into that
directory and running:

    go test -check.f $testname

If a test hangs, you can enable verbose mode:

    go test -v -check.vv

(or `-check.v` for less verbose output).

There is more to read about the testing framework on the
[website](https://labix.org/gocheck).

### Running spread tests

To run the spread tests locally via QEMU, you need the latest version of
[spread](https://github.com/snapcore/spread). You can get spread, QEMU, and the
build tools to build QEMU images with:

    $ sudo apt update && sudo apt install -y qemu-kvm autopkgtest
    $ curl https://niemeyer.s3.amazonaws.com/spread-amd64.tar.gz | tar -xz -C $GOPATH/bin

#### Building spread VM images

To run the spread tests via QEMU you need to create VM images in the
`~/.spread/qemu` directory:

    $ mkdir -p ~/.spread/qemu
    $ cd ~/.spread/qemu

Assuming you are building on Ubuntu 18.04 LTS (Bionic Beaver) (or a later
development release like Ubuntu 19.04 (Disco Dingo)), run the following to
build a 64-bit Ubuntu 16.04 LTS (Xenial Xerus) VM to run the spread tests on:

    $ autopkgtest-buildvm-ubuntu-cloud -r xenial
    $ mv autopkgtest-xenial-amd64.img ubuntu-16.04-64.img

To build an Ubuntu 14.04 (Trusty Tahr) based VM, use:

    $ autopkgtest-buildvm-ubuntu-cloud -r trusty --post-command='sudo apt-get install -y --install-recommends linux-generic-lts-xenial && update-grub'
    $ mv autopkgtest-trusty-amd64.img ubuntu-14.04-64.img

This is needed because snapd requires a 4.4 or later kernel to run on Ubuntu
14.04 LTS, which is available through the `linux-generic-lts-xenial` package.

If you are running Ubuntu 16.04 LTS, use
`adt-buildvm-ubuntu-cloud` instead of `autopkgtest-buildvm-ubuntu-cloud` (the
latter replaced the former in 18.04):

    $ adt-buildvm-ubuntu-cloud -r xenial
    $ mv adt-xenial-amd64-cloud.img ubuntu-16.04-64.img

#### Downloading spread VM images

Alternatively, instead of building the QEMU images manually, you can download
pre-built and somewhat maintained images from
[spread.zygoon.pl](https://spread.zygoon.pl). The images need to be extracted
with `gunzip` and placed into `~/.spread/qemu` as above.
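For example, fetching a pre-built Ubuntu 16.04 image might look like the
following (the exact file name and layout on the server are assumptions; check
the server's index for what is actually published):

    $ mkdir -p ~/.spread/qemu && cd ~/.spread/qemu
    # file name below is hypothetical; use whatever the server actually lists
    $ wget https://spread.zygoon.pl/ubuntu-16.04-64.img.gz
    $ gunzip ubuntu-16.04-64.img.gz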
#### Running spread with QEMU

Finally, you can run the spread tests for Ubuntu 16.04 LTS 64-bit with:

    $ spread -v qemu:ubuntu-16.04-64

To run for a different system, replace `ubuntu-16.04-64` with a different
system name.

For quick reuse you can use:

    $ spread -reuse qemu:ubuntu-16.04-64

It will print how to reuse the systems. Make sure to use
`export REUSE_PROJECT=1` in your environment too.

### Testing snapd

To test the `snapd` REST API daemon on a snappy system you need to
transfer it to the snappy system and then run:

    sudo systemctl stop snapd.service snapd.socket
    sudo SNAPD_DEBUG=1 SNAPD_DEBUG_HTTP=3 ./snapd

To debug interaction with the snap store, you can set `SNAPD_DEBUG_HTTP`.
It is a bitfield: dump requests: 1, dump responses: 2, dump bodies: 4.

(A quick hack: in case you get security profile errors when trying to install
or refresh a snap, you may need to replace the system-installed snap-seccomp
with the one matching the snapd you are testing. To do this, simply back up
/usr/lib/snapd/snap-seccomp and overwrite it with the one you are testing.
Don't forget to roll back to the original when you finish testing.)

### Running nested tests

Nested tests are used to validate features which cannot be tested in the
regular test suites.

The nested test suites work differently from the other test suites in snapd:
each test runs in a new image which is created following the rules defined for
the test.

The nested tests are executed using the spread tool. See the following
examples using the qemu and google backends:

- qemu: `spread qemu-nested:ubuntu-20.04-64:tests/nested/core20/tpm`
- google: `spread google-nested:ubuntu-20.04-64:tests/nested/core20/tpm`

The nested system in all cases is selected based on the host system. The
following lines show the relation between host and nested systems (the same
applies for classic nested tests):

- ubuntu-16.04-64 => ubuntu-core-16-64
- ubuntu-18.04-64 => ubuntu-core-18-64
- ubuntu-20.04-64 => ubuntu-core-20-64

The tools used for creating and hosting the nested VMs are:

- the `ubuntu-image` snap is used to build the images
- QEMU is used for the virtualization (with KVM acceleration)

The nested test suite is composed of the following four suites:

- classic: tests a nested classic system, using an image downloaded from
  cloud-images.ubuntu.com
- core: tests a nested Ubuntu Core system, using images generated with the
  `ubuntu-image` snap
- core20: similar to the core suite, but its tests focus on UC20
- manual: tests in this suite create a non-generic image with specific
  conditions

The nested suites use some environment variables to configure the suite and
the tests inside it; the most important ones are described below, with a usage
example after the list:

- `NESTED_WORK_DIR`: path to the directory where all the nested assets and
  images are stored
- `NESTED_TYPE`: use `core` for Ubuntu Core nested systems, or `classic`
  otherwise
- `NESTED_CORE_CHANNEL`: the images are created using the `ubuntu-image` snap;
  use this to define the default channel
- `NESTED_CORE_REFRESH_CHANNEL`: the images can be refreshed to a specific
  channel; use this to specify it
- `NESTED_USE_CLOUD_INIT`: use cloud-init to do the initial system
  configuration instead of a user assertion
- `NESTED_ENABLE_KVM`: enable KVM on the QEMU command line
- `NESTED_ENABLE_TPM`: enable TPM in the nested VM in case it is supported
  (only supported on UC20)
- `NESTED_ENABLE_SECURE_BOOT`: enable secure boot in the nested VM in case it
  is supported (only supported on UC20)
- `NESTED_BUILD_SNAPD_FROM_CURRENT`: build and use either the core or the
  snapd snap from the current branch
- `NESTED_CUSTOM_IMAGE_URL`: download and use a custom image from this URL
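For example, to run the UC20 TPM nested test against snapd built from your
current branch, you could combine these variables with a spread invocation
along these lines (the variable values shown, such as `true`, are assumptions;
check the nested test helpers under `tests/nested` for the exact accepted
values):

    # values such as "true" are assumptions; see tests/nested for specifics
    $ export NESTED_BUILD_SNAPD_FROM_CURRENT=true
    $ export NESTED_ENABLE_TPM=true
    $ export NESTED_ENABLE_SECURE_BOOT=true
    $ spread qemu-nested:ubuntu-20.04-64:tests/nested/core20/tpm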
# Quick intro to hacking on snap-confine

Hey, welcome to the nice, low-level world of snap-confine


## Building the code locally

To get started from a pristine tree you want to do this:

```
./mkversion.sh
cd cmd/
autoreconf -i -f
./configure --prefix=/usr --libexecdir=/usr/lib/snapd --enable-nvidia-multiarch --with-host-arch-triplet="$(dpkg-architecture -qDEB_HOST_MULTIARCH)"
```

This will drop makefiles and let you build stuff. You may find the `make hack`
target, available in `cmd/snap-confine`, handy: it installs the locally built
version on your system and reloads the AppArmor profile.

Note, the above configure options assume you are on Ubuntu and are generally
necessary to run/test graphical applications with your local version of
snap-confine. The `--with-host-arch-triplet` option sets your specific
architecture and `--enable-nvidia-multiarch` allows the host's graphics drivers
and libraries to be shared with snaps. If you are on a distro other than
Ubuntu, try `--enable-nvidia-biarch` (though you'll likely need to add further
system-specific options too).

## Submitting patches

Please run `(cd cmd; make fmt)` before sending your patches for the "C" part
of the source code.
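As a rough pre-submission sketch (the grouping here is a suggestion, not
project policy), a change touching both Go and C code could be checked with:

    $ ./run-checks          # source format, build, unit tests and go vet
    $ (cd cmd; make fmt)    # format the C sources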