# Hacking on snapd

Hacking on snapd is fun and straightforward. The code is extensively
unit tested and we use the [spread](https://github.com/snapcore/spread)
integration test framework for the integration/system level tests.

## Development

### Supported Go versions

From snapd 2.38, snapd supports Go 1.9 and onwards. Earlier snapd
releases support Go 1.6.

### Setting up a GOPATH

When working with the source of Go programs, you should define a path within
your home directory (or other workspace) which will be your `GOPATH`. `GOPATH`
is similar to Java's `CLASSPATH` or Python's `~/.local`. `GOPATH` is documented
[online](http://golang.org/pkg/go/build/) and inside the go tool itself:

    go help gopath

Various conventions exist for naming the location of your `GOPATH`, but it
should exist, and be writable by you. For example:

    export GOPATH=${HOME}/work
    mkdir $GOPATH

will define and create `$HOME/work` as your local `GOPATH`. The `go` tool
itself will create three subdirectories inside your `GOPATH` when required:
`src`, `pkg` and `bin`, which hold the source of Go programs, compiled packages
and compiled binaries, respectively.

Setting `GOPATH` correctly is critical when developing Go programs. Set and
export it as part of your login script.

Add `$GOPATH/bin` to your `PATH`, so you can run the Go programs you install:

    PATH="$PATH:$GOPATH/bin"

(Note that `$GOPATH` can actually hold multiple, colon-separated locations,
like `$PATH`, so if your `$GOPATH` is more complex than a single entry you'll
need to adjust the above.)

### Getting the snapd sources

The easiest way to get the source for `snapd` is to use the `go get` command:

    go get -d -v github.com/snapcore/snapd/...
This command will check out the source of `snapd` and inspect it for any unmet
Go package dependencies, downloading those as well. The `-d` flag restricts
`go get` to downloading; omit it to also build and install `snapd` and its
dependencies into `$GOPATH/bin`. More details on the `go get` flags are
available using:

    go help get

At this point you will have the git local repository of the `snapd` source at
`$GOPATH/src/github.com/snapcore/snapd`. The source for any dependent packages
will also be available inside `$GOPATH`.

### Dependencies handling

Go dependencies are handled via `govendor`. Get it with:

    go get -u github.com/kardianos/govendor

After a fresh checkout, move to the snapd source directory:

    cd $GOPATH/src/github.com/snapcore/snapd

and then run:

    govendor sync

You can use the script `get-deps.sh` to run the two previous steps.

If a dependency needs updating:

    govendor fetch github.com/path/of/dependency

Other dependencies are handled via distribution packages and you should ensure
that dependencies for your distribution are installed. For example, on Ubuntu,
run:

    sudo apt-get build-dep ./

### Building

To build, once the sources are available and `GOPATH` is set, you can just run:

    go build -o /tmp/snap github.com/snapcore/snapd/cmd/snap

to get the `snap` binary in `/tmp` (or omit `-o` to get it in the current
working directory). Alternatively:

    go install github.com/snapcore/snapd/cmd/snap/...

to have it available in `$GOPATH/bin`.

Similarly, to build the `snapd` REST API daemon, you can run:

    go build -o /tmp/snapd github.com/snapcore/snapd/cmd/snapd

### Contributing

Contributions are always welcome!
Please make sure that you sign the
Canonical contributor license agreement at
http://www.ubuntu.com/legal/contributors.

Snapd can be found on GitHub, so in order to fork the source and contribute,
go to https://github.com/snapcore/snapd. Check out [GitHub's help
pages](https://help.github.com/) to find out how to set up your local branch,
commit changes and create pull requests.

We value good tests, so when you fix a bug or add a new feature we highly
encourage you to create a test in `$source_test.go`. See also the section
about testing below.

### Testing

To run the various tests that we have to ensure a high quality source, just
run:

    ./run-checks

This will check that the source format is consistent, that it builds, that all
tests work as expected and that `go vet` has nothing to complain about.

The source follows the `gofmt -s` formatting. Please run this on your source
files if `run-checks` complains about the format.

You can run an individual test for a sub-package by changing into that
directory and running:

    go test -check.f $testname

If a test hangs, you can enable verbose mode:

    go test -v -check.vv

(or `-check.v` for less verbose output).

There is more to read about the testing framework on the
[website](https://labix.org/gocheck).

### Running spread tests

To run the spread tests locally via QEMU, you need the latest version of
[spread](https://github.com/snapcore/spread).
You can get spread, QEMU, and the build tools to build QEMU images with:

    $ sudo apt update && sudo apt install -y qemu-kvm autopkgtest
    $ curl https://storage.googleapis.com/snapd-spread-tests/spread/spread-amd64.tar.gz | tar -xz -C $GOPATH/bin

#### Building spread VM images

To run the spread tests via QEMU you need to create VM images in the
`~/.spread/qemu` directory:

    $ mkdir -p ~/.spread/qemu
    $ cd ~/.spread/qemu

Assuming you are building on Ubuntu 18.04 LTS (Bionic Beaver) (or a later
development release like Ubuntu 19.04 (Disco Dingo)), run the following to
build a 64-bit Ubuntu 16.04 LTS (Xenial Xerus) VM to run the spread tests on:

    $ autopkgtest-buildvm-ubuntu-cloud -r xenial
    $ mv autopkgtest-xenial-amd64.img ubuntu-16.04-64.img

To build an Ubuntu 14.04 (Trusty Tahr) based VM, use:

    $ autopkgtest-buildvm-ubuntu-cloud -r trusty --post-command='sudo apt-get install -y --install-recommends linux-generic-lts-xenial && update-grub'
    $ mv autopkgtest-trusty-amd64.img ubuntu-14.04-64.img

This is because snapd needs at least a 4.4 kernel to run on Ubuntu 14.04 LTS,
which is available through the `linux-generic-lts-xenial` package.

If you are running Ubuntu 16.04 LTS, use `adt-buildvm-ubuntu-cloud` instead of
`autopkgtest-buildvm-ubuntu-cloud` (the latter replaced the former in 18.04):

    $ adt-buildvm-ubuntu-cloud -r xenial
    $ mv adt-xenial-amd64-cloud.img ubuntu-16.04-64.img

#### Downloading spread VM images

Alternatively, instead of building the QEMU images manually, you can download
pre-built and somewhat maintained images from
[spread.zygoon.pl](https://spread.zygoon.pl). The images will need to be
extracted with `gunzip` and placed into `~/.spread/qemu` as above.
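The download-and-unpack step can be sketched as a small shell session. This is
only an illustration: the image file name is hypothetical (the exact names on
the mirror may differ), and a temporary directory stands in for `~/.spread/qemu`
so the steps can be tried without a real download:

```shell
# Dummy stand-in for a downloaded image; a real one would come from the
# mirror mentioned above, already gzip-compressed.
workdir=$(mktemp -d)
mkdir -p "$workdir/.spread/qemu"
echo "fake image data" > "$workdir/ubuntu-16.04-64.img"
gzip "$workdir/ubuntu-16.04-64.img"          # what you download: *.img.gz
gunzip "$workdir/ubuntu-16.04-64.img.gz"     # spread needs the raw *.img
mv "$workdir/ubuntu-16.04-64.img" "$workdir/.spread/qemu/"
ls "$workdir/.spread/qemu"
```

With a real image you would of course download into, and unpack inside,
`~/.spread/qemu` directly.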
#### Running spread with QEMU

Finally, you can run the spread tests for Ubuntu 16.04 LTS 64-bit with:

    $ spread -v qemu:ubuntu-16.04-64

To run for a different system, replace `ubuntu-16.04-64` with a different
system name.

For quick reuse you can use:

    $ spread -reuse qemu:ubuntu-16.04-64

It will print how to reuse the systems. Make sure to use
`export REUSE_PROJECT=1` in your environment too.

#### Running UC20 spread with QEMU

Ubuntu Core 20 on amd64 requires UEFI, so a few additional steps are needed to
run spread with the ubuntu-core-20-64 systems locally using QEMU. For one,
upstream spread currently does not support specifying what kind of BIOS to use
with the VM, so you have to build spread from this PR:
https://github.com/snapcore/spread/pull/95, and then use the environment
variable `SPREAD_QEMU_BIOS` to specify a UEFI BIOS to use with the VM, for
example the one from the OVMF package. To get OVMF on Ubuntu, you can just
install the `ovmf` package via `apt`. After installing OVMF, you can then run
spread like so:

    $ SPREAD_QEMU_BIOS=/usr/share/OVMF/OVMF_CODE.fd spread -v qemu:ubuntu-core-20-64

This will enable testing UC20 with spread, albeit without secure boot support.
None of the native UC20 tests currently require secure boot, however; all
tests around secure boot are nested, see the section below about running the
nested tests.

Also, due to the in-flux state of spread support for booting UEFI VMs like
this, you can test ubuntu-core-20-64 only by itself and not with any other
system concurrently, since the environment variable is global for all systems
in the spread run. This will be fixed in a future release of spread.
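The UC20 invocation above only works when the OVMF firmware file actually
exists, so a small guard in a wrapper can save a confusing QEMU failure. This
is a sketch; `run_uc20_spread` is a hypothetical helper, not part of snapd or
spread:

```shell
# Hypothetical wrapper: refuse to start the UC20 spread run when the
# UEFI firmware (from the Ubuntu ovmf package) is not installed.
run_uc20_spread() {
    local bios="${1:-/usr/share/OVMF/OVMF_CODE.fd}"
    if [ ! -f "$bios" ]; then
        echo "missing UEFI firmware: $bios (install the ovmf package)" >&2
        return 1
    fi
    SPREAD_QEMU_BIOS="$bios" spread -v qemu:ubuntu-core-20-64
}

# Demonstrate the guard with a path that does not exist:
run_uc20_spread /nonexistent/OVMF_CODE.fd || echo "refused as expected"
```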
### Testing snapd

To test the `snapd` REST API daemon on a snappy system you need to transfer it
to the snappy system and then run:

    sudo systemctl stop snapd.service snapd.socket
    sudo SNAPD_DEBUG=1 SNAPD_DEBUG_HTTP=3 ./snapd

To debug interaction with the snap store, you can set `SNAPD_DEBUG_HTTP`. It
is a bitfield: dump requests: 1, dump responses: 2, dump bodies: 4. For
example, `SNAPD_DEBUG_HTTP=7` dumps requests, responses and their bodies.

(make hack: in case you get security profile errors when trying to install or
refresh a snap, you may need to replace the system-installed snap-seccomp with
the one matching the snapd you are testing. To do this, simply back up
/usr/lib/snapd/snap-seccomp and overwrite it with the one you built. Don't
forget to restore the original when you finish testing.)

### Running nested tests

Nested tests are used to validate features which cannot be tested in the
regular test suites.

The nested test suites work differently from the other test suites in snapd:
each test runs in a new image which is created following the rules defined for
the test.

The nested tests are executed using the spread tool. See the following
examples using the qemu and google backends:

* qemu: `spread qemu-nested:ubuntu-20.04-64:tests/nested/core20/tpm`
* google: `spread google-nested:ubuntu-20.04-64:tests/nested/core20/tpm`

The nested system is in all cases selected based on the host system. The
following lines show the relation between host and nested system (the same
applies for classic nested tests):

* ubuntu-16.04-64 => ubuntu-core-16-64
* ubuntu-18.04-64 => ubuntu-core-18-64
* ubuntu-20.04-64 => ubuntu-core-20-64

The tools used for creating and hosting the nested VMs are:

* the ubuntu-image snap is used to build the images
* QEMU is used for the virtualization (with KVM acceleration)

The nested test suite is composed of the following four suites:

* classic: tests a classic nested system; the image is downloaded from cloud-images.ubuntu.com
* core: tests a core nested system; the images are generated using the ubuntu-image snap
* core20: similar to the core suite, but its tests focus on UC20
* manual: tests in this suite create a non-generic image with specific conditions

The nested suites use some environment variables to configure the suite and
the tests inside it. The most important ones are described below:

* `NESTED_WORK_DIR`: path to the directory where all the nested assets and images are stored
* `NESTED_TYPE`: use `core` for Ubuntu Core nested systems, or `classic` otherwise
* `NESTED_CORE_CHANNEL`: the images are created using the ubuntu-image snap; use this to define the default channel
* `NESTED_CORE_REFRESH_CHANNEL`: the images can be refreshed to a specific channel; use this to specify that channel
* `NESTED_USE_CLOUD_INIT`: use cloud-init for the initial system configuration instead of a user assertion
* `NESTED_ENABLE_KVM`: enable KVM on the QEMU command line
* `NESTED_ENABLE_TPM`: enable TPM in the nested VM in case it is supported (only supported on UC20)
* `NESTED_ENABLE_SECURE_BOOT`: enable secure boot in the nested VM in case it is supported (only supported on UC20)
* `NESTED_BUILD_SNAPD_FROM_CURRENT`: build and use either the core or snapd snap from the current branch
* `NESTED_CUSTOM_IMAGE_URL`: download and use a custom image from this URL

# Quick intro to hacking on snap-confine

Hey, welcome to the nice, low-level world of snap-confine.

## Building the code locally

To get started from a pristine tree you want to do this:

```
./mkversion.sh
cd cmd/
autoreconf -i -f
./configure --prefix=/usr --libexecdir=/usr/lib/snapd --enable-nvidia-multiarch \
    --with-host-arch-triplet="$(dpkg-architecture -qDEB_HOST_MULTIARCH)"
```

This will drop makefiles and let you build stuff. You may find the `make hack`
target, available in `cmd/snap-confine`, handy; it installs the locally built
version on your system and reloads the AppArmor profile.

Note that the above configure options assume you are on Ubuntu and are
generally necessary to run/test graphical applications with your local version
of snap-confine. The `--with-host-arch-triplet` option sets your specific
architecture and `--enable-nvidia-multiarch` allows the host's graphics
drivers and libraries to be shared with snaps. If you are on a distro other
than Ubuntu, try `--enable-nvidia-biarch` (though you'll likely need to add
further system-specific options too).

## Submitting patches

Please run `(cd cmd; make fmt)` before sending your patches for the "C" part
of the source code.
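The Ubuntu-versus-other-distro advice for the snap-confine configure options
above can be captured in a small shell sketch. The branch taken mirrors the
text (multiarch plus the host triplet on Debian-style systems, biarch
elsewhere); the non-Debian branch is only a starting point, since further
system-specific options are likely needed:

```shell
# Pick the NVIDIA-related configure flags as described above.  This
# only prints the resulting command instead of running it.
if command -v dpkg-architecture >/dev/null 2>&1; then
    triplet=$(dpkg-architecture -qDEB_HOST_MULTIARCH)
    nvidia_flags="--enable-nvidia-multiarch --with-host-arch-triplet=$triplet"
else
    nvidia_flags="--enable-nvidia-biarch"
fi
echo "./configure --prefix=/usr --libexecdir=/usr/lib/snapd $nvidia_flags"
```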