# Hacking on snapd

Hacking on snapd is fun and straightforward. The code is extensively
unit tested and we use the [spread](https://github.com/snapcore/spread)
integration test framework for the integration/system level tests.

## Development

### Supported Go versions

From snapd 2.38, snapd supports Go 1.9 and onwards. For earlier snapd
releases, snapd supports Go 1.6.

### Setting up a GOPATH

When working with the source of Go programs, you should define a path within
your home directory (or other workspace) which will be your `GOPATH`. `GOPATH`
is similar to Java's `CLASSPATH` or Python's `~/.local`. `GOPATH` is documented
[online](http://golang.org/pkg/go/build/) and inside the go tool itself:

    go help gopath

Various conventions exist for naming the location of your `GOPATH`, but it
should exist, and be writable by you. For example

    export GOPATH=${HOME}/work
    mkdir $GOPATH

will define and create `$HOME/work` as your local `GOPATH`. The `go` tool
itself will create three subdirectories inside your `GOPATH` when required:
`src`, `pkg` and `bin`, which hold the source of Go programs, compiled packages
and compiled binaries, respectively.

Setting `GOPATH` correctly is critical when developing Go programs. Set and
export it as part of your login script.

Add `$GOPATH/bin` to your `PATH`, so you can run the go programs you install:

    PATH="$PATH:$GOPATH/bin"

(note that `$GOPATH` can actually point to multiple locations, like `$PATH`, so
if your `$GOPATH` is more complex than a single entry you'll need to adjust the
above).

### Getting the snapd sources

The easiest way to get the source for `snapd` is to use the `go get` command.

    go get -d -v github.com/snapcore/snapd/...
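Putting the setup steps above together, a login-script snippet might look like the following minimal sketch (the `work` directory name is just an example):

```shell
# Choose any writable directory for GOPATH; "work" is an example name.
export GOPATH="${HOME}/work"
mkdir -p "$GOPATH"
# Make binaries that `go install` drops into $GOPATH/bin runnable.
export PATH="$PATH:$GOPATH/bin"
# After `go get`, the snapd checkout will live at:
#   $GOPATH/src/github.com/snapcore/snapd
```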

This `go get` command will checkout the source of `snapd` and inspect it for
any unmet Go package dependencies, downloading those as well. With the `-d`
flag, `go get` only downloads; omit the `-d` flag to also build and install
`snapd` and its dependencies into `$GOPATH/bin`. More details on the `go get`
flags are available using

    go help get

At this point you will have the local git repository of the `snapd` source at
`$GOPATH/src/github.com/snapcore/snapd`. The source for any
dependent packages will also be available inside `$GOPATH`.

### Dependencies handling

Go dependencies are handled via `govendor`. Get it via:

    go get -u github.com/kardianos/govendor

After a fresh checkout, move to the snapd source directory:

    cd $GOPATH/src/github.com/snapcore/snapd

And then, run:

    govendor sync

You can use the script `get-deps.sh` to run the two previous steps.

If a dependency needs updating:

    govendor fetch github.com/path/of/dependency

Other dependencies are handled via distribution packages and you should ensure
that dependencies for your distribution are installed. For example, on Ubuntu,
run:

    sudo apt-get build-dep ./

### Building

To build, once the sources are available and `GOPATH` is set, you can just run

    go build -o /tmp/snap github.com/snapcore/snapd/cmd/snap

to get the `snap` binary in `/tmp` (or omit `-o` to get it in the current
working directory). Alternatively:

    go install github.com/snapcore/snapd/cmd/snap/...

to have it available in `$GOPATH/bin`.

Similarly, to build the `snapd` REST API daemon, you can run

    go build -o /tmp/snapd github.com/snapcore/snapd/cmd/snapd

### Contributing

Contributions are always welcome!
Please make sure that you sign the
Canonical contributor license agreement at
http://www.ubuntu.com/legal/contributors

Snapd can be found on GitHub, so in order to fork the source and contribute,
go to https://github.com/snapcore/snapd. Check out [GitHub's help
pages](https://help.github.com/) to find out how to set up your local branch,
commit changes and create pull requests.

We value good tests, so when you fix a bug or add a new feature we highly
encourage you to create a test in `$source_test.go`. See also the section
about Testing.

### Testing

To run the various tests that we have to ensure a high quality source, just run:

    ./run-checks

This will check that the source format is consistent, that it builds, that all
tests work as expected and that `go vet` has nothing to complain about.

The source format follows `gofmt -s` formatting. Please run this on your
source files if `run-checks` complains about the format.

You can run an individual test for a sub-package by changing into that
directory and running:

    go test -check.f $testname

If a test hangs, you can enable verbose mode:

    go test -v -check.vv

(or `-check.v` for less verbose output).

Note that the `yamlordereddictloader` Python package is needed to carry out the
test format checks.

There is more to read about the testing framework on the
[website](https://labix.org/gocheck).

### Running spread tests

To run the spread tests locally via QEMU, you need the latest version of
[spread](https://github.com/snapcore/spread).
You can get spread, QEMU, and the
build tools to build QEMU images with:

    $ sudo apt update && sudo apt install -y qemu-kvm autopkgtest
    $ curl https://storage.googleapis.com/snapd-spread-tests/spread/spread-amd64.tar.gz | tar -xz -C $GOPATH/bin

#### Building spread VM images

To run the spread tests via QEMU you need to create VM images in the
`~/.spread/qemu` directory:

    $ mkdir -p ~/.spread/qemu
    $ cd ~/.spread/qemu

Assuming you are building on Ubuntu 18.04 LTS (Bionic Beaver) (or a later
development release like Ubuntu 19.04 (Disco Dingo)), run the following to
build a 64-bit Ubuntu 16.04 LTS (Xenial Xerus) VM to run the spread tests on:

    $ autopkgtest-buildvm-ubuntu-cloud -r xenial
    $ mv autopkgtest-xenial-amd64.img ubuntu-16.04-64.img

To build an Ubuntu 14.04 (Trusty Tahr) based VM, use:

    $ autopkgtest-buildvm-ubuntu-cloud -r trusty --post-command='sudo apt-get install -y --install-recommends linux-generic-lts-xenial && update-grub'
    $ mv autopkgtest-trusty-amd64.img ubuntu-14.04-64.img

This is because snapd needs at least a 4.4 kernel to run on Ubuntu 14.04
LTS, which is available through the `linux-generic-lts-xenial` package.

If you are running Ubuntu 16.04 LTS, use
`adt-buildvm-ubuntu-cloud` instead of `autopkgtest-buildvm-ubuntu-cloud` (the
latter replaced the former in 18.04):

    $ adt-buildvm-ubuntu-cloud -r xenial
    $ mv adt-xenial-amd64-cloud.img ubuntu-16.04-64.img

#### Downloading spread VM images

Alternatively, instead of building the QEMU images manually, you can download
pre-built and somewhat maintained images from
[spread.zygoon.pl](https://spread.zygoon.pl). The images will need to be
extracted with `gunzip` and placed into `~/.spread/qemu` as above.
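The extract-and-install step might look like the following sketch (the image file name is an example, and a placeholder file stands in for the real download so the commands can be tried verbatim):

```shell
mkdir -p ~/.spread/qemu
cd ~/.spread/qemu
# Placeholder standing in for a downloaded image; normally you would
# fetch ubuntu-16.04-64.img.gz from the image server instead.
printf 'image-bytes' | gzip > ubuntu-16.04-64.img.gz
# gunzip strips the .gz suffix, leaving the image name spread expects.
gunzip ubuntu-16.04-64.img.gz
```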

#### Running spread with QEMU

Finally, you can run the spread tests for Ubuntu 16.04 LTS 64-bit with:

    $ spread -v qemu:ubuntu-16.04-64

To run for a different system, replace `ubuntu-16.04-64` with a different
system name.

For quick reuse you can use:

    $ spread -reuse qemu:ubuntu-16.04-64

It will print how to reuse the systems. Make sure to use
`export REUSE_PROJECT=1` in your environment too.

#### Running UC20 spread with QEMU

Ubuntu Core 20 on amd64 requires UEFI, so there are a few additional steps
needed to run spread with the ubuntu-core-20-64 systems locally using QEMU.
For one, upstream spread currently does not support specifying what kind of
BIOS to use with the VM, so you have to build spread from this PR:
https://github.com/snapcore/spread/pull/95, and then use the environment
variable `SPREAD_QEMU_BIOS` to specify a UEFI BIOS to use with the VM, for
example the one from the OVMF package. To get OVMF on Ubuntu, you can just
install the `ovmf` package via `apt`. After installing OVMF, you can then run
spread like so:

    $ SPREAD_QEMU_BIOS=/usr/share/OVMF/OVMF_CODE.fd spread -v qemu:ubuntu-core-20-64

This will enable testing UC20 with spread, albeit without secure boot support.
None of the native UC20 tests currently require secure boot, however; all
tests around secure boot are nested, see the section below about running the
nested tests.

Also, due to the in-flux state of spread support for booting UEFI VMs like
this, you can test ubuntu-core-20-64 only by itself and not with any other
system concurrently, since the environment variable is global for all systems
in the spread run. This will be fixed in a future release of spread.
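Since the UC20 run only works when the firmware file is present, a small guard along these lines can help (the OVMF path is the standard Ubuntu location; adjust it for your distribution):

```shell
BIOS=/usr/share/OVMF/OVMF_CODE.fd
if [ -f "$BIOS" ]; then
    # Firmware available: run the UC20 suite under UEFI.
    SPREAD_QEMU_BIOS="$BIOS" spread -v qemu:ubuntu-core-20-64
else
    echo "OVMF firmware not found; install the ovmf package first" >&2
fi
```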

### Testing snapd

To test the `snapd` REST API daemon on a snappy system you need to
transfer it to the snappy system and then run:

    sudo systemctl stop snapd.service snapd.socket
    sudo SNAPD_DEBUG=1 SNAPD_DEBUG_HTTP=3 ./snapd

To debug interaction with the snap store, you can set `SNAPD_DEBUG_HTTP`.
It is a bitfield: dump requests: 1, dump responses: 2, dump bodies: 4. For
example, `SNAPD_DEBUG_HTTP=7` dumps requests, responses and bodies all at once.

(make hack: in case you get security profile errors when trying to install or
refresh a snap, you may need to replace the system-installed snap-seccomp with
the one matching the snapd you are testing. To do this, simply back up
/usr/lib/snapd/snap-seccomp and overwrite it with the one you built. Don't
forget to roll back to the original when you are finished testing.)

### Running nested tests

Nested tests are used to validate features which cannot be tested by the
regular tests.

The nested test suites work differently from the other test suites in snapd:
each test runs in a new image which is created following the rules defined for
the test.

The nested tests are executed using the spread tool. See the following
examples using the qemu and google backends:

- qemu: `spread qemu-nested:ubuntu-20.04-64:tests/nested/core20/tpm`
- google: `spread google-nested:ubuntu-20.04-64:tests/nested/core20/tpm`

The nested system is in all cases selected based on the host system. The
following lines show the relation between host and nested systems (the same
applies for classic nested tests):

- ubuntu-16.04-64 => ubuntu-core-16-64
- ubuntu-18.04-64 => ubuntu-core-18-64
- ubuntu-20.04-64 => ubuntu-core-20-64

The tools used for creating and hosting the nested VMs are:

- the `ubuntu-image` snap is used to build the images
- QEMU is used for the virtualization (with KVM acceleration)

The nested test suite is composed of the following four suites:

- classic: the nested suite contains an image of a classic system downloaded from cloud-images.ubuntu.com
- core: it tests a core nested system; the images are generated using the `ubuntu-image` snap
- core20: similar to the core suite, but its tests focus on UC20
- manual: tests in this suite create a non-generic image with specific conditions

The nested suites use some environment variables to configure the suite and
the tests inside it; the most important ones are described below:

- `NESTED_WORK_DIR`: path to the directory where all the nested assets and images are stored
- `NESTED_TYPE`: use `core` for Ubuntu Core nested systems, or `classic` otherwise
- `NESTED_CORE_CHANNEL`: the images are created using the `ubuntu-image` snap; use this to define the default channel
- `NESTED_CORE_REFRESH_CHANNEL`: the images can be refreshed to a specific channel; use this to specify that channel
- `NESTED_USE_CLOUD_INIT`: use cloud-init for the initial system configuration instead of a user assertion
- `NESTED_ENABLE_KVM`: enable KVM in the qemu command line
- `NESTED_ENABLE_TPM`: enable TPM in the nested VM in case it is supported (only supported on UC20)
- `NESTED_ENABLE_SECURE_BOOT`: enable secure boot in the nested VM in case it is supported (only supported on UC20)
- `NESTED_BUILD_SNAPD_FROM_CURRENT`: build and use either the core or snapd snap from the current branch
- `NESTED_CUSTOM_IMAGE_URL`: download and use a custom image from this URL

# Quick intro to hacking on snap-confine

Hey, welcome to the nice, low-level world of snap-confine.

## Building the code locally

To get started from a pristine tree you want to do this:

```
./mkversion.sh
cd cmd/
autoreconf -i -f
./configure --prefix=/usr --libexecdir=/usr/lib/snapd --enable-nvidia-multiarch --with-host-arch-triplet="$(dpkg-architecture -qDEB_HOST_MULTIARCH)"
```

This will drop makefiles and let you build stuff. You may find the `make hack`
target, available in `cmd/snap-confine`, handy; it installs the locally built
version on your system and reloads the apparmor profile.

Note, the above configure options assume you are on Ubuntu and are generally
necessary to run/test graphical applications with your local version of
snap-confine. The `--with-host-arch-triplet` option sets your specific
architecture and `--enable-nvidia-multiarch` allows the host's graphics drivers
and libraries to be shared with snaps. If you are on a distro other than
Ubuntu, try `--enable-nvidia-biarch` (though you'll likely need to add further
system-specific options too).

## Submitting patches

Please run `(cd cmd; make fmt)` before sending your patches for the "C" part of
the source code.
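A pre-submission check for C changes might look like this sketch: reformat the sources, then make sure everything still builds. It assumes the configure step from "Building the code locally" has been run; the directory guard just makes the snippet safe to paste anywhere.

```shell
SRC_DIR=cmd/snap-confine
if [ -d "$SRC_DIR" ]; then
    ( cd cmd && make fmt )    # reformat the C sources
    make -C "$SRC_DIR"        # rebuild to confirm nothing broke
else
    echo "run this from the top of a configured snapd checkout" >&2
fi
```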