# How to set up syzbot

This doc will be useful to you:
- should you wish to hack on user interface bits like the dashboard / mailing list integration, or
- should you wish to continuously run a separate syzbot dashboard for your own kernels

Note: for most development purposes you don't need a full syzbot setup. The meat of syzkaller is really located in syz-manager and syz-executor. You can run syz-manager directly, which is usually what you will want to do during fuzzer development. [See this documentation for syz-manager setup instructions](setup.md).

This doc assumes that you:
- have a GCP account and billing set up
- created a GCP project for running syzbot in
- are running a reasonably modern Linux distro
- locally installed `gcloud`, `ssh`, `go` and `build-essential`
- may need to install `google-cloud-sdk-app-engine-go` for the GAE deployment to work
- ran `gcloud auth login` to run authenticated gcloud commands
- read [go/syzbot-setup](https://goto.google.com/syzbot-setup) if you are a Googler

While most syzkaller bits happily run on various operating systems, the syzbot dashboard does not. The dashboard is a Google App Engine (GAE) project. GAE allows developers to build web applications without worrying about the underlying servers: developers just push their code, and GAE takes care of web servers, load balancers and more. Hence this document is more Google Cloud focused than the rest of our documentation.

We will also deploy a syz-ci instance. syz-ci keeps track of the syzkaller and kernel repositories and continuously rebuilds the kernel under test, itself and other syzkaller components when new commits land in the upstream repositories. syz-ci also takes care of (re)starting syz-manager instances, which in turn (re)start VMs fuzzing the target kernel.
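The tooling assumptions above can be sanity-checked with a short shell loop. This is purely illustrative: the exact tool list varies by distro, `make` stands in here for `build-essential`, and `git` is added because the steps below clone repositories.

```shell
# Verify that the locally required tools are on PATH.
# gcloud, ssh and go come from the assumptions above; make stands in
# for build-essential, and git is needed for the clone steps below.
for tool in gcloud ssh go make git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "MISSING: $tool"
  fi
done
```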
For simplicity we will run everything in this doc on GCP, even though syz-ci could run elsewhere.

## Deploying Syz-ci

[local] First prepare an initial syz-ci build locally (later syz-ci rebuilds itself) and a rootfs:

```sh
# Most syzkaller components can be built even outside of the GOPATH, however
# the syzbot App Engine deployment only works from the GOPATH right now.
export GOOGLE_GO=$HOME/gopath/src/github.com/google/
mkdir -p $GOOGLE_GO
git clone https://github.com/google/syzkaller.git
mv syzkaller $GOOGLE_GO/
cd $GOOGLE_GO/syzkaller
make ci

cd ~/repos
git clone git://git.buildroot.net/buildroot
cd buildroot
$GOOGLE_GO/syzkaller/tools/create-buildroot-image.sh
```

[local] Enable various services in the project, create a VM and a storage bucket, scp assets over and log in:

```sh
export PROJECT='your-gcp-project'
export CI_HOSTNAME='ci-linux'
export GOOGLE_GO=$HOME/gopath/src/github.com/google/

gcloud services enable compute.googleapis.com --project="$PROJECT"
gcloud compute instances create "$CI_HOSTNAME" --image-family=debian-11 --image-project=debian-cloud --machine-type=e2-standard-16 --zone=us-central1-a --boot-disk-size=250 --scopes=cloud-platform --project="$PROJECT"

# Enabling compute.googleapis.com created a service account. We allow the syz-ci VM
# to assume the permissions of that service account. As syz-ci needs to query / create /
# delete other VMs in the project, we need to give the new service account various permissions.
gcloud services enable iam.googleapis.com --project $PROJECT
SERVICE_ACCOUNT=`gcloud iam service-accounts list --filter 'displayName:Compute Engine default service account' --format='value(email)' --project $PROJECT`
gcloud projects add-iam-policy-binding "$PROJECT" --role="roles/editor" --member="serviceAccount:$SERVICE_ACCOUNT" --quiet

gcloud services enable storage-api.googleapis.com --project="$PROJECT"
gsutil mb -p "$PROJECT" "gs://$PROJECT-bucket"

gcloud services enable cloudbuild.googleapis.com --project="$PROJECT"

# We need to wait a bit for the VM to become accessible. Let's just…
sleep 10

# Copy in buildroot
gcloud compute scp --zone us-central1-a --project="$PROJECT" ~/repos/buildroot/output/images/disk.img "$CI_HOSTNAME":~/

# Copy in the syz-ci binary
gcloud compute scp --zone us-central1-a --project="$PROJECT" $GOOGLE_GO/syzkaller/bin/syz-ci "$CI_HOSTNAME":~/

# Prepare the syz-ci config
cat <<EOF > /tmp/config.ci
{
	"name": "$CI_HOSTNAME",
	"http": ":80",
	"manager_port_start": 50010,
	"syzkaller_repo": "https://github.com/google/syzkaller.git",
	"managers": [
		{
			"repo": "git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git",
			"repo_alias": "upstream",
			"userspace": "disk.img",
			"kernel_config": "config/linux/upstream-apparmor-kasan.config",
			"manager_config": {
				"name": "ci-upstream-kasan-gce",
				"target": "linux/amd64",
				"procs": 6,
				"type": "gce",
				"vm": {
					"count": 5,
					"machine_type": "e2-standard-2",
					"gcs_path": "$PROJECT-bucket/disks"
				},
				"disable_syscalls": [ "perf_event_open*" ]
			}
		}
	]
}
EOF
gcloud compute scp --zone us-central1-a --project="$PROJECT" /tmp/config.ci "$CI_HOSTNAME":~/

# SSH into the syz-ci machine. This will be required in the next step.
gcloud compute ssh "$CI_HOSTNAME" --zone us-central1-a --project="$PROJECT"
```

[syz-ci] Let's install and configure the syz-ci service on our syz-ci VM:

```sh
sudo apt install -y wget git docker.io build-essential

# We need a recent Go version, not yet available in Debian 11
wget 'https://go.dev/dl/go1.18.linux-amd64.tar.gz'
sudo tar -zxvf go1.18.linux-amd64.tar.gz -C /usr/local/
echo "export PATH=/usr/local/go/bin:${PATH}" | sudo tee /etc/profile.d/go.sh
source /etc/profile.d/go.sh

sudo mkdir /syzkaller
sudo mv ~/syz-ci /syzkaller/
sudo mv ~/disk.img /syzkaller/
sudo mv ~/config.ci /syzkaller/
sudo ln -s /syzkaller/gopath/src/github.com/google/syzkaller/dashboard/config /syzkaller/config

# Pull the docker container used by syz-ci for building the Linux kernel.
# We also do this on systemd start, but the first pull might take a long time,
# resulting in startup timeouts if we don't pull here once first.
sudo /usr/bin/docker pull gcr.io/syzkaller/syzbot

cat <<EOF > /tmp/syz-ci.service
[Unit]
Description=syz-ci
Requires=docker.service
After=docker.service

[Service]
Type=simple
User=root
ExecStartPre=-/usr/bin/docker rm --force syz-ci
ExecStartPre=/usr/bin/docker pull gcr.io/syzkaller/syzbot
ExecStartPre=/usr/bin/docker image prune --filter="dangling=true" -f
# --privileged is required for pkg/osutil sandboxing,
# otherwise the unshare syscall fails with EPERM.
# Consider giving it finer-grained permissions,
# or maybe running an unprivileged container is better than
# our sandboxing (?); then we could instead add
# --env SYZ_DISABLE_SANDBOXING=yes.
# However, we will also need to build GCE images,
# which requires access to loop devices, mount, etc.
# Proxying /dev is required for the image build,
# otherwise partition devices (/dev/loop0p1)
# don't appear inside of the container.
# Host networking is required because syz-manager inside
# the container will create GCE VMs which will
# connect back to syz-manager using this VM's IP
# and a syz-manager port generated inside of the container.
# Without host networking the port is not open on the machine.
ExecStart=/usr/bin/docker run --rm --name syz-ci \
	--privileged \
	--network host \
	--volume /var/run/docker.sock:/var/run/docker.sock \
	--volume /syzkaller:/syzkaller \
	--volume /dev:/dev \
	--workdir /syzkaller \
	--env HOME=/syzkaller \
	gcr.io/syzkaller/syzbot \
	/syzkaller/syz-ci -config config.ci
ExecStop=/usr/bin/docker stop -t 600 syz-ci
Restart=always
RestartSec=10
KillMode=mixed

[Install]
WantedBy=multi-user.target
EOF
sudo mv /tmp/syz-ci.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl restart syz-ci
sudo systemctl enable syz-ci
sudo journalctl -fu syz-ci
```

Check the syz-ci journal logs at this point to see whether the service comes up fine. syz-ci now needs to do a bunch of time-consuming work like building the kernel under test, so be patient.

If you want to hack on syz-ci you can stop here. Otherwise, the next section builds on the syz-ci instructions and extends the setup with a dashboard deployment.

## Deploying Syzbot dashboard

[locally] Deploy the dashboard to Google App Engine:

```sh
export PROJECT='your-gcp-project'
export CI_HOSTNAME='ci-linux'
# A random string used by syz-ci to authenticate against the dashboard
export CI_KEY='fill-with-random-ci-key-string'
# A random string used by syz-manager to authenticate against the dashboard
export MANAGER_KEY='fill-with-random-manager-key-string'
# A random string used for hashing; can be anything, but once fixed it can't
# be changed as it becomes a part of persistent bug identifiers.
export KEY='fill-with-random-key-string'
# This email will receive all of the crashes found by your instance.
export EMAIL='syzkaller@example.com'

gcloud app create --region us-central --project $PROJECT --quiet

# Grant the App Engine service account access to Datastore
SERVICE_ACCOUNT=`gcloud iam service-accounts list --filter 'displayName:App Engine default service account' --format='value(email)' --project $PROJECT`
gcloud projects add-iam-policy-binding "$PROJECT" \
	--member="serviceAccount:$SERVICE_ACCOUNT" \
	--role="roles/editor"
gcloud projects add-iam-policy-binding "$PROJECT" \
	--member="serviceAccount:$SERVICE_ACCOUNT" \
	--role="roles/datastore.owner"

GOOGLE_GO=$HOME/gopath/src/github.com/google/
cd $GOOGLE_GO/syzkaller

# Enable some crons for sending emails and such
gcloud services enable cloudscheduler.googleapis.com --project $PROJECT
gcloud app deploy ./dashboard/app/cron.yaml --project $PROJECT --quiet

# Create the required Datastore indexes. They take a few minutes to
# build before they (and hence syzbot) become usable.
gcloud datastore indexes create ./dashboard/app/index.yaml --project $PROJECT --quiet

cat <<EOF > ./dashboard/app/config_not_prod.go
package main

import (
	"time"

	"github.com/google/syzkaller/dashboard/dashapi"
)

const (
	reportingUpstream    = "upstream"
	moderationDailyLimit = 30
	internalDailyLimit   = 30
	reportingDelay       = 0
	domainLinux          = "linux"
)

func init() {
	checkConfig(prodConfig)
	mainConfig = prodConfig
}

var prodConfig = &GlobalConfig{
	AccessLevel: AccessPublic,
	AuthDomains: []string{"@google.com"},
	CoverPath:   "https://storage.googleapis.com/syzkaller/cover/",
	Clients: map[string]string{
		"$CI_HOSTNAME": "$CI_KEY",
	},
	Obsoleting: ObsoletingConfig{
		MinPeriod:         90 * 24 * time.Hour,
		MaxPeriod:         120 * 24 * time.Hour,
		NonFinalMinPeriod: 60 * 24 * time.Hour,
		NonFinalMaxPeriod: 90 * 24 * time.Hour,
	},
	DefaultNamespace: "upstream",
	Namespaces: map[string]*Config{
		"upstream": {
			AccessLevel:      AccessPublic,
			DisplayTitle:     "Linux",
			SimilarityDomain: domainLinux,
			Key:              "$KEY",
			Clients: map[string]string{
				"ci-upstream-kasan-gce": "$MANAGER_KEY",
			},
			Repos: []KernelRepo{
				{
					URL:               "git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git",
					Branch:            "master",
					Alias:             "upstream",
					ReportingPriority: 9,
				},
			},
			MailWithoutReport: true,
			ReportingDelay:    reportingDelay,
			WaitForRepro:      0,
			Managers:          map[string]ConfigManager{},
			Reporting: []Reporting{
				{
					AccessLevel: AccessPublic,
					Name:        reportingUpstream,
					DailyLimit:  30,
					Config: &EmailConfig{
						Email:           "$EMAIL",
						SubjectPrefix:   "[syzbot-test]",
						MailMaintainers: false,
					},
				},
			},
			TransformCrash: func(build *Build, crash *dashapi.Crash) bool {
				return true
			},
			NeedRepro: func(bug *Bug) bool {
				return true
			},
		},
	},
}
EOF

# Deploy the actual dashboard GAE application
GOPATH=~/gopath GO111MODULE=off gcloud beta app deploy ./dashboard/app/app.yaml --project "$PROJECT" --quiet
```

### Integrating Syz-ci with syzbot

[locally] Prepare the config and log in to the syz-ci VM:

```sh
export PROJECT='your-gcp-project'
export CI_HOSTNAME='ci-linux'
export CI_KEY='fill-with-random-ci-key-string'
export MANAGER_KEY='fill-with-random-manager-key-string'
export DASHBOARD_FQDN=`gcloud app describe --project $PROJECT --format 'value(defaultHostname)'`

cat <<EOF > /tmp/config.ci
{
	"name": "$CI_HOSTNAME",
	"http": ":80",
	"manager_port_start": 50010,
	"dashboard_addr": "https://$DASHBOARD_FQDN",
	"dashboard_client": "$CI_HOSTNAME",
	"dashboard_key": "$CI_KEY",
	"syzkaller_repo": "https://github.com/google/syzkaller.git",
	"managers": [
		{
			"repo": "git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git",
			"repo_alias": "upstream",
			"dashboard_client": "ci-upstream-kasan-gce",
			"dashboard_key": "$MANAGER_KEY",
			"userspace": "disk.img",
			"kernel_config": "config/linux/upstream-apparmor-kasan.config",
			"manager_config": {
				"name": "ci-upstream",
				"target": "linux/amd64",
				"procs": 6,
				"type": "gce",
				"vm": {
					"count": 5,
					"machine_type": "e2-standard-2",
					"gcs_path": "$PROJECT-bucket/disks"
				},
				"disable_syscalls": [ "perf_event_open*" ]
			}
		}
	]
}
EOF
gcloud compute scp --zone us-central1-a --project="$PROJECT" /tmp/config.ci "$CI_HOSTNAME":~/

gcloud compute ssh "$CI_HOSTNAME" --zone us-central1-a --project="$PROJECT"
```

[syz-ci] Reconfigure syz-ci to start sending results to the dashboard:

```sh
sudo mv ~/config.ci /syzkaller/
sudo systemctl restart syz-ci
sudo journalctl -fu syz-ci
```

[locally] Open the dashboard in your browser:

```sh
gcloud app browse --project=$PROJECT
```

Once syzkaller finds the first crashes, they should show up here. This might take a while.
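A note on the `fill-with-random-…-string` keys used above: they can come from any source of randomness, as long as you generate them once before the first deployment and keep `KEY` fixed afterwards. One way to generate them, assuming `openssl` is installed:

```shell
# Generate three independent 32-character hex strings for the dashboard keys.
# Hex output keeps them safe to embed in shell, JSON and Go string literals.
export CI_KEY=$(openssl rand -hex 16)
export MANAGER_KEY=$(openssl rand -hex 16)
export KEY=$(openssl rand -hex 16)
echo "CI_KEY=$CI_KEY MANAGER_KEY=$MANAGER_KEY KEY=$KEY"
```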