
# Getting started with containerd

There are many different ways to use containerd.
If you are a developer working on containerd, you can use the `ctr` tool to quickly test features and functionality without writing extra code.
However, if you want to integrate containerd into your project, we have an easy-to-use client package that allows you to work with containerd.
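
As a quick illustration, `ctr` can pull and run an image straight from the command line. Treat this as a sketch rather than a reference; the exact flags can vary between containerd releases:

```bash
# pull an image into containerd's content store
sudo ctr image pull docker.io/library/redis:alpine

# run it as a container named "redis-test" and remove it when it exits
sudo ctr run --rm docker.io/library/redis:alpine redis-test
```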

In this guide we will pull and run a redis server with containerd using the client package.
We will assume that you are running a modern Linux host with a compatible build of `runc` for this example.
Please refer to [RUNC.md](/RUNC.md) for the currently supported version of `runc`.
This project requires Go 1.9.x or above.
If you need to install Go or update your current installation, please refer to the Go install page at https://golang.org/doc/install.

## Starting containerd

You can download one of the latest builds of containerd from the [github releases](https://github.com/containerd/containerd/releases) page and then use your favorite process supervisor to get the daemon started.
If you are using systemd, we have a `containerd.service` file at the root of the repository that you can use.
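
For example, on a systemd host you might install the unit file and start the daemon roughly like this (the destination path is an assumption and may differ on your distribution):

```bash
# copy the unit file shipped at the root of the repository and start containerd
sudo cp containerd.service /etc/systemd/system/containerd.service
sudo systemctl daemon-reload
sudo systemctl enable --now containerd
```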

The daemon also uses a configuration file located at `/etc/containerd/config.toml` for specifying daemon-level options.
A sample configuration file looks like this:

```toml
oom_score = -999

[debug]
        level = "debug"

[metrics]
        address = "127.0.0.1:1338"

[plugins.linux]
        runtime = "runc"
        shim_debug = true
```

The default configuration can be generated via `containerd config default > /etc/containerd/config.toml`.
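
If the daemon is managed by systemd, one way to regenerate the configuration and have containerd pick it up (assuming the default paths above) looks like this:

```bash
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
```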

## Connecting to containerd

We will start a new `main.go` file and import the containerd root package that contains the client.

```go
package main

import (
	"log"

	"github.com/containerd/containerd"
)

func main() {
	if err := redisExample(); err != nil {
		log.Fatal(err)
	}
}

func redisExample() error {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		return err
	}
	defer client.Close()
	return nil
}
```

This will create a new client with the default containerd socket path.
Because we are working with a daemon over GRPC we need to create a `context` for use with calls to client methods.
containerd is also namespaced for callers of the API.
We should also set a namespace for our guide after creating the context.

```go
	ctx := namespaces.WithNamespace(context.Background(), "example")
```

Having a namespace for our usage ensures that containers, images, and other resources within containerd do not conflict with other users of a single daemon.
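
You can inspect the namespaces on a daemon, and the resources inside one, with `ctr`. These commands are illustrative; check the subcommands available in your release:

```bash
# list namespaces known to the daemon
sudo ctr namespaces list

# list images and containers inside the "example" namespace
sudo ctr --namespace example images list
sudo ctr --namespace example containers list
```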

## Pulling the redis image

Now that we have a client to work with, we need to pull an image.
We can use the redis image based on Alpine Linux from Docker Hub.

```go
	image, err := client.Pull(ctx, "docker.io/library/redis:alpine", containerd.WithPullUnpack)
	if err != nil {
		return err
	}
```

The containerd client uses the `Opts` pattern for many of the method calls.
We use `containerd.WithPullUnpack` so that we not only fetch and download the content into containerd's content store but also unpack it into a snapshotter for use as a root filesystem.
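
Other pull options can be combined in the same call. For instance, recent client versions provide `containerd.WithPullSnapshotter` to choose which snapshotter receives the unpacked layers; this is only a sketch, so confirm the options available in your containerd version:

```go
	// pull, unpack, and place the unpacked layers in the overlayfs snapshotter
	image, err := client.Pull(ctx, "docker.io/library/redis:alpine",
		containerd.WithPullUnpack,
		containerd.WithPullSnapshotter("overlayfs"),
	)
	if err != nil {
		return err
	}
```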

Let's put together the code that will pull the redis image based on Alpine Linux from Docker Hub and then print the name of the image to the console.

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	if err := redisExample(); err != nil {
		log.Fatal(err)
	}
}

func redisExample() error {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		return err
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "example")
	image, err := client.Pull(ctx, "docker.io/library/redis:alpine", containerd.WithPullUnpack)
	if err != nil {
		return err
	}
	log.Printf("Successfully pulled %s image\n", image.Name())

	return nil
}
```

```bash
> go build main.go
> sudo ./main

2017/08/13 17:43:21 Successfully pulled docker.io/library/redis:alpine image
```

## Creating an OCI Spec and Container

Now that we have an image to base our container on, we need to generate an OCI runtime specification for the container to use, and then create the new container itself.

containerd provides reasonable defaults for generating OCI runtime specs.
There is also an `Opt` for modifying the default config based on the image that we pulled.

The container will be based on the image and use the runtime information in the spec that was just created, and we will allocate a new read-write snapshot so the container can store any persistent information.

```go
	container, err := client.NewContainer(
		ctx,
		"redis-server",
		containerd.WithNewSnapshot("redis-server-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		return err
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)
```

If you have an existing OCI specification created, you can use `containerd.WithSpec(spec)` to set it on the container.
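
As a rough sketch, an existing runtime spec could be loaded from a JSON file and handed to `containerd.WithSpec`. The `config.json` path and variable names here are purely illustrative, and the fragment assumes `encoding/json`, `io/ioutil`, and the `oci` package are imported:

```go
	// read an existing OCI runtime spec from disk (illustrative path)
	data, err := ioutil.ReadFile("/path/to/config.json")
	if err != nil {
		return err
	}
	var spec oci.Spec
	if err := json.Unmarshal(data, &spec); err != nil {
		return err
	}

	// use the loaded spec instead of generating a new one
	container, err := client.NewContainer(
		ctx,
		"redis-server",
		containerd.WithNewSnapshot("redis-server-snapshot", image),
		containerd.WithSpec(&spec),
	)
```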

When creating a new snapshot for the container, we need to provide a snapshot ID as well as the image that the container will be based on.
By providing a snapshot ID separate from the container ID, we can easily reuse existing snapshots across different containers.

We also add a line to delete the container along with its snapshot after we are done with this example.

Here is example code to pull the redis image based on Alpine Linux from Docker Hub, create an OCI spec, create a container based on the spec, and finally delete the container.

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/oci"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	if err := redisExample(); err != nil {
		log.Fatal(err)
	}
}

func redisExample() error {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		return err
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "example")
	image, err := client.Pull(ctx, "docker.io/library/redis:alpine", containerd.WithPullUnpack)
	if err != nil {
		return err
	}
	log.Printf("Successfully pulled %s image\n", image.Name())

	container, err := client.NewContainer(
		ctx,
		"redis-server",
		containerd.WithNewSnapshot("redis-server-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		return err
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)
	log.Printf("Successfully created container with ID %s and snapshot with ID redis-server-snapshot", container.ID())

	return nil
}
```

Let's see it in action.

```bash
> go build main.go
> sudo ./main

2017/08/13 18:01:35 Successfully pulled docker.io/library/redis:alpine image
2017/08/13 18:01:35 Successfully created container with ID redis-server and snapshot with ID redis-server-snapshot
```

## Creating a running Task

One thing that may be confusing at first for new containerd users is the separation between a `Container` and a `Task`.
A container is a metadata object that resources are allocated and attached to.
A task is a live, running process on the system.
Tasks should be deleted after each run while a container can be used, updated, and queried multiple times.

```go
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		return err
	}
	defer task.Delete(ctx)
```

The new task that we just created is actually a running process on your system.
We use `cio.WithStdio` so that all IO from the container is sent to our `main.go` process.
This is a `cio.Opt` that configures the `Streams` used by `NewCreator` to return a `cio.IO` for the new task.
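
The `cio` package offers other creators if you do not want the container's IO attached to your own process; for example, recent versions include a helper that writes output to a file. This is only a sketch, so confirm the helpers available in your client version:

```go
	// send the container's output to a log file instead of our process's stdio
	task, err := container.NewTask(ctx, cio.LogFile("/tmp/redis-example.log"))
	if err != nil {
		return err
	}
	defer task.Delete(ctx)
```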

If you are familiar with the OCI runtime actions, the task is currently in the "created" state.
This means that the namespaces, root filesystem, and various container-level settings have been initialized but the user-defined process, in this example "redis-server", has not been started.
This gives users a chance to set up network interfaces or attach different tools to monitor the container.
containerd also takes this opportunity to monitor your container.
Things like waiting on the container's exit status and collecting cgroup metrics are set up at this point.

If you are familiar with Prometheus, you can curl the containerd metrics endpoint (configured in the `config.toml` that we created) to see your container's metrics:

```bash
> curl 127.0.0.1:1338/v1/metrics
```

Pretty cool, right?

## Task Wait and Start

Now that we have a task in the created state, we need to make sure that we wait on the task to exit.
It is essential to wait for the task to finish so that we can close our example and clean up the resources that we created.
You always want to make sure you `Wait` before calling `Start` on a task.
This makes sure that you do not encounter any races if the task has a simple program like `/bin/true` that exits promptly after being started.

```go
	exitStatusC, err := task.Wait(ctx)
	if err != nil {
		return err
	}

	if err := task.Start(ctx); err != nil {
		return err
	}
```

Now we should see the `redis-server` logs in our terminal when we run the `main.go` file.

## Killing the task

Since we are running a long-running server, we will need to kill the task in order to exit out of our example.
To do this we will simply call `Kill` on the task after waiting a couple of seconds so that we have a chance to see the redis-server logs.

```go
	time.Sleep(3 * time.Second)

	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		return err
	}

	status := <-exitStatusC
	code, _, err := status.Result()
	if err != nil {
		return err
	}
	fmt.Printf("redis-server exited with status: %d\n", code)
```

We wait on the exit status channel that we set up to ensure the task has fully exited and we get the exit status.
If you have to reload containers or miss waiting on a task, `Delete` will also return the exit status when you finally delete the task.
We got you covered.

```go
	// Delete also returns the task's exit status
	status, err := task.Delete(ctx)
```

## Full Example

Here is the full example that we just put together.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/oci"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	if err := redisExample(); err != nil {
		log.Fatal(err)
	}
}

func redisExample() error {
	// create a new client connected to the default socket path for containerd
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		return err
	}
	defer client.Close()

	// create a new context with an "example" namespace
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// pull the redis image from Docker Hub
	image, err := client.Pull(ctx, "docker.io/library/redis:alpine", containerd.WithPullUnpack)
	if err != nil {
		return err
	}

	// create a container
	container, err := client.NewContainer(
		ctx,
		"redis-server",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("redis-server-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		return err
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// create a task from the container
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		return err
	}
	defer task.Delete(ctx)

	// make sure we wait before calling start
	exitStatusC, err := task.Wait(ctx)
	if err != nil {
		return err
	}

	// call start on the task to execute the redis server
	if err := task.Start(ctx); err != nil {
		return err
	}

	// sleep for a bit to see the logs
	time.Sleep(3 * time.Second)

	// kill the process and get the exit status
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		return err
	}

	// wait for the process to fully exit and print out the exit status
	status := <-exitStatusC
	code, _, err := status.Result()
	if err != nil {
		return err
	}
	fmt.Printf("redis-server exited with status: %d\n", code)

	return nil
}
```

We can build this example and run it as follows to see our hard work come together.

```bash
> go build main.go
> sudo ./main

1:C 04 Aug 20:41:37.682 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 04 Aug 20:41:37.682 # Redis version=4.0.1, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 04 Aug 20:41:37.682 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 04 Aug 20:41:37.682 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
1:M 04 Aug 20:41:37.682 # Server can't set maximum open files to 10032 because of OS error: Operation not permitted.
1:M 04 Aug 20:41:37.682 # Current maximum open files is 1024. maxclients has been reduced to 992 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
1:M 04 Aug 20:41:37.683 * Running mode=standalone, port=6379.
1:M 04 Aug 20:41:37.683 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 04 Aug 20:41:37.684 # Server initialized
1:M 04 Aug 20:41:37.684 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 04 Aug 20:41:37.684 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 04 Aug 20:41:37.684 * Ready to accept connections
1:signal-handler (1501879300) Received SIGTERM scheduling shutdown...
1:M 04 Aug 20:41:40.791 # User requested shutdown...
1:M 04 Aug 20:41:40.791 * Saving the final RDB snapshot before exiting.
1:M 04 Aug 20:41:40.794 * DB saved on disk
1:M 04 Aug 20:41:40.794 # Redis is now ready to exit, bye bye...
redis-server exited with status: 0
```

In the end, we did not have to write that much code to get a container up and running with the client package.

I hope this guide helped to get you up and running with containerd.
Feel free to join the [slack channel](https://dockr.ly/community) if you have any questions, and, as with all things, if you want to help contribute to containerd or this guide, submit a pull request.