---
layout: "docs"
page_title: "Drivers: LXC"
sidebar_current: "docs-drivers-lxc"
description: |-
  The LXC task driver is used to run system containers using LXC.
---

# LXC Driver

Name: `LXC`

The `LXC` driver provides an interface for using [LXC](https://linuxcontainers.org/) to run
system containers. Currently, the driver supports launching containers and
resource isolation, but not dynamic ports. The LXC task driver is capable of
running unprivileged containers (i.e., as a non-root user).

## Task Configuration

The `LXC` driver supports the following configuration in the job spec:

* `name` - Name of the container that will be created.
* `clone_from` - If present, the container will be created by cloning this container.
* `template` - Template that will be used to create the container (only used when `clone_from` is absent).
* `distro` - Distro of the container, e.g. `ubuntu` (only used when `clone_from` is absent).
* `release` - Release for the distro, e.g. `vivid` for Ubuntu (only used when `clone_from` is absent).
* `arch` - Architecture of the newly created container, e.g. `amd64` (only used when `clone_from` is absent).

Example:

```
task "webservice" {
  driver = "lxc"
  config {
    name = "nomad-service-1"
    template = "download"
    distro = "ubuntu"
    release = "vivid"
    arch = "amd64"
  }
  resources {
    cpu = 500
    memory = 256
  }
}
```

## Task Directories

The `LXC` driver currently does not support mounting the `alloc/` and `local/` directories.

## Client Requirements

The `LXC` driver requires the LXC libraries to be installed on the agents.

## Client Attributes

The `LXC` driver will set the following client attributes:

* `driver.lxc` - Set to `1` if LXC is found on the host node.
* `driver.lxc.version` - Version of LXC, e.g. `1.0.8`.

## Resource Isolation

### CPU

Nomad limits containers' CPU based on CPU shares. CPU shares allow containers
to burst past their CPU limits. CPU limits will only be imposed when there is
contention for resources. When the host is under load your process may be
throttled to stabilize QoS depending on how many shares it has. You can see how
many CPU shares are available to your process by reading `NOMAD_CPU_LIMIT`.
1000 shares are approximately equal to 1 GHz.

Please keep the implications of CPU shares in mind when you load test workloads
on Nomad.

### Memory

Nomad limits containers' memory usage based on total virtual memory. This means
that containers scheduled by Nomad cannot use swap. This is to ensure that a
swappy process does not degrade performance for other workloads on the same
host.

Since memory is not an elastic resource, you will need to make sure your
container does not exceed the amount of memory allocated to it, or it will be
terminated or crash when it tries to malloc. A process can inspect its memory
limit by reading `NOMAD_MEMORY_LIMIT`, but will need to track its own memory
usage. The memory limit is expressed in megabytes, so 1024 = 1 GB.

### IO

Nomad uses the blkio cgroup, enforced via the `lxc.cgroup.blkio.weight` LXC
config key, to throttle filesystem IO.
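
For reference, a minimal sketch of the equivalent raw LXC configuration,
assuming LXC 1.x config syntax. The driver applies this automatically based on
the task's resources, so you never set it by hand:

```
# Illustrative only: set automatically by the driver.
# blkio weights range from 10 to 1000; higher values receive a
# proportionally larger share of disk time under contention.
lxc.cgroup.blkio.weight = 500
```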

### Security

LXC provides resource isolation by way of [cgroups](https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt) and [namespaces](http://man7.org/linux/man-pages/man7/namespaces.7.html).
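
As an illustration of that isolation (these commands are not part of the
driver), you can inspect a running container from the host using the standard
LXC tooling; the container name below assumes the `nomad-service-1` example
above:

```
# Print the container's init PID on the host (name from the example above)
lxc-info -n nomad-service-1 -p

# Using that PID, list the namespaces the container's init process occupies
ls -l /proc/<pid>/ns

# Show the cgroup hierarchies enforcing the container's resource limits
cat /proc/<pid>/cgroup
```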