[![Linux Build Status](https://travis-ci.org/containernetworking/cni.svg?branch=master)](https://travis-ci.org/containernetworking/cni)
[![Windows Build Status](https://ci.appveyor.com/api/projects/status/wtrkou8oow7x533e/branch/master?svg=true)](https://ci.appveyor.com/project/cni-bot/cni/branch/master)
[![Coverage Status](https://coveralls.io/repos/github/containernetworking/cni/badge.svg?branch=master)](https://coveralls.io/github/containernetworking/cni?branch=master)

![CNI Logo](logo.png)

---

# CNI - the Container Network Interface

## What is CNI?

CNI (_Container Network Interface_), a [Cloud Native Computing Foundation](https://cncf.io) project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins.
CNI concerns itself only with the network connectivity of containers and with removing allocated resources when a container is deleted.
Because of this focus, CNI enjoys wide support and the specification is simple to implement.

As well as the [specification](SPEC.md), this repository contains the Go source code of a [library for integrating CNI into applications](libcni) and an [example command-line tool](cnitool) for executing CNI plugins. A [separate repository contains the reference plugins](https://github.com/containernetworking/plugins) and a template for making new plugins.

The template code makes it straightforward to create a CNI plugin for an existing container networking project.
CNI also makes a good framework for creating a new container networking project from scratch.

Here are recordings of two sessions that the CNI maintainers hosted at KubeCon/CloudNativeCon 2019:

- [Introduction to CNI](https://youtu.be/YjjrQiJOyME)
- [CNI deep dive](https://youtu.be/zChkx-AB5Xc)

## Why develop CNI?
Application containers on Linux are a rapidly evolving area, and within this area networking is not well addressed, as it is highly environment-specific.
We believe that many container runtimes and orchestrators will seek to solve the same problem of making the network layer pluggable.

To avoid duplication, we think it is prudent to define a common interface between network plugins and container execution: hence we put forward this specification, along with libraries for Go and a set of plugins.

## Who is using CNI?

### Container runtimes
- [rkt - container engine](https://coreos.com/blog/rkt-cni-networking.html)
- [Kubernetes - a system to simplify container operations](https://kubernetes.io/docs/admin/network-plugins/)
- [OpenShift - Kubernetes with additional enterprise features](https://github.com/openshift/origin/blob/master/docs/openshift_networking_requirements.md)
- [Cloud Foundry - a platform for cloud applications](https://github.com/cloudfoundry-incubator/cf-networking-release)
- [Apache Mesos - a distributed systems kernel](https://github.com/apache/mesos/blob/master/docs/cni.md)
- [Amazon ECS - a highly scalable, high performance container management service](https://aws.amazon.com/ecs/)
- [Singularity - container platform optimized for HPC, EPC, and AI](https://github.com/sylabs/singularity)
- [OpenSVC - orchestrator for legacy and containerized application stacks](https://docs.opensvc.com/latest/fr/agent.configure.cni.html)

### 3rd party plugins
- [Project Calico - a layer 3 virtual network](https://github.com/projectcalico/calico-cni)
- [Weave - a multi-host Docker network](https://github.com/weaveworks/weave)
- [Contiv Networking - policy networking for various use cases](https://github.com/contiv/netplugin)
- [SR-IOV](https://github.com/hustcat/sriov-cni)
- [Cilium - BPF & XDP for containers](https://github.com/cilium/cilium)
- [Infoblox - enterprise IP address management for containers](https://github.com/infobloxopen/cni-infoblox)
- [Multus - a Multi plugin](https://github.com/Intel-Corp/multus-cni)
- [Romana - Layer 3 CNI plugin supporting network policy for Kubernetes](https://github.com/romana/kube)
- [CNI-Genie - generic CNI network plugin](https://github.com/Huawei-PaaS/CNI-Genie)
- [Nuage CNI - Nuage Networks SDN plugin with network policy support for Kubernetes](https://github.com/nuagenetworks/nuage-cni)
- [Silk - a CNI plugin designed for Cloud Foundry](https://github.com/cloudfoundry-incubator/silk)
- [Linen - a CNI plugin designed for overlay networks with Open vSwitch, fitting SDN/OpenFlow network environments](https://github.com/John-Lin/linen-cni)
- [Vhostuser - a dataplane network plugin that supports OVS-DPDK & VPP](https://github.com/intel/vhost-user-net-plugin)
- [Amazon ECS CNI Plugins - a collection of CNI plugins to configure containers with Amazon EC2 elastic network interfaces (ENIs)](https://github.com/aws/amazon-ecs-cni-plugins)
- [Bonding CNI - a link-aggregating plugin for failover and high-availability networking](https://github.com/Intel-Corp/bond-cni)
- [ovn-kubernetes - a container network plugin built on Open vSwitch (OVS) and Open Virtual Networking (OVN) with support for both Linux and Windows](https://github.com/openvswitch/ovn-kubernetes)
- [Juniper Contrail](https://www.juniper.net/cloud) / [TungstenFabric](https://tungstenfabric.io) - provides an overlay SDN solution, delivering multicloud networking, hybrid cloud networking, simultaneous overlay-underlay support, network policy enforcement, network isolation, service chaining, and flexible load balancing
- [Knitter - a CNI plugin supporting multiple networks for Kubernetes](https://github.com/ZTE/Knitter)
- [DANM - a CNI-compliant networking solution for TelCo workloads running on Kubernetes](https://github.com/nokia/danm)
- [VMware NSX - a CNI plugin that enables automated NSX L2/L3 networking and L4/L7 load balancing; network isolation at the pod, node, and cluster level; and zero-trust security policy for your Kubernetes cluster](https://docs.vmware.com/en/VMware-NSX-T/2.2/com.vmware.nsxt.ncp_kubernetes.doc/GUID-6AFA724E-BB62-4693-B95C-321E8DDEA7E1.html)
- [cni-route-override - a meta CNI plugin that overrides route information](https://github.com/redhat-nfvpe/cni-route-override)
- [Terway - a collection of CNI plugins based on the Alibaba Cloud VPC/ECS network product](https://github.com/AliyunContainerService/terway)
- [Cisco ACI CNI - for on-prem and cloud container networking with a consistent policy and security model](https://github.com/noironetworks/aci-containers)
- [Kube-OVN - a CNI plugin based on OVN/OVS that provides advanced features such as subnets, static IPs, ACLs, QoS, etc.](https://github.com/alauda/kube-ovn)
- [Project Antrea - an Open vSwitch k8s CNI](https://github.com/vmware-tanzu/antrea)
- [OVN4NFV-K8S-Plugin - an OVN-based CNI controller plugin providing cloud-native service function chaining (SFC) and multiple OVN overlay networks](https://github.com/opnfv/ovn4nfv-k8s-plugin)

The CNI team also maintains some [core plugins in a separate repository](https://github.com/containernetworking/plugins).

## Contributing to CNI

We welcome contributions, including [bug reports](https://github.com/containernetworking/cni/issues) and code and documentation improvements.
If you intend to contribute to code or documentation, please read [CONTRIBUTING.md](CONTRIBUTING.md). Also see the [contact section](#contact) in this README.

## How do I use CNI?

### Requirements

The CNI spec is language-agnostic. To use the Go language libraries in this repository, you'll need a recent version of Go. You can find the Go versions covered by our [automated tests](https://travis-ci.org/containernetworking/cni/builds) in [.travis.yml](.travis.yml).
### Reference Plugins

The CNI project maintains a set of [reference plugins](https://github.com/containernetworking/plugins) that implement the CNI specification.
NOTE: the reference plugins used to live in this repository but have been split out into a [separate repository](https://github.com/containernetworking/plugins) as of May 2017.

### Running the plugins

After building and installing the [reference plugins](https://github.com/containernetworking/plugins), you can use the `priv-net-run.sh` and `docker-run.sh` scripts in the `scripts/` directory to exercise the plugins.

**Note: `priv-net-run.sh` depends on `jq`.**

Start out by creating a netconf file to describe a network:

```bash
$ mkdir -p /etc/cni/net.d
$ cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
    "cniVersion": "0.2.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
EOF
$ cat >/etc/cni/net.d/99-loopback.conf <<EOF
{
    "cniVersion": "0.2.0",
    "name": "lo",
    "type": "loopback"
}
EOF
```

The directory `/etc/cni/net.d` is the default location in which the scripts will look for net configurations.
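The scripts in this repository wrap a deliberately simple execution protocol: a CNI plugin is just an executable that reads the netconf JSON from stdin, learns the requested operation and container details from `CNI_*` environment variables, and prints a JSON result on stdout. The sketch below illustrates that contract with an invented stand-in plugin (`/tmp/toy-plugin` and its hard-coded responses are hypothetical; real plugins such as `bridge` actually create interfaces in the target namespace):

```shell
# Write a toy stand-in "plugin" that speaks the CNI execution protocol.
cat > /tmp/toy-plugin <<'EOF'
#!/bin/sh
conf=$(cat)            # the netconf JSON arrives on stdin
case "$CNI_COMMAND" in # the requested operation: ADD, DEL, VERSION, ...
  VERSION) echo '{"cniVersion":"0.4.0","supportedVersions":["0.1.0","0.2.0","0.3.0","0.3.1","0.4.0"]}' ;;
  ADD)     echo '{"cniVersion":"0.4.0","ips":[{"version":"4","address":"10.22.0.2/16"}]}' ;;
  DEL)     ;;          # cleanup; success is exit code 0 with no output
esac
EOF
chmod +x /tmp/toy-plugin

# A runtime invokes a plugin roughly like this: config on stdin,
# operation and container details in environment variables.
echo '{"cniVersion":"0.4.0","name":"mynet","type":"toy"}' |
  CNI_COMMAND=VERSION CNI_CONTAINERID=demo \
  CNI_NETNS=/var/run/netns/demo CNI_IFNAME=eth0 CNI_PATH=/tmp \
  /tmp/toy-plugin      # prints the plugin's supported spec versions as JSON
```

`priv-net-run.sh` and `docker-run.sh` automate exactly this invocation against the real plugin binaries found on `CNI_PATH`, using the netconf files created above.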
Next, build the plugins:

```bash
$ cd $GOPATH/src/github.com/containernetworking/plugins
$ ./build_linux.sh  # or build_windows.sh
```

Finally, execute a command (`ifconfig` in this example) in a private network namespace that has joined the `mynet` network:

```bash
$ CNI_PATH=$GOPATH/src/github.com/containernetworking/plugins/bin
$ cd $GOPATH/src/github.com/containernetworking/cni/scripts
$ sudo CNI_PATH=$CNI_PATH ./priv-net-run.sh ifconfig
eth0      Link encap:Ethernet  HWaddr f2:c2:6f:54:b8:2b
          inet addr:10.22.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::f0c2:6fff:fe54:b82b/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:1 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:90 (90.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
```

The environment variable `CNI_PATH` tells the scripts and library where to look for plugin executables.

## Running a Docker container with network namespace set up by CNI plugins

Use the instructions in the previous section to define a netconf and build the plugins.
Next, the `docker-run.sh` script wraps `docker run` to execute the plugins prior to entering the container:

```bash
$ CNI_PATH=$GOPATH/src/github.com/containernetworking/plugins/bin
$ cd $GOPATH/src/github.com/containernetworking/cni/scripts
$ sudo CNI_PATH=$CNI_PATH ./docker-run.sh --rm busybox:latest ifconfig
eth0      Link encap:Ethernet  HWaddr fa:60:70:aa:07:d1
          inet addr:10.22.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::f860:70ff:feaa:7d1/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:1 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:90 (90.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
```

## What might CNI do in the future?

CNI currently covers a wide range of needs for network configuration due to its simple model and API.
However, in the future CNI might want to branch out into other directions:

- Dynamic updates to existing network configuration
- Dynamic policies for network bandwidth and firewall rules

If these topics are of interest, please contact the team via the mailing list or IRC and find some like-minded people in the community to put a proposal together.

## Where are the binaries?

The plugins moved to a separate repo,
https://github.com/containernetworking/plugins, and the releases there
include binaries and checksums.

Prior to release 0.7.0 the `cni` release also included a `cnitool`
binary; as this is a developer tool, we suggest you build it yourself.
## Contact

For any questions about CNI, please reach out via:

- Email: [cni-dev](https://groups.google.com/forum/#!forum/cni-dev)
- IRC: the #[containernetworking](irc://irc.freenode.net:6667/#containernetworking) channel on [freenode.net](https://freenode.net/)
- Slack: #cni on the [CNCF Slack](https://slack.cncf.io/). NOTE: the previous CNI Slack (containernetworking.slack.com) has been sunsetted.