Chapter 15: CNI
CNI is the contract a container runtime uses to call into network implementation code: it specifies how the runtime invokes a plugin binary and how the plugin returns its result, not what the plugin does.
A CNI runtime executes plugin binaries. It passes a container ID, a network namespace path, an interface name, optional runtime arguments, and JSON configuration. Plugins mutate the target network attachment and return JSON results. The data path might be a Linux bridge, routes, an overlay, eBPF, cloud networking, device plugins, or a chain of smaller plugins.
Library tags and the spec version are separate things: source links in this chapter use containernetworking/cni v1.3.0 and containernetworking/plugins v1.9.1, and the CNI spec they implement is version 1.1.0.
Configuration
CNI configuration is JSON. A single network configuration names one plugin. A configuration list names a sequence of plugins and lets each plugin receive the previous result.
The list-level fields are:
- `cniVersion` names the CNI spec version used for the configuration.
- `cniVersions` can advertise supported versions.
- `name` is the network name.
- `disableCheck` and `disableGC` disable those maintenance operations for a list.
- `plugins` is the ordered plugin chain.
Inside each plugin object, `type` names the binary to execute. Other fields belong to the plugin. Common examples are `ipMasq` for bridge masquerade behavior, `ipam` for address management delegation, `dns` for DNS data in the result path, and `capabilities` for runtime-provided values such as port mappings.
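A configuration list that chains the bridge and portmap plugins might look like the following sketch. The network name, bridge name, and subnet are illustrative values, not defaults.

```json
{
  "cniVersion": "1.1.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.22.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```

On ADD, the bridge plugin runs first and produces the interface and address result; portmap runs second and sees that result plus any runtime-supplied port mappings.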
The spec uses the word "container" broadly. It means the network isolation domain being attached to a network. On Linux that is usually a network namespace path, but the CNI protocol is not restricted to Linux namespace internals.
Invocation
The runtime side executes a plugin binary from CNI_PATH, passes invocation data through environment variables, sends the configuration JSON on stdin, and reads a JSON result from stdout. Failures are structured CNI errors rather than free-form terminal output.
In libcni, the runtime arguments become environment variables like these:
"CNI_COMMAND="+args.Command,
"CNI_NETNS="+args.NetNS,
"CNI_IFNAME="+args.IfName,
The plugin execution path then runs the binary and decodes the returned bytes into a versioned CNI result:
```go
stdoutBytes, err := exec.ExecPlugin(ctx, pluginPath, netconf, args.AsEnv())
// ...
return create.Create(resultVersion, fixedBytes)
```
That is why a CNI plugin does not have to be linked into containerd, kubelet, or any runtime. It has to be executable, discoverable, and able to speak environment variables plus JSON.
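A minimal runtime-side sketch using libcni makes that concrete. The plugin directory, configuration file name, and namespace path below are assumptions for illustration, not requirements.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	ctx := context.Background()

	// Plugin binaries are discovered on these paths (the CNI_PATH role).
	cni := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)

	// Load a configuration list; the file name is illustrative.
	list, err := libcni.ConfListFromFile("/etc/cni/net.d/10-examplenet.conflist")
	if err != nil {
		log.Fatal(err)
	}

	// Invocation data that libcni turns into CNI_* environment variables.
	rt := &libcni.RuntimeConf{
		ContainerID: "example-id",
		NetNS:       "/var/run/netns/example",
		IfName:      "eth0",
	}

	// Run the whole plugin chain for ADD and read back the versioned result.
	result, err := cni.AddNetworkList(ctx, list, rt)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result)
}
```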
Operations
The core operations are verbs sent in CNI_COMMAND.
| Operation | Purpose |
|---|---|
| ADD | Attach the container to the network or apply a plugin's change. |
| DEL | Remove the attachment or undo the plugin's change. |
| CHECK | Verify the expected attachment still exists. |
| STATUS | Report plugin or network availability. |
| VERSION | Report supported CNI versions. |
| GC | Clean stale attachments known to the runtime and plugin. |
ADD is the easy path to understand because it creates visible state. DEL is just as important because interfaces, routes, firewall rules, and IPAM allocations outlive the process that asked for them unless something removes them. CHECK, STATUS, and GC are about drift and maintenance: what exists now, whether the plugin can operate, and which stale attachments can be collected.
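A successful ADD prints a versioned result on stdout, and later DEL, CHECK, and GC calls build on that data. A result for a bridge-plus-host-local chain might look roughly like this; the addresses, interface names, and MAC are illustrative:

```json
{
  "cniVersion": "1.1.0",
  "interfaces": [
    { "name": "cni0" },
    { "name": "veth1a2b3c4d" },
    { "name": "eth0", "sandbox": "/var/run/netns/example", "mac": "aa:bb:cc:dd:ee:ff" }
  ],
  "ips": [
    { "address": "10.22.0.5/16", "gateway": "10.22.0.1", "interface": 2 }
  ],
  "routes": [{ "dst": "0.0.0.0/0" }],
  "dns": {}
}
```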
Chaining
Plugin chaining is the reason a configuration list is more than a list of independent commands. On ADD, libcni walks the list in order and passes each result into the next plugin:
```go
result, err = c.addNetwork(ctx, list.Name, list.CNIVersion, net, result, rt)
```
On DEL, libcni walks the list in reverse order:
```go
for i := len(list.Plugins) - 1; i >= 0; i-- {
	net := list.Plugins[i]
	// ... invoke DEL for this plugin before the ones earlier in the chain ...
}
```
The order matches ownership. If one plugin creates an interface and a later plugin installs port-forwarding rules for that interface, deletion should remove the port-forwarding rules before the underlying interface disappears. From CNI spec 1.1.0 onward, libcni passes cached results into deletion, so a cleanup call has more context than just a container ID.
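Concretely, the previous result travels as a prevResult field that libcni injects into the next plugin's configuration before executing it. A chained plugin such as tuning from containernetworking/plugins would then see stdin shaped roughly like this; the sysctl value and addresses are illustrative:

```json
{
  "cniVersion": "1.1.0",
  "name": "examplenet",
  "type": "tuning",
  "sysctl": { "net.core.somaxconn": "500" },
  "prevResult": {
    "interfaces": [{ "name": "eth0", "sandbox": "/var/run/netns/example" }],
    "ips": [{ "address": "10.22.0.5/16", "gateway": "10.22.0.1", "interface": 0 }]
  }
}
```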
The GC path is separate from normal deletion. libcni can read cached attachments, compare them with the runtime's valid attachment set, delete stale attachments, and issue plugin GC operations for CNI version 1.1.0 and later.
Bridge And IPAM
The bridge plugin is a concrete implementation of the local pattern from the previous chapter. Its ADD path can create or reuse a Linux bridge, create a veth pair, move one end into the target namespace, run IPAM, configure addresses and routes, enable forwarding, and install masquerade rules when configured.
The source makes both halves visible:
```go
hostInterface, containerInterface, err := setupVeth(...)
r, err := ipam.ExecAdd(n.IPAM.Type, args.StdinData)
```
setupVeth handles the link between the host namespace and the target namespace. ipam.ExecAdd delegates address allocation to another plugin named in the ipam block. In a typical bridge configuration, the main plugin wires the device and the IPAM plugin decides which IP address and route data to return.
The host-local IPAM plugin stores allocations on local disk and returns addresses and routes from configured ranges:
```go
ipConf, err := allocator.Get(args.ContainerID, args.IfName, requestedIP)
```
That local-disk detail is why IPAM cleanup cannot be treated as optional. If DEL does not release an allocation, later containers can run out of addresses even after every visible process has exited.
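A host-local ipam block with an illustrative range and an explicit dataDir (host-local defaults to a directory under /var/lib/cni/networks) might look like this:

```json
{
  "ipam": {
    "type": "host-local",
    "ranges": [[{ "subnet": "10.22.0.0/16", "rangeStart": "10.22.0.10", "rangeEnd": "10.22.255.254" }]],
    "routes": [{ "dst": "0.0.0.0/0" }],
    "dataDir": "/var/lib/cni/networks"
  }
}
```

Each allocation is recorded as a small file under that directory, keyed by IP address and holding the container ID and interface name. That file is exactly the state a missed DEL leaves behind.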
Chained Plugin Examples
The portmap plugin shows what a chained plugin looks like. It does not create the pod interface. It expects a previous plugin to have produced a result, reads runtime-supplied port mappings, and installs host port forwarding rules through an iptables or nftables backend.
Its guard is blunt:
```go
if netConf.PrevResult == nil {
	return fmt.Errorf("must be called as chained plugin")
}
```
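The mappings themselves come from the runtime as capability arguments. When the configuration declares the portMappings capability and the runtime supplies values (CapabilityArgs on the libcni RuntimeConf), libcni injects them into the plugin's stdin as runtimeConfig, roughly like this; the ports are illustrative:

```json
{
  "runtimeConfig": {
    "portMappings": [
      { "hostPort": 8080, "containerPort": 80, "protocol": "tcp" }
    ]
  }
}
```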
CNI plugins are not equal peers that can run in any order. Some create base network state. Some allocate addresses. Some decorate or enforce behavior around a previous attachment.
containerd's CNI Wrapper
containerd v2.3.0 depends on github.com/containerd/go-cni v1.1.13, github.com/containernetworking/cni v1.3.0, and github.com/containernetworking/plugins v1.9.1. The go-cni package wraps libcni behind a smaller interface shaped for containerd: Setup, SetupSerially, Remove, Check, Load, Status, and GetConfig.
The namespace attach path maps a containerd namespace object into a libcni network-list add:
```go
r, err := n.cni.AddNetworkList(ctx, n.config, ns.config(n.ifName))
```
containerd CRI knows the pod sandbox ID and the network namespace path; CNI knows how to run the configured plugin chain against that path; the plugin chain owns the host network mutations.
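A minimal consumer of go-cni, following the shape of its README example, looks like this sketch; the directories, sandbox ID, and namespace path are illustrative:

```go
package main

import (
	"context"
	"log"

	gocni "github.com/containerd/go-cni"
)

func main() {
	ctx := context.Background()

	// Discover plugin binaries and configuration files the way containerd does.
	cni, err := gocni.New(
		gocni.WithMinNetworkCount(2), // loopback plus one pod network
		gocni.WithPluginConfDir("/etc/cni/net.d"),
		gocni.WithPluginDir([]string{"/opt/cni/bin"}),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Load the loopback network and the default configuration list.
	if err := cni.Load(gocni.WithLoNetwork, gocni.WithDefaultConf); err != nil {
		log.Fatal(err)
	}

	// Attach a sandbox's network namespace to the configured networks.
	result, err := cni.Setup(ctx, "example-sandbox-id", "/var/run/netns/example")
	if err != nil {
		log.Fatal(err)
	}
	_ = result

	// Tear the attachment down when the sandbox goes away.
	if err := cni.Remove(ctx, "example-sandbox-id", "/var/run/netns/example"); err != nil {
		log.Fatal(err)
	}
}
```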
What CNI Does Not Promise
CNI does not promise that every pod can reach every other pod. Kubernetes defines that model, and plugins implement it with different data paths. CNI does not make plugin execution safe to run casually on a developer host; real calls can create interfaces, alter routes, change firewall rules, and write IPAM state.
CNI is a process protocol: given a namespace path and a JSON config, the runtime runs a plugin chain and reads back a versioned result. The next chapter shows how kubelet and containerd prepare the namespace path for a Kubernetes pod.
Sources And Further Reading
- CNI specification: https://github.com/containernetworking/cni/blob/v1.3.0/SPEC.md
- CNI docs site: https://www.cni.dev/docs/spec/
- libcni API: https://github.com/containernetworking/cni/blob/v1.3.0/libcni/api.go
- libcni invoke args: https://github.com/containernetworking/cni/blob/v1.3.0/pkg/invoke/args.go
- libcni plugin execution: https://github.com/containernetworking/cni/blob/v1.3.0/pkg/invoke/exec.go
- CNI bridge plugin: https://github.com/containernetworking/plugins/blob/v1.9.1/plugins/main/bridge/bridge.go
- host-local IPAM plugin: https://github.com/containernetworking/plugins/blob/v1.9.1/plugins/ipam/host-local/main.go
- portmap plugin: https://github.com/containernetworking/plugins/blob/v1.9.1/plugins/meta/portmap/main.go
- containerd go.mod: https://github.com/containerd/containerd/blob/2976f38ccbfcda5ef1364d63d60b0a304e4bf94a/go.mod
- containerd go-cni: https://github.com/containerd/go-cni/tree/v1.1.13