Chapter 19: Networking And CNI Experiments
These labs build the Linux pieces by hand — namespaces, veth, bridges — before invoking a CNI plugin against the result.
Every command in this chapter is for a disposable Linux VM. These labs create network namespaces, veth devices, bridges, routes, and possibly firewall or CNI state.
Manual Network Namespace
Question: what does a new network namespace contain before a plugin touches it?
Scope: VM-only mutation.
Create a namespace and inspect its starting state:
sudo ip netns add cdb-a
sudo ip -n cdb-a -br link
sudo ip -n cdb-a route
Bring up loopback explicitly:
sudo ip -n cdb-a link set lo up
sudo ip -n cdb-a -br link
A new network namespace has its own link list and route table. Loopback exists, but it is not useful until it is up.
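One way to see the loopback point directly, assuming the lab is rerun with and without the link set lo up step: ping the loopback address from inside the namespace and compare.
sudo ip netns exec cdb-a ping -c 1 127.0.0.1
With lo down the ping typically fails with a network-unreachable error; with lo up it answers.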
Cleanup:
sudo ip netns del cdb-a
ip netns list
veth Pair
Question: how does a namespace get a link to the outside?
Scope: VM-only mutation.
Create a namespace, create a veth pair, move one end into the namespace, and assign addresses:
sudo ip netns add cdb-a
sudo ip link add cdb-host type veth peer name cdb-eth0
sudo ip link set cdb-eth0 netns cdb-a
sudo ip addr add 10.200.0.1/24 dev cdb-host
sudo ip link set cdb-host up
sudo ip -n cdb-a addr add 10.200.0.2/24 dev cdb-eth0
sudo ip -n cdb-a link set cdb-eth0 up
sudo ip -n cdb-a link set lo up
Inspect both sides:
ip -br addr show cdb-host
sudo ip -n cdb-a -br addr
sudo ip -n cdb-a route
ping -c 2 10.200.0.2
sudo ip netns exec cdb-a ping -c 2 10.200.0.1
The veth driver stores peer pointers in both directions; abridged from veth_newlink() in drivers/net/veth.c:
priv = netdev_priv(dev);
rcu_assign_pointer(priv->peer, peer);
priv = netdev_priv(peer);
rcu_assign_pointer(priv->peer, dev);
Each end is an ordinary kernel network device whose transmit path hands packets straight to its peer.
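To watch that hand-off, capture on the host end in one terminal while pinging from the namespace in another (a sketch; tcpdump may need installing in the VM):
sudo tcpdump -ni cdb-host icmp
sudo ip netns exec cdb-a ping -c 2 10.200.0.1
The echo requests and replies show up on cdb-host even though ping ran inside cdb-a. The @ifN suffix in ip -br link output on either end is the peer's interface index, the user-visible face of the peer pointer above.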
Cleanup:
sudo ip netns del cdb-a
sudo ip link del cdb-host 2>/dev/null || true
Deleting the namespace destroys the veth end that was moved into it, and removing either end of a veth pair removes its peer, so the host end normally disappears as well. The explicit ip link del keeps cleanup idempotent in case the pair was created but the move or the namespace delete never happened.
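A quick check that nothing was left behind:
ip -br link show cdb-host 2>/dev/null || echo "cdb-host removed"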
Bridge
Question: what changes when the host-side veth is attached to a bridge?
Scope: VM-only mutation.
Create a bridge and attach a veth host end:
sudo ip netns add cdb-a
sudo ip link add cdb-br0 type bridge
sudo ip addr add 10.201.0.1/24 dev cdb-br0
sudo ip link set cdb-br0 up
sudo ip link add cdb-vetha type veth peer name cdb-eth0
sudo ip link set cdb-vetha master cdb-br0
sudo ip link set cdb-vetha up
sudo ip link set cdb-eth0 netns cdb-a
sudo ip -n cdb-a addr add 10.201.0.2/24 dev cdb-eth0
sudo ip -n cdb-a link set cdb-eth0 up
sudo ip -n cdb-a link set lo up
Inspect the bridge relationship:
bridge link show
ip -br addr show cdb-br0
sudo ip -n cdb-a route
sudo ip netns exec cdb-a ping -c 2 10.201.0.1
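The attachment also shows up in the bridge's forwarding database: after the ping, the MAC of the namespace end is a learned entry behind the cdb-vetha port (addresses and ageing values will differ per run):
bridge fdb show br cdb-br0
bridge link show dev cdb-vetha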
Forwarding and NAT can collide with the VM's default network setup, so this lab stops at bridge attachment.
Cleanup:
sudo ip netns del cdb-a
sudo ip link del cdb-br0
CNI With cnitool
Question: what does CNI add above manual namespace wiring?
Scope: VM-only mutation.
cnitool runs a CNI configuration against an existing network namespace; the namespace is created by the lab first, then the plugin chain mutates it.
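If cnitool is not already on the VM, the upstream docs build it with go install (assumes a Go toolchain; the binary lands in Go's bin directory, so adjust PATH accordingly):
go install github.com/containernetworking/cni/cnitool@latest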
Use a local config and plugin directory, not the host defaults:
sudo mkdir -p /tmp/cdb-cni/net.d /tmp/cdb-cni/bin
sudo ip netns add cdb-cni
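One possible config, a sketch that assumes the standard bridge and host-local plugins are installed under /opt/cni/bin; the network name cdb-net, the bridge name cdb-cni0, and the 10.202.0.0/24 subnet are arbitrary lab values:
# copy only the plugins this config needs into the lab's CNI_PATH
sudo cp /opt/cni/bin/bridge /opt/cni/bin/host-local /tmp/cdb-cni/bin/
sudo tee /tmp/cdb-cni/net.d/10-cdb-net.conflist >/dev/null <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "cdb-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cdb-cni0",
      "isGateway": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.202.0.0/24"
      }
    }
  ]
}
EOF
If the installed plugins are older than v1.0, drop cniVersion to a value they support, such as 0.4.0.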
Whichever plugin the lab uses (a small bridge or ptp config), the environment variables passed to cnitool are the same:
sudo NETCONFPATH=/tmp/cdb-cni/net.d CNI_PATH=/tmp/cdb-cni/bin cnitool add cdb-net /var/run/netns/cdb-cni
sudo NETCONFPATH=/tmp/cdb-cni/net.d CNI_PATH=/tmp/cdb-cni/bin cnitool check cdb-net /var/run/netns/cdb-cni
sudo NETCONFPATH=/tmp/cdb-cni/net.d CNI_PATH=/tmp/cdb-cni/bin cnitool del cdb-net /var/run/netns/cdb-cni
The experiment should inspect three things after ADD: the target namespace links and routes, the host-side link or bridge state, and the IPAM/cache files the plugin wrote. After DEL, inspect the same three places again.
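A concrete pass over those three places, assuming the bridge/host-local sketch above; the /var/lib/cni paths are plugin and libcni defaults and can differ by version or config:
sudo ip -n cdb-cni -br addr
sudo ip -n cdb-cni route
ip -br addr show cdb-cni0
bridge link show
sudo find /var/lib/cni -type f
After cnitool del, the namespace veth and the IPAM allocation file should be gone; the bridge plugin typically leaves the cdb-cni0 device itself in place.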
Cleanup:
sudo ip netns del cdb-cni 2>/dev/null || true
sudo rm -rf /tmp/cdb-cni
DNS Boundary
Question: why can a namespace have working packets but broken names?
Scope: inspect-only or VM-only, depending on how the namespace was created.
CLONE_NEWNET creates no DNS state: a new network namespace isolates links, addresses, and routes, while resolver behavior comes from files (chiefly /etc/resolv.conf) and orchestration policy. In a VM namespace lab, inspect route reachability separately from resolver configuration:
sudo ip -n cdb-a route
sudo ip netns exec cdb-a cat /etc/resolv.conf
If a lab later adds a custom resolver file, it should say exactly how the process sees that file: bind mount, chroot/rootfs content, runtime-generated pod file, or host file inherited by ip netns exec.
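For the ip netns exec case, ip-netns(8) documents a per-namespace override: files under /etc/netns/<name>/ are bind-mounted over their /etc counterparts for the duration of the exec. A minimal sketch (VM-only; the nameserver address is a placeholder):
sudo mkdir -p /etc/netns/cdb-a
echo "nameserver 10.200.0.1" | sudo tee /etc/netns/cdb-a/resolv.conf
sudo ip netns exec cdb-a cat /etc/resolv.conf
sudo rm -rf /etc/netns/cdb-a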
Sources And Further Reading
- network_namespaces(7): https://man7.org/linux/man-pages/man7/network_namespaces.7.html
- ip-netns(8): https://man7.org/linux/man-pages/man8/ip-netns.8.html
- ip-link(8): https://man7.org/linux/man-pages/man8/ip-link.8.html
- ip-address(8): https://man7.org/linux/man-pages/man8/ip-address.8.html
- ip-route(8): https://man7.org/linux/man-pages/man8/ip-route.8.html
- veth(4): https://man7.org/linux/man-pages/man4/veth.4.html
- Linux veth driver: https://github.com/torvalds/linux/blob/57b8e2d666a31fa201432d58f5fe3469a0dd83ba/drivers/net/veth.c
- Linux bridge interface source: https://github.com/torvalds/linux/blob/57b8e2d666a31fa201432d58f5fe3469a0dd83ba/net/bridge/br_if.c
- CNI cnitool: https://www.cni.dev/docs/cnitool/
- CNI specification: https://github.com/containernetworking/cni/blob/v1.3.0/SPEC.md