Chapter 20: runc And containerd Experiments

The final lab group connects the raw Linux experiments to the runtime stack. runc consumes an OCI bundle and drives the kernel setup. containerd prepares image, snapshot, container, and task state, then talks to a runtime v2 shim.

Run these only in a disposable Linux VM. runc and containerd create real processes, mounts, cgroups, runtime directories, snapshots, and shim processes.

OCI Bundle

Question: what does runc need before it can create a container?

Scope: VM-only mutation.

runc expects an OCI bundle: a directory containing a config.json and a root filesystem. The runc README's basic flow is still the right mental model: populate rootfs/, generate a starter spec with runc spec, edit the spec, then run the lifecycle commands.

In the runc source, the spec command builds an example spec and writes it out as config.json:

spec := specconv.Example()
data, err := json.MarshalIndent(spec, "", "\t")
return os.WriteFile(specConfig, data, 0o666)

The lab should inspect these fields before running anything:

find bundle -maxdepth 2 -type f -o -type d | sort
sed -n '1,160p' bundle/config.json
grep -n '"path"\|"args"\|"namespaces"\|"mounts"' bundle/config.json

The rootfs source is a choice with consequences. A prepared static rootfs (a busybox tarball, for example) keeps the lab self-contained. Exporting an image with docker create and docker export matches the runc README but pulls Docker into the dependency list. Pick one and document it in the lab notes; a floating choice makes cleanup unreliable.
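The prepared-tarball option can be sketched end to end. The tarball below is a stand-in fabricated on the spot so the sketch runs anywhere; a real lab would substitute an actual static busybox tarball.

```shell
# Stand-in rootfs tarball so the sketch is self-contained; a real lab
# would use a downloaded static busybox tarball instead.
mkdir -p rootfs-src/bin
printf '#!/bin/sh\necho hello\n' > rootfs-src/bin/sh
chmod +x rootfs-src/bin/sh
tar -C rootfs-src -cf busybox.tar .

# The bundle layout runc expects: rootfs/ next to config.json.
mkdir -p bundle/rootfs
tar -xf busybox.tar -C bundle/rootfs
ls bundle/rootfs/bin
```

From here, cd bundle && runc spec would generate config.json alongside rootfs/; that step needs runc installed, so it is left out of the runnable sketch.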

runc Lifecycle

Question: what is the difference between create and start?

Scope: VM-only mutation.

The runc commands name the split:

Name:  "run",
Usage: "create and run a container",
Name:  "create",
Usage: "create a container",

runc run is a convenience: depending on flags, it creates, starts, waits, and cleans up in one step. The lifecycle lab should use create, state, start, kill, and delete instead, so the state transitions stay visible:

cd bundle
sudo runc create cdb-runc
sudo runc state cdb-runc
sudo runc start cdb-runc
sudo runc state cdb-runc
pid=$(sudo runc state cdb-runc | sed -n 's/.*"pid": *\([0-9][0-9]*\).*/\1/p')
sudo readlink /proc/"$pid"/ns/{mnt,pid,net}
sudo cat /proc/"$pid"/cgroup
sudo runc kill cdb-runc KILL
sudo runc delete cdb-runc
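The sed extraction in the sequence above can be sanity-checked offline. The JSON below is a trimmed stand-in shaped like runc state output, not captured from a real run:

```shell
# Trimmed stand-in for `runc state` output (field names match the real JSON;
# the values are invented for the sketch).
state='{ "ociVersion": "1.0.2", "id": "cdb-runc", "pid": 4217, "status": "running", "bundle": "/root/bundle" }'
pid=$(printf '%s\n' "$state" | sed -n 's/.*"pid": *\([0-9][0-9]*\).*/\1/p')
echo "$pid"    # 4217
```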

The process needs to stay alive long enough to inspect it, so set process.args in config.json to ["sleep", "300"] and process.terminal to false before running runc create.
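The two edits look like this in config.json (fragment only; the real spec carries many more fields):

```json
{
  "process": {
    "terminal": false,
    "args": ["sleep", "300"]
  }
}
```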

Cleanup verification:

sudo runc state cdb-runc || true   # nonzero = clean

containerd Task

Question: where does the shim appear when containerd starts a task?

Scope: VM-only mutation.

containerd's own docs say ctr is for debugging containerd. That is why this lab uses ctr instead of a friendlier container CLI.

Use a dedicated containerd namespace:

sudo ctr namespaces create cdb-lab 2>/dev/null || true
sudo ctr -n cdb-lab images pull docker.io/library/busybox:latest
sudo ctr -n cdb-lab containers create docker.io/library/busybox:latest cdb-task sleep 300
sudo ctr -n cdb-lab containers ls
sudo ctr -n cdb-lab tasks start -d cdb-task
sudo ctr -n cdb-lab tasks ls

Now inspect the runtime boundary:

ps -ef | grep -E 'containerd-shim-runc-v2|sleep 300' | grep -v grep
sudo ctr -n cdb-lab tasks ps cdb-task

The runtime v2 docs make the ownership clear:

containerd, the daemon, does not directly launch containers.

The shim invokes the OCI runtime, usually runc, and holds the task's control socket, so a running task survives containerd restarts.

Cleanup:

sudo ctr -n cdb-lab tasks kill cdb-task
sudo ctr -n cdb-lab tasks delete cdb-task
sudo ctr -n cdb-lab containers delete cdb-task
sudo ctr namespaces remove cdb-lab

If task deletion fails, inspect ctr -n cdb-lab tasks ls before forcing anything.

Bundle From containerd

Question: how does containerd's generated runtime bundle relate to the hand-built runc bundle?

Scope: VM-only mutation, mostly inspection after a task exists.

After creating a task, inspect containerd's runtime state directory for the lab namespace and task. The exact path depends on the containerd build and configuration, but the object to find is stable: a runtime v2 bundle containing an OCI config.json for the task.

The inspection target should include:

sudo find /run/containerd -path '*cdb-lab*' -o -path '*cdb-task*'

Once the bundle is found, compare its config.json to the runc bundle from the earlier lab. The generated spec should show where image config, snapshot mounts, runtime options, namespaces, cgroups, and task arguments ended up.
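A field-by-field comparison is more useful than a raw diff, because the generated spec is much larger. A sketch over two stand-in fragments; the real inputs are bundle/config.json and the config.json inside containerd's runtime bundle, and the cgroupsPath value here is illustrative:

```shell
# Stand-in spec fragments; substitute the two real config.json paths.
printf '%s\n' '{"root":{"path":"rootfs"},"process":{"args":["sleep","300"]}}' > hand.json
printf '%s\n' '{"root":{"path":"rootfs"},"process":{"args":["sleep","300"]},"linux":{"cgroupsPath":"/cdb-lab/cdb-task"}}' > generated.json

for f in hand.json generated.json; do
  echo "== $f"
  grep -o '"cgroupsPath":"[^"]*"' "$f" || echo "(no cgroupsPath)"
done
```

The same loop works for any field worth comparing: swap the grep pattern for "namespaces", "mounts", or "args".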

Events And Logs

Question: what can containerd tell us while the task is running?

Scope: inspect-only after the VM-only task setup.

ctr events can show lifecycle events while another shell creates and starts a task:

sudo ctr -n cdb-lab events

Sequence this with the task lab: the stream prints nothing without lifecycle activity, so start ctr events first, then create and start the task in the other shell. Debug logs and strace follow the same rule.
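Once events are flowing, the topic is the field to watch. The line below is a stand-in shaped roughly like a ctr events line (timestamp, namespace, topic, JSON payload), not captured output:

```shell
# Stand-in event line; the timestamp, namespace, and payload values are invented.
line='2024-01-01 12:00:00.000000000 +0000 UTC cdb-lab /tasks/start {"container_id":"cdb-task","pid":4217}'
printf '%s\n' "$line" | grep -o '/tasks/[a-z-]*'    # /tasks/start
```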

Sources And Further Reading