CRI and CRI-O

When Kubernetes first appeared, it used Docker - and it used it in a way that made Docker essentially a required dependency, hardcoded in various places throughout the Kubernetes codebase. At the time, it was a logical choice: Docker was the most popular and feature-complete container manager, so building Kubernetes on top of it made sense. But then people wanted Kubernetes to support other container managers and container runtimes.
The problem is that when your source code depends heavily on one concrete tool, it's really hard to replace that tool with another one in a way that keeps both the old and the new tool supported. As a result, adding support for alternative container managers proved to be hard - every container manager had its own specifics that Kubernetes had to know about to be able to support it.
Another issue became apparent: Docker was simply too much for Kubernetes. Docker can handle networking, volumes and many other things - and all of those things are already part of Kubernetes itself. It stopped making sense to include something this powerful when you only need to do a handful of things with the containers on a Kubernetes node.
And that’s how the Kubernetes Container Runtime Interface (CRI) appeared, back in 2016. The idea of the CRI is that instead of Kubernetes bundling and supporting many different container runtimes, those runtimes simply need to comply with the CRI standard. Kubernetes, in return, only has to maintain and support this standard and make sure that any standard-compliant runtime works well. It doesn’t matter if you are using Docker or Podman or anything else, as long as this tool supports the Kubernetes Container Runtime Interface.
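Under the hood, the CRI is a gRPC API that the kubelet calls. Here is an abridged, illustrative sketch of its RuntimeService, paraphrased from the Kubernetes cri-api protobuf definitions - the real file declares many more RPCs and message fields:

```proto
// Abridged, illustrative excerpt - not the complete definition.
service RuntimeService {
    // Pod sandboxes: the environment that containers run in.
    rpc RunPodSandbox(RunPodSandboxRequest) returns (RunPodSandboxResponse) {}
    rpc StopPodSandbox(StopPodSandboxRequest) returns (StopPodSandboxResponse) {}
    rpc RemovePodSandbox(RemovePodSandboxRequest) returns (RemovePodSandboxResponse) {}

    // Containers inside a sandbox.
    rpc CreateContainer(CreateContainerRequest) returns (CreateContainerResponse) {}
    rpc StartContainer(StartContainerRequest) returns (StartContainerResponse) {}
    rpc StopContainer(StopContainerRequest) returns (StopContainerResponse) {}
    rpc RemoveContainer(RemoveContainerRequest) returns (RemoveContainerResponse) {}
}
```

Any tool that implements this API (plus a companion ImageService for pulling and listing images) can act as the container manager on a Kubernetes node.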
In theory, you don’t even need to run containers - your CRI-enabled tool could, for example, create virtual machines instead of containers. And in practice too: there are projects that do exactly that, but they are outside the scope of this course.
One of the most stable and widely used implementations of the Container Runtime Interface is called CRI-O - and it’s stable enough to be at the core of OpenShift, the Kubernetes distribution used at thousands of companies, at both small and huge scales. Let’s give CRI-O a try.
CRI-O Hands-on
Installing CRI-O is easy - it’s available in most Linux distributions and, similar to Podman, it relies heavily on Linux to work. CRI-O is not a container runtime itself - instead, it delegates to OCI-compliant container runtimes, the default one being runc.
Once you have both cri-o and cri-tools installed, you need to start the CRI-O daemon - unlike Podman, CRI-O requires a daemon to work, similar to Docker:
systemctl start crio
To talk to CRI-O you need crictl - which, confusingly, is not part of CRI-O but a general-purpose CLI for talking to any CRI-enabled runtime, CRI-O included. Even though it’s confusing, it only shows that all of these standards and interfaces allow a lot of interoperability between different tools.
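crictl discovers the runtime through a configuration file, typically /etc/crictl.yaml. A minimal configuration pointing it at CRI-O could look like this - the socket path below is the usual CRI-O default, but verify it on your distribution:

```yaml
# /etc/crictl.yaml - tell crictl where to find the CRI socket
runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
# Fail fast if the daemon is not responding (seconds)
timeout: 10
```

You can also pass the endpoint ad hoc with the --runtime-endpoint flag, but the config file saves you from typing it on every invocation.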
While CRI-O supports rootless containers to some extent, you need to keep in mind that it’s meant to be a container manager for Kubernetes nodes. On a Kubernetes node, you don’t have many users, and the only job of the node is to run containers. So while you can run CRI-O as a different user, in production environments it will most likely run under root, at least for now.
CRI-O is not capable of building images, but it can pull them - it cannot push, because that’s not CRI-O’s job. CRI-O also uses the same location for container images as Podman and Buildah, so images built by those tools will be available to CRI-O - and, because those images and CRI-O both follow the OCI standards, there should be no compatibility issues. Let’s pull our image:
crictl pull quay.io/kshirinkin/dockerless-curl:v1
CRI-O, of course, has a concept of pods, because it is meant to be a container manager for Kubernetes. But, unlike in Podman, pods are not optional - you have to create a pod, because the pod is the smallest deployable unit in the Kubernetes world. To create a pod, you need a pod definition in either JSON or YAML format:
metadata:
  name: httpd
  namespace: default
  attempt: 1
  uid: hdishd83djaidwnduwk28bcsb
logDirectory: /tmp
linux: {}
And we also need a container definition:
metadata:
  name: curl
image:
  image: quay.io/kshirinkin/dockerless-curl:v1
command: [ sleep, infinity ]
linux: {}
Save the pod definition as pod.yaml, then create the pod:
crictl runp pod.yaml
The newly created pod is visible if you run:
crictl pods -o table
Now we need to create a container inside this pod, passing the pod ID returned by runp (5ab176636cf22 in this example), the container definition and the pod definition:
crictl create 5ab176636cf22 container.yaml pod.yaml
And we also need to start the container, using the container ID that crictl create printed:
crictl start <container-id>
Finally, we have a running container with CRI-O - we can exec into this container, check its logs and do all the other things we would expect from a minimal container manager.
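For example, assuming the container and pod IDs from the steps above (yours will differ), the everyday operations look roughly like this - a sketch, since the exact output depends on your setup:

```shell
# Run a command inside the container
crictl exec -it <container-id> curl --version

# Stream the container's logs
crictl logs <container-id>

# Inspect low-level container state as JSON
crictl inspect <container-id>

# Tear everything down: stop and remove the container,
# then stop and remove the pod sandbox
crictl stop <container-id>
crictl rm <container-id>
crictl stopp <pod-id>
crictl rmp <pod-id>
```

Note the doubled letters in stopp and rmp - those are the pod-level counterparts of stop and rm.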
Unlike Podman, though, CRI-O is not meant to be used by humans directly. It’s a purpose-built container manager for Kubernetes - and that’s where it should be used. It is stable enough to be the default container manager in OpenShift - one of the most popular and widely used Kubernetes distributions. If you look under the hood of any up-to-date OpenShift cluster, you won’t find a sign of Docker - it’s all CRI-O, working over the Kubernetes Container Runtime Interface. But there is more here that we should briefly discuss.
Article Series "The Dockerless Course"
- What’s Wrong With Docker? Introduction to the Dockerless Course
- What Is a Container? Open Container Initiative Explained
- Where Container Images Are Stored: Introduction to Skopeo
- The Standards Behind the Modern Container Images
- Container Bundle Deep Dive
- runc, crun & Container Standards Wrap Up
- Buildah: A Complete Overview
- Container Managers and ContainerD
- Podman: A Complete Overview
- CRI and CRI-O