CRI-O follows the Kubernetes release cycles with respect to its minor versions (`1.x.y`). Patch releases (`1.x.z`) for Kubernetes are not in sync with those from CRI-O, because Kubernetes schedules them every month, whereas CRI-O provides them only if necessary. If a Kubernetes release reaches End of Life, then the corresponding CRI-O version can be considered End of Life as well.
This means that CRI-O also follows the Kubernetes n-2 release version skew policy when it comes to feature graduation, deprecation or removal. This also applies to features which are independent of Kubernetes. Nevertheless, feature backports to supported release branches, which are independent of Kubernetes or other tools like cri-tools, are still possible. This allows CRI-O to decouple from the Kubernetes release cycle and retain enough flexibility when it comes to implementing new features. Every feature backport will be a case-by-case decision of the community, while the overall compatibility matrix must not be compromised.
For more information, visit the Kubernetes Version Skew Policy.
| CRI-O | Kubernetes | Maintenance status |
| --- | --- | --- |
| `main` branch | `master` branch | Features from the main Kubernetes repository are actively implemented |
| `release-1.x` branch (`v1.x.y`) | `release-1.x` branch (`v1.x.z`) | Maintenance is manual, only bugfixes will be backported. |
The release notes for CRI-O are hand-crafted and can be continuously retrieved from our GitHub pages website.
CRI-O is meant to provide an integration path between OCI conformant runtimes and the Kubelet. Specifically, it implements the Kubelet Container Runtime Interface (CRI) using OCI conformant runtimes. The scope of CRI-O is tied to the scope of the CRI.
At a high level, we expect the scope of CRI-O to be restricted to the following functionalities:
- Support multiple image formats including the existing Docker image format
- Support for multiple means to download images including trust & image verification
- Container image management (managing image layers, overlay filesystems, etc.)
- Container process lifecycle management
- Monitoring and logging required to satisfy the CRI
- Resource isolation as required by the CRI
- Building, signing and pushing images to various image storages
- A CLI utility for interacting with CRI-O. Any CLIs built as part of this project are only meant for testing this project, and there are no guarantees of backward compatibility with them.
CRI-O is an implementation of the Kubernetes Container Runtime Interface (CRI) that will allow Kubernetes to directly launch and manage Open Container Initiative (OCI) containers.
The plan is to use OCI projects and best of breed libraries for different aspects:
- Runtime: runc (or any OCI runtime-spec implementation) and oci runtime tools
- Images: Image management using containers/image
- Storage: Storage and management of image layers using containers/storage
- Networking: Networking support through the use of CNI
It is currently in active development in the Kubernetes community through the design proposal. Questions and issues should be raised in the Kubernetes sig-node Slack channel.
A roadmap that describes the direction of CRI-O can be found here. The project is tracking all ongoing efforts as part of the Feature Roadmap GitHub project.
CRI-O's CI is split up between GitHub Actions and OpenShift CI (Prow). Relevant virtual machine images used for the Prow jobs are built periodically in the jobs:
- periodic-ci-cri-o-cri-o-main-periodics-setup-periodic
- periodic-ci-cri-o-cri-o-main-periodics-setup-fedora-periodic
- periodic-ci-cri-o-cri-o-main-periodics-evented-pleg-periodic
The jobs are maintained in the openshift/release repository and define workflows used for the particular jobs. The actual job definitions can be found in the same repository under `ci-operator/jobs/cri-o/cri-o/cri-o-cri-o-main-presubmits.yaml` for the `main` branch, as well as in the corresponding files for the release branches. The base image configuration for those jobs is available in the same repository under `ci-operator/config/cri-o/cri-o`.
| Command | Description |
| --- | --- |
| `crio(8)` | OCI Kubernetes Container Runtime daemon |
Examples of command-line tools to interact with CRI-O (or other CRI compatible runtimes) are crictl and Podman.
| File | Description |
| --- | --- |
| `crio.conf(5)` | CRI-O Configuration file |
| `policy.json(5)` | Signature Verification Policy File(s) |
| `registries.conf(5)` | Registries Configuration file |
| `storage.conf(5)` | Storage Configuration file |
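As a quick illustration of the TOML layout documented in `crio.conf(5)`, the sketch below writes a small drop-in file overriding two common options. The option names follow `crio.conf(5)`, but the drop-in path and the values are only examples and may differ on your system, so consult the man page for your version.

```shell
# Illustrative only: create a drop-in overriding two crio.conf(5) options.
# The path and values below are examples, not required defaults.
sudo mkdir -p /etc/crio/crio.conf.d
sudo tee /etc/crio/crio.conf.d/10-example.conf <<'EOF'
[crio.runtime]
cgroup_manager = "systemd"

[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
EOF
```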
The security process for reporting vulnerabilities is described in SECURITY.md.
You can configure CRI-O to inject OCI Hooks when creating containers.
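As a rough sketch of what such a hook definition can look like, the snippet below writes a JSON file in the `oci-hooks(5)` format used by the containers libraries. The hook binary `/usr/bin/example-hook` is a placeholder, and the directory shown is only the conventional default hooks location, which is configurable in CRI-O.

```shell
# Hypothetical example: register a prestart hook in the oci-hooks(5) format.
# /usr/bin/example-hook is a placeholder; the directory is the conventional
# default and may differ on your system.
sudo tee /usr/share/containers/oci/hooks.d/example-hook.json <<'EOF'
{
  "version": "1.0.0",
  "hook": {
    "path": "/usr/bin/example-hook"
  },
  "when": {
    "always": true
  },
  "stages": ["prestart"]
}
EOF
```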
We provide useful information for operations and development transfer as it relates to infrastructure that utilizes CRI-O.
For async communication and long running discussions please use issues and pull requests on the GitHub repo. This will be the best place to discuss design and implementation.
For chat communication, we have a channel on the Kubernetes Slack that everyone is welcome to join and chat about development.
We maintain a curated list of links related to CRI-O. Did you find something interesting on the web about the project? Awesome, feel free to open up a PR and add it to the list.
To install CRI-O, you can follow our installation guide. Alternatively, if you'd rather build CRI-O from source, check out our setup guide.
We also provide a way of building static binaries of CRI-O via nix as part of the cri-o/packaging repository. Those binaries are available for every successfully built commit in our Google Cloud Storage bucket.
This means that the latest commit can be installed via our convenience script:
```console
> curl https://raw.githubusercontent.com/cri-o/packaging/main/get | bash
```
The script automatically verifies the uploaded sigstore signatures as well, if the local system has `cosign` available in its `$PATH`. The same applies to the SPDX based bill of materials (SBOM), which gets automatically verified if the `bom` tool is in `$PATH`.
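If you prefer to check an artifact manually, the sketch below shows how a downloaded bundle could be verified with `cosign verify-blob`. It assumes that a detached signature and certificate are published next to the tarball with `.sig` and `.cert` suffixes, and the identity and issuer values are placeholders that must be replaced with the actual publishing workflow's details; none of these specifics are confirmed here.

```shell
# Sketch only: manual verification of an already downloaded bundle.
# The .sig/.cert naming and the identity/issuer values are assumptions.
FILE=cri-o.amd64.v1.21.0.tar.gz
cosign verify-blob \
  --signature "${FILE}.sig" \
  --certificate "${FILE}.cert" \
  --certificate-identity-regexp 'https://github.com/cri-o/packaging/.*' \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  "${FILE}"
```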
Besides `amd64`, we also support the `arm64`, `ppc64le` and `s390x` architectures. This can be selected via the script, too:
```shell
curl https://raw.githubusercontent.com/cri-o/packaging/main/get | bash -s -- -a arm64
```
It is also possible to select a specific git SHA or tag with:

```shell
curl https://raw.githubusercontent.com/cri-o/packaging/main/get | bash -s -- -t v1.21.0
```
The above script resolves to the download URL of the static binary bundle tarball matching the format:
```text
https://storage.googleapis.com/cri-o/artifacts/cri-o.$ARCH.$REV.tar.gz
```
Where `$ARCH` can be `amd64`, `arm64`, `ppc64le` or `s390x` and `$REV` can be any git SHA or tag.
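For example, a bundle for a given architecture and revision can be fetched and unpacked directly from that URL; the tag below is the same illustrative `v1.21.0` used earlier.

```shell
# Download and unpack a static bundle for a specific architecture and tag.
# v1.21.0 is only an example revision; any published git SHA or tag works.
ARCH=amd64
REV=v1.21.0
curl -fSL "https://storage.googleapis.com/cri-o/artifacts/cri-o.${ARCH}.${REV}.tar.gz" \
  -o "cri-o.${ARCH}.${REV}.tar.gz"
tar -xzf "cri-o.${ARCH}.${REV}.tar.gz"
```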
Please be aware that using the latest `main` SHA might cause a race, because the CI might not have finished publishing the artifacts yet, or might have failed.
We also provide a Software Bill of Materials (SBOM) in the SPDX format for each bundle. The SBOM is available at the same URL as the bundle itself, but suffixed with `.spdx`:

```text
https://storage.googleapis.com/cri-o/artifacts/cri-o.$ARCH.$REV.tar.gz.spdx
```
Before you begin, you'll need to start CRI-O.
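If CRI-O was installed from packages, it ships a systemd unit, so starting it typically looks like the sketch below (assuming a systemd-based host; installations from source may differ).

```shell
# Assuming a systemd-based installation that ships the crio.service unit.
sudo systemctl enable --now crio
# Quick sanity check that the daemon is running.
systemctl status crio --no-pager
```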
You can run a local version of Kubernetes with CRI-O using `local-up-cluster.sh`:
- Clone the Kubernetes repository
- From the Kubernetes project directory, run:
  ```shell
  CGROUP_DRIVER=systemd \
  CONTAINER_RUNTIME=remote \
  CONTAINER_RUNTIME_ENDPOINT='unix:///var/run/crio/crio.sock' \
  ./hack/local-up-cluster.sh
  ```
For more guidance in running CRI-O, visit our tutorial page.
By default, CRI-O exposes the gRPC API to fulfill the Container Runtime Interface (CRI) of Kubernetes. In addition, there is an HTTP API to retrieve further runtime status information about CRI-O. Please be aware that this API is not considered stable and production use cases should not rely on it.
On a running CRI-O instance, we can access the API via an HTTP transfer tool like curl:
```console
$ sudo curl -v --unix-socket /var/run/crio/crio.sock http://localhost/info | jq
{
  "storage_driver": "btrfs",
  "storage_root": "/var/lib/containers/storage",
  "cgroup_driver": "systemd",
  "default_id_mappings": { ... }
}
```
The following API entry points are currently supported:
| Path | Content-Type | Description |
| --- | --- | --- |
| `/info` | `application/json` | General information about the runtime, like `storage_driver` and `storage_root`. |
| `/containers/:id` | `application/json` | Dedicated container information, like `name`, `pid` and `image`. |
| `/config` | `application/toml` | The complete TOML configuration (defaults to `/etc/crio/crio.conf`) used by CRI-O. |
| `/pause/:id` | `application/json` | Pause a running container. |
| `/unpause/:id` | `application/json` | Unpause a paused container. |
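For instance, the container endpoint can be queried over the same unix socket as `/info`; the container ID below is only a placeholder and has to be replaced with the ID of a real container.

```shell
# Placeholder ID: substitute the ID of a running container,
# for example one listed by `sudo crictl ps -q`.
sudo curl --unix-socket /var/run/crio/crio.sock \
  "http://localhost/containers/3fa1b8d2c4e5" | jq
```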
The subcommand `crio status` can be used to access the API with a dedicated command-line tool. It supports all API endpoints via the dedicated subcommands `config`, `info` and `containers`, for example:
```console
$ sudo crio status info
cgroup driver: systemd
storage driver: btrfs
storage root: /var/lib/containers/storage
default GID mappings (format <container>:<host>:<size>):
  0:0:4294967295
default UID mappings (format <container>:<host>:<size>):
  0:0:4294967295
```
Please refer to the CRI-O Metrics guide.
Please refer to the CRI-O Tracing guide.
Some aspects of the Container Runtime are worth some additional explanation. These details are summarized in a dedicated guide.
Having an issue? There are some tips and tricks for debugging located in our debugging guide.
An incomplete list of adopters of CRI-O in production environments can be found here. If you're a user, please help us complete it by submitting a pull request!
A weekly meeting is held to discuss CRI-O development. It is open to everyone. The details to join the meeting are on the wiki.
For more information on how CRI-O is governed, take a look at the governance file.