
k8s 1.21 magnum template for PAWS
Closed, Resolved (Public)

Description

A small cluster template already exists for 1.21, as well as for 1.22 (T325538), but Magnum requires the node flavor to be set at the template level. As a result we will likely want a separate template for PAWS rather than reusing the generic one, since PAWS will use a somewhat larger node. This ticket tracks the creation of a k8s 1.21 template with larger worker nodes for PAWS.

openstack coe cluster template create paws-k8s21 \
--image magnum-fedora-coreos-34 \
--external-network wan-transport-eqiad \
--fixed-network lan-flat-cloudinstances2b \
--fixed-subnet cloud-instances2-b-eqiad \
--dns-nameserver 8.8.8.8 \
--network-driver flannel \
--docker-storage-driver overlay2 \
--docker-volume-size 100 \
--master-flavor g3.cores2.ram4.disk20 \
--flavor g3.cores8.ram32.disk20 \
--coe kubernetes \
--labels kube_tag=v1.21.8-rancher1-linux-amd64,hyperkube_prefix=docker.io/rancher/,cloud_provider_enabled=true \
--floating-ip-disabled
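
Before pointing PAWS at it, the template can be sanity-checked with the magnum client; a minimal check, nothing here is specific to this task beyond the template name:

# confirm the template exists and the flavor/label settings took effect
openstack coe cluster template show paws-k8s21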

Event Timeline

rook changed the task status from Open to In Progress. Jan 4 2023, 6:59 PM
rook changed the status of subtask T326260: Normalize PAWS resource usage from Open to In Progress.

The following seems to work fine:

openstack coe cluster create rook3 --cluster-template paws-k8s21 --master-count 1 --node-count 1
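
For reference, a quick way to confirm the test cluster is healthy and to fetch a kubeconfig for it (a sketch; rook3 is the cluster created above, and the --dir target is arbitrary):

# check overall cluster status, then write the kubeconfig to the current directory
openstack coe cluster show rook3 -c status -c status_reason
openstack coe cluster config rook3 --dir .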

However:

openstack coe cluster create rook3 --cluster-template paws-k8s21 --master-count 3 --node-count 3

Does not work, failing with: master_count must be 1 when master_lb_enabled is False (HTTP 400) (Request-ID: req-fe0447af-cfae-41b2-990f-f1cc5218ec0c)
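
The message points at the template rather than the create call, since master_lb_enabled lives on the template; as a sketch, the current value can be checked directly:

# master_lb_enabled is a template-level property
openstack coe cluster template show paws-k8s21 -c master_lb_enabled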

Creating a cluster from a template with the load balancer enabled doesn't immediately help either:

openstack coe cluster template create paws-k8s21-pool \
--image magnum-fedora-coreos-34 \
--external-network wan-transport-eqiad \
--fixed-network lan-flat-cloudinstances2b \
--fixed-subnet cloud-instances2-b-eqiad \
--dns-nameserver 8.8.8.8 \
--network-driver flannel \
--docker-storage-driver overlay2 \
--docker-volume-size 100 \
--master-flavor g3.cores2.ram4.disk20 \
--flavor g3.cores8.ram32.disk20 \
--coe kubernetes \
--labels kube_tag=v1.21.8-rancher1-linux-amd64,hyperkube_prefix=docker.io/rancher/,cloud_provider_enabled=true \
--floating-ip-disabled \
--master-lb-enabled
openstack coe cluster create rook3 --cluster-template paws-k8s21-pool --master-count 3 --node-count 3

Fails to build, with the less-than-illuminating status message of | status_reason | ERROR: Internal Error |
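
That status_reason comes from Magnum itself; the underlying Heat stack usually carries the real error. A rough way to dig further, assuming the Magnum-created stack name starts with the cluster name (the <stack-name> below is a placeholder to fill in from the list output):

# find the stack Magnum created for the cluster, then list its failed resources
openstack stack list | grep rook3
openstack stack failures list <stack-name> --long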

The immediate effect is that we cannot deploy clusters with a control plane of more than one node. This will be limiting for some projects, less so for others.

Created dev template with:

openstack coe cluster template create paws-dev-k8s21 \
--image Fedora-CoreOS-34 \
--external-network wan-transport-codfw \
--fixed-subnet cloud-instances2-b-codfw \
--fixed-network lan-flat-cloudinstances2b \
--dns-nameserver 8.8.8.8 \
--network-driver flannel \
--docker-storage-driver overlay2 \
--docker-volume-size 30 \
--master-flavor g2.cores1.ram2.disk20 \
--flavor g2.cores1.ram2.disk20 \
--coe kubernetes \
--labels kube_tag=v1.21.8-rancher1-linux-amd64,hyperkube_prefix=docker.io/rancher/,cloud_provider_enabled=true

Prod template created as above.
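
For completeness, both templates can be confirmed by listing them, run once against the dev (codfw) deployment and once against the prod (eqiad) deployment:

# should show paws-dev-k8s21 and paws-k8s21 respectively
openstack coe cluster template list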