Version:
Client: v1.25.13
Server: v1.26.8
Also tested with:
Client: v1.29.1
Server: v1.28.4
I create a Deployment and then run kubectl scale and kubectl edit in the order given below (a sketch of the corresponding commands follows the list). The observations and questions all concern the spec.replicas field.
1. apply: replicas is not specified in the YAML, so it defaults to 1; the field is still managed, by 'kubectl-client-side-apply' & 'kube-controller-manager'
2. scaled from 1 to 3: managed by 'kube-controller-manager' & 'kubectl' (subresource 'scale')
3. scaled from 3 to 0: no ownership; unmanaged
4. scaled from 0 to 1: owned only by 'kube-controller-manager'
5. edited from 1 to 0: owned by 'kubectl-edit' only
6. scaled from 0 to 1: owned by 'kube-controller-manager' and 'kubectl' (subresource 'scale')
7. scaled from 1 to 0: no ownership; unmanaged
8. edited from 0 to 1: owned by 'kube-controller-manager' and 'kubectl-edit'
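
For reference, a rough sketch of the commands behind these steps (assuming the manifest below is saved as deployment.yaml, a filename of my choosing), plus the command used to check ownership after each step:

# Step 1: client-side apply; replicas is omitted, so it defaults to 1
kubectl apply -f deployment.yaml

# Steps 2, 3, 4, 6, 7: change replicas through the scale subresource
kubectl scale deployment/clear-nginx-deployment -n demo --replicas=3

# Steps 5 and 8: change spec.replicas directly in the editor
kubectl edit deployment/clear-nginx-deployment -n demo

# Inspect field ownership after each step
kubectl get deployment clear-nginx-deployment -n demo -o yaml --show-managed-fields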
Q1. In (2), why is the ownership of 'kubectl-client-side-apply' revoked?
Q2. In (3), this seems like very strange behaviour. Does it mean a value of 0 shouldn't be owned at all? What is different about replicas=0?
Q3. In (4), why isn't replicas also owned by 'kubectl' with subresource 'scale'?
Q4. In (5), unlike (3) where 'kubectl scale' left replicas=0 with no ownership, why is the field now owned by 'kubectl-edit'?
Q5. In (6), unlike (4), the same scale from 0 to 1 produces different ownership; how do 'kubectl edit' and 'kubectl scale' differ here?
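
For context, the ownership observations above are read from metadata.managedFields on the Deployment. An entry of the following shape (illustrative of the structure, not pasted output) is what ownership of replicas via the scale subresource looks like, e.g. after step (2):

metadata:
  managedFields:
    - apiVersion: apps/v1
      manager: kubectl
      operation: Update
      subresource: scale
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:replicas: {}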
This is the YAML manifest used, for reference:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc-claim
  namespace: demo
  labels:
    app: nginx
spec:
  storageClassName: nutanix-volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clear-nginx-deployment
  namespace: demo
spec:
  selector:
    matchLabels:
      app: clear-nginx
  template:
    metadata:
      labels:
        app: clear-nginx
    spec:
      containers:
        - name: clear-nginx
          image: clearlinux/nginx
          volumeMounts:
            - mountPath: /var/www/html
              name: site-data
          ports:
            - containerPort: 80
      volumes:
        - name: site-data
          persistentVolumeClaim:
            claimName: demo-pvc-claim