Description
What happened:
I have an HPA spec as follows.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: argocd-server
spec:
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - resource:
      name: cpu
      targetAverageUtilization: 75
    type: Resource
  - resource:
      name: memory
      targetAverageValue: 3096Mi
    type: Resource
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: argocd-server
After applying it, the resulting object looks as follows. Notice that the spec.metrics list has been re-ordered:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"argocd-server","namespace":"argocd"},"spec":{"maxReplicas":2,"metrics":[{"resource":{"name":"cpu","targetAverageUtilization":75},"type":"Resource"},{"resource":{"name":"memory","targetAverageValue":"3096Mi"},"type":"Resource"}],"minReplicas":1,"scaleTargetRef":{"apiVersion":"extensions/v1beta1","kind":"Deployment","name":"argocd-server"}}}
  creationTimestamp: "2019-02-13T06:32:00Z"
  name: argocd-server
  namespace: argocd
  resourceVersion: "1244555"
  selfLink: /apis/autoscaling/v2beta1/namespaces/argocd/horizontalpodautoscalers/argocd-server
  uid: 11d7a94b-2f59-11e9-99d6-a6a55e696d25
spec:
  maxReplicas: 2
  metrics:
  - resource:
      name: memory
      targetAverageValue: 3096Mi
    type: Resource
  - resource:
      name: cpu
      targetAverageUtilization: 75
    type: Resource
  minReplicas: 1
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: argocd-server
status:
  conditions:
  - lastTransitionTime: "2019-02-13T06:32:08Z"
    message: the HPA controller was able to get the target's current scale
    reason: SucceededGetScale
    status: "True"
    type: AbleToScale
  - lastTransitionTime: "2019-02-13T06:32:08Z"
    message: 'the HPA was unable to compute the replica count: unable to get metrics
      for resource memory: unable to fetch metrics from resource metrics API: the
      server could not find the requested resource (get pods.metrics.k8s.io)'
    reason: FailedGetResourceMetric
    status: "False"
    type: ScalingActive
  currentMetrics: null
  currentReplicas: 1
  desiredReplicas: 0
What you expected to happen:
The fact that the list gets re-ordered is problematic for things like GitOps and diffing tools, because in most other cases the order of lists is significant. In our case, we correctly identify the live object spec as different from the desired object.
My expectation is that the order of the list would be preserved.
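As a stop-gap, a diffing tool could normalize the order before comparing. A minimal sketch of that idea, assuming jq is available and treating a sort of spec.metrics by resource name as the canonical order (a workaround, not what we would expect the API to require):
kubectl get hpa.v2beta1.autoscaling argocd-server -o json \
  | jq '.spec.metrics |= sort_by(.resource.name)'
# apply the same sort to the manifest taken from git before diffing the two documents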
How to reproduce it (as minimally and precisely as possible):
kubectl apply -f hpa.yaml # example in description
kubectl get hpa.v2beta1.autoscaling argocd-server -o yaml
Anything else we need to know?:
It's unclear to me whether the HPA controller orders the list deterministically. I hope that it does, because that would at least allow us to give our users some guidance on how to store their HPA manifests in git. A quick way to probe this is sketched below.
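For example (purely as an experiment, not something the API documents as a guarantee), one could apply the same spec with the metrics listed in both orders and compare the resulting live order:
kubectl apply -f hpa.yaml
kubectl get hpa.v2beta1.autoscaling argocd-server -o jsonpath='{.spec.metrics[*].resource.name}{"\n"}'
# swap the two entries under spec.metrics in hpa.yaml, re-apply, and run the same
# get again; identical output would suggest the server-side ordering is stable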
Environment:
- Kubernetes version (use kubectl version):
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-13T23:15:13Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: minikube on darwin
- OS (e.g. from /etc/os-release):
- Kernel (e.g. uname -a):
- Install tools:
- Others: