While load-testing my Kubernetes autoscaler, I found that replica autoscaling stops once the current memory value reaches 7783765333m.
Has anyone else seen this behavior?
I am running a web server in the container and load-testing it with "hey".
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: flux
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  # - type: Resource
  #   resource:
  #     name: cpu
  #     targetAverageUtilization: 500000
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 7082560
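Note that a bare `targetAverageValue` like 7082560 is read as base units (bytes), while the HPA status reports milli-units when the average is fractional: the trailing "m" in 7783765333m means thousandths, so that value is roughly 7.78 MB, not 7.78 GB. A minimal sketch of that conversion (the `parse_quantity` helper is hypothetical, not a Kubernetes API, and handles only the two forms seen in this HPA):

```python
def parse_quantity(q: str) -> float:
    """Convert a Kubernetes-style quantity string to base units.

    Hypothetical helper covering only a bare integer ("7082560")
    and a milli-suffixed value ("7783765333m").
    """
    if q.endswith("m"):
        return int(q[:-1]) / 1000  # "m" = thousandths of a base unit
    return float(q)

print(parse_quantity("7783765333m"))  # ~7.78e6 bytes, i.e. ~7.78 MB
print(parse_quantity("7082560"))      # the configured target, in bytes
```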
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"True","lastTransitionTime":"2019-04-02T21:47:41Z","reason":"ReadyForNewScale","message":"the last scale time was sufficiently old as to warrant a new scale"},{"type":"ScalingActive","status":"True","lastTransitionTime":"2019-04-02T21:38:16Z","reason":"ValidMetricFound","message":"the HPA was able to successfully calculate a replica count from memory resource"},{"type":"ScalingLimited","status":"False","lastTransitionTime":"2019-04-02T21:38:16Z","reason":"DesiredWithinRange","message":"the desired count is within the acceptable range"}]'
    autoscaling.alpha.kubernetes.io/current-metrics: '[{"type":"Resource","resource":{"name":"memory","currentAverageValue":"7783765333m"}}]'
    autoscaling.alpha.kubernetes.io/metrics: '[{"type":"Resource","resource":{"name":"memory","targetAverageValue":"7082560"}}]'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"hello-world","namespace":"flux"},"spec":{"maxReplicas":10,"metrics":[{"resource":{"name":"memory","targetAverageValue":7082560},"type":"Resource"}],"minReplicas":1,"scaleTargetRef":{"apiVersion":"extensions/v1beta1","kind":"Deployment","name":"hello-world"}}}
  creationTimestamp: "2019-04-02T21:37:46Z"
  name: hello-world
  namespace: flux
  resourceVersion: "18502599"
  selfLink: /apis/autoscaling/v1/namespaces/flux/horizontalpodautoscalers/hello-world
  uid: 8e5f4ca3-558f-11e9-900d-064fc6bb52e2
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: hello-world
status:
  currentReplicas: 3
  desiredReplicas: 3
  lastScaleTime: "2019-04-02T21:41:41Z"
You can see that my replica count stops increasing despite the high memory usage.
$ kubectl get hpa
Tue Apr  2 15:19:13 PDT 2019
NAME          REFERENCE                TARGETS               MINPODS   MAXPODS   REPLICAS   AGE
hello-world   Deployment/hello-world   7783765333m/7082560   1         10        3          41m
Tue Apr  2 15:19:23 PDT 2019
NAME          REFERENCE                TARGETS               MINPODS   MAXPODS   REPLICAS   AGE
hello-world   Deployment/hello-world   7783765333m/7082560   1         10        3          41m
Tue Apr  2 15:19:34 PDT 2019
NAME          REFERENCE                TARGETS               MINPODS   MAXPODS   REPLICAS   AGE
hello-world   Deployment/hello-world   7783765333m/7082560   1         10        3          41m
Tue Apr  2 15:19:44 PDT 2019
NAME          REFERENCE                TARGETS               MINPODS   MAXPODS   REPLICAS   AGE
hello-world   Deployment/hello-world   8568832/7082560       1         10        3          41m
Tue Apr  2 15:19:55 PDT 2019
NAME          REFERENCE                TARGETS               MINPODS   MAXPODS   REPLICAS   AGE
hello-world   Deployment/hello-world   8568832/7082560       1         10        3          42m
Tue Apr  2 15:20:05 PDT 2019
NAME          REFERENCE                TARGETS               MINPODS   MAXPODS   REPLICAS   AGE
hello-world   Deployment/hello-world   8568832/7082560       1         10        3          42m
Tue Apr  2 15:20:16 PDT 2019
NAME          REFERENCE                TARGETS               MINPODS   MAXPODS   REPLICAS   AGE
hello-world   Deployment/hello-world   8568832/7082560       1         10        4          42m
Tue Apr  2 15:20:26 PDT 2019
NAME          REFERENCE                TARGETS               MINPODS   MAXPODS   REPLICAS   AGE
hello-world   Deployment/hello-world   8568832/7082560       1         10        4          42m
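For context, the documented HPA rule is desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric), and the controller skips scaling when the ratio is within its tolerance (10% by default, via `--horizontal-pod-autoscaler-tolerance`). Plugging in the numbers above as a sketch (this is my arithmetic under those assumptions, not controller output), the ratio lands just inside the tolerance band, which would be consistent with the count holding at 3:

```python
import math

target = 7_082_560                # targetAverageValue, bytes
current = 7_783_765_333 / 1000    # "7783765333m" converted to bytes
replicas = 3
tolerance = 0.10                  # kube-controller-manager default

ratio = current / target
if abs(ratio - 1.0) <= tolerance:
    desired = replicas            # within tolerance: the HPA does not scale
else:
    desired = math.ceil(replicas * ratio)

print(f"ratio={ratio:.4f} desired={desired}")  # ratio just under 1.10 -> stays at 3
```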