autoscaling/v2beta2 is deprecated in k8s 1.23 #449

Closed
kvasilopoulos opened this issue Feb 14, 2023 · 6 comments · Fixed by bentoml/yatai-deployment#114 · May be fixed by #463

Comments

kvasilopoulos commented Feb 14, 2023

Yatai states that it works with Kubernetes clusters running version 1.20 or newer.

We are on 1.26 and we get the following error:

I0214 08:25:37.323134       1 leaderelection.go:258] successfully acquired lease yatai-deployment/b292d523.yatai.ai
1.6763631373232365e+09	DEBUG	events	yatai-deployment-5bdcffb66d-rtfs5_6925bb24-d1cc-4a68-bdc5-92d57ff73ece became leader	{"type": "Normal", "object": {"kind":"Lease","namespace":"yatai-deployment","name":"b292d523.yatai.ai","uid":"4c621713-ea5f-4e63-9170-e585f3867a99","apiVersion":"coordination.k8s.io/v1","resourceVersion":"26214648"}, "reason": "LeaderElection"}
1.6763631373233054e+09	INFO	Starting EventSource	{"controller": "bentodeployment", "controllerGroup": "serving.yatai.ai", "controllerKind": "BentoDeployment", "source": "kind source: *v2alpha1.BentoDeployment"}
1.676363137323376e+09	INFO	Starting EventSource	{"controller": "bentodeployment", "controllerGroup": "serving.yatai.ai", "controllerKind": "BentoDeployment", "source": "kind source: *v1.Deployment"}
1.6763631373233864e+09	INFO	Starting EventSource	{"controller": "bentodeployment", "controllerGroup": "serving.yatai.ai", "controllerKind": "BentoDeployment", "source": "kind source: *v2beta2.HorizontalPodAutoscaler"}
1.6763631373233929e+09	INFO	Starting EventSource	{"controller": "bentodeployment", "controllerGroup": "serving.yatai.ai", "controllerKind": "BentoDeployment", "source": "kind source: *v1.Service"}
1.6763631373233986e+09	INFO	Starting EventSource	{"controller": "bentodeployment", "controllerGroup": "serving.yatai.ai", "controllerKind": "BentoDeployment", "source": "kind source: *v1.Ingress"}
1.6763631373234038e+09	INFO	Starting Controller	{"controller": "bentodeployment", "controllerGroup": "serving.yatai.ai", "controllerKind": "BentoDeployment"}
I0214 08:25:38.374123       1 request.go:601] Waited for 1.046202298s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/batch/v1?timeout=32s
1.6763631387267134e+09	ERROR	controller-runtime.source	if kind is a CRD, it should be installed before calling Start	{"kind": "HorizontalPodAutoscaler.autoscaling", "error": "no matches for kind \"HorizontalPodAutoscaler\" in version \"autoscaling/v2beta2\""}
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/source/source.go:139
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:235
k8s.io/apimachinery/pkg/util/wait.poll
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:582
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:547
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/source/source.go:132
I0214 08:25:49.776787       1 request.go:601] Waited for 1.046409144s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/maps.k8s.elastic.co/v1alpha1?timeout=32s
1.676363150129143e+09	ERROR	controller-runtime.source	if kind is a CRD, it should be installed before calling Start	{"kind": "HorizontalPodAutoscaler.autoscaling", "error": "no matches for kind \"HorizontalPodAutoscaler\" in version \"autoscaling/v2beta2\""}
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/source/source.go:139
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:235
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:662
k8s.io/apimachinery/pkg/util/wait.poll
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:596
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:547
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/source/source.go:132
1.6763631517485435e+09	INFO	start cleaning up abandoned runner services	{"func": "doCleanUpAbandonedRunnerServices"}
1.6763631517516932e+09	INFO	finished cleaning up abandoned runner services	{"func": "doCleanUpAbandonedRunnerServices"}

After some digging, I found the following issue, kubernetes/ingress-nginx#8599, which states that you should upgrade to autoscaling/v2 if you are running k8s 1.23 or newer.

$ kubectl get apiservices | grep autoscaling
v1.autoscaling                              Local                        True        81d
v1alpha1.autoscaling.k8s.elastic.co         Local                        True        4d
v2.autoscaling                              Local                        True        81d
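
For what it's worth, here is a minimal controller-runtime sketch of the kind of change that would address this on the yatai-deployment side (presumably along the lines of what landed in bentoml/yatai-deployment#114): watch the HPA through the stable autoscaling/v2 API instead of the removed autoscaling/v2beta2 one. The reconciler type and import paths below are assumed stand-ins for illustration, not the actual yatai-deployment code.

// Illustrative sketch only; the BentoDeployment import path and reconciler
// fields are assumptions, not the real yatai-deployment source.
package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	autoscalingv2 "k8s.io/api/autoscaling/v2" // previously: k8s.io/api/autoscaling/v2beta2
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	servingv2alpha1 "github.com/bentoml/yatai-deployment/apis/serving/v2alpha1" // assumed import path
)

// BentoDeploymentReconciler is a stand-in for the real reconciler type.
type BentoDeploymentReconciler struct {
	client.Client
}

func (r *BentoDeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Reconciliation logic omitted; this sketch only shows the API-version change.
	return ctrl.Result{}, nil
}

func (r *BentoDeploymentReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&servingv2alpha1.BentoDeployment{}).
		Owns(&appsv1.Deployment{}).
		// Owning the v2 HPA type lets the EventSource start on clusters where
		// v2beta2 is no longer served (Kubernetes 1.26+), avoiding the
		// "no matches for kind HorizontalPodAutoscaler in version autoscaling/v2beta2" error.
		Owns(&autoscalingv2.HorizontalPodAutoscaler{}).
		Complete(r)
}

Any HPA manifests the operator generates would also need apiVersion: autoscaling/v2; the v2 schema is essentially the graduated v2beta2 API, so the metrics and behavior fields carry over largely unchanged.
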
yetone (Member) commented Feb 15, 2023

Thanks for your report! I will fix it ASAP

@kvasilopoulos (Author)

Hi @yetone, do you have an ETA on this? Thanks!

yetone (Member) commented Feb 22, 2023

@kvasilopoulos I will finish this next week

@kvasilopoulos (Author)

@yetone Thanks!!

@alexisthedude

Hello @yetone! We also have the same issue. Any update on this fix?

@alexisthedude

Hi @yetone
We are pretty stuck waiting for this... Yatai is fantastic and we all love it so much that we have built it into every step of what we do. But we also need to keep up to date with k8s, so right now we are left in the air without either Yatai or k8s upgrades.
Is there any ETA on this?
