Describe the bug

I'm trying to deploy Karpenter's `ec2nodeclass.karpenter.k8s.aws`, which recently gained a new field, `spec.kubelet` (moved from another CRD). Here is the file in my Git repository:
```yaml
---
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default-arm
spec:
  amiFamily: AL2
  amiSelectorTerms:
    - id: ami-1234
  kubelet:
    # set same reservations as in base node group
    evictionHard:
      memory.available: 0.5Gi
      nodefs.available: 10%
      nodefs.inodesFree: 10%
    evictionMaxPodGracePeriod: 60
  metadataOptions:
    httpEndpoint: enabled
    httpProtocolIPv6: disabled
    httpPutResponseHopLimit: 2
    httpTokens: required
...
```
Steps to reproduce

When I apply it with Flux, e.g. `flux reconcile kustomization --with-source -n flux-system karpenter-configuration`, the object ends up in the cluster without the `kubelet` part.
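To make the symptom concrete: the rendered manifest in Git contains `spec.kubelet`, but the object read back from the cluster does not. A small hypothetical helper (the function name and toy dicts are mine, not from the issue) that reports dotted paths present in the desired spec but missing from the live one:

```python
def missing_fields(desired: dict, live: dict, prefix: str = "") -> list:
    """Return dotted paths present in `desired` but absent from `live`."""
    missing = []
    for key, value in desired.items():
        path = f"{prefix}.{key}" if prefix else key
        if key not in live:
            missing.append(path)
        elif isinstance(value, dict) and isinstance(live[key], dict):
            missing.extend(missing_fields(value, live[key], path))
    return missing

# Toy specs mirroring the manifest above: `kubelet` is present in Git
# but dropped from the live object after the Flux apply.
desired_spec = {
    "amiFamily": "AL2",
    "kubelet": {"evictionMaxPodGracePeriod": 60},
    "metadataOptions": {"httpTokens": "required"},
}
live_spec = {
    "amiFamily": "AL2",
    "metadataOptions": {"httpTokens": "required"},
}
print(missing_fields(desired_spec, live_spec))  # → ['kubelet']
```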
I'm using the latest component versions, e.g. kustomize-controller v1.4.0. The kustomize-controller logs don't show anything suspicious:
```json
{"level":"info","ts":"2024-12-30T09:50:36.310Z","msg":"All dependencies are ready, proceeding with reconciliation","controller":"kustomization","controllerGroup":"kustomize.toolkit.fluxcd.io","controllerKind":"Kustomization","Kustomization":{"name":"karpenter-configuration","namespace":"flux-system"},"namespace":"flux-system","name":"karpenter-configuration","reconcileID":"e1cfb064-c88c-47aa-b19b-96040df990da"}
{"level":"info","ts":"2024-12-30T09:50:36.580Z","msg":"server-side apply for cluster class types completed","controller":"kustomization","controllerGroup":"kustomize.toolkit.fluxcd.io","controllerKind":"Kustomization","Kustomization":{"name":"karpenter-configuration","namespace":"flux-system"},"namespace":"flux-system","name":"karpenter-configuration","reconcileID":"e1cfb064-c88c-47aa-b19b-96040df990da","output":{"EC2NodeClass/default-arm":"configured"}}
{"level":"info","ts":"2024-12-30T09:50:36.608Z","msg":"server-side apply completed","controller":"kustomization","controllerGroup":"kustomize.toolkit.fluxcd.io","controllerKind":"Kustomization","Kustomization":{"name":"karpenter-configuration","namespace":"flux-system"},"namespace":"flux-system","name":"karpenter-configuration","reconcileID":"e1cfb064-c88c-47aa-b19b-96040df990da","output":{"NodePool/default-arm":"unchanged"},"revision":"main@sha1:130d987c8ac662283295341ab6c3c2666d0bd208"}
{"level":"info","ts":"2024-12-30T09:50:36.681Z","msg":"Reconciliation finished in 371.728782ms, next run in 10m0s","controller":"kustomization","controllerGroup":"kustomize.toolkit.fluxcd.io","controllerKind":"Kustomization","Kustomization":{"name":"karpenter-configuration","namespace":"flux-system"},"namespace":"flux-system","name":"karpenter-configuration","reconcileID":"e1cfb064-c88c-47aa-b19b-96040df990da","revision":"main@sha1:130d987c8ac662283295341ab6c3c2666d0bd208"}
```
```
$ flux trace ec2nodeclass.karpenter.k8s.aws/default-arm

Object:         EC2NodeClass/default-arm
Status:         Managed by Flux
---
Kustomization:  karpenter-configuration
Namespace:      flux-system
Path:           ./kubernetes/karpenter-configuration
Revision:       main@sha1:xxx
Status:         Last reconciled at 2024-12-30 15:10:43 +0200 EET
Message:        Applied revision: main@sha1:xxx
---
GitRepository:  main
Namespace:      flux-system
URL:            xxx
Branch:         main
Revision:       main@sha1:xxx
Status:         Last reconciled at 2024-12-30 14:30:00 +0200 EET
Message:        stored artifact for revision 'main@sha1:xxx'
```
From the Kubernetes perspective, the field exists just fine:
```
$ k explain ec2nodeclass.spec.kubelet

GROUP:    karpenter.k8s.aws
KIND:     EC2NodeClass
VERSION:  v1

FIELD: kubelet <Object>

DESCRIPTION:
    Kubelet defines args to be used when configuring kubelet on provisioned
    nodes. They are a subset of the upstream types, recognizing not all options
    may be supported. Wherever possible, the types and names should reflect the
    upstream kubelet types.
```
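Since `kubectl explain` reads the served CRD schema, one mechanism that *would* silently drop a field on apply is structural-schema pruning: the API server strips any field not present in the CRD's `openAPIV3Schema`. The output above suggests that is not the case here, but the check can be sketched against a dumped CRD. The helper name and toy schema fragment below are mine, purely for illustration:

```python
def schema_has_path(schema: dict, dotted: str) -> bool:
    """Walk an openAPIV3Schema dict and report whether a dotted field path exists."""
    node = schema
    for part in dotted.split("."):
        node = node.get("properties", {}).get(part)
        if node is None:
            return False
    return True

# Toy schema fragment mirroring the `kubectl explain` output above.
crd_schema = {
    "properties": {
        "spec": {
            "properties": {
                "kubelet": {"type": "object"},
            }
        }
    }
}
print(schema_has_path(crd_schema, "spec.kubelet"))  # → True
```

If this returned `False` for the served version, pruning (not Flux) would explain the missing field.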
Expected behavior
When I apply it manually with kubectl, it works fine.
Screenshots and recordings
No response
OS / Distro
macOS
Flux version
v2.4.0
Flux check
```
► checking prerequisites
✔ Kubernetes 1.30.7-eks-56e63d8 >=1.28.0-0
► checking version in cluster
✔ distribution: flux-
✔ bootstrapped: false
► checking controllers
✔ helm-controller: deployment ready
► xxx/docker.io/fluxcd/helm-controller:v1.1.0
✔ kustomize-controller: deployment ready
► xxx/docker.io/fluxcd/kustomize-controller:v1.4.0
✔ notification-controller: deployment ready
► xxx/docker.io/fluxcd/notification-controller:v1.4.0
✔ source-controller: deployment ready
► xxx/docker.io/fluxcd/source-controller:v1.4.1
► checking crds
✔ alerts.notification.toolkit.fluxcd.io/v1beta3
✔ buckets.source.toolkit.fluxcd.io/v1
✔ gitrepositories.source.toolkit.fluxcd.io/v1
✔ helmcharts.source.toolkit.fluxcd.io/v1
✔ helmreleases.helm.toolkit.fluxcd.io/v2
✔ helmrepositories.source.toolkit.fluxcd.io/v1
✔ kustomizations.kustomize.toolkit.fluxcd.io/v1
✔ ocirepositories.source.toolkit.fluxcd.io/v1beta2
✔ providers.notification.toolkit.fluxcd.io/v1beta3
✔ receivers.notification.toolkit.fluxcd.io/v1
✔ all checks passed
```
Git provider
No response
Container Registry provider
No response
Additional context
I'm using the latest component versions, e.g. kustomize-controller
v1.4.0
kustomize-controller logs doesn't show anything suspicious:
from kubernetes perspective the fields exist just fine:
local kustomize build also works: