Is this a BUG REPORT or FEATURE REQUEST?:
Bug report
What happened:
kubectl wait does not work on a RollingUpgrade, e.g.:
kubectl wait -n upgrade-manager rollingupgrade/xxx --for condition=complete
The wait never succeeds because the controller does not set any conditions on the resource.
Interestingly, the status is set:
Status:
  Current Status:  completed
This behavior changed somewhere between 0.13 and 1.0.4; in 0.13, the condition is set.
What you expected to happen:
Condition should be set, in addition to status.
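For reference, a RollingUpgrade whose controller maintains conditions would expose something like the following in its status. This is a sketch only: the condition type "Complete" is assumed from the kubectl wait invocation above, and the surrounding field names are taken from the describe output below.

```yaml
status:
  completePercentage: 100%
  currentStatus: completed
  conditions:
  - type: Complete            # assumed type matching --for condition=complete
    status: "True"
    lastTransitionTime: "2022-03-01T23:21:39Z"
```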
How to reproduce it (as minimally and precisely as possible):
See "What happened".
Anything else we need to know?:
Environment:
keikoproj/rolling-upgrade-controller:v1.0.4
$ kubectl version -o yaml
clientVersion:
  buildDate: "2021-12-16T08:38:33Z"
  compiler: gc
  gitCommit: 5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e
  gitTreeState: clean
  gitVersion: v1.22.5
  goVersion: go1.16.12
  major: "1"
  minor: "22"
  platform: darwin/amd64
serverVersion:
  buildDate: "2021-10-15T21:46:21Z"
  compiler: gc
  gitCommit: f17b810c9e5a82200d28b6210b458497ddfcf31b
  gitTreeState: clean
  gitVersion: v1.20.11-eks-f17b81
  goVersion: go1.15.15
  major: "1"
  minor: 20+
  platform: linux/amd64
WARNING: version difference between client (1.22) and server (1.20) exceeds the supported minor version skew of +/-1
Other debugging information (if applicable):
$ kubectl -n upgrade-manager describe rollingupgrade/xxx
Name:         xxx
Namespace:    upgrade-manager
Labels:       <none>
Annotations:  <none>
API Version:  upgrademgr.keikoproj.io/v1alpha1
Kind:         RollingUpgrade
Metadata:
  Creation Timestamp:  2022-03-01T23:21:38Z
  Generation:          1
  Managed Fields:
    API Version:  upgrademgr.keikoproj.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:asgName:
        f:nodeIntervalSeconds:
        f:postDrain:
          .:
          f:postWaitScript:
          f:script:
        f:postDrainDelaySeconds:
        f:postTerminate:
          .:
          f:script:
        f:preDrain:
          .:
          f:script:
        f:strategy:
          .:
          f:drainTimeout:
          f:mode:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2022-03-01T23:21:38Z
    API Version:  upgrademgr.keikoproj.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        f:strategy:
          f:maxUnavailable:
          f:type:
      f:status:
        .:
        f:completePercentage:
        f:currentStatus:
        f:endTime:
        f:nodesProcessed:
        f:startTime:
        f:totalNodes:
        f:totalProcessingTime:
    Manager:         manager
    Operation:       Update
    Time:            2022-03-01T23:21:39Z
  Resource Version:  4465852
  UID:               c980e8c7-7995-40b1-9a60-d7cff268cb0f
Spec:
  Asg Name:               xxx
  Node Interval Seconds:  300
  Post Drain:
    Post Wait Script:  echo "Pods at postWait:"
                       kubectl get pods --all-namespaces --field-selector spec.nodeName=${INSTANCE_NAME}
    Script:            echo "Pods at PostDrain:"
                       kubectl get pods --all-namespaces --field-selector spec.nodeName=${INSTANCE_NAME}
  Post Drain Delay Seconds:  90
  Post Terminate:
    Script:  echo "PostTerminate:"
             kubectl get pods --all-namespaces
  Pre Drain:
    Script:  kubectl get pods --all-namespaces --field-selector spec.nodeName=${INSTANCE_NAME}
  Strategy:
    Drain Timeout:  300
    Mode:           eager
Status:
  Complete Percentage:    100%
  Current Status:         completed
  End Time:               2022-03-01T23:21:39Z
  Nodes Processed:        2
  Start Time:             2022-03-01T23:21:38Z
  Total Nodes:            2
  Total Processing Time:  1s
Events:  <none>
$ kubectl -n upgrade-manager logs rolling-upgrade-controller-75c89d58df-qz4p8
{"level":"info","ts":1646176898.8337069,"logger":"controllers.RollingUpgrade","msg":"admitted new rolling upgrade","scalingGroup":"xxx","update strategy":{"type":"randomUpdate","mode":"eager","maxUnavailable":1,"drainTimeout":300},"name":"upgrade-manager/xxx"}
{"level":"info","ts":1646176899.2316418,"logger":"controllers.RollingUpgrade","msg":"scaling group details","scalingGroup":"xxx","desiredInstances":2,"launchConfig":"","name":"upgrade-manager/xxx"}
{"level":"info","ts":1646176899.231667,"logger":"controllers.RollingUpgrade","msg":"checking if rolling upgrade is completed","name":"upgrade-manager/xxx"}
{"level":"info","ts":1646176899.2317128,"logger":"controllers.RollingUpgrade","msg":"no drift in scaling group","name":"upgrade-manager/xxx"}
{"level":"info","ts":1646176899.2422218,"logger":"controllers.RollingUpgrade","msg":"***Reconciling***"}
{"level":"info","ts":1646176899.2422483,"logger":"controllers.RollingUpgrade","msg":"rolling upgrade ended","name":"upgrade-manager/xxx","status":"completed"}
{"level":"info","ts":1646176929.242384,"logger":"controllers.RollingUpgrade","msg":"***Reconciling***"}
{"level":"info","ts":1646176929.2424302,"logger":"controllers.RollingUpgrade","msg":"rolling upgrade ended","name":"upgrade-manager/xxx","status":"completed"}
{"level":"info","ts":1646179004.5837557,"logger":"controllers.RollingUpgrade","msg":"***Reconciling***"}
{"level":"info","ts":1646179004.5838003,"logger":"controllers.RollingUpgrade","msg":"rolling upgrade ended","name":"upgrade-manager/xxx","status":"completed"}
Yes, it looks like we are not updating conditions in the new controller; we should address this. FYI @shreyas-badiger
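The fix amounts to upserting a condition on the CR status whenever the upgrade state changes, alongside the existing currentStatus field. The real controller would use metav1.Condition and meta.SetStatusCondition from k8s.io/apimachinery; the sketch below is a self-contained stand-in with assumed type and field names, not the project's actual API.

```go
package main

import (
	"fmt"
	"time"
)

// Condition is a minimal stand-in for metav1.Condition.
type Condition struct {
	Type               string
	Status             string // "True" or "False"
	LastTransitionTime time.Time
	Reason             string
}

// RollingUpgradeStatus mirrors the fields seen in the describe output,
// plus the Conditions slice the controller is currently not populating.
type RollingUpgradeStatus struct {
	CurrentStatus string
	Conditions    []Condition
}

// setCondition upserts a condition by Type, refreshing the transition
// time only when the Status value actually changes.
func setCondition(conds []Condition, c Condition) []Condition {
	for i, existing := range conds {
		if existing.Type == c.Type {
			if existing.Status != c.Status {
				c.LastTransitionTime = time.Now()
			} else {
				c.LastTransitionTime = existing.LastTransitionTime
			}
			conds[i] = c
			return conds
		}
	}
	c.LastTransitionTime = time.Now()
	return append(conds, c)
}

func main() {
	status := RollingUpgradeStatus{CurrentStatus: "completed"}
	// When marking the upgrade completed, also record the condition so
	// `kubectl wait --for condition=complete` has something to observe.
	status.Conditions = setCondition(status.Conditions, Condition{
		Type:   "Complete", // assumed condition type
		Status: "True",
		Reason: "RollingUpgradeCompleted",
	})
	fmt.Println(status.Conditions[0].Type, status.Conditions[0].Status)
}
```

The upsert (rather than append) matters: reconcile runs repeatedly, as the logs above show, and must not accumulate duplicate conditions.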
Right. Missed updating the CR. Will add it. Thanks @aweeks
Thank you!