No resource limits in the snapshot controller #620

Closed
LarsBingBong opened this issue Nov 25, 2021 · 7 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@LarsBingBong

I can see that there is no resource limits section in the workload spun up by this deployment.

Is that on purpose?

A Kubernetes linting tool is warning me about this, and I generally understood setting resource limits to be a best practice. Or is this a special case?

Looking forward to hearing from you.

Thank you very much

@Kartik494
Member

@LarsBingBong Could you describe the warnings you got for not having resource limits in the deployment file?

@LarsBingBong
Author

LarsBingBong commented Nov 27, 2021

Sure ... I'm using datree, a Kubernetes YAML linting and quality tool. The warnings I get are:

>>  File: infrastructure-services/storage/velero/server/csi-snapshot-support/setup-snapshot-controller.yaml

[V] YAML validation
[V] Kubernetes schema validation

[X] Policy check

❌  Ensure each container has a configured memory request  [1 occurrence]
    — metadata.name: snapshot-controller (kind: Deployment)
💡  Missing property object `requests.memory` - value should be within the accepted boundaries recommended by the organization

❌  Ensure each container has a configured CPU request  [1 occurrence]
    — metadata.name: snapshot-controller (kind: Deployment)
💡  Missing property object `requests.cpu` - value should be within the accepted boundaries recommended by the organization

❌  Ensure each container has a configured memory limit  [1 occurrence]
    — metadata.name: snapshot-controller (kind: Deployment)
💡  Missing property object `limits.memory` - value should be within the accepted boundaries recommended by the organization

❌  Ensure each container has a configured CPU limit  [1 occurrence]
    — metadata.name: snapshot-controller (kind: Deployment)
💡  Missing property object `limits.cpu` - value should be within the accepted boundaries recommended by the organization

❌  Ensure each container has a configured liveness probe  [1 occurrence]
    — metadata.name: snapshot-controller (kind: Deployment)
💡  Missing property object `livenessProbe` - add a properly configured livenessProbe to catch possible deadlocks

❌  Ensure each container has a configured readiness probe  [1 occurrence]
    — metadata.name: snapshot-controller (kind: Deployment)
💡  Missing property object `readinessProbe` - add a properly configured readinessProbe to notify kubelet your Pods are ready for traffic


(Summary)

- Passing YAML validation: 1/1

- Passing Kubernetes (1.18.0) schema validation: 1/1

- Passing policy check: 0/1

This output can be seen in the console of my IDE as Datree executes as part of a git pre-commit hook.

Also, I got a very telling answer over on Slack from Patrick Ohly (quoting part of his answer):

The problem is that any kind of limit will be rather arbitrary. The actual limits depend on the kind of load that the sidecar will have to handle. This is something that cannot be generalized easily. We have an issue open for the csi hostpath driver example deployments, with no good solution...

I hope that clarifies it.
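
For anyone who still needs to satisfy such a policy check in their own cluster, a minimal sketch of a strategic merge patch that adds resource requests and limits to the snapshot-controller Deployment could look like the following. This assumes the Deployment and its container are both named snapshot-controller, as in the upstream setup-snapshot-controller.yaml; the values are placeholders, not project recommendations, and would need tuning to the actual load, as noted above.

# Illustrative patch only: the request/limit values below are placeholders
# and must be tuned to the real load of the cluster they run in.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: snapshot-controller
spec:
  template:
    spec:
      containers:
        - name: snapshot-controller
          resources:
            requests:
              cpu: 10m        # placeholder, tune to observed usage
              memory: 20Mi    # placeholder, tune to observed usage
            limits:
              cpu: 100m       # placeholder, tune to observed usage
              memory: 100Mi   # placeholder, tune to observed usage

Such a patch could be applied downstream (for example as a kustomize strategic merge patch) rather than by editing the upstream manifest. Liveness and readiness probes are left out on purpose, since a sensible probe depends on whether the deployed snapshot-controller version exposes a suitable health endpoint.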

@xing-yang
Collaborator

Adding this issue here for reference: kubernetes-csi/csi-driver-host-path#47

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Feb 27, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed the lifecycle/stale label on Mar 29, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
