
KEP-3659: Update proposal following initial implementation #3919

Closed
wants to merge 2 commits

Conversation

@KnVerey (Contributor) commented Mar 21, 2023

/cc @justinsb
/sig cli

@k8s-ci-robot added the sig/cli, cncf-cla: yes, and kind/kep labels on Mar 21, 2023
@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: KnVerey

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved and size/L labels on Mar 21, 2023
@@ -1,75 +1,5 @@
<!--
**Note:** When your KEP is complete, all of these comment blocks should be removed.
@KnVerey (Contributor Author) commented:

In accordance with this, I removed all the instructional comments for elements we've already completed.

@@ -479,26 +394,26 @@ treat this manifest as invalid and return an error.
Although ApplySet parent objects can (in theory) be of any type, for both performance and simplicity, we choose to limit supported types to the following:
- ConfigMap
- Secret
- custom resources, where the CRD registering them is labeled with `applyset.k8s.io/role/parent` (The value is currently ignored, but implementors should set an empty value to be forwards-compatible with future evolution of this convention.)
- custom resources, where the CRD registering them is labeled with `applyset.kubernetes.io/is-parent-type` (The value is currently ignored, but implementors should set it to "true" to be forwards-compatible with future evolution of this convention.)
@KnVerey (Contributor Author) commented:

The docs PR currently instructs people to set it to "true". I think that is the logical default value given the name we ended up going with.
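For illustration, a minimal sketch of how a CRD author could opt their type into acting as an ApplySet parent, assuming the label convention above; the CRD name `widgets.example.com` is hypothetical:

```bash
# Hypothetical example: mark the CRD registering a custom resource as an allowed
# ApplySet parent type. "true" is the value the docs currently suggest.
kubectl label crd widgets.example.com applyset.kubernetes.io/is-parent-type=true
```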


Additionally, ApplySet parents MUST be labelled with:
Additionally, ApplySet parents MUST be annotated with:
@KnVerey (Contributor Author) commented:

This might have been an uncaught mistake on my part, but in the real code we are currently checking an annotation. While we don't have any reason to query on this, it might be useful? Only to a limited degree, given that the value includes version info. I could go either way.


Based on performance feedback, we can also consider switching to the alternative `applyset.k8s.io/inventory` hint annotation. Even if we do not trust the GKNN list for deletion purposes (we cannot, as it is not the source of truth), it could be used to optimize certain specific cases, most notably the no-op scenario where the current set exactly matches the list.
Based on performance feedback, we can also consider switching to the alternative `applyset.kubernetes.io/inventory` hint annotation. Even if we do not trust the GKNN list for deletion purposes (we cannot, as it is not the source of truth), it could be used to optimize certain specific cases, most notably the no-op scenario where the current set exactly matches the list.
@KnVerey (Contributor Author) commented:

Wdyt about adding perf testing and making a decision on whether to support this as a beta criterion?


```bash
# Simple namespace-scoped apply with prune.
# The parent object will be a Secret named "set1" in the "foo" namespace.
kubectl apply -n foo --prune --applyset=set1 -f .

# Simple apply with prune, with cluster-scoped ApplySet
# The parent object will be the "myapp" Namespace itself.
kubectl apply -n myapp --prune --applyset=namespaces/myapp -f .
```
@KnVerey (Contributor Author) commented:

we decided not to allow anything other than Secret/ConfigMap even during the last KEP update, and this seemingly got missed


Since there is no obvious choice for a cluster-scoped built-in resource that could be similarly chosen as the default ApplySet kind, we will allow the kind to optionally be specified in the `--applyset` flag itself: `--applyset=mykind.v1.mygroup/name`. This is the same format used by `kubectl get`. When a GVR is specified in this manner, kubectl will look up the referenced object and attempt to use it as the parent (using SSA as described above for the Secret case). The referenced object MUST already exist on the cluster by the time the pruning phase begins (it may be created by the main apply operation), as it is not possible for kubectl to sanely construct arbitrary object types from scratch.
Since there is no obvious choice for a cluster-scoped built-in resource that could be similarly chosen as the default ApplySet kind, we will allow a GR to optionally be specified in the `--applyset` flag itself: `--applyset=[RESOURCE][.GROUP]/NAME`. This is one of the formats accepted by `kubectl get`. When a GR is specified in this manner, kubectl will look up the referenced object and attempt to use it as the parent (using SSA as described above for the Secret case). The referenced object MUST already exist on the cluster by the time the pruning phase begins (it may be created by the main apply operation), as it is not possible for kubectl to sanely construct arbitrary object types from scratch.
@KnVerey (Contributor Author) commented:

Note that we never said that the required annotations must also pre-exist. The fact that our implementation currently expects that is a problem for the CR parent flow, and we should revisit that (and potentially be explicit about that somewhere around here).
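As a hedged sketch of the `[RESOURCE][.GROUP]/NAME` flag format described above, using a pre-existing custom resource as the parent might look like the following; the resource (`widgets.example.com`) and object name are hypothetical:

```bash
# Hypothetical example: reference an existing custom resource as the ApplySet
# parent via RESOURCE.GROUP/NAME. Per the text above, the object must already
# exist on the cluster before the pruning phase begins.
kubectl apply -n foo --prune --applyset=widgets.example.com/parent-widget -f .
```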

an error. During implementation of the alpha we will explore to what extent we can
optimize this overlap discovery, particularly in conjunction with
server-side-apply which does not require an object read before applying.
A richer apply tooling than kubectl does will likely establish watches
@KnVerey (Contributor Author) commented:

Removing this, because this whole section is only about kubectl.

However, this is out of scope for kubectl and thus we will likely have to
optimize differently for kubectl. In the worst case, we will have to fetch
the objects before applying (with a set of label-filtered LIST requests),
we will explore to what extent that can be amortized over other kubectl
@KnVerey (Contributor Author) commented:

Being more familiar with the requests made during CSA and SSA, I don't think there's any way to amortize it in SSA.


We differentiate between "adoption" (taking over management of a set of
objects created by another tool), vs "migration" (taking over management of
a set of objects created with the existing pruning mechanism).

We will not support "adoption" of existing ApplySets initially, other than
by re-applying "over the top". Based on user feedback, we may require a flag
@KnVerey (Contributor Author) commented:

Note the change from "flag" to "subcommand or flag on repair command" -- same reasoning as above about overloading apply with flags for edge cases.

In the alpha scope, we will explore suitable "migration" tooling for moving
from existing `--prune` objects. Note that migration is not trivial, in that
different users may expect different behaviors with regard to the GKs selected
or the treatment of objects having/lacking the `last-application-configuration`
@KnVerey (Contributor Author) commented:

I'm not sure what we meant here. The migration command will be subject to the same gotchas in set selection as the old one, but I think all we can do is ask for the same inputs they used on their own command, and go with the results. We could add interactive confirmation maybe?

@@ -528,40 +443,40 @@ We may revisit this and converge on a single, mandatory hint annotation before t
When a tool that wants to list the members of an ApplySet and cannot determine the list of GKs (i.e. because neither hint annotation is used), it may support “discovery”, likely warning that a full cluster crawl is being attempted. Insufficient permissions errors are likely with such functionality, and when they are encountered, the tool should warn that the membership list may be incomplete.

```yaml
applyset.k8s.io/contains-group-kinds: <kind>.<group>[,] # OPTIONAL
applyset.kubernetes.io/contains-group-resources: <resource>.<group>[,] # OPTIONAL
```
@KnVerey (Contributor Author) commented:

This is what we actually did in the alpha, but we may want to go back to the original proposal: #3661 (comment)
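Purely as an illustration of consuming this hint, a tool could read the annotation off a parent object as sketched below; the Secret name, namespace, and output values are hypothetical, and only the annotation key comes from the text above:

```bash
# Hypothetical example: read the group-resources hint from a parent Secret.
# Dots in the annotation key are escaped for kubectl's jsonpath syntax.
kubectl get secret set1 -n foo \
  -o jsonpath='{.metadata.annotations.applyset\.kubernetes\.io/contains-group-resources}'
# Example output: deployments.apps,services,configmaps
```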

@k8s-ci-robot added the size/XL label and removed the size/L label on Apr 14, 2023
@k8s-triage-robot commented:

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jul 13, 2023
@k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jan 19, 2024
@k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor) commented:

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Labels: approved, cncf-cla: yes, kind/kep, lifecycle/rotten, sig/cli, size/XL