
bump multus to v4.1.1 #1907

Closed
wants to merge 1 commit into from
Conversation

kubevirt-bot
Collaborator

bump multus to v4.1.1
Executed by Bumper script

bump multus to v4.1.1

Signed-off-by: CNAO Bump Bot <[email protected]>
@kubevirt-bot kubevirt-bot added the release-note Denotes a PR that will be considered when it comes time to generate release notes. label Sep 25, 2024
@kubevirt-bot kubevirt-bot added the dco-signoff: yes Indicates the PR's author has DCO signed all their commits. label Sep 25, 2024
@kubevirt-bot
Collaborator Author

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign phoracek for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


sonarcloud bot commented Sep 25, 2024

@oshoval
Collaborator

oshoval commented Sep 25, 2024

/cc @EdDev
Locally it did manage to deploy multus (and all the other components in the example CR) this time (previous PRs were broken).
Let's see what CI says.

If the macvtap and multus dynamic networks lanes fail, it's worth checking whether bumping the k8s version (and similar updates) helps.

@oshoval
Collaborator

oshoval commented Sep 25, 2024

There are a bunch of errors, and a kubevirtci regression [1] that will block updates that might be needed on macvtap:
https://prow.ci.kubevirt.io/view/gs/kubevirt-prow/pr-logs/pull/kubevirt_cluster-network-addons-operator/1907/pull-e2e-cnao-multus-dynamic-networks-functests/1838943179429122048
https://prow.ci.kubevirt.io/view/gs/kubevirt-prow/pr-logs/pull/kubevirt_cluster-network-addons-operator/1907/pull-e2e-cluster-network-addons-operator-macvtap-cni-functests/1838943178024030208

The strategy should be to first make sure we can run the standalone repos (macvtap and multus dynamic networks)
with the new multus, and only then bump it here.

The alternative would be to create the minimal quick-and-dirty deployment needed on kubevirt, bypassing CNAO
(see the sketch below), but that would also be temporary and would need some sanity checking of the relevant affected components, if any.
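A purely illustrative sketch of what such a bypass could look like (not verified against this PR, and the manifest path for that tag is an assumption): apply the upstream multus daemonset manifest directly instead of letting CNAO render it, e.g.

$ kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/v4.1.1/deployments/multus-daemonset-thick.yml

Any components that rely on CNAO-managed configuration would still need a manual sanity check afterwards.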

@maiqueb fyi

[1] kubevirt/kubevirtci#1217 (comment)

@oshoval
Collaborator

oshoval commented Sep 26, 2024

Summary of the issues here:
#1858 (comment)

For now, worked around kubevirt/kubevirtci#1217 (comment)
by pinning the kubevirtci version on macvtap.

@oshoval
Collaborator

oshoval commented Sep 26, 2024

Once #1908 is merged,
let's rerun at least the multus dynamic networks lane to see if it helps.

@oshoval
Collaborator

oshoval commented Sep 26, 2024

/retest

@kubevirt-bot
Collaborator Author

@kubevirt-bot: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| pull-e2e-cnao-multus-dynamic-networks-functests | b5d35e4 | link | true | /test pull-e2e-cnao-multus-dynamic-networks-functests |
| pull-e2e-cluster-network-addons-operator-macvtap-cni-functests | b5d35e4 | link | true | /test pull-e2e-cluster-network-addons-operator-macvtap-cni-functests |

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@RamLavi
Collaborator

RamLavi commented Sep 29, 2024

Looks like both lanes fail on tests that expect the network-status to have more elements than it actually does. I wonder what changed in this version. @ormergi @maiqueb thoughts?

@ormergi
Contributor

ormergi commented Sep 29, 2024

> Looks like both lanes fail on tests that expect the network-status to have more elements than it actually does. I wonder what changed in this version. @ormergi @maiqueb thoughts?

It's hard to tell without the test pod manifest.
Maybe it's related to this change in Multus: k8snetworkplumbingwg/multus-cni#1336

In general I suggest changing the assertion in the test to use Equal/ConsistOf matchers for the expected network-status.
Currently it uses a quantity check, so it could have been broken long ago; the status could be anything 😬
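A minimal sketch of what such an assertion could look like, assuming a Gomega-based functest and the network-attachment-definition-client types; the helper name and expected interface names are illustrative, not the actual CNAO test code:

```go
package tests_test

import (
	"encoding/json"

	. "github.com/onsi/gomega"

	nettypes "github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/apis/k8s.cni.cncf.io/v1"
	corev1 "k8s.io/api/core/v1"
)

// assertPodNetworkStatus asserts on the exact set of attachments reported in
// the pod's network-status annotation, instead of only counting its elements.
func assertPodNetworkStatus(pod *corev1.Pod) {
	raw := pod.Annotations[nettypes.NetworkStatusAnnot]

	var status []nettypes.NetworkStatus
	Expect(json.Unmarshal([]byte(raw), &status)).To(Succeed())

	// Expect exactly one default-network entry (eth0) and one secondary
	// attachment (net1 from ns1/tenant-network), in any order.
	Expect(status).To(ConsistOf(
		HaveField("Interface", "eth0"),
		SatisfyAll(
			HaveField("Name", "ns1/tenant-network"),
			HaveField("Interface", "net1"),
		),
	))
}
```

That way a missing default-network entry fails with a clear diff instead of an opaque length mismatch.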

@RamLavi
Collaborator

RamLavi commented Sep 29, 2024

> Looks like both lanes fail on tests that expect the network-status to have more elements than it actually does. I wonder what changed in this version. @ormergi @maiqueb thoughts?

> It's hard to tell without the test pod manifest. Maybe it's related to this change in Multus: k8snetworkplumbingwg/multus-cni#1336
>
> In general I suggest changing the assertion in the test to use Equal/ConsistOf matchers for the expected network-status. Currently it uses a quantity check, so it could have been broken long ago; the status could be anything 😬

A simple example from the multus dynamic networks test:

$ kubectl describe pod -n ns1 tiny-winy-pod 
Name:                      tiny-winy-pod
Namespace:                 ns1
Priority:                  0
Service Account:           default
Node:                      node01/192.168.66.101
Start Time:                Sun, 29 Sep 2024 10:22:57 +0300
Labels:                    app=tiny-winy-pod
Annotations:               cni.projectcalico.org/podIP: 10.244.196.172/32
                           cni.projectcalico.org/podIPs: 10.244.196.172/32,fd10:244::c4ab/128
                           k8s.v1.cni.cncf.io/network-status:
                             [{
                                 "name": "ns1/tenant-network",
                                 "interface": "net1",
                                 "mac": "96:c3:2b:e3:49:c8",
                                 "dns": {}
                             }]
                           k8s.v1.cni.cncf.io/networks: [{"name":"tenant-network","namespace":"ns1"}]
Status:                    Terminating (lasts <invalid>)
Termination Grace Period:  30s
IP:                        10.244.196.172
IPs:
  IP:  10.244.196.172
  IP:  fd10:244::c4ab
Containers:
  samplepod:
    Container ID:  cri-o://a7ead57641361a0ad61f76037fc371a0d59afc31dd864ecf407ce3a4c25c4e59
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.26
    Image ID:      k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/ash
      -c
      trap : TERM INT; sleep infinity & wait
    State:          Running
      Started:      Sun, 29 Sep 2024 10:22:58 +0300
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rmwts (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  kube-api-access-rmwts:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age   From               Message
  ----    ------          ----  ----               -------
  Normal  Scheduled       2s    default-scheduler  Successfully assigned ns1/tiny-winy-pod to node01
  Normal  AddedInterface  2s    multus             Add eth0 [10.244.196.172/32 fd10:244::c4ab/128] from k8s-pod-network
  Normal  AddedInterface  2s    multus             Add net1 [] from ns1/tenant-network
  Normal  Pulled          2s    kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.26" already present on machine
  Normal  Created         1s    kubelet            Created container samplepod
  Normal  Started         1s    kubelet            Started container samplepod

@ormergi
Contributor

ormergi commented Sep 29, 2024

The network-status seems wrong; I think it should report the default cluster network interface (eth0) as well.
According to the events in the description, we can see the pod is attached to the default cluster network.
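For comparison, a network-status covering both attachments would be expected to look roughly like this (the values are illustrative, taken from the pod description above):

k8s.v1.cni.cncf.io/network-status:
  [{
      "name": "k8s-pod-network",
      "interface": "eth0",
      "ips": ["10.244.196.172", "fd10:244::c4ab"],
      "default": true,
      "dns": {}
  },{
      "name": "ns1/tenant-network",
      "interface": "net1",
      "mac": "96:c3:2b:e3:49:c8",
      "dns": {}
  }]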

@oshoval
Collaborator

oshoval commented Sep 30, 2024

Some more info here:
k8snetworkplumbingwg/multus-dynamic-networks-controller#254
#1858

PR to bump multus dynamic networks before this PR (if it passes standalone; otherwise we will need them both together):
#1909
There is also a macvtap bug to fix first.

@maiqueb
Contributor

maiqueb commented Sep 30, 2024

> Looks like both lanes fail on tests that expect the network-status to have more elements than it actually does. I wonder what changed in this version. @ormergi @maiqueb thoughts?

It's clear we're missing the default cluster network in the status.

Please paste here the CNI result you get when you use calico; let's then use that info to analyze the problem, and add a unit test to the status generation library.
@RamLavi

@oshoval
Collaborator

oshoval commented Sep 30, 2024

Best to wait first for #1909 to be merged;
it reduces the e2e errors from 5 to 1 on multus dynamic networks, at least locally on kubevirtci with multus 4.1.1
(waiting first to see it pass as a standalone PR, and if so it will be merged).

@oshoval
Collaborator

oshoval commented Sep 30, 2024

/close
Let's reopen to rebase and solve the conflict:

#1910

@oshoval oshoval closed this Sep 30, 2024
Labels: dco-signoff: yes, release-note, size/M