
Clean up volumes list for ovn-controller pod #182

Draft · wants to merge 6 commits into base: main

Conversation

booxter commented Dec 15, 2023

These volumes seem unnecessary, and they block our progress on moving from hostPaths to e.g. PVCs, which would allow us to disable privileged mode (hostPaths under /home seem to carry SELinux labels that are not available to the containers).
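For illustration, here is a minimal sketch of the kind of swap this unblocks, written with the Kubernetes core/v1 Go types the operator builds on; the volume and claim names are hypothetical, not taken from this PR:

package sketch

import corev1 "k8s.io/api/core/v1"

// Before: a hostPath mount. The host path carries SELinux labels the
// container cannot use, which forces the pod to run privileged.
var hostPathVol = corev1.Volume{
	Name: "var-lib-openvswitch", // hypothetical volume name
	VolumeSource: corev1.VolumeSource{
		HostPath: &corev1.HostPathVolumeSource{Path: "/var/lib/openvswitch"},
	},
}

// After: the same data backed by a PVC, which does not require a
// privileged securityContext on the consuming container.
var pvcVol = corev1.Volume{
	Name: "var-lib-openvswitch",
	VolumeSource: corev1.VolumeSource{
		PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
			ClaimName: "ovn-controller-data", // hypothetical claim name
		},
	},
}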

openshift-ci bot requested review from slawqo and viroel on December 15, 2023 17:14
openshift-ci bot commented Dec 15, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: booxter

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

booxter commented Dec 15, 2023

var-run is consumed by the config job for nicMappings etc. As a result, the config job never exits, looping on:

Ovsdb-server seems not be ready yet. Waiting...
ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Ovsdb-server seems not be ready yet. Waiting...
ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Ovsdb-server seems not be ready yet. Waiting...
ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Ovsdb-server seems not be ready yet. Waiting...

We need to reconcile with this. Maybe we don't need a separate config job at all and can squash it into the same statefulset somehow?
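If the config logic did move into the ovn-controller pod itself, the socket could be shared between containers with an emptyDir instead of any hostPath. A minimal sketch under that assumption, again in core/v1 Go types:

package sketch

import corev1 "k8s.io/api/core/v1"

// Both containers in the pod mount the same emptyDir, so ovs-vsctl in the
// config container can reach /var/run/openvswitch/db.sock without a hostPath.
var runVol = corev1.Volume{
	Name:         "var-run", // name taken from the comment above
	VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
}

// The same mount would be added to both the ovsdb-server and the config
// containers.
var runMount = corev1.VolumeMount{
	Name:      "var-run",
	MountPath: "/var/run/openvswitch",
}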

The commit messages describe the individual changes:

- This allows us to get rid of a hostMount shared between the job pod and the main ovsdb-server pod container (used to let the vsctl command talk to the database socket).
- Getting rid of hostMounts is needed so we can eventually stop running ovn-controller pods as privileged containers. This is to prove that it is possible, now that configJob is squashed into the main ovn-controller pod.
- We log to stdout. Nothing else is needed.
- This directory can be local to the container and does not need to persist.
- Note that the rundir directory is not present in the image, so we have to switch OVN_RUNDIR to point to /tmp, the same as we do for other OVN services managed by the operator. I am not aware of anything using the directory.
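The OVN_RUNDIR change described in the last commit message would amount to roughly this in core/v1 types (the surrounding container wiring is assumed, not shown in the PR text):

package sketch

import corev1 "k8s.io/api/core/v1"

// Point the OVN run directory at /tmp, since the image ships no rundir
// and the directory does not need to persist.
var rundirEnv = corev1.EnvVar{Name: "OVN_RUNDIR", Value: "/tmp"}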
booxter commented Jan 4, 2024

Rebased on top of #195

booxter commented Jan 4, 2024

/ok-to-test

booxter commented Jan 8, 2024

Some of the volumes can be cleaned up with no ill effect. But at least the volume that exposes the ovsdb-server AF_UNIX socket file to configJob cannot be removed until we switch this communication channel to AF_INET sockets. That is technically possible but requires significant work, e.g. deploying SSL certificates for the channel.

This PR may still be useful, but I should first drop the removal of the socket volume from it so we can proceed with a partial cleanup of the volume list.
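For reference, the AF_INET alternative would look roughly like this on the config-job side. The endpoint and certificate paths below are hypothetical illustrations, not something this PR implements; only the ovs-vsctl flags themselves are standard:

package sketch

// Hypothetical command for the config container: ovs-vsctl speaking SSL to
// ovsdb-server over the network instead of via the unix socket file.
var configJobCmd = []string{
	"ovs-vsctl",
	"--db=ssl:ovsdb-server.openstack.svc:6640", // hypothetical endpoint
	"--private-key=/etc/pki/tls/private/ovndb.key",
	"--certificate=/etc/pki/tls/certs/ovndb.crt",
	"--ca-cert=/etc/pki/tls/certs/ovndbca.crt",
	"get", "Open_vSwitch", ".", "external_ids:ovn-bridge-mappings",
}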

openshift-merge-robot commented

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
