🐛 Fix Port Leaks #1648
Conversation
Hi @spjmurray. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
✅ Deploy Preview for kubernetes-sigs-cluster-api-openstack ready!
Force-pushed from ada0c5b to 13d3d5c
Force-pushed from 13d3d5c to 512e572
@@ -380,14 +380,6 @@ func (s *Service) createInstanceImpl(eventObject runtime.Object, openStackCluste
	return createdInstance, nil
}

// getPortName appends a suffix to an instance name in order to try and get a unique name per port.
Why did I move this, you will inevitably ask? Well, the problem is I want to associate the test with the function that causes the issue in the first place, but I cannot just export this and include it because of circular dependencies. That led me to thinking this should be a networking_test package anyway, which then led to not being able to create the networking service due to the client field not being exported, which then became ugly because of a hacky NewWithClientForTestOnly() function in the production code. Moving it was by far the simplest solution.
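For context, a sketch of the shape such a function might take; the PortOpts type, its NameSuffix field, and the exact separator characters here are assumptions for illustration, not the verified production code:

package networking

import "fmt"

// PortOpts is a stand-in for the real options type; only the field needed
// for this sketch is included.
type PortOpts struct {
	NameSuffix string
}

// GetPortName sketches the naming scheme described in the doc comment above:
// a per-port suffix on the instance name, falling back to the network index.
func GetPortName(instanceName string, opts *PortOpts, netIndex int) string {
	if opts != nil && opts.NameSuffix != "" {
		return fmt.Sprintf("%s_%s", instanceName, opts.NameSuffix)
	}
	return fmt.Sprintf("%s-%d", instanceName, netIndex)
}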
instanceName := "foo"
portID1 := "dc6e0ae3-dad6-4240-a9cb-e541916f20d3"
portID2 := "a38ab1cb-c2cc-4c1b-9d1d-696ec73356d2"
portName1 := GetPortName(instanceName, nil, 0)
RE: above comment, I wanted to be absolutely sure that things generated with GetPortName will get offed. By adding a tight coupling between the test and the call, you'll be less likely to change one in a broken way without being drawn to the test cases, so a win in my opinion.
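As a minimal illustration of that coupling (the expected strings assume the naming sketch above, so they are assumptions too):

package networking

import "testing"

// TestGetPortName illustrates the coupling: if the naming scheme changes,
// these literals break and draw attention to the test cases.
func TestGetPortName(t *testing.T) {
	if got, want := GetPortName("foo", nil, 0), "foo-0"; got != want {
		t.Errorf("GetPortName() = %q, want %q", got, want)
	}
	if got, want := GetPortName("foo", &PortOpts{NameSuffix: "bar"}, 0), "foo_bar"; got != want {
		t.Errorf("GetPortName() = %q, want %q", got, want)
	}
}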
/ok-to-test
@dulek I've probably broken everything, but this may be a "better", though definitely far more complex, solution; I can always revert back to the KISS version. So I've copied instance creation by generating an …
Force-pushed from 3619d72 to e6a1dc4
Honestly I like this version more, thank you! One comment inline.
for _, p := range portList {
	if err := s.DeletePort(eventObject, p.ID); err != nil {
		return err
	}
}
We should probably error out if there are multiple ports of that name? Unless we can get to such a case in a single cluster scenario too?
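A sketch of what that guard could look like; the Port type and the deletePort callback are stand-ins for the real service types:

package networking

// Port is a stand-in for the real port type.
type Port struct{ ID, Name string }

// deletePortsByName sketches the suggested guard: treat more than one match
// for an expected port name as an error rather than deleting blindly.
func deletePortsByName(portName string, portList []Port, deletePort func(id string) error) error {
	if len(portList) > 1 {
		return fmt.Errorf("found %d ports named %q, expected at most one", len(portList), portName)
	}
	for _, p := range portList {
		if err := deletePort(p.ID); err != nil {
			return err
		}
	}
	return nil
}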
Agreed, update incoming. And now that this has been philosophically reviewed, I guess I had better test it! Luckily I have a 0.8.0 branch... back in a bit.
Good news! It does actually work.
When an instance fails to come up, there's a bug where the port doesn't get cleaned up, and eventually you will exhaust your quota. The code does attempt to garbage collect, but I suspect at some point in the past ports were suffixed with an index and the garbage collection code wasn't made aware of this, so it lists nothing. Replace the API "filtering" that doesn't work with a better algorithm that uses the instance creation code to generate the expected port names, then delete each port if it exists. Also protected with some nice unit tests.
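In outline, the approach that commit message describes might look like this; the helper signatures are assumptions, and GetPortName refers to the sketch earlier in the thread:

package networking

// garbageCollectPorts sketches the algorithm above: rather than relying on
// server-side filtering (which matched nothing once names gained suffixes),
// regenerate each expected port name with the same code the creation path
// uses, look it up, and delete it if it exists.
func garbageCollectPorts(instanceName string, portCount int,
	lookup func(name string) (id string, found bool),
	deletePort func(id string) error) error {
	for i := 0; i < portCount; i++ {
		name := GetPortName(instanceName, nil, i) // same scheme as instance creation
		if id, found := lookup(name); found {
			if err := deletePort(id); err != nil {
				return err
			}
		}
	}
	return nil
}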
Force-pushed from e6a1dc4 to b77e660
/lgtm
// TODO: would be nice if gophercloud could be persuaded to accept multiple
// names as is allowed by the API in order to reduce API traffic.
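The traffic in question comes from needing one list call per name: ports.ListOpts carries a single Name field, so matching n names costs n round trips. A self-contained sketch using real gophercloud calls (the wrapper function itself is illustrative):

package portutil

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/ports"
)

// listPortsByNames shows the chattiness the TODO refers to: one API round
// trip per name, because ListOpts filters on a single Name.
func listPortsByNames(client *gophercloud.ServiceClient, names []string) ([]ports.Port, error) {
	var all []ports.Port
	for _, name := range names {
		pages, err := ports.List(client, ports.ListOpts{Name: name}).AllPages()
		if err != nil {
			return nil, err
		}
		found, err := ports.ExtractPorts(pages)
		if err != nil {
			return nil, err
		}
		all = append(all, found...)
	}
	return all, nil
}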
ping @pierreprinetti
Would you like to resurrect that effort? The use cases are mounting.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: mdbooth, spjmurray. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@mdbooth thanks! Should I unhold this or is that your job? Speaking of which... do we have a rough estimate for 0.8.0 GA?
@spjmurray You get the pleasure of removing your hold :) Unless I get bored waiting for you, that is, but that takes a couple of weeks.
/unhold
Not mine, it's part of the PR template.
/pony
Here's a proposal for "or" junctions in Gophercloud. I would also really appreciate candid feedback on how ergonomic my proposal would be. There are many different ways to add this capability to Gophercloud and this is just one of them.
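Purely for illustration, one conceivable shape for such an "or" junction; this type is not part of gophercloud and may look nothing like the actual proposal:

package portutil

import "net/url"

// MultiNameListOpts is a hypothetical filter that matches any of several
// names in a single request. It does not exist in gophercloud today.
type MultiNameListOpts struct {
	Names []string
}

// ToPortListQuery satisfies gophercloud's ports.ListOptsBuilder interface by
// emitting a repeated name= parameter, which Neutron treats as an OR filter.
func (o MultiNameListOpts) ToPortListQuery() (string, error) {
	q := url.Values{}
	for _, n := range o.Names {
		q.Add("name", n)
	}
	return "?" + q.Encode(), nil
}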
What this PR does / why we need it:
When an instance fails to come up, there's a bug where the port doesn't get cleaned up, and eventually you will exhaust your quota. The code does attempt to garbage collect, but I suspect at some point in the past ports were suffixed with an index and the garbage collection code wasn't made aware of this, so it lists nothing. Replace the API "filtering" that doesn't work with a manual filter that does prefix matching.
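A minimal sketch of that manual prefix filter, reusing the Port stand-in from the earlier sketch (the wiring into the service is assumed):

package networking

import "strings"

// filterPortsByPrefix sketches the manual filter described above: exact-match
// server-side name filtering finds nothing once ports carry index suffixes,
// so list broadly and keep any port whose name starts with the instance name.
func filterPortsByPrefix(portList []Port, instanceName string) []Port {
	var matched []Port
	for _, p := range portList {
		if strings.HasPrefix(p.Name, instanceName) {
			matched = append(matched, p)
		}
	}
	return matched
}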
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #1404
Special notes for your reviewer:
TODOs:
/hold