Race in tests? loop device count before and after mismatch #67
Hi! I am not @mdaffin but I can suggest two things.
I tried moving the lock to the beginning of the test, but then another type of error showed up. The other thing I tried was to run the tests with the test-thread count set to 1.
In the test log[1], I could confirm that the /tmp/blabla binary that was being run did have
Setting the test threads to 1 can't hurt. Moving the lock up was useful because it shows consistency in how you get the error: the right-hand count is one more than the left, regardless. You could get really primitive and bump the sleep value from half a second to, say, 10 seconds and see what happens.
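For reference, one common pattern for "moving the lock to the beginning of the test" is a process-wide mutex that every test touching the shared pool of loop devices acquires for its whole body. The sketch below is hypothetical (the names `LOCK`, `serial_guard`, and the test body are illustrative, not taken from rust-loopdev), and it recovers from mutex poisoning so one panicking test doesn't wedge the rest:

```rust
use std::sync::{Mutex, MutexGuard, OnceLock};

// One lock shared by every test that touches the global pool of loop devices.
static LOCK: OnceLock<Mutex<()>> = OnceLock::new();

fn serial_guard() -> MutexGuard<'static, ()> {
    // A panicking test poisons the mutex; recover the guard anyway so
    // later tests are not blocked by an earlier failure.
    LOCK.get_or_init(|| Mutex::new(()))
        .lock()
        .unwrap_or_else(|poisoned| poisoned.into_inner())
}

// In the real suite this would carry #[test]; shown as a plain function
// so the sketch compiles standalone.
fn attach_detach_does_not_leak_devices() {
    let _guard = serial_guard(); // held until the end of the test body
    // ... count devices, attach, detach, count again ...
}
```

Unlike `cargo test -- --test-threads=1`, this serializes only the tests that opt in, so unrelated tests can still run in parallel.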
I tried bumping that sleep to 10s, but it still fails across multiple runs. We can see the 10s delay between some of this test output:
Hi,
we are getting a test failure on arm64 in Ubuntu mantic with rust-loopdev 0.4.0-3[1].
I added some print statements, ran it in an arm64 VM (which normally has 3 loop devices in use already), and it looks like the loop device count is changing under our feet:
This is the changed function:
Looks like `list_device(None)` right after the `setup()` call already has one less loop device. Which one? I don't know, because if I print `list_device(None)` right at the start of the test, then the failure doesn't happen anymore ;)

I know this is not the latest version of rust-loopdev, and I checked the git log to see whether anything could be a fix for this, but my Rust knowledge is close to zero.
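To make the racy comparison concrete, here is a hypothetical stand-in for the crate's `list_device(None)` test helper: it counts `/dev/loopN` nodes by scanning `/dev` with the standard library (the helper names are mine, not rust-loopdev's). The point is that udev or a concurrent test can add or remove a node between two such scans, so a before/after equality check is inherently racy:

```rust
use std::fs;

/// True for device nodes like `loop0` or `loop17`; false for `loop-control`.
fn is_loop_node(name: &str) -> bool {
    name.strip_prefix("loop")
        .map_or(false, |rest| !rest.is_empty() && rest.bytes().all(|b| b.is_ascii_digit()))
}

/// Count the /dev/loopN nodes currently present; 0 if /dev is unreadable.
fn count_loop_devices() -> usize {
    fs::read_dir("/dev")
        .map(|entries| {
            entries
                .filter_map(Result::ok)
                .filter(|e| is_loop_node(&e.file_name().to_string_lossy()))
                .count()
        })
        .unwrap_or(0)
}
```

The failing test is then the analogue of calling `count_loop_devices()` before `setup()` and after teardown and asserting the two counts match; anything else touching loop devices in between makes that assertion flaky.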
Here is a full test run showing the failure[2].
In my test VM, I have to run the tests multiple times (but not many) until I see the failure. It doesn't matter if it has one or two CPUs.