caja-file-operations: fix estimate for queued copy #1759
base: master
Conversation
I applied this by hand on top of the other PR. It built fine and there was no crash, but a long copy job, once started, tended to keep reporting zero bytes transferred until it was done or nearly so.
transfer_rate = transfer_info->num_bytes / elapsed;
remaining_time = (total_size - transfer_info->num_bytes) / transfer_rate;
This is dangerous, yet nothing really new to this code: if either `elapsed` or `transfer_rate` is 0, it'll crash.
Admittedly, `elapsed` will not be 0 as long as `SECONDS_NEEDED_FOR_RELIABLE_TRANSFER_RATE` is greater than 0, which is going to be the case, but that is not necessarily a good thing to depend on. `transfer_rate` not being 0 puzzles me a tad more, because it means we have to have transferred something before this can be called safely.
What about simply changing the condition from `&& transfer_rate > 0` to `|| transfer_rate <= 0`? That fixes the division by zero, and might even have been the intention.
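For illustration, a minimal sketch of that guard. Only the variable names and the flipped condition come from the snippet and discussion above; the function wrapper, the assumed constant value, and the overall structure are illustrative assumptions, not the actual caja-file-operations.c code:

```c
/* Minimal sketch, NOT the actual caja code: only the variable names and the
 * "|| transfer_rate <= 0" condition come from the discussion above. */
#define SECONDS_NEEDED_FOR_RELIABLE_TRANSFER_RATE 15  /* assumed value */

static void
report_copy_progress_sketch (double elapsed, long long num_bytes,
                             long long total_size)
{
        double transfer_rate = 0;
        double remaining_time;

        if (elapsed > 0)
                transfer_rate = num_bytes / elapsed;

        /* Proposed change: "|| transfer_rate <= 0" instead of the former
         * "&& transfer_rate > 0", so a zero rate skips the estimate. */
        if (elapsed < SECONDS_NEEDED_FOR_RELIABLE_TRANSFER_RATE ||
            transfer_rate <= 0) {
                /* Rate not reliable yet (or nothing transferred):
                 * show progress without a remaining-time estimate. */
                return;
        }

        /* transfer_rate > 0 is guaranteed here, so this division is safe. */
        remaining_time = (total_size - num_bytes) / transfer_rate;
        (void) remaining_time;  /* would feed the progress dialog */
}
```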
Somehow I didn't see that. I didn't get a crash of all of caja, but I DID get a general failure of the progress indicator to function. In general, any input that could be zero cannot be used as a divisor without first checking for and avoiding the zero case.
Indeed, `SECONDS_NEEDED_FOR_RELIABLE_TRANSFER_RATE` could be changed to zero at a later time, so it shouldn't be assumed to be non-zero. Being a fan of minimally invasive fixes, I reverted the original fix (feel free to discard the original fix and the revert when merging) and instead just changed the condition, as proposed.
As the possible division by 0 also affects the delete case, I added a fix for that in a separate commit.
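For illustration only, the delete-side guard might mirror the copy-side sketch above; the names below (`files_done`, `files_left`, a files-per-second `transfer_rate`) are assumptions, not identifiers taken from caja-file-operations.c:

```c
/* Hedged sketch of the analogous guard on the delete path; reuses the
 * assumed SECONDS_NEEDED_FOR_RELIABLE_TRANSFER_RATE from the sketch above.
 * All identifiers here are illustrative, not the actual caja code. */
static void
report_delete_progress_sketch (double elapsed, long long files_done,
                               long long files_left)
{
        double transfer_rate = 0;
        double remaining_time;

        if (elapsed > 0)
                transfer_rate = files_done / elapsed;  /* files per second */

        if (elapsed < SECONDS_NEEDED_FOR_RELIABLE_TRANSFER_RATE ||
            transfer_rate <= 0)
                return;  /* no reliable rate yet: skip the time estimate */

        remaining_time = files_left / transfer_rate;
        (void) remaining_time;  /* would feed the progress dialog */
}
```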
OK, this definitely needs to be squashed during merge. Too many reverts and fixes-of-fixes commits.
@basicmaster: with `git rebase -i` you can edit, delete, reorder commits, etc. very easily. Still wondering why coders don't know this powerful git command...
Could just be new to it: I had to learn these things one at a time myself.
I'm aware of interactive rebases, but didn't want to do a force push here, as I didn't know whether/how GitHub handles that in a MR. Thus I pushed the revert commit and the improved solution commits, and asked you to discard them later.
As GitHub apparently copes with a force push, I have now discarded the initial solution (and its revert) and rebased the branch onto the current `master`.
I was wondering whether anything else is needed to resolve (and eventually merge) this. Have you been able to check the rebased solution in the meantime?
@cwendling Can this conversation be resolved?
Force-pushed the branch from 01bf8cf to fb53663.
Just retested this. I didn't notice any build warnings from the file other than the usual deprecation warnings with default build options. On queuing multiple large files that push the drive to its limit, the first file shows an accurate rate and the second (waiting) file shows no transfer rate. When the second one starts, its indicated transfer rate starts low and quickly accelerates; on a large enough file it would presumably reach the true data rate, but copying to /tmp I ran out of file before it could show full speed. That's with disk images and multi-GB filesystem backups as the test files. This seems to me a much better behavior than we get now, and I didn't get any refusals of the dialog to work properly in a quick test.
Fixing this properly should be easier: Just reset
Fixes the condition for showing an estimate of the remaining duration in case a copy operation is queued, correctly considering the current transfer rate.
Aligning with the copy operation case, this fixes the condition for showing an estimate of the remaining duration for delete operations, preventing a possible division by 0 due to a zero transfer rate.
Force-pushed the branch from fb53663 to c879c3f.
@faulesocke: The case where a job is paused after it has started (e.g. when copying multiple files) also has to be considered here. In that case you definitely need to stop/continue the timer, otherwise you would again start at zero time and get a wrong estimate.
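A minimal sketch of that pause/resume handling, assuming the job keeps a GLib `GTimer` (the `elapsed` value in the quoted code suggests something of the kind); the `CopyJobSketch` struct and the hook names are hypothetical, only the `g_timer_*` calls are real GLib API:

```c
#include <glib.h>

/* Hypothetical job struct; only the GTimer member matters here. */
typedef struct {
        GTimer *time;   /* measures elapsed transfer time for the estimate */
} CopyJobSketch;

/* Hypothetical hook for when a queued job actually starts transferring:
 * restart from zero so the waiting time does not dilute the measured rate. */
void
job_started_sketch (CopyJobSketch *job)
{
        g_timer_start (job->time);
}

/* Hypothetical pause hook: freeze the clock so paused time is not counted. */
void
job_paused_sketch (CopyJobSketch *job)
{
        g_timer_stop (job->time);
}

/* Hypothetical resume hook: continue from the accumulated elapsed time
 * instead of restarting at zero, so the estimate stays meaningful. */
void
job_resumed_sketch (CopyJobSketch *job)
{
        g_timer_continue (job->time);
}
```

With `g_timer_stop()`/`g_timer_continue()`, the elapsed value only accumulates while the job is actually running, which is what a meaningful transfer-rate estimate needs.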
Any updates on this?
From my side, this MR is ready to be merged. As far as I can see, I have answered/addressed all the feedback, but I'm of course happy to make further adjustments if needed.
Fixes the condition for showing an estimate of the remaining duration in case a copy operation is queued, aligning with the delete operation case.