
caja-file-operations: fix estimate for queued copy #1759

Open
basicmaster wants to merge 2 commits into master from fix_file_operations_copy_estimate

Conversation

basicmaster (Contributor) commented:

Fixes the condition for showing an estimate of the remaining duration when a copy operation is queued, aligning it with the delete operation case.

@lukefromdc (Member) left a comment:

I applied this by hand on top of the other PR. It built fine with no crash, but a long copy job, once started, tended to keep reporting zero bytes transferred until done or nearly so.

Comment on lines 3087 to 3096:

```c
transfer_rate = transfer_info->num_bytes / elapsed;
remaining_time = (total_size - transfer_info->num_bytes) / transfer_rate;
```
@cwendling (Member) commented:

This is dangerous, yet nothing really new to this code: if either elapsed or transfer_rate is 0, it'll crash.
Admittedly, elapsed will not be 0 so long as SECONDS_NEEDED_FOR_RELIABLE_TRANSFER_RATE is greater than 0, which is going to be the case, but that is not necessarily a good thing to depend on.

transfer_rate not being 0 puzzles me a tad more, because it means we have to have transferred something before this can be safely called.

What about simply changing the condition from `&& transfer_rate > 0` to `|| transfer_rate <= 0`? This fixes the zero division, and might even have been the intention.
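For illustration, a minimal sketch of the computation under that guard (the helper name and the macro value here are assumptions made for the sake of a self-contained example, not the actual caja code):

```c
#include <glib.h>

/* Illustrative sketch of the proposed guard; the function name and the
 * macro value are assumptions, not the actual caja-file-operations.c code. */
#define SECONDS_NEEDED_FOR_RELIABLE_TRANSFER_RATE 8 /* value assumed */

static double
estimate_remaining_time (double elapsed, goffset num_bytes, goffset total_size)
{
        double transfer_rate = 0;

        if (elapsed > 0) {
                transfer_rate = num_bytes / elapsed;
        }

        /* Take the "no estimate" branch whenever the division below would be
         * unsafe: either too early for a reliable rate, or nothing has been
         * transferred yet (transfer_rate == 0). */
        if (elapsed < SECONDS_NEEDED_FOR_RELIABLE_TRANSFER_RATE ||
            transfer_rate <= 0) {
                return -1; /* caller shows progress without an estimate */
        }

        return (total_size - num_bytes) / transfer_rate;
}
```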

@lukefromdc (Member) replied:

Somehow I didn't see that. I didn't get a crash of all of caja, but I DID get a general failure of the progress indicator to function. In general, any input that could be zero cannot be divided by without checking for and avoiding the zero case.

@basicmaster (Contributor, author) replied:

Indeed, SECONDS_NEEDED_FOR_RELIABLE_TRANSFER_RATE could be changed to zero at a later time, so it shouldn't be assumed to be non-zero. Being a fan of minimally invasive fixes, I reverted the original fix (feel free to discard the original fix and the revert when merging) and instead just changed the condition, as proposed.

As the possible division-by-0 also affects the delete case, I added a fix for this in a separate commit.
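For illustration, the delete path would get the same shape of guard; a sketch under the assumption that its rate is files deleted per second (all identifiers here are illustrative, not the actual caja code):

```c
/* Illustrative sketch by analogy with the copy path; the names below are
 * assumptions, not the actual caja identifiers. */
#define SECONDS_NEEDED_FOR_RELIABLE_TRANSFER_RATE 8 /* value assumed */

static double
estimate_remaining_delete_time (double elapsed, int deleted_files, int total_files)
{
        double rate = 0;

        if (elapsed > 0) {
                rate = deleted_files / elapsed;
        }

        /* Same guard as the copy path: bail out before the division can see
         * a zero rate. */
        if (elapsed < SECONDS_NEEDED_FOR_RELIABLE_TRANSFER_RATE || rate <= 0) {
                return -1; /* no estimate shown yet */
        }

        return (total_files - deleted_files) / rate;
}
```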

@raveit65 (Member) commented on Feb 2, 2024:

OK, this definitely needs to be squashed during merge. Too many reverts and fixes-of-fixes commits.
@basicmaster: using git rebase -i you can edit, delete, sort, etc. very easily.
Still wondering why coders don't know this powerful git command....

A member replied:

Could just be new to it: I had to learn these things one at a time myself.

@basicmaster (Contributor, author) replied:

I'm aware of interactive rebases but didn't want to do a force push here, as I didn't know whether/how GitHub handles this in a PR. Thus I pushed the revert commit and the improved solution commits, and asked you to discard them later.
As GitHub apparently copes with a force push, I just discarded the initial solution (and its revert) and rebased the branch onto current master.

@basicmaster (Contributor, author) commented:

I was wondering if there is anything else needed to resolve (and, in the end, merge) this. Have you been able to check the rebased solution in the meantime?

@basicmaster (Contributor, author) commented:

@cwendling Can this conversation be resolved?

@raveit65 force-pushed the fix_file_operations_copy_estimate branch from 01bf8cf to fb53663 on January 31, 2024, 12:15.
@lukefromdc (Member) commented on Feb 2, 2024:

Just retested this. Didn't notice any build warnings from the file other than the usual deprecation warnings with default build options.

On queuing multiple large files that push the drive to its limit, the first file shows an accurate rate and the second (waiting) file does not show a transfer rate. When it starts, the indicated transfer rate starts low and quickly accelerates; on a large enough file it would presumably reach the true data rate, but copying to /tmp I ran out of file before it could show full speed. That's with disk images and multi-GB filesystem backups as the test files.

Seems to me this is much better behavior than we get now, and I didn't get any refusals of the dialog to work properly in a quick test.

@faulesocke commented:
Fixing this properly should be easier: just reset elapsed to zero when the job actually starts. Also reset it when it is manually unpaused by the user (which might be the same code path; I haven't looked into it).
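Assuming the job's elapsed time comes from a GLib GTimer, the suggested reset might look like this sketch (the struct and hook are hypothetical; only the GTimer call is real GLib API):

```c
#include <glib.h>

/* Hypothetical job struct and hook; only g_timer_start() is real GLib API.
 * g_timer_start() resets the timer to zero and starts it, so calling it when
 * a queued job actually begins discards the time the job spent waiting. */
typedef struct {
        GTimer *time;
} Job;

static void
on_job_actually_started (Job *job)
{
        g_timer_start (job->time);
}
```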

The two commit messages:

1. Fixes the condition for showing an estimate of the remaining duration in
   case a copy operation is queued, correctly considering the current
   transfer rate.

2. Aligning to the copy operation case, this fixes the condition for showing
   an estimate of the remaining duration for delete operations, preventing a
   possible division by 0 due to a zero transfer rate.
@basicmaster force-pushed the fix_file_operations_copy_estimate branch from fb53663 to c879c3f on February 3, 2024, 01:57.
@basicmaster (Contributor, author) commented:

@faulesocke: you are actually referring to the changes in #1758, aren't you?

The case of pausing a job after it has started (e.g. when copying multiple files) also has to be considered here. In that case you definitely need to stop/continue the timer; otherwise you would again start at zero time and get a wrong estimate.
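GLib's GTimer supports exactly this stop/continue pattern; a minimal sketch (the hook names are illustrative, only the two GTimer calls are real GLib API):

```c
#include <glib.h>

/* Illustrative hooks; only g_timer_stop()/g_timer_continue() are real GLib
 * API. Stopping freezes the elapsed value; continuing resumes counting from
 * that frozen value instead of restarting at zero. */
static void
on_job_paused (GTimer *job_timer)
{
        g_timer_stop (job_timer);
}

static void
on_job_resumed (GTimer *job_timer)
{
        g_timer_continue (job_timer);
}
```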

@lukefromdc (Member) commented:

Any updates on this?

@basicmaster (Contributor, author) replied:

From my side, this PR is ready to be merged. As far as I can see, I answered/addressed all the feedback, but I'm of course happy to make further adjustments if needed.
